An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA
ERIC Educational Resources Information Center
Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo
2012-01-01
The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…
ERIC Educational Resources Information Center
Huberty, Carl J.
An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…
Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
NASA Astrophysics Data System (ADS)
de Gouvêa, André; Murayama, Hitoshi
2003-10-01
“Anarchy” is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|2, the remaining unknown “angle” of the leptonic mixing matrix.
Fit Indices Versus Test Statistics
ERIC Educational Resources Information Center
Yuan, Ke-Hai
2005-01-01
Model evaluation is one of the most important aspects of structural equation modeling (SEM). Many model fit indices have been developed. It is not an exaggeration to say that nearly every publication using the SEM methodology has reported at least one fit index. Most fit indices are defined through test statistics. Studies and interpretation of…
Statistics and Hypothesis Testing in Biology.
ERIC Educational Resources Information Center
Maret, Timothy J.; Ziemba, Robert E.
1997-01-01
Suggests that early in their education students be taught to use basic statistical tests as rigorous methods of comparing experimental results with scientific hypotheses. Stresses that students learn how to use statistical tests in hypothesis-testing by applying them in actual hypothesis-testing situations. To illustrate, uses questions such as…
A STATISTICAL EVALUATION OF OHMSETT TESTING
This program was initiated to provide a statistical evaluation of performance data generated at the USEPA's Oil and Hazardous Materials Simulated Environmental Test Tank (OHMSETT). The objective was to investigate the value of replicate testing in developing efficient test progra...
Quantum Statistical Testing of a QRNG Algorithm
Humble, Travis S; Pooser, Raphael C; Britt, Keith A
2013-01-01
We present the algorithmic design of a quantum random number generator, the subsequent synthesis of a physical design and its verification using quantum statistical testing. We also describe how quantum statistical testing can be used to diagnose channel noise in QKD protocols.
2009 GED Testing Program Statistical Report
ERIC Educational Resources Information Center
GED Testing Service, 2010
2010-01-01
The "2009 GED[R] Testing Program Statistical Report" is the 52nd annual report in the program's 68-year history of providing a second opportunity for adults without a high school credential to earn their jurisdiction's GED credential. The report provides candidate demographic and GED Test performance statistics as well as historical information on…
Applications of Statistical Tests in Hand Surgery
Song, Jae W.; Haas, Ann; Chung, Kevin C.
2015-01-01
During the nineteenth century, with the emergence of public health as a goal to improve hygiene and the conditions of the poor, statistics established itself as a distinct scientific field important for critically interpreting studies of public health concerns. During the twentieth century, statistics began to evolve mathematically and methodologically with hypothesis testing and experimental design. Today, the design of medical experiments centers on clinical trials and observational studies, and with the use of statistics, the collected data are summarized, weighed, and presented to direct both physicians and the public toward Evidence-Based Medicine. A basic understanding of statistics is essential for evaluating the validity of published literature and applying it to patient care. In this review, we take a practical approach to basic statistical tests, providing a guide to choosing the correct statistical test along with examples relevant to hand surgery research. PMID:19969193
Teaching Statistics in Language Testing Courses
ERIC Educational Resources Information Center
Brown, James Dean
2013-01-01
The purpose of this article is to examine the literature on teaching statistics for useful ideas that teachers of language testing courses can draw on and incorporate into their teaching toolkits as they see fit. To those ends, the article addresses eight questions: What is known generally about teaching statistics? Why are students so anxious…
Statistics Test Questions: Content and Trends
ERIC Educational Resources Information Center
Salcedo, Audy
2014-01-01
This study presents the results of the analysis of a group of teacher-made test questions for statistics courses at the university level. Teachers were asked to submit tests they had used in their previous two semesters. Ninety-seven tests containing 978 questions were gathered and classified according to the SOLO taxonomy (Biggs & Collis,…
Binomial test statistics using Psi functions
Bowman, Kimiko O.
2007-01-01
For the negative binomial model (probability generating function (p + 1 - pt)^(-k)), a logarithmic derivative is the Psi function difference ψ(k + x) - ψ(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic is computed from observed data, so a direct comparison between theory and application is available. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accident data, and (iii) Weldon's dice data are included.
Basic statistics for clinicians: 1. Hypothesis testing.
Guyatt, G; Jaeschke, R; Heddle, N; Cook, D; Shannon, H; Walter, S
1995-01-01
In the first of a series of four articles the authors explain the statistical concepts of hypothesis testing and p values. In many clinical trials investigators test a null hypothesis that there is no difference between a new treatment and a placebo or between two treatments. The result of a single experiment will almost always show some difference between the experimental and the control groups. Is the difference due to chance, or is it large enough to reject the null hypothesis and conclude that there is a true difference in treatment effects? Statistical tests yield a p value: the probability that the experiment would show a difference as great or greater than that observed if the null hypothesis were true. By convention, p values of less than 0.05 are considered statistically significant, and investigators conclude that there is a real difference. However, the smaller the sample size, the greater the chance of erroneously concluding that the experimental treatment does not differ from the control--in statistical terms, the power of the test may be inadequate. Tests of several outcomes from one set of data may lead to an erroneous conclusion that an outcome is significant if the joint probability of the outcomes is not taken into account. Hypothesis testing has limitations, which will be discussed in the next article in the series. PMID:7804919
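The p-value logic this abstract describes can be made concrete with a small worked example. The sketch below (standard-library Python; the function name and the 15-of-20 data are illustrative, not from the article) computes an exact two-sided binomial p-value as the probability, under the null hypothesis, of an outcome at least as extreme as the one observed:

```python
from math import comb

def binomial_two_sided_p(successes, n, p0=0.5):
    """Exact two-sided binomial test p-value: the probability, under
    the null, of any outcome whose probability is no larger than that
    of the observed outcome."""
    probs = [comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(n + 1)]
    observed = probs[successes]
    # Sum the probabilities of all outcomes at least as unlikely as
    # the observed one (with a small tolerance for float comparison).
    return min(1.0, sum(p for p in probs if p <= observed + 1e-12))

# Example: 15 of 20 patients improve on the new treatment; under the
# null hypothesis (no treatment effect) improvement is a coin flip.
p = binomial_two_sided_p(15, 20)
print(round(p, 4))  # 0.0414 -- below the conventional 0.05 threshold
```

By the convention described in the abstract, p < 0.05 here would lead an investigator to reject the null hypothesis of no difference.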
Comments on the Statistical Significance Testing Articles.
ERIC Educational Resources Information Center
Knapp, Thomas R.
1998-01-01
Expresses a "middle-of-the-road" position on statistical significance testing, suggesting that it has its place but that confidence intervals are generally more useful. Identifies 10 errors of omission or commission in the papers reviewed that weaken the positions taken in their discussions. (SLD)
Statistical Tests of Reliability of NDE
NASA Technical Reports Server (NTRS)
Baaklini, George Y.; Klima, Stanley J.; Roth, Don J.; Kiser, James D.
1987-01-01
Capabilities of advanced material-testing techniques analyzed. Collection of four reports illustrates statistical method for characterizing flaw-detecting capabilities of sophisticated nondestructive evaluation (NDE). Method used to determine reliability of several state-of-the-art NDE techniques for detecting failure-causing flaws in advanced ceramic materials considered for use in automobiles, airplanes, and space vehicles.
Statistical treatment of fatigue test data
Raske, D.T.
1980-01-01
This report discussed several aspects of fatigue data analysis in order to provide a basis for the development of statistically sound design curves. Included is a discussion on the choice of the dependent variable, the assumptions associated with least squares regression models, the variability of fatigue data, the treatment of data from suspended tests and outlying observations, and various strain-life relations.
Mechanical Impact Testing: A Statistical Measurement
NASA Technical Reports Server (NTRS)
Engel, Carl D.; Herald, Stephen D.; Davis, S. Eddie
2005-01-01
In the decades since the 1950s, when NASA first developed mechanical impact testing of materials, researchers have continued efforts to gain a better understanding of the chemical, mechanical, and thermodynamic nature of the phenomenon. The impact mechanism is a real combustion ignition mechanism that must be understood in the design of an oxygen system. The use of data from this test method has been questioned because of the lack of a clear method for applying the data and the variability found between tests, material batches, and facilities. This effort examines a large database accumulated over a number of years and characterizes its overall nature. Moreover, testing was performed to determine the statistical nature of the test procedure to help establish sample size guidelines for material characterization. The current method of determining a pass/fail criterion, based on light emission, sound report, or material charring, is questioned.
Statistical tests for prediction of lignite quality
C.J. Kolovos
2007-06-15
Domestic lignite from large open-pit mines worked by bucket wheel excavators is the main fuel for electricity generation in Greece. Lignite from one or more mines may arrive at any power plant stockyard. The mixture obtained constitutes the lignite fuel fed to the power plant. The fuel is sampled at regular time intervals. These samples are treated as observations of spatial random variables. The aim was to form and statistically test many small sample populations. Statistical tests on the values of the humidity content, the ash-water-free content, and the lower heating value of the lignite fuel indicated that the sample values form a normal population. The Kolmogorov-Smirnov test was applied to test the goodness of fit of the sample distributions over a three-year period and for different power plants of the Kozani-Ptolemais area, western Macedonia, Greece. The normal distribution hypothesis can be widely accepted for forecasting the distribution of values of the basic quality characteristics, even for a small number of samples.
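As a rough illustration of the kind of Kolmogorov-Smirnov goodness-of-fit check described (not the authors' actual data or code), the sketch below computes a one-sample KS statistic against a fitted normal distribution in standard-library Python. Note one caveat: estimating the mean and standard deviation from the sample makes the classical asymptotic p-value conservative; the Lilliefors correction would be more appropriate in practice.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def ks_statistic_normal(sample):
    """One-sample Kolmogorov-Smirnov statistic D against a normal
    distribution with mean/stdev estimated from the sample."""
    xs = sorted(sample)
    n = len(xs)
    nd = NormalDist(mean(xs), stdev(xs))
    d = 0.0
    for i, x in enumerate(xs):
        cdf = nd.cdf(x)
        # Compare the theoretical CDF with the empirical CDF just
        # below and just above each ordered observation.
        d = max(d, abs(cdf - i / n), abs((i + 1) / n - cdf))
    return d

def ks_p_asymptotic(d, n):
    """Asymptotic p-value for D via the Kolmogorov distribution
    (Stephens' small-sample adjustment of the effective sqrt(n))."""
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    s = sum((-1) ** (j - 1) * math.exp(-2 * j * j * lam * lam)
            for j in range(1, 101))
    return min(1.0, 2 * s)

# Invented stand-in for fuel-quality samples (e.g. moisture content, %).
random.seed(0)
sample = [random.gauss(10.5, 1.2) for _ in range(60)]
d = ks_statistic_normal(sample)
print(d, ks_p_asymptotic(d, len(sample)))
```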
Diagnostic rhyme test statistical analysis programs
NASA Astrophysics Data System (ADS)
Sim, A.; Bain, R.; Belyavin, A. J.; Pratt, R. L.
1991-08-01
The statistical techniques and associated computer programs used to analyze data from the Diagnostic Rhyme Test (DRT) are described. The DRT is used extensively for assessing the intelligibility of military communications systems and has become an accepted NATO standard for testing linear predictive coders. The DRT vocabulary comprises ninety-six minimally contrasting rhyming word pairs, the initial consonants of which differ only by a single acoustic feature, or attribute. There are six such attributes: voicing, nasality, sustention, sibilation, graveness, and compactness. The attribute voicing is present when the vocal cords are excited: in the word pair 'veal-feel', the consonant 'v' is voiced, but the consonant 'f' is unvoiced. The procedure for the implementation of the DRT is presented. To ensure the stability of the results, tests are conducted using not less than eight talkers and eight listeners.
A Statistical Perspective on Highly Accelerated Testing.
Thomas, Edward V.
2015-02-01
Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning the
SANABRIA, FEDERICO; KILLEEN, PETER R.
2008-01-01
Despite being under challenge for the past 50 years, null hypothesis significance testing (NHST) remains dominant in the scientific field for want of viable alternatives. NHST, along with its significance level p, is inadequate for most of the uses to which it is put, a flaw that is of particular interest to educational practitioners who too often must use it to sanctify their research. In this article, we review the failure of NHST and propose p_rep, the probability of replicating an effect, as a more useful statistic for evaluating research and aiding practical decision making. PMID:19122766
Recent Tests for the Statistical Parton Distributions
NASA Astrophysics Data System (ADS)
Bourrely, Claude; Soffer, Jacques; Buccella, Franco
We compare some recent experimental results obtained at DESY, SLAC, and Jefferson Lab with the predictions of the statistical model we have previously proposed. The result of this comparison is very satisfactory.
Assessing Statistical Aspects of Test Fairness with Structural Equation Modelling
ERIC Educational Resources Information Center
Kline, Rex B.
2013-01-01
Test fairness and test bias are not synonymous concepts. Test bias refers to statistical evidence that the psychometrics or interpretation of test scores depend on group membership, such as gender or race, when such differences are not expected. A test that is grossly biased may be judged to be unfair, but test fairness concerns the broader, more…
Multiple comparisons and nonparametric statistical tests on a programmable calculator.
Hurwitz, A
1987-03-01
Calculator programs are provided for statistical tests for comparing groups of data. These tests can be applied when t-tests are inappropriate, as for multiple comparisons, or for evaluating groups of data that are not distributed normally or have unequal variances. The programs, designed to run on the least expensive Hewlett-Packard programmable scientific calculator, Model HP-11C, should place these statistical tests within easy reach of most students and investigators. PMID:3560983
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
ERIC Educational Resources Information Center
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Statistical Testing in the Behavioral Sciences: Textbook Developments.
ERIC Educational Resources Information Center
Huberty, Carl J.
Textbooks that have been and are being used by students in education and psychology to learn about statistical testing are reviewed. Twenty-eight textbooks published prior to 1950 were reviewed. These textbooks were found to focus on descriptive methods, to concentrate on educational testing, and to present limited formal statistical formulation.…
Incongruence between test statistics and P values in medical papers
García-Berthou, Emili; Alcaraz, Carles
2004-01-01
Background Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ respectively during 2001 were incongruent, probably mostly due to rounding, transcription, or type-setting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change one or more orders of magnitude. The frequencies of the last digit of statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that quality of papers should be more controlled and valued. PMID:15169550
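The congruence check the authors describe is easy to reproduce for a z statistic using only the standard library (checking t, F, or chi-square statistics against their degrees of freedom would need a stats package such as SciPy). A minimal sketch, with an invented reported result:

```python
from statistics import NormalDist

def p_from_z(z):
    """Two-sided p-value implied by a reported z statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Suppose a paper reports z = 2.10 alongside "P < 0.01".  Recomputing
# the p-value from the statistic exposes the incongruence:
p = p_from_z(2.10)
print(round(p, 4))  # 0.0357, not < 0.01
```

The same one-line recomputation, applied systematically, is essentially what the study above did across hundreds of published results.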
A Comparison of Statistical Significance Tests for Selecting Equating Functions
ERIC Educational Resources Information Center
Moses, Tim
2009-01-01
This study compared the accuracies of nine previously proposed statistical significance tests for selecting identity, linear, and equipercentile equating functions in an equivalent groups equating design. The strategies included likelihood ratio tests for the loglinear models of tests' frequency distributions, regression tests, Kolmogorov-Smirnov…
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
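As a hedged illustration of the power computation being taught (a normal-approximation sketch, not code from the article), the following estimates the power of a two-sided two-sample z-test; the classic result that about 64 subjects per group give roughly 80% power for a medium effect size (d = 0.5) falls out directly:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample_z(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with n per
    group, true mean difference delta, and common stdev sigma."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    se = sigma * sqrt(2 / n)          # standard error of the difference
    shift = delta / se                # true standardized shift
    # Probability that |Z| exceeds the critical value under the
    # alternative hypothesis (both rejection tails included).
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

print(round(power_two_sample_z(0.5, 1.0, 64), 3))  # ~0.807
```

Note this uses the z approximation rather than the noncentral t distribution, so it slightly overstates power at small n.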
Advances in Testing the Statistical Significance of Mediation Effects
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.
2006-01-01
P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
BIAZA statistics guidelines: toward a common application of statistical tests for zoo research.
Plowman, Amy B
2008-05-01
Zoo research presents many statistical challenges, mostly arising from the need to work with small sample sizes. Efforts to overcome these often lead to the misuse of statistics including pseudoreplication, inappropriate pooling, assumption violation or excessive Type II errors because of using tests with low power to avoid assumption violation. To tackle these issues and make some general statistical recommendations for zoo researchers, the Research Group of the British and Irish Association of Zoos and Aquariums (BIAZA) conducted a workshop. Participants included zoo-based researchers, university academics with zoo interests and three statistical experts. The result was a BIAZA publication Zoo Research Guidelines: Statistics for Typical Zoo Datasets (Plowman [2006] Zoo research guidelines: statistics for zoo datasets. London: BIAZA), which provides advice for zoo researchers on study design and analysis to ensure appropriate and rigorous use of statistics. The main recommendations are: (1) that many typical zoo investigations should be conducted as single case/small N randomized designs, analyzed with randomization tests, (2) that when comparing complete time budgets across conditions in behavioral studies, G tests and their derivatives are the most appropriate statistical tests and (3) that in studies involving multiple dependent and independent variables there are usually no satisfactory alternatives to traditional parametric tests and, despite some assumption violations, it is better to use these tests with careful interpretation, than to lose information through not testing at all. The BIAZA guidelines were recommended by American Association of Zoos and Aquariums (AZA) researchers at the AZA Annual Conference in Tampa, FL, September 2006, and are free to download from www.biaza.org.uk. PMID:19360620
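Recommendation (1) above, randomization tests for single-case and small-N designs, can be sketched in a few lines of standard-library Python. The data below are invented for illustration; the BIAZA guidelines themselves should be consulted for design details:

```python
import random

def permutation_test(a, b, n_perm=10000, seed=1):
    """Two-sided randomization test for a difference in means,
    suitable for small samples where parametric assumptions fail."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one (Monte Carlo) correction

# Hypothetical enclosure study: minutes of pacing per hour under two
# conditions, six observation sessions each.
old = [14, 11, 16, 12, 15, 13]
new = [8, 10, 7, 12, 9, 6]
print(permutation_test(old, new))
```

Because the null distribution is built by reshuffling the observed values themselves, no normality or equal-variance assumption is needed, which is exactly why the guidelines favor this approach for typical zoo datasets.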
On Interpreting Test Scores as Social Indicators: Statistical Considerations.
ERIC Educational Resources Information Center
Spencer, Bruce D.
1983-01-01
Because test scores are ordinal not cordinal attributes, the average test score often is a misleading way to summarize the scores of a group of individuals. Similarly, correlation coefficients may be misleading summary measures of association between test scores. Proper, readily interpretable, summary statistics are developed from a theory of…
ERIC Educational Resources Information Center
Hong, Eunsook
A path analytic model of state test anxiety was tested in 169 college students who were enrolled in statistics courses. Variables in the model included gender, mathematics ability, trait test anxiety (trait worry and trait emotionality as separate variables), statistics course anxiety, statistics achievement (scores on midterm examinations),…
Multiple statistical tests: lessons from a d20.
Madan, Christopher R
2016-01-01
Statistical analyses are often conducted with α=.05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (Type I error) at least once across repeated, independent statistical tests. PMID:27347382
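The arithmetic behind both the d20 example and the inflated familywise Type I error is the same complement rule, sketched below (the Šidák-style adjustment at the end is a standard remedy, not something claimed by this abstract):

```python
def familywise(p, n):
    """Probability of at least one 'hit' across n independent trials,
    each with per-trial probability p."""
    return 1 - (1 - p) ** n

# Rolling a d20 twice and taking the greater value: chance that at
# least one roll shows a given face (e.g. a natural 20).
print(round(familywise(1 / 20, 2), 4))  # 0.0975

# The same arithmetic inflates the Type I error over repeated tests...
print(round(familywise(0.05, 10), 3))  # 0.401

# ...and a Sidak-adjusted per-test alpha restores the familywise level.
alpha_adj = 1 - (1 - 0.05) ** (1 / 10)
print(round(familywise(alpha_adj, 10), 2))  # 0.05
```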
Statistical Evaluation of Molecular Contamination During Spacecraft Thermal Vacuum Test
NASA Technical Reports Server (NTRS)
Chen, Philip; Hedgeland, Randy; Montoya, Alex; Roman-Velazquez, Juan; Dunn, Jamie; Colony, Joe; Petitto, Joseph
1999-01-01
The purpose of this paper is to evaluate the statistical molecular contamination data with a goal to improve spacecraft contamination control. The statistical data was generated in typical thermal vacuum tests at the National Aeronautics and Space Administration, Goddard Space Flight Center (GSFC). The magnitude of material outgassing was measured using a Quartz Crystal Microbalance (QCM) device during the test. A solvent rinse sample was taken at the conclusion of each test. Then detailed qualitative and quantitative measurements were obtained through chemical analyses. All data used in this study encompassed numerous spacecraft tests in recent years.
Statistical Evaluation of Molecular Contamination During Spacecraft Thermal Vacuum Test
NASA Technical Reports Server (NTRS)
Chen, Philip; Hedgeland, Randy; Montoya, Alex; Roman-Velazquez, Juan; Dunn, Jamie; Colony, Joe; Petitto, Joseph
1997-01-01
The purpose of this paper is to evaluate the statistical molecular contamination data with a goal to improve spacecraft contamination control. The statistical data was generated in typical thermal vacuum tests at the National Aeronautics and Space Administration, Goddard Space Flight Center (GSFC). The magnitude of material outgassing was measured using a Quartz Crystal Microbalance (QCM) device during the test. A solvent rinse sample was taken at the conclusion of each test. Then detailed qualitative and quantitative measurements were obtained through chemical analyses. All data used in this study encompassed numerous spacecraft tests in recent years.
Statistical Evaluation of Molecular Contamination During Spacecraft Thermal Vacuum Test
NASA Technical Reports Server (NTRS)
Chen, Philip; Hedgeland, Randy; Montoya, Alex; Roman-Velazquez, Juan; Dunn, Jamie; Colony, Joe; Petitto, Joseph
1998-01-01
The purpose of this paper is to evaluate the statistical molecular contamination data with a goal to improve spacecraft contamination control. The statistical data was generated in typical thermal vacuum tests at the National Aeronautics and Space Administration, Goddard Space Flight Center (GSFC). The magnitude of material outgassing was measured using a Quartz Crystal Microbalance (QCM) device during the test. A solvent rinse sample was taken at the conclusion of each test. Then detailed qualitative and quantitative measurements were obtained through chemical analyses. All data used in this study encompassed numerous spacecraft tests in recent years.
Your Chi-Square Test Is Statistically Significant: Now What?
ERIC Educational Resources Information Center
Sharpe, Donald
2015-01-01
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
Statistical significance test for transition matrices of atmospheric Markov chains
NASA Technical Reports Server (NTRS)
Vautard, Robert; Mo, Kingtse C.; Ghil, Michael
1990-01-01
Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
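A toy version of the Monte Carlo significance test described, shuffling the regime labels to build a null distribution for a transition count, might look like the following (the three-regime sequence is invented; the actual study works with clusters of atmospheric flow maps):

```python
import random
from collections import Counter

def transition_counts(seq):
    """Count consecutive pairs (i, j) in a sequence of regime labels."""
    return Counter(zip(seq, seq[1:]))

def mc_transition_pvalue(seq, i, j, n_sim=2000, seed=7):
    """Monte Carlo p-value for transition i -> j occurring at least as
    often as observed, under a null of no serial dependence (labels
    shuffled, preserving each regime's overall frequency)."""
    rng = random.Random(seed)
    observed = transition_counts(seq)[(i, j)]
    count = 0
    s = list(seq)
    for _ in range(n_sim):
        rng.shuffle(s)
        if transition_counts(s)[(i, j)] >= observed:
            count += 1
    return (count + 1) / (n_sim + 1)

# Invented strongly persistent sequence: self-transitions A -> A occur
# far more often than random ordering of the same labels would give.
seq = list("AAAAAABBBBBBCCCCCC")
print(mc_transition_pvalue(seq, "A", "A"))
```

The same machinery also flags unusually *rare* transitions by reversing the inequality, which is the "most unlikely transitions" use mentioned above.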
Pass-Fail Testing: Statistical Requirements and Interpretations
Gilliam, David; Leigh, Stefan; Rukhin, Andrew; Strawderman, William
2009-01-01
Performance standards for detector systems often include requirements for probability of detection and probability of false alarm at a specified level of statistical confidence. This paper reviews the accepted definitions of confidence level and of critical value. It describes the testing requirements for establishing either of these probabilities at a desired confidence level. These requirements are computable in terms of functions that are readily available in statistical software packages and general spreadsheet applications. The statistical interpretations of the critical values are discussed. A table is included for illustration, and a plot is presented showing the minimum required numbers of pass-fail tests. The results given here are applicable to one-sided testing of any system with performance characteristics conforming to a binomial distribution. PMID:27504221
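The minimum-test-count result mentioned at the end follows from the binomial model: with zero failures allowed, reliability p is demonstrated at confidence C once p^n ≤ 1 − C, i.e. n ≥ ln(1 − C)/ln(p). A small sketch (the function name is mine, not from the paper):

```python
from math import ceil, log

def min_pass_fail_tests(reliability, confidence):
    """Minimum number of consecutive successes needed to demonstrate
    the given reliability at the given confidence level, assuming
    independent binomial trials and zero observed failures."""
    return ceil(log(1 - confidence) / log(reliability))

# Demonstrating 0.95 reliability at 90% confidence:
print(min_pass_fail_tests(0.95, 0.90))  # 45
# Demonstrating 0.99 reliability at 95% confidence:
print(min_pass_fail_tests(0.99, 0.95))  # 299
```

Allowing one or more observed failures requires summing binomial tail probabilities instead of the single zero-failure term, which is where the spreadsheet functions mentioned in the abstract come in.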
ERIC Educational Resources Information Center
Gordon, Howard R. D.
A random sample of 113 members of the American Vocational Education Research Association (AVERA) was surveyed to obtain baseline information regarding AVERA members' perceptions of statistical significance tests. The Psychometrics Group Instrument was used to collect data from participants. Of those surveyed, 67% were male, 93% had earned a…
Testing the Regional Application of Site Statistics with Satellite Data
NASA Astrophysics Data System (ADS)
Kinne, S.; Paradise, S.
2009-04-01
Remote sensing from ground sites often provides data whose record length and/or accuracy are superior to those of remote sensing from space. Thus, in those cases ground measurement (network) data are applied to constrain retrieval assumptions and/or to extend satellite data in time. Alternatively, this combination can be, and has been, used to explore the potential application of local site statistics to surrounding regions. As a demonstrator, MISR sensor statistical maps of the retrieved aerosol optical depth are applied to test the regional representation of site statistics for aerosol optical depth detected at AERONET (sun-photometer) and EARLINET (lidar) sites. The regional representation tests explore local applicability for regions from 100 to 1000 km in diameter, based on an analysis of averages for relative error and relative bias.
A Statistical Approach to Establishing Subsystem Environmental Test Specifications
NASA Technical Reports Server (NTRS)
Keegan, W. B.
1974-01-01
Results are presented of a research task to evaluate structural responses at various subsystem mounting locations during spacecraft-level test exposures to the environments of mechanical shock, acoustic noise, and random vibration. This statistical evaluation is presented in the form of recommended subsystem test specifications for these three environments, normalized to a reference set of spacecraft test levels and thus suitable for extrapolation to a different set of spacecraft test levels. The recommendations depend on a subsystem's mounting location in a spacecraft, and information is presented on how to determine this mounting zone for a given subsystem.
Innovative role of statistics in acid rain performance testing
Warren-Hicks, W.; Etchison, T.; Lieberman, E.R.
1995-12-31
Title IV of the Clean Air Act Amendments (CAAAs) of 1990 mandated that affected electric utilities reduce sulfur dioxide (SO₂) and nitrogen oxide (NOₓ) emissions, the primary precursors of acidic deposition, and included an innovative market-based SO₂ regulatory program. A central element of the Acid Rain Program is the requirement that affected utility units install continuous emissions monitoring systems (CEMS). This paper describes how the Acid Rain Regulations incorporated statistical procedures in the performance tests for CEMS and how statistical analysis was used to assess the appropriateness, stringency, and potential impact of various performance tests and standards that were considered for inclusion in the Acid Rain Regulations. Described here is the statistical analysis that was used to set a relative accuracy standard, establish the calculation procedures for filling in missing data when a monitor malfunctions, and evaluate the performance tests applied to petitions for alternative monitoring systems. The paper concludes that the statistical evaluations of proposed provisions of the Acid Rain Regulations resulted in the adoption of performance tests and standards that were scientifically substantiated, workable, and effective.
Comparison of statistical tests for disease association with rare variants.
Basu, Saonli; Pan, Wei
2011-11-01
In anticipation of the availability of next-generation sequencing data, there is increasing interest in investigating association between complex traits and rare variants (RVs). In contrast to association studies for common variants (CVs), due to the low frequencies of RVs, common wisdom suggests that existing statistical tests for CVs might not work, motivating the recent development of several new tests for analyzing RVs, most of which are based on the idea of pooling/collapsing RVs. However, there is a lack of evaluations of, and thus guidance on the use of, existing tests. Here we provide a comprehensive comparison of various statistical tests using simulated data. We consider both independent and correlated rare mutations, and representative tests for both CVs and RVs. As expected, if there are no or few non-causal (i.e. neutral or non-associated) RVs in a locus of interest while the effects of causal RVs on the trait are all (or mostly) in the same direction (i.e. either protective or deleterious, but not both), then the simple pooled association tests (without selecting RVs and their association directions) and a new test called kernel-based adaptive clustering (KBAC) perform similarly and are most powerful; KBAC is more robust than simple pooled association tests in the presence of non-causal RVs; however, as the number of non-causal CVs increases and/or in the presence of opposite association directions, the winners are two methods originally proposed for CVs and a new test called C-alpha test proposed for RVs, each of which can be regarded as testing on a variance component in a random-effects model. Interestingly, several methods based on sequential model selection (i.e. selecting causal RVs and their association directions), including two new methods proposed here, perform robustly and often have statistical power between those of the above two classes. PMID:21769936
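The simple pooled association tests mentioned above can be sketched as a burden-style test: collapse all rare variants in a locus into one carrier indicator and run a 1-df chi-square test on the resulting 2x2 table. This is an illustration with hypothetical genotype data, not the code evaluated in the paper:

```python
def burden_chi2(case_genotypes, control_genotypes):
    """Pooled (burden) test: a subject is a 'carrier' if any rare variant
    in the locus has a nonzero minor-allele count. Returns the 1-df
    chi-square statistic for carrier status vs. case status.
    Genotypes are per-subject lists of per-variant minor-allele counts."""
    def carriers(genos):
        return sum(1 for g in genos if any(a > 0 for a in g))
    a = carriers(case_genotypes)            # case carriers
    b = len(case_genotypes) - a             # case non-carriers
    c = carriers(control_genotypes)         # control carriers
    d = len(control_genotypes) - c          # control non-carriers
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        exp = row * col / n
        chi2 += (obs - exp) ** 2 / exp
    return chi2  # compare to 3.84 for a 0.05-level test
```

As the abstract notes, this kind of test is powerful when causal variants act in one direction, but degrades with many non-causal variants or mixed effect directions.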
Statistical modeling for particle impact noise detection testing
Prairie, R.R.; Zimmer, W.J.
1990-01-01
Particle Impact Noise Detection (PIND) testing is widely used to test electronic devices for the presence of conductive particles which can cause catastrophic failure. This paper develops a statistical model based on the rate of particles contaminating the part, the rate of particles induced by the test vibration, the escape rate, and the false alarm rate. Based on data from a large number of PIND tests for a canned transistor, the model is shown to fit the observed results closely. Knowledge of the parameters for which this fit is made is important in evaluating the effectiveness of the PIND test procedure and for developing background judgment about the performance of the PIND test. Furthermore, by varying the input parameters to the model, the resulting yield, failure rate and percent fallout can be examined and used to plan and implement PIND test programs.
A critique of statistical hypothesis testing in clinical research
Raha, Somik
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. The philosophical critique examines the two main worldviews of probability, the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring their use, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
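The Bayesian alternative the abstract mentions can be sketched with a beta-binomial model: under independent uniform priors, compute the posterior probability that the event rate is lower on treatment. We assume the trial referred to is the 1988 Physicians' Health Study and use its widely quoted counts (104 myocardial infarctions among 11,037 on aspirin vs. 189 among 11,034 on placebo); the grid-integration code below is ours, not the paper's:

```python
import math

def beta_logpdf(x, a, b):
    """Log density of the Beta(a, b) distribution at x."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def prob_fewer_events(e1, n1, e2, n2, grid=5000):
    """Posterior P(rate1 < rate2 | data) under independent Beta(1, 1)
    priors, computed by Riemann-sum integration on a uniform grid."""
    xs = [(i + 0.5) / grid for i in range(grid)]
    d1 = [math.exp(beta_logpdf(x, e1 + 1, n1 - e1 + 1)) for x in xs]
    d2 = [math.exp(beta_logpdf(x, e2 + 1, n2 - e2 + 1)) for x in xs]
    s1, s2 = sum(d1), sum(d2)
    prob, cdf1 = 0.0, 0.0
    for p1, p2 in zip(d1, d2):
        prob += (p2 / s2) * cdf1   # P(rate2 in this cell) * P(rate1 below it)
        cdf1 += p1 / s1
    return prob
```

With those counts the posterior probability that aspirin lowers the infarction rate is essentially 1: a directly actionable statement, in contrast to a p-value.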
Statistical Treatment of Earth Observing System Pyroshock Separation Test Data
NASA Technical Reports Server (NTRS)
McNelis, Anne M.; Hughes, William O.
1998-01-01
The Earth Observing System (EOS) AM-1 spacecraft for NASA's Mission to Planet Earth is scheduled to be launched on an Atlas IIAS vehicle in June of 1998. One concern is that the instruments on the EOS spacecraft are sensitive to the shock-induced vibration produced when the spacecraft separates from the launch vehicle. By applying a unique statistical analysis to the available ground test shock data, the NASA Lewis Research Center found that shock-induced vibrations would not be as great as the levels previously specified by Lockheed Martin. The EOS pyroshock separation testing, which was completed in 1997, produced a large quantity of accelerometer data to characterize the shock response levels at the launch vehicle/spacecraft interface. Thirteen pyroshock separation firings of the EOS and payload adapter configuration yielded 78 total measurements at the interface. The multiple firings were necessary to qualify the newly developed Lockheed Martin six-hardpoint separation system. Because of the unusually large amount of data acquired, Lewis developed a statistical methodology to predict the maximum expected shock levels at the interface between the EOS spacecraft and the launch vehicle. This methodology, based on six shear plate accelerometer measurements per test firing at the spacecraft/launch vehicle interface, was then used to determine the shock endurance specification for EOS. Each pyroshock separation test of the EOS spacecraft simulator produced its own set of interface accelerometer data. Probability distributions, histograms, the median, and higher order moments (skew and kurtosis) were analyzed. The data were found to be lognormally distributed, which is consistent with NASA pyroshock standards. Each set of lognormally transformed test data was analyzed to determine if the data should be combined statistically. Statistical testing of the data's standard deviations and means (F and t testing, respectively) determined if data sets were…
Statistical process control testing of electronic security equipment
Murray, D.W.; Spencer, D.D.
1994-06-01
Statistical Process Control (SPC) testing of manufacturing processes began in the 1940s with the development of process control charts by Dr. Walter A. Shewhart. Sandia National Laboratories has developed an application of the SPC method for performance testing of electronic security equipment. This paper documents the evaluation of this testing methodology applied to electronic security equipment and an associated laptop computer-based system for obtaining and analyzing the test data. Sandia developed this SPC sensor performance testing method primarily for use on portal metal detectors, but has evaluated it for testing of an exterior intrusion detection sensor and other electronic security devices. This method is an alternative to the traditional binomial (alarm or no-alarm) performance testing. The limited amount of information in binomial data drives the number of tests necessary to meet regulatory requirements to unnecessarily high levels. For example, a requirement of a 0.85 probability of detection with 90% confidence requires a minimum of 19 alarms out of 19 trials. By extracting and analyzing measurement (variables) data whenever possible instead of the more typical binomial data, the user becomes more informed about equipment health with fewer tests (as low as five per periodic evaluation).
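A minimal sketch of the variables-data idea: estimate control limits from baseline sensor measurements (a Shewhart-style individuals chart) and flag out-of-limit readings, rather than scoring each trial alarm/no-alarm. The numbers and limits are illustrative, not Sandia's procedure:

```python
import statistics

def control_limits(baseline, k=3.0):
    """Shewhart-style limits: mean +/- k standard deviations of the
    baseline measurements (k = 3 is the conventional choice)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - k * sigma, mu + k * sigma

def out_of_control(baseline, new_value, k=3.0):
    """Flag a new measurement falling outside the control limits."""
    lo, hi = control_limits(baseline, k)
    return not (lo <= new_value <= hi)
```

Because each measurement carries far more information than a binary alarm outcome, equipment drift can be detected from a handful of readings per evaluation.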
Statistical tests for measures of colocalization in biological microscopy.
McDonald, John H; Dunn, Kenneth W
2013-12-01
Colocalization analysis is the most common technique used for quantitative analysis of fluorescence microscopy images. Several metrics have been developed for measuring the colocalization of two probes, including Pearson's correlation coefficient (PCC) and Manders' correlation coefficient (MCC). However, once measured, the meaning of these measurements can be unclear; interpreting PCC or MCC values requires the ability to evaluate the significance of a particular measurement, or the significance of the difference between two sets of measurements. In previous work, we showed how spatial autocorrelation confounds randomization techniques commonly used for statistical analysis of colocalization data. Here we use computer simulations of biological images to show that the Student's one-sample t-test can be used to test the significance of PCC or MCC measurements of colocalization, and the Student's two-sample t-test can be used to test the significance of the difference between measurements obtained under different experimental conditions. PMID:24117417
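The two tests the authors recommend reduce to standard t statistics on per-cell colocalization measurements. A sketch with hypothetical PCC values (compare the returned statistics to critical values from a t table):

```python
import math
import statistics

def one_sample_t(values, mu0=0.0):
    """t statistic for H0: the mean (e.g. of PCC values across n cells)
    equals mu0; degrees of freedom = len(values) - 1."""
    n = len(values)
    return (statistics.mean(values) - mu0) / (statistics.stdev(values) / math.sqrt(n))

def welch_t(x, y):
    """Welch two-sample t statistic for the difference between the
    measurements obtained under two experimental conditions."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / len(x) + vy / len(y))
```

Note that the unit of replication here is the cell (or image), which sidesteps the pixel-level spatial autocorrelation problem the authors describe.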
n-dimensional Statistical Inverse Graphical Hydraulic Test Simulator
2012-09-12
nSIGHTS (n-dimensional Statistical Inverse Graphical Hydraulic Test Simulator) is a comprehensive well test analysis software package. It provides a user-interface, a well test analysis model and many tools to analyze both field and simulated data. The well test analysis model simulates a single-phase, one-dimensional, radial/non-radial flow regime, with a borehole at the center of the modeled flow system. nSIGHTS solves the radially symmetric n-dimensional forward flow problem using a solver based on a graph-theoretic approach. The results of the forward simulation are pressure, and flow rate, given all the input parameters. The parameter estimation portion of nSIGHTS uses a perturbation-based approach to interpret the best-fit well and reservoir parameters, given an observed dataset of pressure and flow rate.
n-dimensional Statistical Inverse Graphical Hydraulic Test Simulator
Energy Science and Technology Software Center (ESTSC)
2012-09-12
nSIGHTS (n-dimensional Statistical Inverse Graphical Hydraulic Test Simulator) is a comprehensive well test analysis software package. It provides a user-interface, a well test analysis model and many tools to analyze both field and simulated data. The well test analysis model simulates a single-phase, one-dimensional, radial/non-radial flow regime, with a borehole at the center of the modeled flow system. nSIGHTS solves the radially symmetric n-dimensional forward flow problem using a solver based on a graph-theoretic approach. The results of the forward simulation are pressure, and flow rate, given all the input parameters. The parameter estimation portion of nSIGHTS uses a perturbation-based approach to interpret the best-fit well and reservoir parameters, given an observed dataset of pressure and flow rate.
Statistical analysis of test data for APM rod issue
Edwards, T.B.; Harris, S.P.; Reeve, C.P.
1992-05-01
The uncertainty associated with the use of the K-Reactor axial power monitors (APMs) to measure roof-top-ratios is investigated in this report. Internal heating test data acquired under both DC-flow conditions and AC-flow conditions have been analyzed. These tests were conducted to simulate gamma heating at the lower power levels planned for reactor operation. The objective of this statistical analysis is to investigate the relationship between the observed and true roof-top-ratio (RTR) values and associated uncertainties at power levels within this lower operational range. Conditional on a given, known power level, a prediction interval for the true RTR value corresponding to a new, observed RTR is given. This is done for a range of power levels. Estimates of total system uncertainty are also determined by combining the analog-to-digital converter uncertainty with the results from the test data.
Statistical Tests of Conditional Independence between Responses and/or Response Times on Test Items
ERIC Educational Resources Information Center
van der Linden, Wim J.; Glas, Cees A. W.
2010-01-01
Three plausible assumptions of conditional independence in a hierarchical model for responses and response times on test items are identified. For each of the assumptions, a Lagrange multiplier test of the null hypothesis of conditional independence against a parametric alternative is derived. The tests have closed-form statistics that are easy to…
Quantum Statistical Testing of a Quantum Random Number Generator
Humble, Travis S
2014-01-01
The unobservable elements in a quantum technology, e.g., the quantum state, complicate system verification against promised behavior. Using model-based systems engineering, we present methods for verifying the operation of a prototypical quantum random number generator (QRNG). We begin with the algorithmic design of the QRNG followed by the synthesis of its physical design requirements. We next discuss how quantum statistical testing can be used to verify device behavior as well as detect device bias. We conclude by highlighting how system design and verification methods must influence efforts to certify future quantum technologies.
Statistical tests for power-law cross-correlated processes
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
Statistical tests for power-law cross-correlated processes.
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρ(DCCA)(T,n), where T is the total length of the time series and n the window size. For ρ(DCCA)(T,n), we numerically calculated the Cauchy inequality -1 ≤ ρ(DCCA)(T,n) ≤ 1. Here we derive -1 ≤ ρ(DCCA)(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρ(DCCA) within which the cross-correlations become statistically significant. For overlapping windows we numerically determine--and for nonoverlapping windows we derive--that the standard deviation of ρ(DCCA)(T,n) tends with increasing T to 1/T. Using ρ(DCCA)(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series. PMID:22304166
Jones, P L; Swain, W T; Trammell, C J
1999-01-01
When a population is too large for exhaustive study, as is the case for all possible uses of a software system, a statistically correct sample must be drawn as a basis for inferences about the population. A Markov chain usage model is an engineering formalism that represents the population of possible uses for which a product is to be tested. In statistical testing of software based on a Markov chain usage model, the rich body of analytical results available for Markov chains provides numerous insights that can be used in both product development and test planning. A usage model is based on specifications rather than code, so insights that result from model building can inform product decisions in the early stages of a project when the opportunity to prevent problems is the greatest. Statistical testing based on a usage model provides a sound scientific basis for quantifying the reliability of software. PMID:10459417
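A Markov chain usage model can be sampled directly to generate statistically representative test cases. A toy sketch, with invented states and transition probabilities (not from the paper):

```python
import random

def generate_test_case(model, start="Start", end="End", rng=random):
    """Random walk through a Markov chain usage model. `model` maps each
    state to a dict of next-state -> transition probability; the walk
    from start to end is one sampled usage scenario."""
    state, path = start, []
    while state != end:
        nexts, weights = zip(*model[state].items())
        state = rng.choices(nexts, weights=weights)[0]
        path.append(state)
    return path

# Hypothetical usage model of a small application
usage_model = {
    "Start": {"Login": 1.0},
    "Login": {"Browse": 0.8, "End": 0.2},
    "Browse": {"Browse": 0.5, "End": 0.5},
}
```

Standard Markov chain analysis of the same model yields long-run state visitation frequencies and expected test-case length before any test is run, which is the source of the early insights the abstract describes.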
Spatial patterns of nonstationarity: does the statistical test matter?
NASA Astrophysics Data System (ADS)
Cahill, A. T.; Faloon, A.; Brumbelow, J. K.
2012-12-01
In many regions, extreme rainfalls are generated by large-scale weather features which are significantly larger than the separation among the set of rain gages in the region; it would be expected therefore that these rain gages would exhibit similar statistical behavior. When time series of extreme rainfalls measured at these gages are tested individually for nonstationarity, however, it is often found that any spatial pattern of the nonstationarity is poorly defined. The fact that extremes of rainfall at one location are increasing in time is not strongly predictive of the behavior of the extreme rainfall at neighboring rain gages, which may be increasing, decreasing or unchanging, when the individual time series are considered by themselves. Using rainfall data sets from the southern United States as a test case, we present work on detection of nonstationarity in rainfall extremes which takes into account the disparate answers individual tests of nonstationarity can give, given our assumption of larger-scale precipitation fields driving the behavior of point observations.
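One common per-gage nonstationarity check of the kind discussed here is the Mann-Kendall trend test, which could be run separately on each gage's annual-extreme series. A sketch, omitting tie corrections for brevity:

```python
import math

def mann_kendall_z(series):
    """Mann-Kendall trend test: S sums the signs of all pairwise
    differences; returns the normal-approximation z score with
    continuity correction (tie corrections omitted for brevity)."""
    n = len(series)
    s = sum((later > earlier) - (later < earlier)
            for i, earlier in enumerate(series)
            for later in series[i + 1:])
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        s -= 1
    elif s < 0:
        s += 1
    return s / math.sqrt(var_s)
```

|z| > 1.96 rejects stationarity at the 5% level; applied gage by gage, such tests can produce the spatially incoherent mix of increasing, decreasing, and unchanging verdicts that motivates the work.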
Testing the validity of Bose-Einstein statistics in molecules
NASA Astrophysics Data System (ADS)
Cancio Pastor, P.; Galli, I.; Giusfredi, G.; Mazzotti, D.; De Natale, P.
2015-12-01
The search for small violations of the validity of the symmetrization postulate and of the spin-statistics connection (SSC) has been addressed in the last four decades by experimental tests performed in different physical systems of identical fermions or bosons. In parallel and consequently, theories extending the quantum mechanics to a more general level have been proposed to explain such possible violations. In this paper, we present the most stringent test to a possible violation of the SSC under permutation of the bosonic 16O nuclei in the 12CO162 molecule. An upper limit of 3.8 ×10-12 for an SSC-anomalous CO2 molecule is obtained using saturated-absorption cavity ring-down spectroscopy in the SSC-forbidden (0001 -0000 ) R (25) rovibrational transition of 12CO162 at a 4.25 -μ m wavelength. Quantum mechanics implications of this result are discussed in the frame of the q -mutator theory. Finally, the perspective of stringent experimental tests of the symmetrization postulate in molecules that contain three or more identical nuclei is discussed.
Ergodicity testing for anomalous diffusion: Small sample statistics
NASA Astrophysics Data System (ADS)
Janczura, Joanna; Weron, Aleksander
2015-04-01
The analysis of trajectories recorded in experiments often requires calculating time averages instead of ensemble averages. According to the Boltzmann hypothesis, they are equivalent only under the assumption of ergodicity. In this paper, we implement tools that allow one to study ergodic properties. This analysis is conducted in two classes of anomalous diffusion processes: fractional Brownian motion and the subordinated Ornstein-Uhlenbeck process. We show that only the first of them is ergodic. We demonstrate this by applying rigorous statistical methods: mean square displacement, confidence intervals, and the dynamical functional test. Our methodology is universal and can be implemented for the analysis of many experimental data sets, not only when a large sample is available but also when only a few trajectories are recorded.
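The mean square displacement comparison at the heart of such ergodicity checks can be sketched as follows: for an ergodic process, the time-averaged MSD of a single long trajectory should agree with the ensemble-averaged MSD across trajectories. A simplified illustration, not the authors' dynamical functional test:

```python
def time_averaged_msd(traj, lag):
    """Time-averaged mean square displacement of one 1-D trajectory."""
    n = len(traj) - lag
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n)) / n

def ensemble_msd(trajs, lag):
    """Ensemble-averaged MSD across trajectories at time index `lag`."""
    return sum((t[lag] - t[0]) ** 2 for t in trajs) / len(trajs)
```

A systematic gap between the two estimates as the trajectory length grows is evidence of broken ergodicity, as in the subordinated process studied in the paper.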
Ergodicity testing for anomalous diffusion: small sample statistics.
Janczura, Joanna; Weron, Aleksander
2015-04-14
The analysis of trajectories recorded in experiments often requires calculating time averages instead of ensemble averages. According to the Boltzmann hypothesis, they are equivalent only under the assumption of ergodicity. In this paper, we implement tools that allow one to study ergodic properties. This analysis is conducted in two classes of anomalous diffusion processes: fractional Brownian motion and the subordinated Ornstein-Uhlenbeck process. We show that only the first of them is ergodic. We demonstrate this by applying rigorous statistical methods: mean square displacement, confidence intervals, and the dynamical functional test. Our methodology is universal and can be implemented for the analysis of many experimental data sets, not only when a large sample is available but also when only a few trajectories are recorded. PMID:25877558
Testing Punctuated Equilibrium Theory Using Evolutionary Activity Statistics
NASA Astrophysics Data System (ADS)
Woodberry, O. G.; Korb, K. B.; Nicholson, A. E.
The Punctuated Equilibrium hypothesis (Eldredge and Gould, 1972) asserts that most evolutionary change occurs during geologically rapid speciation events, with species exhibiting stasis most of the time. Punctuated Equilibrium is a natural extension of Mayr's theories on peripatric speciation via the founder effect (Mayr, 1963; Eldredge and Gould, 1972), which associates changes in diversity with a population bottleneck. That is, while the formation of a founder bottleneck brings an initial loss of genetic variation, it may subsequently result in the emergence of a child species distinctly different from its parent species. In this paper we adapt Bedau's evolutionary activity statistics (Bedau and Packard, 1991) to test these effects in an ALife simulation of speciation. We find a relative increase in evolutionary activity during speciation events, indicating that punctuation is occurring.
Statistical methods for the blood beryllium lymphocyte proliferation test
Frome, E.L.; Smith, M.H.; Littlefield, L.G.
1996-10-01
The blood beryllium lymphocyte proliferation test (BeLPT) is a modification of the standard lymphocyte proliferation test that is used to identify persons who may have chronic beryllium disease. A major problem in the interpretation of BeLPT test results is outlying data values among the replicate well counts (≈7%). A log-linear regression model is used to describe the expected well counts for each set of Be exposure conditions, and the variance of the well counts is proportional to the square of the expected count. Two outlier-resistant regression methods are used to estimate stimulation indices (SIs) and the coefficient of variation. The first approach uses least absolute values (LAV) on the log of the well counts as a method for estimation; the second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of these resistant methods is that they make it unnecessary to identify and delete outliers. These two new methods for the statistical analysis of the BeLPT data and the current outlier rejection method are applied to 173 BeLPT assays. We strongly recommend the LAV method for routine analysis of the BeLPT. Outliers are important when trying to identify individuals with beryllium hypersensitivity, since these individuals typically have large positive SI values. A new method for identifying large SIs using combined data from the nonexposed group and the beryllium workers is proposed. The log(SI)s are described with a Gaussian distribution with location and scale parameters estimated using resistant methods. This approach is applied to the test data and results are compared with those obtained from the current method. 24 refs., 9 figs., 8 tabs.
NASA Astrophysics Data System (ADS)
Namba, Kazuteru; Ito, Hideo
This paper proposes a method providing efficient test compression. The proposed method is for robust testable path delay fault testing with a scan design facilitating two-pattern testing. In the proposed method, test data are interleaved before test compression using statistical coding. This paper also presents a test architecture for two-pattern testing using the proposed method. The proposed method is experimentally evaluated from several viewpoints, such as compression rates, test application time and area overhead. For robust testable path delay fault testing on 11 out of 20 ISCAS89 benchmark circuits, the proposed method provides better compression rates than existing methods such as Huffman coding, run-length coding, Golomb coding, frequency-directed run-length (FDR) coding and variable-length input Huffman coding (VIHC).
The association between size of test chamber and patch test reaction: a statistical reanalysis.
Gefeller, O; Pfahlberg, A; Geier, J; Brasch, J; Uter, W
1999-01-01
A recent study by Brasch and co-workers reported on the association between size of test chamber and patch test reaction. The investigators interpreted their data on 495 patients as having conclusively shown that standard preparations of fragrance mix, wool wax alcohols, Kathon CG and formaldehyde led to more positive test reactions when large Finn Chambers were used for patch testing. We have scrutinized the statistical aspects of this study and conclude that the authors should have adopted a statistical approach suitable for analysing dependent samples. After explaining the correct methodological way of dealing with square contingency tables formed by 2 dependent samples, we reanalyze the data accordingly and compare the results to those of the original paper. Based on this reanalysis, the conclusions are more complex: the reaction pattern for the fragrance mix and wool wax alcohols is significantly different between small and large test chambers; however, this discrepancy arises primarily from weak positive reactions with small chambers changing to strong positive reactions with large chambers. For formaldehyde, no relationship between chamber size and patch test reaction was found in the data, while for Kathon CG, the statistical evidence is borderline that more positive test reactions are yielded by large test chambers than by small ones. PMID:9928799
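For paired (dependent) dichotomized outcomes like these, where each patient is tested with both chamber sizes, the standard tool is McNemar's test on the discordant pairs of the square contingency table. A sketch with hypothetical counts, not the study's data:

```python
def mcnemar_chi2(b, c):
    """McNemar statistic (with continuity correction) for paired 2x2
    data: b = pairs positive only under condition 1 (e.g. large
    chambers), c = pairs positive only under condition 2 (small).
    Compare to 3.84 for significance at the 0.05 level (1 df)."""
    return (abs(b - c) - 1) ** 2 / (b + c)
```

The concordant pairs (same result with both chamber sizes) carry no information about the difference and drop out of the statistic, which is exactly why treating the two samples as independent is misleading.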
A statistical design for testing apomictic diversification through linkage analysis.
Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling
2014-03-01
The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels from genotypes to species. The design is based on two reciprocal crosses between two individuals each chosen from a hermaphrodite or monoecious species. A multinomial distribution likelihood is constructed by combining marker information from two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between two plant populations or species as the parents. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the utilization and usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis. PMID:23271157
Statistical tests of additional plate boundaries from plate motion inversions
NASA Technical Reports Server (NTRS)
Stein, S.; Gordon, R. G.
1984-01-01
The application of the F-ratio test, a standard statistical technique, to the results of relative plate motion inversions has been investigated. The method tests whether the improvement in fit of the model to the data resulting from the addition of another plate to the model is greater than that expected purely by chance. This approach appears to be useful in determining whether additional plate boundaries are justified. Previous results have been confirmed favoring separate North American and South American plates with a boundary located between 30 N and the equator. Using Chase's global relative motion data, it is shown that in addition to separate West African and Somalian plates, separate West Indian and Australian plates, with a best-fitting boundary between 70 E and 90 E, can be resolved. These results are generally consistent with the observation that the Indian plate's internal deformation extends somewhat westward of the Ninetyeast Ridge. The relative motion pole is similar to Minster and Jordan's and predicts the NW-SE compression observed in earthquake mechanisms near the Ninetyeast Ridge.
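The F-ratio test described above is a standard nested-model comparison: the misfit reduction bought by the extra plate's parameters is weighed against the residual misfit of the fuller model. A sketch with illustrative numbers, not the paper's inversion results:

```python
from scipy.stats import f

def f_ratio_test(ss_reduced, ss_full, df_reduced, df_full):
    """F test for whether the extra-parameter (extra-plate) model fits
    better than expected purely by chance."""
    num = (ss_reduced - ss_full) / (df_reduced - df_full)  # misfit explained per added parameter
    den = ss_full / df_full                                # residual misfit per remaining df
    F = num / den
    p = f.sf(F, df_reduced - df_full, df_full)
    return F, p

# hypothetical weighted sums of squared residuals: one-plate vs two-plate model
# (adding a plate adds 3 Euler-vector parameters, hence df drops by 3)
F, p = f_ratio_test(ss_reduced=120.0, ss_full=90.0, df_reduced=60, df_full=57)
print(F > 1, p < 0.05)
```

A small p means the fit improvement is too large to attribute to chance, supporting the additional plate boundary.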
A Unifying Framework for Teaching Nonparametric Statistical Tests
ERIC Educational Resources Information Center
Bargagliotti, Anna E.; Orrison, Michael E.
2014-01-01
Increased importance is being placed on statistics at both the K-12 and undergraduate level. Research divulging effective methods to teach specific statistical concepts is still widely sought after. In this paper, we focus on best practices for teaching topics in nonparametric statistics at the undergraduate level. To motivate the work, we…
Shaikh, Masood Ali
2016-04-01
Statistical tests help infer meaningful conclusions from studies conducted and data collected. This descriptive study analyzed the types of statistical tests used, and the statistical software utilized for analysis, reported in the original articles published in 2014 by the three Medline-indexed journals of Pakistan. Cumulatively, 466 original articles were published in 2014. The most frequently reported statistical tests in all three journals were bivariate parametric and non-parametric tests, i.e. those involving comparisons between two groups, e.g. the chi-square test, t-test, and various types of correlations. Cumulatively, 201 (43.1%) articles used these tests. SPSS was the primary choice for statistical analysis, as it was exclusively used in 374 (80.3%) original articles. There has been a substantial increase in the number of articles published, and in the sophistication of the statistical tests used, in the Pakistani Medline-indexed journals in 2014 compared to 2007. PMID:27122277
Development and testing of improved statistical wind power forecasting methods.
Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J.
2011-12-06
Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios
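For the uncertainty-forecasting part, quantile forecasts of the kind mentioned above are commonly trained and evaluated with the pinball (quantile) loss rather than the MSE criterion. This is a generic sketch of that loss, not the report's actual training criterion, and the numbers are hypothetical:

```python
def pinball_loss(y_true, y_pred, tau):
    """Average pinball loss for quantile level tau (0 < tau < 1).

    Under-prediction is penalized by tau, over-prediction by (1 - tau),
    so minimizing it yields the tau-quantile rather than the mean.
    """
    total = 0.0
    for y, q in zip(y_true, y_pred):
        total += tau * (y - q) if y >= q else (1 - tau) * (q - y)
    return total / len(y_true)

# hypothetical wind power observations vs a constant 0.9-quantile forecast (MW)
obs = [10.0, 12.0, 8.0]
q90 = [13.0, 13.0, 13.0]
print(pinball_loss(obs, q90, 0.9))
```

Evaluating the loss over a grid of tau values scores a full probabilistic forecast (a set of quantiles) instead of a single point forecast.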
Statistical tests for detecting movements in repeatedly measured geodetic networks
NASA Astrophysics Data System (ADS)
Niemeier, W.
1981-01-01
Geodetic networks with two or more measuring epochs can be found rather frequently, for example in connection with the investigation of recent crustal movements, in the field of monitoring problems in engineering surveying or in ordinary control networks. For these repeatedly measured networks the so-called congruency problem has to be solved, i.e. possible changes in the geometry of the net have to be found. In practice, distortions of benchmarks and an extension or densification of the net (differences in the 1st-order design) and/or changes in the measuring elements or techniques (differences in the 2nd-order design) can frequently be found between different epochs. In this paper a rigorous mathematical procedure is presented for this congruency analysis of repeatedly measured networks, taking into account these above-mentioned differences in the network design. As a first step, statistical tests are carried out to detect the epochs with departures from congruency. As a second step the individual points with significant movements within these critical epochs can be identified. A numerical example for the analysis of a monitoring network with 9 epochs is given.
Testing the Limits of Statistical Learning for Word Segmentation
Johnson, Elizabeth K.; Tyler, Michael D.
2009-01-01
Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested. PMID:20136930
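The transitional-probability cue studied above is simply TP(a→b) = count(ab) / count(a) computed over the syllable stream; word-internal transitions have high TP and word boundaries low TP. A toy sketch (the syllable strings are illustrative, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(a -> b) = count of bigram ab / count of a as a first element."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# toy stream: the "word" go-la-bu twice, then tu-pi-ro once
stream = ["go", "la", "bu"] * 2 + ["tu", "pi", "ro"]
tps = transitional_probabilities(stream)
print(tps[("go", "la")], tps[("bu", "go")])  # word-internal vs boundary transition
```

The word-internal transition (go→la) has TP 1.0 while the boundary transition (bu→go) is lower, which is the statistical contrast a learner must detect to segment words.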
Statistical Measures, Hypotheses, and Tests in Applied Research
ERIC Educational Resources Information Center
Saville, David J.; Rowarth, Jacqueline S.
2008-01-01
This article reviews and discusses the use of statistical concepts in a natural resources and life sciences journal on the basis of a census of the articles published in a recent issue of the "Agronomy Journal" and presents a flow chart and a graph that display the inter-relationships between the most commonly used statistical terms. It also…
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
TESTING THE DARK ENERGY WITH GRAVITATIONAL LENSING STATISTICS
Cao Shuo; Zhu Zonghong; Covone, Giovanni
2012-08-10
We study the redshift distribution of two samples of early-type gravitational lenses, extracted from a larger collection of 122 systems, to constrain the cosmological constant in the ΛCDM model and the parameters of a set of alternative dark energy models (XCDM, Dvali-Gabadadze-Porrati, and Ricci dark energy models), in a spatially flat universe. The likelihood is maximized for Ω_Λ = 0.70 ± 0.09 when considering the sample excluding the Sloan Lens ACS systems (known to be biased toward large image-separation lenses) and no evolution, and Ω_Λ = 0.81 ± 0.05 when limiting to gravitational lenses with image separation Δθ > 2″ and no evolution. In both cases, results accounting for galaxy evolution are consistent within 1σ. The present test supports the accelerated expansion, by excluding the null hypothesis (i.e., Ω_Λ = 0) at more than 4σ, regardless of the chosen sample and assumptions on the galaxy evolution. A comparison between competing world models is performed by means of the Bayesian information criterion. This shows that the simplest cosmological constant model, which has only one free parameter, is still preferred by the available data on the redshift distribution of gravitational lenses. We perform an analysis of the possible systematic effects, finding that the systematic errors due to sample incompleteness, galaxy evolution, and model uncertainties approximately equal the statistical errors with present-day data. We find that the largest sources of systematic error are the dynamical normalization and the high-velocity cutoff factor, followed by the faint-end slope of the velocity dispersion function.
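The Bayesian information criterion used for the model comparison penalizes each extra free parameter by ln n, so a more flexible dark energy model must improve the likelihood enough to pay for its parameters. A sketch with hypothetical log-likelihoods, not the paper's fitted values:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian information criterion: -2 ln L + k ln n; lower is preferred."""
    return -2.0 * log_likelihood + k * math.log(n)

# hypothetical fits: a 1-parameter LambdaCDM-like model vs a 2-parameter
# XCDM-like model on n = 122 lens systems (only n is taken from the abstract)
n = 122
bic_lcdm = bic(log_likelihood=-50.0, k=1, n=n)
bic_xcdm = bic(log_likelihood=-49.5, k=2, n=n)
print(bic_lcdm < bic_xcdm)  # the simpler model wins: +0.5 in ln L < (ln 122)/2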
Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.
ERIC Educational Resources Information Center
Breunig, Nancy A.
Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic
ERIC Educational Resources Information Center
Satorra, Albert; Bentler, Peter M.
2010-01-01
A scaled difference test statistic T[tilde][subscript d] that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T[tilde][subscript d] is asymptotically equivalent to the scaled difference test statistic T[bar][subscript…
Stork, LeAnna M.; Gennings, Chris; Carchman, Richard; Carter, Jr., Walter H.; Pounds, Joel G.; Mumtaz, Moiz
2006-12-01
Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice mixture components in the low-dose region, particularly subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence testing literature, where the null hypothesis of interaction is rejected in favor of the alternative hypothesis of additivity when the data support the claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework that assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction. In that scenario, failure to reject may be due to lack of statistical power, making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
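The equivalence-testing logic above (reject interaction in favor of additivity only when the departure lies within prespecified margins) can be sketched with the standard two one-sided tests (TOST) procedure; all numbers below are hypothetical, and the margins stand in for the expert-chosen additivity margins:

```python
from scipy.stats import t

def tost(mean_diff, se, df, margin):
    """Two one-sided tests: H1 is |true departure from additivity| < margin.

    Returns the larger of the two one-sided p values; rejecting interaction
    requires BOTH one-sided tests to reject, i.e. max(p) < alpha.
    """
    t_lower = (mean_diff + margin) / se   # tests departure > -margin
    t_upper = (mean_diff - margin) / se   # tests departure < +margin
    p_lower = t.sf(t_lower, df)
    p_upper = t.cdf(t_upper, df)
    return max(p_lower, p_upper)

# hypothetical: observed departure 0.5 units, SE 1.0, margins +/-3 units, 20 df
p = tost(0.5, 1.0, 20, 3.0)
print(p < 0.05)  # claim additivity, with the false positive rate controlled
```

Note the burden of proof is reversed relative to the usual framework: additivity is concluded only with positive evidence, never from a mere failure to detect interaction.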
ERIC Educational Resources Information Center
Sullivan, Jeremy R.
2001-01-01
Summarizes the post-1994 literature in psychology and education regarding statistical significance testing, emphasizing limitations and defenses of statistical testing and alternatives or supplements to statistical significance testing. (SLD)
Statistical algorithms for a comprehensive test ban treaty discrimination framework
Foote, N.D.; Anderson, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.; Hagedorn, D.N.
1996-10-01
Seismic discrimination is the process of identifying a candidate seismic event as an earthquake or explosion using information from seismic waveform features (seismic discriminants). In the CTBT setting, low energy seismic activity must be detected and identified. A defensible CTBT discrimination decision requires an understanding of false-negative (declaring an event to be an earthquake given it is an explosion) and false-positive (declaring an event to be an explosion given it is an earthquake) rates. These rates are derived from a statistical discrimination framework. A discrimination framework can be as simple as a single statistical algorithm or it can be a mathematical construct that integrates many different types of statistical algorithms and CTBT technologies. In either case, the result is the identification of an event and the numerical assessment of the accuracy of an identification, that is, false-negative and false-positive rates. In Anderson et al., eight statistical discrimination algorithms are evaluated relative to their ability to give results that effectively contribute to a decision process and to be interpretable with physical (seismic) theory. These algorithms can be discrimination frameworks individually or components of a larger framework. The eight algorithms are linear discrimination (LDA), quadratic discrimination (QDA), variably regularized discrimination (VRDA), flexible discrimination (FDA), logistic discrimination, K-th nearest neighbor (KNN), kernel discrimination, and classification and regression trees (CART). In this report, the performance of these eight algorithms, as applied to regional seismic data, is documented. Based on the findings in Anderson et al. and this analysis, CART is an appropriate algorithm for an automated CTBT setting.
Statistical Revisions in the Washington Pre-College Testing Program.
ERIC Educational Resources Information Center
Beanblossom, Gary F.; And Others
The Washington Pre-College (WPC) program decided, in fall 1967, to inaugurate in April 1968 the testing of high school students during the spring of their junior year. The advantages of this shift from senior year testing were to provide guidance data for earlier, more extensive use in high school and to make these data available to colleges at…
Statistics of sampling for microbiological testing of foodborne pathogens
Technology Transfer Automated Retrieval System (TEKTRAN)
Despite the many recent advances in protocols for testing for pathogens in foods, a number of challenges still exist. For example, the microbiological safety of food cannot be completely ensured by testing because microorganisms are not evenly distributed throughout the food. Therefore, since it i...
Estimating Statistical Power When Making Adjustments for Multiple Tests
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…
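A common adjustment in this multiple-outcomes setting is the Benjamini-Hochberg step-up procedure, which controls the false discovery rate across the family of tests (a smaller power cost than Bonferroni's family-wise control). A minimal sketch with hypothetical p-values:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha.

    Step-up rule: find the largest rank k with p_(k) <= (k/m) * alpha and
    reject the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# hypothetical p-values for one intervention tested on five outcomes
pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
print(benjamini_hochberg(pvals))
```

Unadjusted, four of the five hypothetical outcomes would look significant at 0.05; the step-up rule retains only the two that survive the rank-scaled thresholds, which is the power trade-off such studies must plan for.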
STATISTICAL ANALYSIS OF 40 CFR 60 COMPLIANCE TEST AUDIT DATA
The U.S. Environmental Protection Agency (EPA) provides audit materials to organizations conducting compliance tests using EPA Test Methods 6 (SO2), 7 (NOx), 18 (organics by GC/FID), 25 (organics as ppm C), and 26 (HCl). These audit samples must be analyzed and the results reporte...
STATISTICAL ANALYSIS OF STATIONARY SOURCE COMPLIANCE TEST AUDIT RESULTS
The U.S. Environmental Protection Agency (EPA) provides audit materials to organizations conducting compliance tests using EPA Test Methods 6 (SO2), 7 (NOx), 18 (organics by GC/FID), 25 (organics as ppm C), 106 (vinyl chloride) and 26 (HCl), and those organizations conducting trial...
Evaluation of a New Mean Scaled and Moment Adjusted Test Statistic for SEM
ERIC Educational Resources Information Center
Tong, Xiaoxiao; Bentler, Peter M.
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and 2 well-known robust test…
ERIC Educational Resources Information Center
Monterde-i-Bort, Hector; Frias-Navarro, Dolores; Pascual-Llobell, Juan
2010-01-01
The empirical study we present here deals with a pedagogical issue that has not been thoroughly explored up until now in our field. Previous empirical studies in other sectors have identified the opinions of researchers about this topic, showing that completely unacceptable interpretations have been made of significance tests and other statistical…
NASA Technical Reports Server (NTRS)
Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.
1983-01-01
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.
ERIC Educational Resources Information Center
Simpson, Robert G.
1981-01-01
Occasionally, differences in test scores seem to indicate that a student performs much better in one reading area than in another when, in reality, the differences may not be statistically significant. The author presents a table in which statistically significant differences between Woodcock test standard scores are identified. (Author)
Detection of Invalid Test Scores: The Usefulness of Simple Nonparametric Statistics
ERIC Educational Resources Information Center
Tendeiro, Jorge N.; Meijer, Rob R.
2014-01-01
In recent guidelines for fair educational testing it is advised to check the validity of individual test scores through the use of person-fit statistics. For practitioners it is unclear on the basis of the existing literature which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…
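One of the simplest nonparametric person-fit statistics of the kind surveyed here counts Guttman errors: item pairs where a harder item is answered correctly while an easier one is answered incorrectly. A sketch (the item ordering and response pattern are hypothetical):

```python
def guttman_errors(responses, difficulty_order):
    """Count Guttman errors for one examinee.

    responses: dict item -> 0/1 score; difficulty_order: items listed from
    easiest to hardest (normally estimated from the group's proportions correct).
    """
    errors = 0
    for i, easy in enumerate(difficulty_order):
        for hard in difficulty_order[i + 1:]:
            if responses[easy] == 0 and responses[hard] == 1:
                errors += 1
    return errors

# hypothetical 5-item test, items ordered easiest -> hardest
resp = {"a": 1, "b": 0, "c": 1, "d": 0, "e": 1}
print(guttman_errors(resp, ["a", "b", "c", "d", "e"]))
```

A perfectly Guttman-consistent pattern (all successes on the easiest items) scores 0; unusually high counts flag response vectors whose validity merits a closer look.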
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
ERIC Educational Resources Information Center
Ozturk, Elif
2012-01-01
The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
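The core of a "what if" analysis is holding the effect size fixed and varying only n, which makes visible that a p value is a joint function of effect size and sample size. A generic sketch for a one-sample t test (an illustration of the idea, not the paper's Excel or R materials):

```python
from scipy.stats import t

def p_from_effect(d, n):
    """Two-sided p value for a one-sample t test given a fixed standardized
    effect size d (Cohen's d) and sample size n: t = d * sqrt(n)."""
    t_stat = d * n ** 0.5
    return 2 * t.sf(abs(t_stat), df=n - 1)

# the same small effect (d = 0.3) at different hypothetical sample sizes
for n in (10, 30, 100):
    print(n, round(p_from_effect(0.3, n), 3))
```

The identical effect is "nonsignificant" at n = 10 and "significant" at n = 100, which is exactly the lesson such analyses are meant to teach.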
ERIC Educational Resources Information Center
Norris, John M.
2015-01-01
Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…
The Importance of Invariance Procedures as against Tests of Statistical Significance.
ERIC Educational Resources Information Center
Fish, Larry
A growing controversy surrounds the strict interpretation of statistical significance tests in social research. Statistical significance tests fail in particular to provide estimates for the stability of research results. Methods that do provide such estimates are known as invariance or cross-validation procedures. Invariance analysis is largely…
A Review of Post-1994 Literature on Whether Statistical Significance Tests Should Be Banned.
ERIC Educational Resources Information Center
Sullivan, Jeremy R.
This paper summarizes the literature regarding statistical significance testing with an emphasis on: (1) the post-1994 literature in various disciplines; (2) alternatives to statistical significance testing; and (3) literature exploring why researchers have demonstrably failed to be influenced by the 1994 American Psychological Association…
The Michigan Alcoholism Screening Test (MAST): A Statistical Validation Analysis
ERIC Educational Resources Information Center
Laux, John M.; Newman, Isadore; Brown, Russ
2004-01-01
This study extends the Michigan Alcoholism Screening Test (MAST; M. L. Selzer, 1971) literature base by examining 4 issues related to the validity of the MAST scores. Specifically, the authors examine the validity of the MAST scores in light of the presence of impression management, participant demographic variables, and item endorsement…
Testing of hypotheses about altitude decompression sickness by statistical analyses
NASA Technical Reports Server (NTRS)
Van Liew, H. D.; Burkard, M. E.; Conkin, J.; Powell, M. R. (Principal Investigator)
1996-01-01
This communication extends a statistical analysis of forced-descent decompression sickness at altitude in exercising subjects (J Appl Physiol 1994; 76:2726-2734) with a data subset having an additional explanatory variable, rate of ascent. The original explanatory variables for risk-function analysis were environmental pressure of the altitude, duration of exposure, and duration of pure-O2 breathing before exposure; the best fit was consistent with the idea that instantaneous risk increases linearly as altitude exposure continues. Use of the new explanatory variable improved the fit of the smaller data subset, as indicated by log likelihood. Also, with ascent rate accounted for, replacement of the term for linear accrual of instantaneous risk by a term for rise and then decay made a highly significant improvement upon the original model (log likelihood increased by 37 log units). The authors conclude that a more representative data set and removal of the variability attributable to ascent rate allowed the rise-and-decay mechanism, which is expected from theory and observations, to become manifest.
Statistical analysis of shard and canister glass correlation test
Pulsipher, B.
1990-12-01
The vitrification facility at West Valley, New York will be used to incorporate nuclear waste into a vitrified waste form. Waste Acceptance Preliminary Specifications (WAPS) will be used to determine the acceptability of the waste form product. These specifications require chemical characterization of the waste form produced. West Valley Nuclear Services (WVNS) intends to characterize canister contents by obtaining shard samples from the top of the canisters prior to final sealing. A study was conducted to determine whether shard samples taken from the top of canisters filled with vitrified nuclear waste could be considered representative and therefore used to characterize the elemental composition of the entire canister contents. Three canisters produced during the SF-12 melter run conducted at WVNS were thoroughly sampled by core drilling at several axial and radial locations and by obtaining shard samples from the top of the canisters. Chemical analyses were performed and the resulting data were statistically analyzed by Pacific Northwest Laboratory (PNL). If one can assume that the process controls employed by WVNS during the SF-12 run are representative of those to be employed during future melter runs, shard samples can be used to characterize the canister contents. However, if batch-to-batch variations cannot be controlled to the acceptable levels observed from the SF-12 data, the representativeness of shard samples will be in question. The estimates of process and within-canister variations provided herein will prove valuable in determining the required frequency and number of shard samples to meet waste form qualification objectives.
A Statistical Test of Uniformity in Solar Cycle Indices
NASA Technical Reports Server (NTRS)
Hathaway, David H.
2012-01-01
Several indices are used to characterize the solar activity cycle. Key among these are: the International Sunspot Number, the Group Sunspot Number, Sunspot Area, and 10.7 cm Radio Flux. A valuable aspect of these indices is the length of the record -- many decades and many (different) 11-year cycles. However, this valuable length-of-record attribute has an inherent problem in that it requires many different observers and observing systems. This can lead to non-uniformity in the datasets and subsequent erroneous conclusions about solar cycle behavior. The sunspot numbers are obtained by counting sunspot groups and individual sunspots on a daily basis. This suggests that the day-to-day and month-to-month variations in these numbers should follow Poisson Statistics and be proportional to the square-root of the sunspot numbers themselves. Examining the historical records of these indices indicates that this is indeed the case - even with Sunspot Area and 10.7 cm Radio Flux. The ratios of the RMS variations to the square-root of the indices themselves are relatively constant with little variation over the phase of each solar cycle or from small to large solar cycles. There are, however, important step-like changes in these ratios associated with changes in observer and/or observer system. Here we show how these variations can be used to construct more uniform datasets.
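The Poisson argument above says the RMS of count fluctuations should be close to the square root of the mean count, so the ratio RMS/sqrt(mean) should sit near 1 regardless of cycle phase. A quick simulation sketch (λ = 100 is a hypothetical mean count, not an actual sunspot record):

```python
import math
import random

random.seed(42)

def poisson_sample(lam):
    """Knuth's multiplication algorithm for one Poisson draw (fine for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam = 100.0  # hypothetical mean daily count
counts = [poisson_sample(lam) for _ in range(2000)]
mean = sum(counts) / len(counts)
rms = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5
print(round(rms / math.sqrt(mean), 2))  # should be close to 1 for Poisson counts
```

A sustained step away from ratio 1 in a real index, as the abstract notes, points to a change of observer or observing system rather than a change in the Sun.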
Spatial factors affecting statistical power in testing marine fauna displacement.
Pérez Lapeña, B; Wijnberg, K M; Stein, A; Hulscher, S J M H
2011-10-01
Impacts of offshore wind farms on marine fauna are largely unknown. Therefore, one commonly adheres to the precautionary principle, which states that one shall take action to avoid potentially damaging impacts on marine ecosystems, even when full scientific certainty is lacking. We implement this principle by means of a statistical power analysis including spatial factors. Implementation is based on geostatistical simulations, accommodating zero-inflation in species data. We investigate scenarios in which an impact assessment still has to be carried out. Our results show that the environmental conditions at the time of the survey are the most influential factor on power, followed by survey effort and species abundance in the reference situation. Spatial dependence in species numbers at local scales affects power, but its effect is smaller for the scenarios investigated. Our findings can be used to improve the effectiveness of the economic investment in monitoring surveys. In addition, unnecessary extra survey effort, and the related costs, can be avoided when spatial dependence in species abundance is present and no improvement in power is achieved. PMID:22073657
Nonparametric statistical tests for the continuous data: the basic concept and the practical use.
Nahm, Francis Sahngun
2016-02-01
Conventional statistical tests are usually called parametric tests. Parametric tests are used more frequently than nonparametric tests in many medical articles because most medical researchers are familiar with them and statistical software packages strongly support them. Parametric tests require an important assumption, the assumption of normality, which means that the distribution of sample means is normally distributed. However, parametric tests can be misleading when this assumption is not satisfied. In this circumstance, nonparametric tests are the alternative methods available, because they do not require the normality assumption. Nonparametric tests are statistical methods based on signs and ranks. In this article, we discuss the basic concepts and practical use of nonparametric tests as a guide to their proper use. PMID:26885295
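The rank-based logic can be illustrated with the Mann-Whitney U statistic, computed here directly from pairwise comparisons rather than ranks (the two definitions are equivalent); the scores are hypothetical:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x: the number of (x_i, y_j) pairs with
    x_i > y_j, counting ties as 1/2. No normality assumption is needed."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# hypothetical pain scores in two treatment groups
a = [3, 5, 7, 9]
b = [1, 2, 4, 6]
print(mann_whitney_u(a, b))  # out of len(a) * len(b) = 16 possible pairs
```

U near half the number of pairs indicates thoroughly interleaved samples; U near 0 or near the maximum indicates a systematic shift, and exact or large-sample p-values follow from the distribution of U under the null.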
Development and performances of a high statistics PMT test facility
NASA Astrophysics Data System (ADS)
Maximiliano Mollo, Carlos
2016-04-01
For almost a century, photomultipliers have been the main sensors for photon detection in nuclear and astroparticle physics experiments. In recent years the search for cosmic neutrinos gave birth to enormous experiments (Antares, Kamiokande, Super-Kamiokande, etc.) and even kilometre-scale experiments such as IceCube and the future KM3NeT. A very large volume neutrino telescope like KM3NeT requires several hundred thousand photomultipliers, and the performance of the telescope depends directly on the performance of each PMT. For this reason, it is mandatory to measure the characteristics of each single sensor. The characterization of a PMT normally requires more than 8 hours, mostly due to the darkening step, so it is not feasible to measure the parameters of every PMT of a neutrino telescope without a system able to test more than one PMT simultaneously. For this application, we have designed, developed and realized a system able to measure the main characteristics of 62 3-inch photomultipliers simultaneously, allowing two measurement sessions per day. In this work, we describe the design constraints and how they have been satisfied. Finally, we show the performance of the system and the first results from the few thousand PMTs tested so far.
1993-02-01
In 1984, 99% of abortions conducted in Bombay, India, were of female fetuses. In 1986-87, 30,000-50,000 female fetuses were aborted in India. In 1987-88, 7 Delhi clinics conducted 13,000 sex determination tests. Thus, discrimination against females begins before birth in India. Some states (Maharashtra, Goa, and Gujarat) have drafted legislation to prevent the use of prenatal diagnostic tests (e.g., ultrasonography) for sex determination purposes. Families make decisions about an infant's nutrition based on the infant's sex, so it is not surprising to see a higher incidence of morbidity among girls than boys (e.g., for respiratory infections in 1985, 55.5% vs. 27.3%). Consequently, girls are more likely to die than boys. Even though vasectomy is simpler and safer than tubectomy, the government promotes female sterilizations. The percentage of all sterilizations that are tubectomies has increased steadily from 84% to 94% (1986-90). Family planning programs focus on female contraceptive methods, despite the higher incidence of adverse health effects from female methods (e.g., the IUD causes pain and heavy bleeding). Some women's advocates believe the effects to be so great that India should ban injectable contraceptives. The maternal mortality rate is quite high (460/100,000 live births), equaling a lifetime risk of 1:18 of a pregnancy-related death. 70% of these maternal deaths are preventable. Leading causes of maternal deaths in India are anemia, hemorrhage, eclampsia, sepsis, and abortion. Most pregnant women do not receive prenatal care. Untrained personnel attend about 70% of deliveries in rural areas and 29% in urban areas. Appropriate health services and other interventions would prevent the higher age-specific death rates for females between 0 and 35 years old. Even though the government does provide maternal and child health services, it needs to stop decreasing, and start increasing, the resources allocated to health. PMID:12286355
Statistic Tests Aided Multi-Source DEM Fusion
NASA Astrophysics Data System (ADS)
Fu, C. Y.; Tsay, J. R.
2016-06-01
Since the land surface changes both naturally and through human activity, DEMs have to be updated continually so that applications can rely on the latest DEM. However, the cost of wide-area DEM production is high. DEMs that cover the same area but differ in quality, grid size, generation time, or production method are called multi-source DEMs, and fusing them offers a low-cost route to DEM updating. The DEM coverage first has to be classified according to slope and visibility, because the precision of DEM grid points differs across areas with different slopes and visibilities. Next, a difference DEM (dDEM) is computed by subtracting one DEM from the other. It is assumed that a dDEM containing only random error obeys a normal distribution, so Student's t-test is applied for blunder detection, producing three kinds of rejected grid points. The first kind are blunders and must be eliminated. The second are points in change areas, where the latest data are taken as the fusion result. The third, grid points rejected by Type I error, are correct data and must be retained for fusion. The experimental results show that using DEMs with terrain classification yields better blunder detection. A proper setting of the significance level (α) can detect real blunders without creating too many Type I errors. Weighted averaging is chosen as the DEM fusion algorithm, with weights defined from the a priori precisions estimated by our national DEM production guideline. Fisher's test is applied to verify that the a priori precisions correspond to the RMSEs of the blunder detection result.
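A minimal sketch of the blunder-detection step, under the abstract's assumption that a change-free dDEM contains only zero-mean normal random error. The critical value 1.96 (a large-sample normal approximation of the t quantile at α = 0.05) and the toy grids are illustrative assumptions, not values from the paper:

```python
def detect_blunders(dem_a, dem_b, crit=1.96):
    """Flag grid points whose standardized DEM difference exceeds the critical value."""
    diffs = [a - b for a, b in zip(dem_a, dem_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    if var == 0.0:
        return []                      # identical grids: nothing to flag
    sd = var ** 0.5
    return [i for i, d in enumerate(diffs) if abs(d - mean) / sd > crit]
```

In the paper's pipeline this test would be run separately per terrain class, since the error variance differs with slope and visibility.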
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
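A sketch of the core idea (distance statistic plus bootstrap significance level) using the Euclidean distance between normalized histograms. The resampling scheme below, drawing counts from the pooled histogram under the null, is an assumption for illustration, not the paper's exact procedure:

```python
import random

def euclidean(h1, h2):
    return sum((a - b) ** 2 for a, b in zip(h1, h2)) ** 0.5

def normalize(h):
    total = float(sum(h))
    return [c / total for c in h]

def bootstrap_pvalue(h1, h2, n_boot=999, seed=0):
    """Bootstrap significance level for the distance between two count histograms."""
    rng = random.Random(seed)
    observed = euclidean(normalize(h1), normalize(h2))
    pooled = [(a + b) / 2.0 for a, b in zip(h1, h2)]  # null: common histogram
    weights = normalize(pooled)
    n1, n2, bins = sum(h1), sum(h2), len(h1)

    def draw(n):
        counts = [0] * bins
        for _ in range(int(n)):
            x, acc = rng.random(), 0.0
            for i, w in enumerate(weights):
                acc += w
                if x < acc:
                    counts[i] += 1
                    break
        return counts

    hits = sum(
        euclidean(normalize(draw(n1)), normalize(draw(n2))) >= observed
        for _ in range(n_boot)
    )
    return (hits + 1) / (n_boot + 1)
```

Swapping `euclidean` for a Jeffries-Matusita or Kuiper distance changes only the statistic, not the bootstrap machinery, which is the modularity the paper exploits.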
Evaluation of heart failure biomarker tests: a survey of statistical considerations.
De, Arkendra; Meier, Kristen; Tang, Rong; Li, Meijuan; Gwise, Thomas; Gomatam, Shanti; Pennello, Gene
2013-08-01
Biomarkers assessing cardiovascular function can encompass a wide range of biochemical or physiological measurements. Medical tests that measure biomarkers are typically evaluated for measurement validation and clinical performance in the context of their intended use. General statistical principles for the evaluation of medical tests are discussed in this paper in the context of heart failure. Statistical aspects of study design and analysis to be considered while assessing the quality of measurements and the clinical performance of tests are highlighted. A discussion of statistical considerations for specific clinical uses is also provided. The remarks in this paper mainly focus on methods and considerations for statistical evaluation of medical tests from the perspective of bias and precision. With such an evaluation of performance, healthcare professionals could have information that leads to a better understanding on the strengths and limitations of tests related to heart failure. PMID:23670231
New Statistics for Testing Differential Expression of Pathways from Microarray Data
NASA Astrophysics Data System (ADS)
Siu, Hoicheong; Dong, Hua; Jin, Li; Xiong, Momiao
Exploring biological meaning from microarray data is very important but remains a great challenge. Here, we developed three new statistics: linear combination test, quadratic test and de-correlation test to identify differentially expressed pathways from gene expression profile. We apply our statistics to two rheumatoid arthritis datasets. Notably, our results reveal three significant pathways and 275 genes in common in two datasets. The pathways we found are meaningful to uncover the disease mechanisms of rheumatoid arthritis, which implies that our statistics are a powerful tool in functional analysis of gene expression data.
Mnemonic Aids during Tests: Worthless Frivolity or Effective Tool in Statistics Education?
ERIC Educational Resources Information Center
Larwin, Karen H.; Larwin, David A.; Gorman, Jennifer
2012-01-01
Researchers have explored many pedagogical approaches in an effort to assist students in finding understanding and comfort in required statistics courses. This study investigates the impact of mnemonic aids used during tests on students' statistics course performance in particular. In addition, the present study explores several hypotheses that…
ERIC Educational Resources Information Center
LeMire, Steven D.
2010-01-01
This paper proposes an argument framework for the teaching of null hypothesis statistical testing and its application in support of research. Elements of the Toulmin (1958) model of argument are used to illustrate the use of p values and Type I and Type II error rates in support of claims about statistical parameters and subject matter research…
Evaluation of Small-Sample Statistics that Test Whether Variables Measure the Same Trait.
ERIC Educational Resources Information Center
Rasmussen, Jeffrey Lee
1988-01-01
The performance was studied of five small-sample statistics--by F. M. Lord, W. Kristof, Q. McNemar, R. A. Forsyth and L. S. Feldt, and J. P. Braden--that test whether two variables measure the same trait except for measurement error. Effects of non-normality were investigated. The McNemar statistic was most powerful. (TJH)
Some statistical and regulatory issues in the evaluation of genetic and genomic tests.
Campbell, Gregory
2004-08-01
The genomics revolution is reverberating throughout the worlds of pharmaceutical drugs, genetic testing and statistical science. This revolution, which uses single nucleotide polymorphisms (SNPs) and gene expression technology, including cDNA and oligonucleotide microarrays, for a range of tests from home-brews to high-complexity lab kits, can allow the selection or exclusion of patients for therapy (responders or poor metabolizers). The wide variety of US regulatory mechanisms for these tests is discussed. Clinical studies to evaluate the performance of such tests need to follow statistical principles for sound diagnostic test design. Statistical methodology to evaluate such studies can be wide ranging, including receiver operating characteristic (ROC) methodology, logistic regression, discriminant analysis, multiple comparison procedures, resampling, Bayesian hierarchical modeling, and recursive partitioning, as well as exploratory techniques such as data mining. Recent examples of approved genetic tests are discussed. PMID:15468751
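Of the methods listed, ROC analysis reduces to a simple rank computation: the area under the curve (AUC) equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, with ties counted as half. A generic sketch (the scores are made up):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the rank-sum identity; ties between groups count as 0.5."""
    n_pairs = len(pos_scores) * len(neg_scores)
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / n_pairs
```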
Comment on the asymptotics of a distribution-free goodness of fit test statistic.
Browne, Michael W; Shapiro, Alexander
2015-03-01
In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them. PMID:24306556
Fidler, Fiona; Burgman, Mark A; Cumming, Geoff; Buttrose, Robert; Thomason, Neil
2006-10-01
Over the last decade, criticisms of null-hypothesis significance testing have grown dramatically, and several alternative practices, such as confidence intervals, information theoretic, and Bayesian methods, have been advocated. Have these calls for change had an impact on the statistical reporting practices in conservation biology? In 2000 and 2001, 92% of sampled articles in Conservation Biology and Biological Conservation reported results of null-hypothesis tests. In 2005 this figure dropped to 78%. There were corresponding increases in the use of confidence intervals, information theoretic, and Bayesian techniques. Of those articles reporting null-hypothesis testing--which still easily constitute the majority--very few report statistical power (8%) and many misinterpret statistical nonsignificance as evidence for no effect (63%). Overall, results of our survey show some improvements in statistical practice, but further efforts are clearly required to move the discipline toward improved practices. PMID:17002771
An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models
ERIC Educational Resources Information Center
Liu, Yanlou; Tian, Wei; Xin, Tao
2016-01-01
The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…
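The RMSEA referenced here has a simple closed form given a fit statistic, its degrees of freedom, and the sample size N. This is the standard SEM formula, not the article's M2-specific RMSEA[subscript 2] variant:

```python
def rmsea(chi2, df, n):
    """Root mean square error of approximation, floored at zero when chi2 < df."""
    return max(0.0, (chi2 - df) / (df * (n - 1))) ** 0.5
```

Values near zero indicate close approximate fit; the max() floor handles models that fit better than expected by chance.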
Evaluating Two Models of Collaborative Tests in an Online Introductory Statistics Course
ERIC Educational Resources Information Center
Björnsdóttir, Auðbjörg; Garfield, Joan; Everson, Michelle
2015-01-01
This study explored the use of two different types of collaborative tests in an online introductory statistics course. A study was designed and carried out to investigate three research questions: (1) What is the difference in students' learning between using consensus and non-consensus collaborative tests in the online environment?, (2) What is…
What Are Null Hypotheses? The Reasoning Linking Scientific and Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Lawson, Anton E.
2008-01-01
We should dispense with use of the confusing term "null hypothesis" in educational research reports. To explain why the term should be dropped, the nature of, and relationship between, scientific and statistical hypothesis testing is clarified by explication of (a) the scientific reasoning used by Gregor Mendel in testing specific…
ERIC Educational Resources Information Center
White, Desley
2015-01-01
Two practical activities are described, which aim to support critical thinking about statistics as they concern multiple outcomes testing. Formulae are presented in Microsoft Excel spreadsheets, which are used to calculate the inflation of error associated with the quantity of tests performed. This is followed by a decision-making exercise, where…
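The inflation-of-error calculation such spreadsheets perform is, for independent tests, the standard familywise error formula; a one-line Python equivalent (the α and k values are illustrative):

```python
def familywise_error(alpha, k):
    """P(at least one Type I error) across k independent tests, each at level alpha."""
    return 1.0 - (1.0 - alpha) ** k
```

Ten tests at α = 0.05 already push the chance of at least one false positive past 40%, which is the point the exercise drives home.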
A Statistical Framework for Assessment Using the ETS Major Field Test in Business
ERIC Educational Resources Information Center
Settlage, Daniel Murray; Settlage, Latisha Ann
2011-01-01
The Major Field Test in Business (MFT-B) is a widely used end-of-program assessment tool; however, several challenges arise when using the test in this capacity. Changing student demographics and the lack of a statistical framework are two of the most vexing issues confronting educators when using the MFT-B for programmatic assessment. The authors…
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
ERIC Educational Resources Information Center
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's (1984) asymptotically distribution-free method and Satorra Bentler's…
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
Statistical Techniques for Criterion-Referenced Tests. Final Report. October, 1976-October, 1977.
ERIC Educational Resources Information Center
Wilcox, Rand R.
Three statistical problems related to criterion-referenced testing are investigated: estimation of the likelihood of a false-positive or false-negative decision with a mastery test, estimation of true scores in the Compound Binomial Error Model, and comparison of the examinees to a control. Two methods for estimating the likelihood of…
Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, William F.; Borucki, William J.; Batalha, Natalie M.; Bryson, Steve; Caldwell, Douglas A.; Ciardi, David R.
2012-01-01
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through Quarter six (Q6) of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.
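The first step of such an analysis, fitting a linear ephemeris t_k = t0 + k·P by least squares and inspecting the observed-minus-calculated (O-C) residuals, can be sketched as follows. The toy transit times are invented, not Kepler data:

```python
def linear_ephemeris(epochs, times):
    """Least-squares t0 and period P for the model t_k = t0 + k * P."""
    n = len(epochs)
    mean_k = sum(epochs) / n
    mean_t = sum(times) / n
    cov = sum((k - mean_k) * (t - mean_t) for k, t in zip(epochs, times))
    var = sum((k - mean_k) ** 2 for k in epochs)
    period = cov / var
    t0 = mean_t - period * mean_k
    return t0, period

def oc_residuals(epochs, times):
    """Deviations of observed transit times from the best-fit linear ephemeris."""
    t0, period = linear_ephemeris(epochs, times)
    return [t - (t0 + k * period) for k, t in zip(epochs, times)]
```

Statistically significant structure in these residuals is what the paper's tests flag as a candidate TTV signal.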
Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D.; Fabrycky, Daniel C.; Welsh, William F.; Batalha, Natalie M.; Ciardi, David R.; Prsa, Andrej
2012-09-10
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
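The ANOVA computation at the heart of the analysis step above can be sketched for a one-way layout; the data below are illustrative, not from the study:

```python
def one_way_anova_f(groups):
    """F statistic = (between-group mean square) / (within-group mean square)."""
    all_values = [x for g in groups for x in g]
    grand = sum(all_values) / len(all_values)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

In a distributed test plan each "group" may hold only one or a few points per condition; the D-optimal design is what keeps the resulting F tests and RSM fits well conditioned.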
On the power for linkage detection using a test based on scan statistics.
Hernández, Sonia; Siegmund, David O; de Gunst, Mathisca
2005-04-01
We analyze some aspects of scan statistics, which have been proposed to aid the detection of weak signals in genetic linkage analysis. We derive approximate expressions for the power of a test based on moving averages of the identity-by-descent allele sharing proportions for pairs of relatives at several contiguous markers. We confirm these approximate formulae by simulation. The results show that when there is a single trait locus on a chromosome, the test based on the scan statistic is slightly less powerful than that based on the customary allele sharing statistic. On the other hand, if two genes having a moderate effect on a trait lie close to each other on the same chromosome, scan statistics improve power to detect linkage. PMID:15772104
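The scan statistic in this setting is essentially the maximum moving average of allele-sharing proportions over windows of contiguous markers. A sketch (the window length and toy proportions are arbitrary):

```python
def scan_statistic(sharing, window):
    """Maximum moving average over all contiguous windows of the given length."""
    best = None
    for i in range(len(sharing) - window + 1):
        avg = sum(sharing[i:i + window]) / window
        if best is None or avg > best:
            best = avg
    return best
```

With window = 1 this reduces to the single-marker maximum; wider windows trade single-locus power for sensitivity to two nearby loci, which matches the paper's power comparison.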
Modified H-statistic with adaptive Winsorized mean in two groups test
NASA Astrophysics Data System (ADS)
Teh, Kian Wooi; Abdullah, Suhaida; Yahaya, Sharipah Soaad Syed; Yusof, Zahayu Md
2014-06-01
The t-test is a commonly used test statistic for comparing two independent groups. Its computation is simple, yet it is powerful for normally distributed data with equal variances. However, real-life data rarely come in such a neat package. Violation of the assumptions (normality and equal variances) has a devastating effect on the t-test's control of the Type I error rate, and at the same time statistical power is reduced. Therefore, in this study the adaptive Winsorized mean with hinge estimator in the H-statistic (AWM-H) is proposed. The H-statistic is a robust statistic able to handle nonnormality when comparing independent groups. This procedure originally used the Modified One-step M (MOM) estimator, which employs trimming. In the AWM-H procedure, the MOM estimator is replaced with the adaptive Winsorized mean (AWM) as the central tendency measure of the test. The Winsorization process is based on the hinge estimator HQ or HQ1. Overall, the results showed that the proposed method performed better than the original and classical methods, especially under heavy-tailed distributions.
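The Winsorization step at the heart of the AWM can be sketched as a symmetric k-point Winsorized mean; the adaptive choice of k via the hinge estimators HQ/HQ1 is omitted here, so this is only the basic ingredient, not the AWM-H itself:

```python
def winsorized_mean(data, k):
    """Replace the k smallest and k largest values by their nearest interior neighbors."""
    s = sorted(data)
    if k == 0:
        return sum(s) / len(s)
    clamped = [max(s[k], min(s[-k - 1], x)) for x in s]
    return sum(clamped) / len(clamped)
```

Unlike trimming, Winsorizing keeps the sample size intact while pulling outliers in, which is why it preserves power under heavy tails.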
A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES
Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.
2010-06-01
A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L_tot ≳ 4 × 10^11 h_70^-2 L_sun) are unlikely (probability ≤ 3 × 10^-4) to be drawn from the LD defined by all red cluster galaxies more luminous than M_r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in photometry of galaxies of large angular size, and discuss the implication of our findings on surveys that use the luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.
Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.
Greenland, Sander; Senn, Stephen J; Rothman, Kenneth J; Carlin, John B; Poole, Charles; Goodman, Steven N; Altman, Douglas G
2016-04-01
Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so-and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting. PMID:27209009
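A small helper for the definitional points above: the two-sided P value for a z statistic under a normal model, computed from the error function. Used correctly, P is a continuous measure of compatibility between data and test hypothesis, not a binary verdict:

```python
import math

def two_sided_p(z):
    """P(|Z| >= |z|) for a standard normal Z, via the error function."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # normal CDF at |z|
    return 2.0 * (1.0 - phi)
```

Note that a z of exactly 0 yields P = 1, and P shrinks smoothly as |z| grows; nothing special happens at 0.05.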
NASA Astrophysics Data System (ADS)
Woessner, J.; Schorlemmer, D.; Wiemer, S.; Mai, P. M.
2005-12-01
Quantitatively correlating properties of finite-fault source models with hypocenters of aftershocks may provide new insight in the relationship between either slip or static stress change distributions and aftershock occurrence. We present advanced non-standard statistical test approaches to evaluate the test hypotheses (1) if aftershocks are preferentially located in areas of low slip and (2) if aftershocks are located in increased shear stress against the null hypothesis: aftershocks are located randomly on the fault plane. By using multiple test approaches, we investigate possible pitfalls and the information content of statistical testing. To perform the tests, we use earthquakes for which multiple finite-fault source models and earthquake catalogs of varying accuracy exist. The aftershock hypocenters are projected onto the main-shock rupture plane and uncertainties are accounted for by simulating hypocenter locations in the given error bounds. For the statistical tests, we retain the spatial clustering of earthquakes as the most important observed features of seismicity and synthesize random slip distributions with different approaches: first, using standard statistical methods that randomize the obtained finite-fault source model values and second, using a random spatial field model. We then determine the number of aftershocks in low-slip or increased shear-stress regions for simulated slip distributions, and compare those to the measurements obtained for finite-source slip inversions. We apply the tests to prominent earthquakes in California and Japan and find statistical significant evidence that aftershocks are preferentially located in low-slip regions. The tests, however, show a lower significance for the correlation with the shear-stress distribution, but are in general agreement with the expectations of the asperity model. Tests using the hypocenters of relocated catalogs show higher significances.
New Statistical Tests of Neutrality for DNA Samples from a Population
Fu, Y. X.
1996-01-01
The purpose of this paper is to develop statistical tests of the neutral model of evolution against a class of alternative models with the common characteristic of having an excess of mutations that occurred a long time ago or a reduction of recent mutations compared to the neutral model. This class of population genetics models includes models for structured populations, models with decreasing effective population size, and models of selection and mutation balance. Four statistical tests are proposed in this paper for DNA samples from a population. Two of these tests, one new and another a modification of an existing test, are based on Ewens' sampling formula, and the other two new tests make use of the frequencies of mutations of various classes. Using simulated samples and regression analyses, the critical values of these tests can be computed from regression equations. This approach for computing the critical values of a test was found to be appropriate and quite effective. We examined the powers of these four tests using simulated samples from structured populations, populations with linearly decreasing sizes, and models of selection and mutation balance, and found that they are more powerful than existing statistical tests of the neutral model of evolution. PMID:8722804
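Tests in this family compare observed mutation-frequency patterns with neutral expectations. A basic ingredient is Watterson's estimator of the scaled mutation rate from S segregating sites in a sample of n sequences; this is a standard formula offered as background, not one of the paper's four tests:

```python
def watterson_theta(num_segregating, sample_size):
    """Watterson's estimator: theta_W = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating / a_n
```

Discrepancies between this estimator and frequency-based estimators of theta are exactly the kind of signal that neutrality tests of this class formalize.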
Statistical hypothesis testing by weak-value amplification: Proposal and evaluation
NASA Astrophysics Data System (ADS)
Susa, Yuki; Tanaka, Saki
2015-07-01
We study the detection capability of weak-value amplification on the basis of statistical hypothesis testing. We propose a testing method that is reasonable in both the physical and statistical senses and find that weak measurement with a large weak value has the advantage of increasing the detection power and reducing the possibility of missing the presence of an interaction. We enhance the physical understanding of the weak value and mathematically establish the significance of weak-value amplification. Our present work overcomes the critical dilemma of weak-value amplification, namely that the larger the amplification, the smaller the number of data, because statistical hypothesis testing works even for a small number of data. This contrasts with parameter estimation by weak-value amplification in the literature, which requires a large number of data.
Festing, Michael F W
2014-12-01
The results of repeat-dose toxicity tests are usually presented as tables of means and standard deviations (SDs), with an indication of statistical significance for each biomarker. Interpretation is based mainly on the pattern of statistical significance rather than the magnitude of any response. Multiple statistical testing of many biomarkers leads to false-positive results and, with the exception of growth data, few graphical methods for showing the results are available. By converting means and SDs to standardized effect sizes, a range of graphical techniques including dot plots, line plots, box plots, and quantile-quantile plots become available to show the patterns of response. A bootstrap statistical test involving all biomarkers is proposed to compare the magnitudes of the response between treated groups. These methods are proposed as an extension rather than an alternative to current statistical analyses. They can be applied to published work retrospectively, as all that is required is tables of means and SDs. The methods are illustrated using published articles, where the results range from strong positive to completely negative responses to the test substances. PMID:24487356
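Converting a table of means and SDs to standardized effect sizes, as the article proposes, needs nothing beyond the summary statistics themselves. A sketch using the pooled-SD form (Cohen's d); the numbers in the test are illustrative:

```python
def cohens_d(mean_treated, mean_control, sd_treated, sd_control,
             n_treated, n_control):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = (
        (n_treated - 1) * sd_treated ** 2 + (n_control - 1) * sd_control ** 2
    ) / (n_treated + n_control - 2)
    return (mean_treated - mean_control) / pooled_var ** 0.5
```

Because only published means, SDs, and group sizes are required, the conversion can be applied retrospectively to existing toxicity reports, as the article notes.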
New advances in methodology for statistical tests useful in geostatistical studies
Borgman, L.E.
1988-05-01
Methodology for statistical procedures to perform tests of hypothesis pertaining to various aspects of geostatistical investigations has been slow in developing. The correlated nature of the data precludes most classical tests and makes the design of new tests difficult. Recent studies have led to modifications of the classical t test which allow for the intercorrelation. In addition, results for certain nonparametric tests have been obtained. The conclusions of these studies provide a variety of new tools for the geostatistician in deciding questions on significant differences and magnitudes.
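One textbook way a classical t test is adjusted for correlated observations is to shrink the sample size: under first-order (AR(1)) autocorrelation rho, the effective number of independent observations is roughly n(1 - rho)/(1 + rho). This is a standard correction offered only to illustrate the kind of modification the abstract describes, not the specific method of the cited studies:

```python
def effective_sample_size(n, rho):
    """Effective number of independent observations under AR(1) correlation rho."""
    return n * (1.0 - rho) / (1.0 + rho)
```

The shrunken n then replaces the nominal n in the t statistic's standard error and degrees of freedom, widening intervals appropriately.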
ENHANCING TEST SENSITIVITY IN TOXICITY TESTING BY USING A STATISTICAL PERFORMANCE STANDARD
Previous reports have shown that within-test sensitivity can vary markedly among laboratories. Experts have advocated an empirical approach to controlling test variability based on the MSD, control means, and other test acceptability criteria. (The MSD represents the smallest dif...
Can Percentiles Replace Raw Scores in the Statistical Analysis of Test Data?
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Zumbo, Bruno D.
2005-01-01
Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…
Alphas and Asterisks: The Development of Statistical Significance Testing Standards in Sociology
ERIC Educational Resources Information Center
Leahey, Erin
2005-01-01
In this paper, I trace the development of statistical significance testing standards in sociology by analyzing data from articles published in two prestigious sociology journals between 1935 and 2000. I focus on the role of two key elements in the diffusion literature, contagion and rationality, as well as the role of institutional factors. I…
ERIC Educational Resources Information Center
Ho, Andrew D.; Yu, Carol C.
2015-01-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
The Effect of Clustering on Statistical Tests: An Illustration Using Classroom Environment Data
ERIC Educational Resources Information Center
Dorman, Jeffrey Paul
2008-01-01
This paper discusses the effect of clustering on statistical tests and illustrates this effect using classroom environment data. Most classroom environment studies involve the collection of data from students nested within classrooms and the hierarchical nature to these data cannot be ignored. In particular, this paper studies the influence of…
Connecting Science and Mathematics: The Nature of Scientific and Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Lawson, Anton E.; Oehrtman, Michael; Jensen, Jamie
2008-01-01
Confusion persists concerning the roles played by scientific hypotheses and predictions in doing science. This confusion extends to the nature of scientific and statistical hypothesis testing. The present paper utilizes the "If/and/then/Therefore" pattern of hypothetico-deductive (HD) reasoning to explicate the nature of both scientific and…
A Short-Cut Statistic for Item Analysis of Mastery Tests: A Comparison of Three Procedures.
ERIC Educational Resources Information Center
Subkoviak, Michael J.; Harris, Deborah J.
This study examined three statistical methods for selecting items for mastery tests. One is the pretest-posttest method due to Cox and Vargas (1966); it is computationally simple, but has a number of serious limitations. The second is a latent trait method recommended by van der Linden (1981); it is computationally complex, but has a number of…
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
ERIC Educational Resources Information Center
Lawrence, John A.; Singhania, Ram P.
2004-01-01
In this investigation of student performance in introductory business statistics classes, the authors performed two separate controlled studies to compare performance in (a) distance-learning versus traditionally delivered courses and (b) multiple choice versus problem-solving tests. Results of the first study, based on the authors' several…
Diagnosing Skills of Statistical Hypothesis Testing Using the Rule Space Method
ERIC Educational Resources Information Center
Im, Seongah; Yin, Yue
2009-01-01
This study illustrated the use of the Rule Space Method to diagnose students' proficiencies in skills and knowledge of statistical hypothesis testing. Participants included 96 undergraduate and graduate students, of whom 94 were classified into one or more of the knowledge states identified by the rule space analysis. Analysis at the level of…
Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.
ERIC Educational Resources Information Center
Deegear, James
This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and on studies exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
ERIC Educational Resources Information Center
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate unless "corrected" effect…
Candini, Giancarlo
2004-12-01
In the fields of didactics and continuing professional development (CPD), the increasing use of multiple-answer tests for evaluating the level of knowledge in various subjects makes it increasingly important to have reliable and effective tools for data processing and for the evaluation of results. The aim of the present work is to explore a new methodological approach, based on a widely tested statistical analysis, able to yield more information than the traditional methods. To this end we propose a Graduated Response Test and the relative operating characteristic (ROC) curve for the evaluation of the results. A short description is also given of a computerized procedure, written in Visual Basic Pro (v.6.0), which automatically performs the statistical analysis, plots the ROC curves, and calculates a learning index. PMID:15518651
Halpin, Peter F; Stam, Henderikus J
2006-01-01
The application of statistical testing in psychological research over the period of 1940-1960 is examined in order to address psychologists' reconciliation of the extant controversy between the Fisher and Neyman-Pearson approaches. Textbooks of psychological statistics and the psychological journal literature are reviewed to examine the presence of what Gigerenzer (1993) called a hybrid model of statistical testing. Such a model is present in the textbooks, although the mathematically incomplete character of this model precludes the appearance of a similarly hybridized approach to statistical testing in the research literature. The implications of this hybrid model for psychological research and the statistical testing controversy are discussed. PMID:17286092
Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems
NASA Astrophysics Data System (ADS)
Gilliam, David M.
2011-06-01
Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
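The minimum-number-of-tests calculation can be sketched as a generic binomial computation for pass-fail demonstration testing; this is a standard textbook construction under the stated definitions, not necessarily the exact formulas of reference [1]:

```python
import math

def min_tests(pd_req, cl, failures=0):
    """Smallest number of trials n such that observing at most `failures`
    misses demonstrates PD >= pd_req at confidence cl, i.e.
    P[X <= failures | PD = pd_req] <= 1 - cl for X = number of misses."""
    n = failures + 1
    while True:
        p_tail = sum(math.comb(n, k) * (1 - pd_req)**k * pd_req**(n - k)
                     for k in range(failures + 1))
        if p_tail <= 1 - cl:
            return n
        n += 1

print(min_tests(0.90, 0.95))             # zero-failure demonstration plan
print(min_tests(0.90, 0.95, failures=1)) # plan tolerating one miss
```

For PD = 0.90 at CL = 0.95, the zero-failure plan requires 29 trials, and allowing a single miss pushes the requirement to 46 trials, which shows why test campaigns grow quickly as tolerance for failures increases.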
Detecting trends in raptor counts: power and type I error rates of various statistical tests
Hatfield, J.S.; Gould, W.R., IV; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.
1996-01-01
We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
Rudd, James; Moore, Jason H.; Urbanowicz, Ryan J.
2013-01-01
Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative to large and complex real world applications such as genetic epidemiology where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear. PMID:24358057
NASA Astrophysics Data System (ADS)
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test or its single block version may find applications in many areas as in psychology, education, medicine, genetics and they are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
NASA Astrophysics Data System (ADS)
Duari, Debiprosad; Gupta, Patrick D.; Narlikar, Jayant V.
1992-01-01
An overview of statistical tests of peaks and periodicities in the redshift distribution of quasi-stellar objects is presented. The tests include the power-spectrum analysis carried out by Burbidge and O'Dell (1972), the generalized Rayleigh test, the Kolmogorov-Smirnov test, and the 'comb-tooth' test. The tests reveal moderate to strong evidence for periodicities of 0.0565 and 0.0127-0.0129. The confidence level of the 0.0565 periodicity in fact marginally increases when redshifts are transformed to the Galactocentric frame. The same periodicity, first noticed in 1968, persists to date with a QSO population that has since grown to about 30 times its original size. The prima facie evidence for periodicities in ln(1 + z) is found to be of no great significance.
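The mechanics of a Rayleigh-style periodicity test can be sketched in a few lines. This is a minimal illustration on hypothetical clustered values using the large-sample approximation p ≈ exp(-R²/n); the published analyses involve additional transformations of the redshifts that are not reproduced here:

```python
import cmath
import math

def rayleigh_test(values, period):
    """Rayleigh test for a periodicity of `period` in `values`: map each
    value to a phase on the circle and test the phases for uniformity."""
    n = len(values)
    phases = [2 * math.pi * v / period for v in values]
    r = abs(sum(cmath.exp(1j * p) for p in phases))  # resultant length
    z = r * r / n
    p_value = math.exp(-z)  # large-n approximation to the null distribution
    return z, p_value

# Hypothetical values clustered near multiples of 0.0565
clustered = [0.0565 * k + 0.002 for k in range(1, 21)]
z, p = rayleigh_test(clustered, 0.0565)
print(f"z = {z:.2f}, p = {p:.2e}")
```

Values drawn uniformly instead of clustered at the trial period would give a small z and a p-value near 1, which is how the test discriminates genuine periodicity from noise.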
Accuracy of Estimates and Statistical Power for Testing Mediation in Latent Growth Curve Modeling
Cheong, JeeWon
2016-01-01
The latent growth curve modeling (LGCM) approach has been increasingly utilized to investigate longitudinal mediation. However, little is known about the accuracy of the estimates and statistical power when mediation is evaluated in the LGCM framework. A simulation study was conducted to address these issues under various conditions including sample size, effect size of mediated effect, number of measurement occasions, and R2 of measured variables. In general, the results showed that relatively large samples were needed to accurately estimate the mediated effects and to have adequate statistical power, when testing mediation in the LGCM framework. Guidelines for designing studies to examine longitudinal mediation and ways to improve the accuracy of the estimates and statistical power were discussed.
Using global statistical tests in long-term Parkinson's disease clinical trials.
Huang, Peng; Goetz, Christopher G; Woolson, Robert F; Tilley, Barbara; Kerr, Douglas; Palesch, Yuko; Elm, Jordan; Ravina, Bernard; Bergmann, Kenneth J; Kieburtz, Karl
2009-09-15
Parkinson's disease (PD) impairments are multidimensional, making it difficult to choose a single primary outcome when evaluating treatments to stop or lessen the long-term decline in PD. We review commonly used multivariate statistical methods for assessing a treatment's global impact, and we highlight the novel Global Statistical Test (GST) methodology. We compare the GST to other multivariate approaches using data from two PD trials. In one trial where the treatment showed consistent improvement on all primary and secondary outcomes, the GST was more powerful than other methods in demonstrating significant improvement. In the trial where treatment induced both improvement and deterioration in key outcomes, the GST failed to demonstrate statistical evidence even though other techniques showed significant improvement. Based on the statistical properties of the GST and its relevance to overall treatment benefit, the GST appears particularly well suited for a disease like PD where disability and impairment reflect dysfunction of diverse brain systems and where both disease and treatment side effects impact quality of life. In future long term trials, use of GST for primary statistical analysis would allow the assessment of clinically relevant outcomes rather than the artificial selection of a single primary outcome. PMID:19514076
Taroni, F; Biedermann, A; Bozza, S
2016-02-01
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for deciding about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, controversy arose from the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals evaluate statistical evidence using so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. PMID:26743713
2014-01-01
Background Under a Markov model of evolution, recoding, or lumping, of the four nucleotides into fewer groups may permit analysis under simpler conditions but may unfortunately yield misleading results unless the evolutionary process of the recoded groups remains Markovian. If a Markov process is lumpable, then the evolutionary process of the recoded groups is Markovian. Results We consider stationary, reversible, and homogeneous Markov processes on two taxa and compare three tests for lumpability: one using an ad hoc test statistic, which is based on an index that is evaluated using a bootstrap approximation of its distribution; one that is based on a test proposed specifically for Markov chains; and one using a likelihood-ratio test. We show that the likelihood-ratio test is more powerful than the index test, which is more powerful than that based on the Markov chain test statistic. We also show that for stationary processes on binary trees with more than two taxa, the tests can be applied to all pairs. Finally, we show that if the process is lumpable, then estimates obtained under the recoded model agree with estimates obtained under the original model, whereas, if the process is not lumpable, then these estimates can differ substantially. We apply the new likelihood-ratio test for lumpability to two primate data sets, one with a mitochondrial origin and one with a nuclear origin. Conclusions Recoding may result in biased phylogenetic estimates because the original evolutionary process is not lumpable. Accordingly, testing for lumpability should be done prior to phylogenetic analysis of recoded data. PMID:24564837
NASA Astrophysics Data System (ADS)
Michael, A. J.
2012-12-01
Detecting trends in the rate of sporadic events is a problem for earthquakes and other natural hazards such as storms, floods, or landslides. I use synthetic events to judge the tests used to address this problem in seismology and consider their application to other hazards. Recent papers have analyzed the record of magnitude ≥7 earthquakes since 1900 and concluded that the events are consistent with a constant-rate Poisson process plus localized aftershocks (Michael, GRL, 2011; Shearer and Stark, PNAS, 2012; Daub et al., GRL, 2012; Parsons and Geist, BSSA, 2012). Each paper removed localized aftershocks and then used a different suite of statistical tests to test the null hypothesis that the remaining data could be drawn from a constant-rate Poisson process. The methods include KS tests between event times or inter-event times and predictions from a Poisson process, the autocorrelation function on inter-event times, and two tests on the number of events in time bins: the Poisson dispersion test and the multinomial chi-square test. The range of statistical tests gives us confidence in the conclusions, which are robust with respect to the choice of tests and parameters. But which tests are optimal, and how sensitive are they to deviations from the null hypothesis? The latter point was raised by Dimer (arXiv, 2012), who suggested that the lack of consideration of Type 2 errors prevents these papers from placing limits on the degree of clustering and rate change that could be present in the global seismogenic process. I produce synthetic sets of events that deviate from a constant-rate Poisson process using a variety of statistical simulation methods, including Gamma-distributed inter-event times and random walks. The sets of synthetic events are examined with the statistical tests described above. Preliminary results suggest that with 100 to 1000 events, a data set that does not reject the Poisson null hypothesis could have a variability that is 30% to
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The theory was developed for three general cases of symmetry: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is established with small classical structures.
A statistical method for assessing network stability using the Chow test.
Sotirakopoulos, Kostas; Barham, Richard; Piper, Ben; Nencini, Luca
2015-10-01
A statistical method is proposed for the assessment of stability in noise monitoring networks. The technique makes use of a variation of the Chow test applied between multiple measurement nodes placed at different locations and its novelty lies in the way it utilises a simple statistical test based on linear regression to uncover complex issues that can be difficult to expose otherwise. Measurements collected by a noise monitoring network deployed in the center of Pisa are used to demonstrate the capabilities and limitations of the test. It is shown that even in urban environments, where great soundscape variations are exhibited, accurate and robust results can be produced regardless of the proximity of the compared sensors as long as they are located in acoustically similar environments. Also it is shown that variations of the same method can be applied for self-testing on data collected by single stations. Finally it is presented that the versatility of the test makes it suitable for detection of various types of issues that can occur in real life network implementations; from slow drifts away from calibration, to severe, abrupt failures and noise floor shifts. PMID:26370835
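The core of a Chow-type comparison between two measurement nodes can be sketched as follows. The data are hypothetical hourly sound levels, and the statistic is the generic textbook Chow F for simple linear regression, not the paper's specific variation:

```python
import math

def ols_ssr(xs, ys):
    """Sum of squared residuals from a simple least-squares line fit."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sxx
    intercept = y_bar - slope * x_bar
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

def chow_statistic(x1, y1, x2, y2, k=2):
    """Chow F statistic: do two data segments share one regression line?
    Compare against an F(k, n1 + n2 - 2k) distribution for a p-value."""
    ssr_pooled = ols_ssr(x1 + x2, y1 + y2)
    ssr_split = ols_ssr(x1, y1) + ols_ssr(x2, y2)
    df2 = len(x1) + len(x2) - 2 * k
    return ((ssr_pooled - ssr_split) / k) / (ssr_split / df2)

# Hypothetical levels from two co-located monitors: node B reads ~3 dB
# higher, so pooling the two series fits poorly and F comes out large.
hours = list(range(24))
node_a = [55 + 0.2 * h + 0.5 * math.sin(h) for h in hours]
node_b = [58 + 0.2 * h + 0.5 * math.sin(h) for h in hours]
print(chow_statistic(hours, node_a, hours, node_b))
```

When both nodes produce identical series the statistic collapses to essentially zero, which is the behavior that lets a monitoring network flag calibration drift or noise-floor shifts at one station relative to another.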
Case Studies for the Statistical Design of Experiments Applied to Powered Rotor Wind Tunnel Tests
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.; Tanner, Philip E.; Martin, Preston B.; Commo, Sean A.
2015-01-01
The application of statistical Design of Experiments (DOE) to helicopter wind tunnel testing was explored during two powered rotor wind tunnel entries during the summers of 2012 and 2013. These tests were performed jointly by the U.S. Army Aviation Development Directorate Joint Research Program Office and NASA Rotary Wing Project Office, currently the Revolutionary Vertical Lift Project, at NASA Langley Research Center located in Hampton, Virginia. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small portion of the overall tests devoted to developing case studies of the DOE approach as it applies to powered rotor testing. A 16-47 times reduction in the number of data points required was estimated by comparing the DOE approach to conventional testing methods. The average error for the DOE surface response model for the OH-58F test was 0.95 percent and 4.06 percent for drag and download, respectively. The DOE surface response model of the Active Flow Control test captured the drag within 4.1 percent of measured data. The operational differences between the two testing approaches are identified, but did not prevent the safe operation of the powered rotor model throughout the DOE test matrices.
Symmetry of the CMB sky as a new test of its statistical isotropy. Non cosmological octupole?
Naselsky, P.; Hansen, M.; Kim, J.
2011-09-01
In this article we propose a novel test for statistical anisotropy of the CMB ΔT(n̂), n̂ = (θ,φ). The test is based on the fact that the Galactic foregrounds have a remarkably strong symmetry between points that are antipodal with respect to the Galactic plane, while the cosmological signal should be neither symmetric nor asymmetric under this transformation. We have applied the test to the octupole component of the WMAP ILC 7 map, by looking at a_{3,1} and a_{3,3}, and their ratio to a_{3,2}, for both real and imaginary parts. We find abnormal symmetry of the octupole component at the level of 0.58%, compared to Monte Carlo simulations. By analyzing the phases of the octupole we found remarkably strong cross-correlations between the phases of the kinematic dipole and the ILC 7 octupole, in full agreement with previous results. We further test the multipole range 2 < l < 100 by investigating the ratio between the l+m = even and l+m = odd parts of the power spectra. We compare the results to simulations of a Gaussian random sky and find significant departure from the statistically isotropic and homogeneous case for a very broad range of multipoles. We found that for the most prominent peaks of our estimator, the phases of the corresponding harmonics are coherent with the phases of the octupole. We believe our test will be very useful for detecting various types of residual foreground and systematic effects over a very broad range of multipoles, 2 ≤ l ≤ 1500-3000, for the forthcoming PLANCK CMB map, before any conclusions about primordial non-Gaussianity and statistical anisotropy of the CMB are drawn.
Gershgorin, B.; Majda, A.J.
2011-02-20
A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants.
Broadaway, K Alaine; Cutler, David J; Duncan, Richard; Moore, Jacob L; Ware, Erin B; Jhun, Min A; Bielak, Lawrence F; Zhao, Wei; Smith, Jennifer A; Peyser, Patricia A; Kardia, Sharon L R; Ghosh, Debashis; Epstein, Michael P
2016-03-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
ERIC Educational Resources Information Center
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.
Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity.
Beasley, T Mark
2014-01-01
Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase in the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
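The variance-inflation argument can be checked numerically with the large-sample Sobel standard error for the indirect effect. The sample size, the fixed b path, and the residual variance below are arbitrary assumptions for standardized variables.

```python
import math

n = 100          # sample size (hypothetical)
b = 0.3          # mediator -> outcome path, held fixed
var_e = 0.5      # assumed residual variance of the outcome

def sobel_stats(a):
    """Large-sample Sobel SE and z for the indirect effect ab, with
    standardized X and M (so X-M collinearity grows with a)."""
    se_a2 = (1 - a * a) / n                 # Var of a-hat
    se_b2 = var_e / (n * (1 - a * a))       # Var of b-hat, inflated by collinearity
    se_ab = math.sqrt(a * a * se_b2 + b * b * se_a2)
    return se_ab, (a * b) / se_ab

grid = [i / 100 for i in range(5, 100, 5)]
zs = [sobel_stats(a)[1] for a in grid]
peak = max(zs)
```

On this grid the z statistic for ab (a power proxy) rises, peaks at moderate a, then declines as a approaches 1, even though the effect size ab itself keeps growing, which is exactly the paradox the abstract describes.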
Statistical auditing and randomness test of lotto k/N-type games
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.
2008-11-01
One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
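A minimal version of such an audit can be run against simulated draws: compare empirical appearance frequencies and the mean drawn sum with their values under a fair hypergeometric sampling model (each number appears with probability k/N, and the expected sum of the k drawn numbers is k(N+1)/2). The number of simulated draws below is arbitrary.

```python
import random

random.seed(42)

N, k, n_draws = 49, 6, 20000     # a common 6/49 lotto configuration

counts = [0] * (N + 1)           # appearance count for each number 1..N
sum_total = 0.0
for _ in range(n_draws):
    draw = random.sample(range(1, N + 1), k)   # draw without replacement
    for x in draw:
        counts[x] += 1
    sum_total += sum(draw)

# Theoretical values for a fair draw without replacement.
p_appear = k / N                 # P(a given number is drawn)
exp_sum = k * (N + 1) / 2        # E[sum of the k drawn numbers] = 150 here

emp_sum = sum_total / n_draws
freqs = [counts[i] / n_draws for i in range(1, N + 1)]
max_dev = max(abs(f - p_appear) for f in freqs)
```

For a real audit one would replace the simulated draws with the historical results and turn `max_dev` and the covariance structure into formal test statistics, as the paper does.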
A statistical treatment of accelerated life test data for copper-water heat pipes
NASA Astrophysics Data System (ADS)
Murakami, M.; Arai, K.; Kojima, Y.
1988-03-01
A statistical method is proposed for treating accelerated life test data, collected at several elevated temperatures for a sufficient number of commercially available Cu-water heat pipes, in order to predict operating life. Temperature distribution measurements carried out periodically yield two data sets, the temperature drop and the gas column length, both serving as measures of noncondensible gas accumulation. Gas analysis with a mass spectrometer is also carried out to obtain gas quantity data. A unified regression analysis that accounts for the acceleration factor arising from the several elevated test temperatures is proposed as a means of predicting long-term performance degradation from life test data. The mutual correlations among the three kinds of data sets are also discussed.
Statistical characterization of negative control data in the Ames Salmonella/microsome test.
Hamada, C; Wada, T; Sakamoto, Y
1994-01-01
A statistical characterization of negative control data in the Ames Salmonella/microsome reverse mutation test was performed using data obtained at Takeda Analytical Research Laboratories during January 1989 to April 1990. The lot-to-lot variability of bacterial stock cultures and the day-to-day variability of experiments were small for Salmonella typhimurium strains TA1535 and TA1537 and Escherichia coli WP2uvrA, but larger for S. typhimurium TA100. The numbers of revertant colonies for all test strains studied here followed Poisson distributions within the same day. The two-fold rule, an empirical method for evaluating Ames Salmonella/microsome test results, has been widely used in Japan; this rule was evaluated statistically. The comparison-wise type I error rate was less than 0.05 for TA98, TA100, TA1535, TA1537, and WP2uvrA. Moreover, the rule is particularly conservative for TA100, for which the type I error rate was nearly 0. PMID:8187699
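The conservativeness of the two-fold rule under a Poisson model can be reproduced with a simplified calculation: if plate counts are Poisson with a known control mean μ, the comparison-wise type I error of "flag when the count reaches 2μ" is the Poisson upper-tail probability at 2μ. The two control means below are illustrative, not the strains' actual values.

```python
import math

def poisson_sf(c, mu):
    """P(X >= c) for X ~ Poisson(mu), by summing the pmf below c."""
    p, cdf = math.exp(-mu), 0.0
    for x in range(c):
        cdf += p
        p *= mu / (x + 1)
    return max(0.0, 1.0 - cdf)

# Two-fold rule, simplified: a dose is flagged positive when its plate
# count reaches twice the control mean (taken here as known, not estimated).
alphas = {mu: poisson_sf(math.ceil(2 * mu), mu) for mu in (10.0, 100.0)}
```

The tail probability is already below 0.05 at μ = 10 and essentially zero at μ = 100, matching the abstract's observation that the rule is most conservative for high-count strains such as TA100.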
Day, N E; Byar, D P
1979-09-01
The two approaches in common use for the analysis of case-control studies are cross-classification by confounding variables, and modeling the logarithm of the odds ratio as a function of exposure and confounding variables. We show here that score statistics derived from the likelihood function in the latter approach are identical to the Mantel-Haenszel test statistics appropriate for the former approach. This identity holds in the most general situation considered, testing for marginal homogeneity in mK tables. This equivalence is demonstrated by a permutational argument which leads to a general likelihood expression in which the exposure variable may be a vector of discrete and/or continuous variables and in which more than two comparison groups may be considered. This likelihood can be used in analyzing studies in which there are multiple controls for each case or in which several disease categories are being compared. The possibility of including continuous variables makes this likelihood useful in situations that cannot be treated using the Mantel-Haenszel cross-classification approach. PMID:497345
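The Mantel-Haenszel statistic at the heart of this equivalence is easy to compute directly for stratified 2×2 tables; by the identity shown in the paper, the same value arises as the score test from the logistic (log odds ratio) model. The counts below are hypothetical.

```python
# Stratified 2x2 tables: (exposed cases, unexposed cases,
#                         exposed controls, unexposed controls) per stratum.
tables = [(10, 5, 4, 11), (8, 7, 5, 10)]    # hypothetical counts

num = var = 0.0
for a, b, c, d in tables:
    n1, n0 = a + b, c + d        # cases, controls
    m1, m0 = a + c, b + d        # exposed, unexposed margins
    N = n1 + n0
    num += a - n1 * m1 / N                        # observed minus expected
    var += n1 * n0 * m1 * m0 / (N * N * (N - 1))  # hypergeometric variance

chi2_mh = num * num / var        # Mantel-Haenszel chi-square, 1 df
```

Summing observed-minus-expected and the hypergeometric variances across strata before squaring is what makes the statistic valid for many small tables, e.g. matched sets with multiple controls per case.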
Chi, Yunchan
2005-01-15
In clinical trials or drug development studies, researchers are often interested in identifying which treatments or dosages are more effective than the standard one. Recently, several multiple testing procedures based on weighted logrank tests have been proposed to compare several treatments with a control in a one-way layout where survival data are subject to random right-censorship. However, weighted logrank tests are based on ranks, and these tests might not be sensitive to the magnitude of the difference in survival times against a specific alternative. Therefore, it is desirable to develop a more robust and powerful multiple testing procedure. This paper proposes multiple testing procedures based on two-sample weighted Kaplan-Meier statistics, each comparing an individual treatment with the control, to determine which treatments are more effective than the control. Comparative results from a simulation study are presented, and the application of these methods to a prostate cancer clinical trial and a renal carcinoma tumour study is illustrated. PMID:15515153
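A bare-bones sketch of the ingredients: a Kaplan-Meier estimator for each arm and an integrated (here unit-weighted) difference between the two survival curves. The paper's procedures use properly variance-standardized weighted Kaplan-Meier statistics with multiplicity control; the survival data and the trivial weights here are toy assumptions.

```python
def km_curve(times, events):
    """Kaplan-Meier estimate: list of (time, S(t)) at distinct event times."""
    data = sorted(zip(times, events))
    n_risk, s, curve, i = len(data), 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_risk
            curve.append((t, s))
        n_risk -= deaths + censored
    return curve

def surv_at(curve, t):
    """Step-function value of the KM curve at time t."""
    s = 1.0
    for u, v in curve:
        if u <= t:
            s = v
        else:
            break
    return s

# Hypothetical data (time, event indicator); treatment tends to live longer.
ctrl_t, ctrl_e = [2, 3, 4, 5, 6, 7, 8, 9, 10, 12], [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
trt_t, trt_e = [4, 6, 7, 9, 11, 13, 14, 15, 16, 18], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1]

c_ctrl, c_trt = km_curve(ctrl_t, ctrl_e), km_curve(trt_t, trt_e)

# Unit-weighted integrated difference of the survival curves on a time grid.
grid = range(1, 16)
wkm = sum(surv_at(c_trt, t) - surv_at(c_ctrl, t) for t in grid) / len(grid)
```

Unlike a rank-based logrank statistic, this quantity directly measures the magnitude of the survival difference, which is the motivation the abstract gives for weighted Kaplan-Meier statistics.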
Atmospheric Array Loss Statistics Derived from Short Time Scale Site Test Interferometer Phase Data
NASA Astrophysics Data System (ADS)
Morabito, David D.; D'Addario, Larry R.
2014-08-01
NASA is interested in using the technique of arraying smaller-diameter antennas to increase effective aperture to replace the aging 70-m-diameter antennas of the Deep Space Network (DSN). Downlink arraying using the 34-m-diameter and 70-m-diameter antennas is routinely performed. Future scenarios include extending the technique to uplink arraying where a downlink signal may not be available. Atmospheric turbulence causes decorrelation of the arrayed signal, and becomes more severe at higher frequencies such as at the uplink allocations near 34 GHz and 40 GHz. This article expands the study initiated in a previous article that focused on average array loss statistics extracted from Site Test Interferometer (STI) data. In that study, cumulative distributions of the annual and monthly expected phasing loss were derived from STI data collected at the Goldstone and Canberra DSN complexes. For a two-element array, the average array loss cannot exceed 3 dB. This article considers the instantaneous (short time scale) array loss that sometimes exceeds 3 dB for a two-element array. We also consider cases of three-element arrays, which behave somewhat differently. The short time scale statistics of array loss at 7.15 GHz and 34.5 GHz are compared against the average array loss statistics for the best-case and worst-case weather months for the Goldstone and Canberra DSN sites.
Giambartolomei, Claudia; Vukcevic, Damjan; Schadt, Eric E; Franke, Lude; Hingorani, Aroon D; Wallace, Chris; Plagnol, Vincent
2014-05-01
Genetic association studies, in particular the genome-wide association study (GWAS) design, have provided a wealth of novel insights into the aetiology of a wide range of human diseases and traits, in particular cardiovascular diseases and lipid biomarkers. The next challenge consists of understanding the molecular basis of these associations. The integration of multiple association datasets, including gene expression datasets, can contribute to this goal. We have developed a novel statistical methodology to assess whether two association signals are consistent with a shared causal variant. An application is the integration of disease scans with expression quantitative trait locus (eQTL) studies, but any pair of GWAS datasets can be integrated in this framework. We demonstrate the value of the approach by re-analysing a gene expression dataset in 966 liver samples with a published meta-analysis of lipid traits including >100,000 individuals of European ancestry. Combining all lipid biomarkers, our re-analysis supported 26 out of 38 reported colocalisation results with eQTLs and identified 14 new colocalisation results, hence highlighting the value of a formal statistical test. In three cases of reported eQTL-lipid pairs (SYPL2, IFT172, TBKBP1) for which our analysis suggests that the eQTL pattern is not consistent with the lipid association, we identify alternative colocalisation results with SORT1, GCKR, and KPNB1, indicating that these genes are more likely to be causal in these genomic intervals. A key feature of the method is the ability to derive the output statistics from single SNP summary statistics, hence making it possible to perform systematic meta-analysis type comparisons across multiple GWAS datasets (implemented online at http://coloc.cs.ucl.ac.uk/coloc/). Our methodology provides information about candidate causal genes in associated intervals and has direct implications for the understanding of complex diseases as well as the design of drugs to
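The flavor of a summary-statistic colocalisation test can be sketched with Wakefield approximate Bayes factors per SNP and posterior weights for the five coloc-style hypotheses (no association, trait 1 only, trait 2 only, two distinct causal variants, one shared causal variant). The z-scores, the prior effect variance w, and the per-SNP priors below are toy assumptions, and this sketch omits the log-space bookkeeping a real implementation needs for large regions.

```python
import math

def log_abf(z, v, w=0.04):
    """Wakefield approximate Bayes factor (log scale) for one SNP.
    z: association z-score, v: variance of the effect estimate,
    w: assumed prior variance of the true effect."""
    r = w / (v + w)
    return 0.5 * math.log(1 - r) + 0.5 * z * z * r

z1 = [1.0, 2.0, 6.5, 1.5, 0.5]     # trait-1 z-scores across a toy region
z2 = [0.8, 1.2, 6.0, 1.0, 0.3]     # trait-2 z-scores, peaking at the same SNP
v = 0.01                            # variance of each effect estimate (se^2)

bf1 = [math.exp(log_abf(z, v)) for z in z1]
bf2 = [math.exp(log_abf(z, v)) for z in z2]

p1, p2, p12 = 1e-4, 1e-4, 1e-5     # per-SNP prior probabilities (assumed)
s1, s2 = sum(bf1), sum(bf2)
s12 = sum(a * b for a, b in zip(bf1, bf2))

# Unnormalized support for H0..H4; H3 is "two distinct variants" (all
# cross pairs), H4 is "one shared variant" (same-SNP products).
L = [1.0, p1 * s1, p2 * s2, p1 * p2 * (s1 * s2 - s12), p12 * s12]
pp = [x / sum(L) for x in L]       # posterior probabilities PP0..PP4
```

With both traits peaking at the same SNP, PP4 dominates, i.e. the data favour a shared causal variant; shifting one trait's peak to a different SNP would move mass to PP3.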
A clone-based statistical test for localizing disease genes using genomic mismatch scanning
Palmer, C.G.S.; Woodward, A.; Smalley, S.L.
1994-09-01
Genomic mismatch scanning (GMS) is a technique for isolating regions of DNA that are identical-by-descent (IBD) within pairs of relatives. GMS selected data are hybridized to an ordered array of DNA, e.g., metaphase chromosomes, YACs, to identify and localize enhanced region(s) of IBD across pairs of relatives affected with a trait of interest. If the trait has a genetic basis, it is reasonable to assume that the trait gene(s) will be located in these enhanced regions. Our approach to localize these enhanced regions is based on the availability of an ordered array of clones, e.g., YACs, which span the entire human genome. We use an exact binomial order statistic to develop a test for enhanced regions of IBD in sets of clones 1 cM in size selected for being biologically independent (i.e., separated by 50 cM). The test statistic is the maximum proportion of IBD pairs selected from the independent YACs within a set. Thus far, we have defined the power of the test under the alternative hypothesis of a single gene conditional on the maximum proportion IBD being located at the disease locus. As an example, for 60 grandparent-grandchild pairs, the exact power of the test with alpha=0.001 is 0.83 when the relative risk of the disease is 4.0 and the maximum proportion is at the disease locus. This method can be used in small samples and is not dependent on any specific mapping function.
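The exact binomial order-statistic calculation can be sketched directly: under the null, each independent clone set's IBD proportion is Binomial(n, 0.5)/n for grandparent-grandchild pairs, so the maximum over m independent sets has CDF F(c)^m. The number of independent sets and the alternative IBD probability below are assumptions, not the paper's exact values.

```python
import math

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(0, c + 1))

n_pairs = 60      # grandparent-grandchild pairs
m_sets = 66       # independent clone sets (~3300 cM / 50 cM, an assumption)
p_null = 0.5      # IBD sharing probability under no linkage
alpha = 0.001

# Smallest critical count c with P(max proportion >= c/n) <= alpha,
# using independence of the m_sets clone sets under the null.
c = 0
while 1 - binom_cdf(c - 1, n_pairs, p_null) ** m_sets > alpha:
    c += 1
familywise = 1 - binom_cdf(c - 1, n_pairs, p_null) ** m_sets

# Power at the disease-locus set for a hypothetical alternative sharing rate.
p_alt = 0.8
power = 1 - binom_cdf(c - 1, n_pairs, p_alt)
```

As in the paper, the power statement is conditional on the maximum proportion occurring at the disease locus; mapping a relative risk to `p_alt` requires a genetic model and is left as an assumption here.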
Vardhanabhuti, Saran; Blakemore, Steven J; Clark, Steven M; Ghosh, Sujoy; Stephens, Richard J; Rajagopalan, Dilip
2006-01-01
Signal quantification and detection of differential expression are critical steps in the analysis of Affymetrix microarray data. Many methods have been proposed in the literature for each of these steps. The goal of this paper is to evaluate several signal quantification methods (GCRMA, RSVD, VSN, MAS5, and Resolver) and statistical methods for differential expression (t test, Cyber-T, SAM, LPE, RankProducts, Resolver RatioBuild). Our particular focus is on the ability to detect differential expression via statistical tests. We have used two different datasets for our evaluation. First, we have used the HG-U133 Latin Square spike in dataset developed by Affymetrix. Second, we have used data from an in-house rat liver transcriptomics study following 30 different drug treatments generated using the Affymetrix RAE230A chip. Our overall recommendation based on this study is to use GCRMA for signal quantification. For detection of differential expression, GCRMA coupled with Cyber-T or SAM is the best approach, as measured by area under the receiver operating characteristic (ROC) curve. The integrated pipeline in Resolver RatioBuild combining signal quantification and detection of differential expression is an equally good alternative for detecting differentially expressed genes. For most of the differential expression algorithms we considered, the performance using MAS5 signal quantification was inferior to that of the other methods we evaluated. PMID:17233564
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs that are highly sensitive for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
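The PCA-based sensitivity ranking of symptom parameters can be sketched with a stdlib power iteration: SPs with large absolute loadings on the first principal component of the condition data rank as most sensitive. The toy data (two informative SPs, one noise SP) and the loading-based ranking rule are assumptions, not the paper's exact procedure.

```python
import random

random.seed(3)

# Toy symptom-parameter matrix: 40 observations x 3 SPs; SP0 and SP1 track
# a latent "condition severity" factor, SP2 is pure noise.
X = []
for _ in range(40):
    f = random.gauss(0, 1)
    X.append([f + random.gauss(0, 0.3),
              0.8 * f + random.gauss(0, 0.3),
              random.gauss(0, 1)])

p = len(X[0])
means = [sum(r[j] for r in X) / len(X) for j in range(p)]
Xc = [[r[j] - means[j] for j in range(p)] for r in X]

# Sample covariance matrix of the SPs.
C = [[sum(a[i] * a[j] for a in Xc) / (len(X) - 1) for j in range(p)]
     for i in range(p)]

# First principal component by power iteration.
v = [1.0] * p
for _ in range(200):
    w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

loadings = [abs(x) for x in v]   # sensitivity ranking of the SPs
```

Here the noise parameter SP2 receives the smallest absolute loading, so a loading-based filter would drop it and keep the two condition-sensitive SPs.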
Observations in the statistical analysis of NBG-18 nuclear graphite strength tests
NASA Astrophysics Data System (ADS)
Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.
2012-01-01
The purpose of this paper is to report on the selection of a statistical distribution to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials: tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is evaluated for each data set. The data sets are also normalised for ease of comparison and combined into one representative data set, and the validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data; identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution to the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability in graphite strength for the different strength measures arises from the same flaw distribution and is thus a property of the material.
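A common way to fit a Weibull distribution to normalized strength data, in the spirit of the paper, is median-rank regression: for Weibull data, ln(-ln(1-F)) is linear in ln(x) with slope equal to the Weibull modulus (shape). The shape and scale used to generate the synthetic data below are arbitrary, not NBG-18 values.

```python
import math
import random

random.seed(11)

# Synthetic normalized strength data from a Weibull(shape=8, scale=1).
shape_true, scale_true, n = 8.0, 1.0, 200
x = sorted(random.weibullvariate(scale_true, shape_true) for _ in range(n))

# Median-rank plotting positions F_i = (i - 0.3) / (n + 0.4), i = 1..n,
# then least-squares on ln(-ln(1-F)) versus ln(x).
pts = [(math.log(xi), math.log(-math.log(1 - (i + 0.7) / (n + 0.4))))
       for i, xi in enumerate(x)]

mx = sum(u for u, _ in pts) / n
my = sum(w for _, w in pts) / n
sxx = sum((u - mx) ** 2 for u, _ in pts)
sxy = sum((u - mx) * (w - my) for u, w in pts)

shape_hat = sxy / sxx                       # Weibull modulus estimate
scale_hat = math.exp(mx - my / shape_hat)   # characteristic strength estimate
```

The same regression applied to the combined, normalised strength data would give the single representative Weibull modulus the paper uses to discuss variability across strength measures.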
Debate on GMOs health risks after statistical findings in regulatory tests.
de Vendômois, Joël Spiroux; Cellier, Dominique; Vélot, Christian; Clair, Emilie; Mesnage, Robin; Séralini, Gilles-Eric
2010-01-01
We summarize the major points of international debate on health risk studies for the main commercialized edible GMOs. These GMOs are soy, maize and oilseed rape designed to contain new pesticide residues since they have been modified to be herbicide-tolerant (mostly to Roundup) or to produce mutated Bt toxins. The debated alimentary chronic risks may come from unpredictable insertional mutagenesis effects, metabolic effects, or from the new pesticide residues. The most detailed regulatory tests on these GMOs are three-month-long feeding trials in laboratory rats, which are biochemically assessed. The tests are not compulsory, and are not independently conducted. The test data and the corresponding results are kept secret by the companies. Our previous analyses of regulatory raw data at these levels, taking the representative examples of three GM maize NK 603, MON 810, and MON 863, led us to conclude that hepatorenal toxicities were possible, and that longer testing was necessary. Our study was criticized by the company developing the GMOs in question and the regulatory bodies, mainly on the divergent biological interpretations of statistically significant biochemical and physiological effects. We present the scientific reasons for the crucially different biological interpretations and also highlight the shortcomings in the experimental protocols designed by the company. The debate implies an enormous responsibility towards public health and is essential due to nonexistent traceability or epidemiological studies in the GMO-producing countries. PMID:20941377
Statistical Analysis of Pure Tone Audiometry and Caloric Test in Herpes Zoster Oticus
Kim, Jin; Jung, Jinsei; Moon, In Seok; Lee, Ho-Ki
2008-01-01
Objectives Pure tone audiometry and caloric test in patients with herpes zoster oticus were performed to determine the biologic features of the varicella zoster virus (VZV) and the pathogenesis of vestibulocochlear nerve disease in herpes zoster oticus. Study Design A retrospective chart review of 160 patients with herpes zoster oticus was designed in order to determine the classic characteristics of vestibulocochlear nerve disease associated with the syndrome. Speech frequency and isolated high frequency acoustic thresholds were analyzed based on severity of facial paralysis and patient age. Patients without cochlear symptoms were selected randomly, and audiological function was evaluated. Patients with symptoms of vestibular dysfunction underwent the caloric test, and canal paresis was analyzed according to the severity of facial paralysis and the age of each patient. Results Among the 160 patients, 111 underwent pure tone audiometry; 26 (79%) of the patients with cochlear symptoms and 44 (56%) of the patients without cochlear symptoms had abnormal audiological data. Among the patients without cochlear symptoms, 15 (19%) had hearing loss at speech frequency, and 42 (54%) had hearing loss isolated to high frequency. The incidence of cochlear symptoms in herpes zoster oticus was not related to the severity of facial paralysis. The incidence of isolated high frequency hearing loss increased with age to statistical significance; the incidence of speech frequency hearing loss did not. Thirteen patients complained of vertigo, and the incidence of vestibular disturbances and the value of canal paresis in the caloric test increased with statistical significance in parallel with increasing severity of facial paralysis. Conclusion Mild or moderate cochlear symptoms with high frequency hearing loss were related to age, and severe vestibular symptoms were related to the severity of facial paralysis after onset of herpetic symptoms. This study might
ERIC Educational Resources Information Center
Zheng, Yinggan; Gierl, Mark J.; Cui, Ying
2010-01-01
This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
Improved tests reveal that the accelerating moment release hypothesis is statistically insignificant
Hardebeck, J.L.; Felzer, K.R.; Michael, A.J.
2008-01-01
We test the hypothesis that accelerating moment release (AMR) is a precursor to large earthquakes, using data from California, Nevada, and Sumatra. Spurious cases of AMR can arise from data fitting because the time period, area, and sometimes magnitude range analyzed before each main shock are often optimized to produce the strongest AMR signal. Optimizing the search criteria can identify apparent AMR even if no robust signal exists. For both 1950-2006 California-Nevada M ??? 6.5 earthquakes and the 2004 M9.3 Sumatra earthquake, we can find two contradictory patterns in the pre-main shock earthquakes by data fitting: AMR and decelerating moment release. We compare the apparent AMR found in the real data to the apparent AMR found in four types of synthetic catalogs with no inherent AMR. When spatiotemporal clustering is included in the simulations, similar AMR signals are found by data fitting in both the real and synthetic data sets even though the synthetic data sets contain no real AMR. These tests demonstrate that apparent AMR may arise from a combination of data fitting and normal foreshock and aftershock activity. In principle, data-fitting artifacts could be avoided if the free parameters were determined from scaling relationships between the duration and spatial extent of the AMR pattern and the magnitude of the earthquake that follows it. However, we demonstrate that previously proposed scaling relationships are unstable, statistical artifacts caused by the use of a minimum magnitude for the earthquake catalog that scales with the main shock magnitude. Some recent AMR studies have used spatial regions based on hypothetical stress loading patterns, rather than circles, to select the data. We show that previous tests were biased and that unbiased tests do not find this change to the method to be an improvement. The use of declustered catalogs has also been proposed to eliminate the effect of clustering but we demonstrate that this does not increase the
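The data-fitting effect described above is, at root, a multiple-comparisons artifact, which a generic simulation makes vivid: scanning many candidate space-time windows of pure noise and keeping the best one routinely yields a nominally "significant" signal. This is an illustration of the selection effect only, not a reconstruction of the authors' synthetic catalogs; the window and trial counts are arbitrary.

```python
import math
import random

random.seed(5)

def two_sided_p(z):
    """Two-sided p-value for a standard normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_catalogs, n_windows = 500, 100
hits = 0
for _ in range(n_catalogs):
    # 100 candidate windows, all pure noise: no real precursory signal.
    min_p = min(two_sided_p(random.gauss(0, 1)) for _ in range(n_windows))
    if min_p < 0.05:
        hits += 1

# Fraction of noise-only "catalogs" whose best-fitting window looks
# significant at the nominal 5% level (expected ~ 1 - 0.95**100 ~ 0.99).
frac = hits / n_catalogs
```

This is why the paper insists that the free parameters (window duration, area, minimum magnitude) be fixed in advance rather than optimized per mainshock.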
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2005-01-01
To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
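A closely related trimmed-means procedure, Yuen's Welch-type test (shown here without the Hall transformation the article adds), illustrates the core idea: compare trimmed means using Winsorized variances and Welch degrees of freedom. The 20% trimming proportion and the toy samples, one containing an outlier, are assumptions.

```python
import math

def trimmed_stats(x, trim=0.2):
    """Trimmed mean and Winsorized sum of squares, as used by Yuen's test."""
    x = sorted(x)
    g = int(trim * len(x))
    trimmed = x[g:len(x) - g]
    tmean = sum(trimmed) / len(trimmed)
    wins = [x[g]] * g + trimmed + [x[-g - 1]] * g   # Winsorized sample
    wmean = sum(wins) / len(wins)
    ssw = sum((v - wmean) ** 2 for v in wins)
    return tmean, ssw, len(trimmed)

def yuen_t(x, y, trim=0.2):
    """Yuen-Welch statistic and degrees of freedom for two trimmed means."""
    t1, ss1, h1 = trimmed_stats(x, trim)
    t2, ss2, h2 = trimmed_stats(y, trim)
    d1 = ss1 / (h1 * (h1 - 1))
    d2 = ss2 / (h2 * (h2 - 1))
    t = (t1 - t2) / math.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df

x = [4.1, 4.5, 4.8, 5.0, 5.2, 5.4, 5.6, 6.0, 6.3, 25.0]  # heavy-tailed group
y = [3.0, 3.2, 3.5, 3.6, 3.8, 4.0, 4.1, 4.3, 4.6, 4.9]
t, df = yuen_t(x, y)
```

Trimming keeps the extreme value 25.0 from swamping the group difference, which is exactly why trimmed-means statistics remain powerful under nonnormality.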
NASA Astrophysics Data System (ADS)
Anderson, Greg; Johnson, Hadley
1999-09-01
Over the past several years, many investigators have argued that static stress changes caused by large earthquakes influence the spatial and temporal distributions of subsequent regional seismicity, with earthquakes occurring preferentially in areas of stress increase and reduced seismicity where stress decreases. Some workers have developed quantitative methods to test for the existence of such static stress triggering, but no firm consensus has yet been reached as to the significance of these effects. We have developed a new test for static stress triggering in which we compute the change in Coulomb stress on the focal mechanism nodal planes of a set of events spanning the occurrence of a large earthquake. We compare the statistical distributions of these stress changes for events before and after the mainshock to decide if we can reject the hypothesis that these distributions are the same. Such rejection would be evidence for stress triggering. We have applied this test to the November 24, 1987, Elmore Ranch/Superstition Hills earthquake sequence and find that those post-mainshock events that experienced stress increases of at least 0.01-0.03 MPa (0.1-0.3 bar) or that occurred from 1.4 to 2.8 years after the mainshocks are consistent with having been triggered by mainshock-generated static stress changes.
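The distribution-comparison step can be sketched with a two-sample Kolmogorov-Smirnov statistic applied to hypothetical Coulomb stress changes resolved on nodal planes of events before and after a mainshock. The stress-change magnitudes, sample sizes, and the use of the large-sample 5% critical value are illustrative assumptions.

```python
import random

random.seed(9)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Hypothetical Coulomb stress changes (MPa) on event nodal planes;
# post-mainshock values shifted toward stress increase.
pre = [random.gauss(0.00, 0.05) for _ in range(150)]
post = [random.gauss(0.05, 0.05) for _ in range(150)]

d = ks_statistic(pre, post)
# Large-sample 5% critical value for equal sample sizes n = m = 150.
crit = 1.36 * (2 / 150) ** 0.5
```

Rejecting equality of the pre- and post-mainshock distributions, as here, is the kind of evidence for static stress triggering the test is designed to detect.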
Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind
2016-01-01
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances
NASA Astrophysics Data System (ADS)
Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng
2016-04-01
Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.
Early aftershocks statistics: first results of prospective test of alarm-based model (EAST)
NASA Astrophysics Data System (ADS)
Shebalin, Peter; Narteau, Clement; Holschneider, Matthias; Schorlemmer, Danijel
2010-05-01
It was shown recently that the c-value systematically changes across different faulting styles and thus may reflect the state of stress. Hypothesizing that smaller c-values indicate places more vulnerable to moderate and large earthquakes, we proposed a simple alarm-based forecasting model, called EAST, submitted for testing at CSEP in California (3-month, M ≥ 4 class); the official test started on July 1, 2009. We replaced the c-value with a more robust parameter, the geometric average of the aftershock elapsed times (the ea-value). We normalize the ea-value calculated for the last 5 years by the value calculated for the preceding 25 years. When and where the normalized ea-value exceeds a given threshold, an 'alarm' is issued: an earthquake is expected to occur within the next 3 months. Retrospective tests of the model show good and stable results (even better for targets M ≥ 5). During the first 6 months of the prospective test, 22 target earthquakes took place in the testing area. Fourteen of them (more than 60%) were forecast with an alarm threshold resulting in only 1% of space-time occupied by alarms (5% if space is normalized by past earthquake frequencies). This highly encouraging result was obtained mostly due to the successful forecast of the sequence of 11 earthquakes near Lone Pine on 1-9 October 2009. However, even if we disregard aftershocks as targets, 4 out of 9 main shocks occurred in alarms with a normalized ea-value threshold resulting in 2.5% of normalized space-time occupied by alarms; this result is also significant at the 1% level. To expand the evaluation of the EAST model relative to a larger number of forecast models, we have developed its frequency-based version. We estimate the expected frequency of earthquakes using joint retrospective statistics of targets and the ea-value.
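The alarm rule described in this abstract is simple to state in code. Below is a minimal illustration (function names, windows, and the threshold are hypothetical, not taken from the EAST implementation): the ea-value is the geometric mean of aftershock elapsed times, a recent window is normalized by a longer reference window, and an alarm is issued when the normalized value exceeds a threshold.

```python
import math

def ea_value(elapsed_times):
    """Geometric mean of aftershock elapsed times (the ea-value)."""
    logs = [math.log(t) for t in elapsed_times]
    return math.exp(sum(logs) / len(logs))

def normalized_ea(recent_times, reference_times):
    """ea-value for the recent window (e.g. 5 years) divided by the
    ea-value for the reference window (e.g. the preceding 25 years)."""
    return ea_value(recent_times) / ea_value(reference_times)

def alarm(recent_times, reference_times, threshold):
    """Issue an alarm when the normalized ea-value exceeds the threshold."""
    return normalized_ea(recent_times, reference_times) > threshold
```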
NASA Astrophysics Data System (ADS)
Guo, Bingjie; Bitner-Gregersen, Elzbieta Maria; Sun, Hui; Block Helmers, Jens
2013-04-01
Earlier investigations have indicated that proper prediction of nonlinear loads and responses due to nonlinear waves is important for ship safety in extreme seas. However, the nonlinear loads and responses in extreme seas have not yet been sufficiently investigated, particularly when rogue waves are considered. A question remains whether the existing linear codes can predict nonlinear loads and responses with satisfactory accuracy, and how large the deviations from linear predictions are. To address this, response statistics have been studied based on model tests carried out with an LNG tanker in the towing tank of the Technical University of Berlin (TUB), and compared with the statistics derived from numerical simulations using the DNV code WASIM, a potential-flow code for wave-ship interaction based on the 3D panel method that can perform both linear and nonlinear simulations. The numerical simulations with WASIM and the model tests in extreme and rogue waves have been performed. The analysis of ship motions (heave and pitch) and bending moments, in both regular and irregular waves, is performed. The results from the linear and nonlinear simulations are compared with experimental data to indicate the impact of wave nonlinearity on load and response calculations when a code based on the Rankine panel method is used. The study shows that nonlinearities may have a significant effect on extreme motions and bending moments generated by strongly nonlinear waves. The effect of water depth on ship responses is also demonstrated using numerical simulations. Uncertainties related to the results are discussed, with particular attention given to sampling variability.
Stanisavljevic, Dejana; Trajkovic, Goran; Marinkovic, Jelena; Bukumiric, Zoran; Cirkovic, Andja; Milic, Natasa
2014-01-01
Background Medical statistics has become important and relevant for future doctors, enabling them to practice evidence-based medicine. Recent studies report that students’ attitudes towards statistics play an important role in their statistics achievements. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. Methods The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. Results Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for the fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051–0.078) was below the suggested value of ≤0.08. Cronbach’s alpha for the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. Conclusion The present study provided evidence for the appropriate metric properties of the Serbian version of the SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS may be a reliable and valid instrument for identifying medical students’ attitudes towards statistics in the Serbian educational context.
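The reliability figure reported above (Cronbach's alpha of 0.90) follows from the standard formula relating item variances to total-score variance. A minimal sketch, with hypothetical score data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly consistent items give alpha = 1; unrelated items drive it toward 0.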
ERIC Educational Resources Information Center
Tabor, Josh
2010-01-01
On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
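Two textbook candidates for such a statistic (shown here only as illustrations; the actual student responses discussed in the paper may differ) are Pearson's second skewness coefficient and Bowley's quartile skewness:

```python
import statistics

def pearson_skew(data):
    """Pearson's second skewness coefficient: 3 * (mean - median) / stdev."""
    return 3 * (statistics.mean(data) - statistics.median(data)) / statistics.stdev(data)

def quartile_skew(data):
    """Bowley's quartile skewness: ((Q3 - Q2) - (Q2 - Q1)) / (Q3 - Q1)."""
    q1, q2, q3 = statistics.quantiles(data, n=4)
    return ((q3 - q2) - (q2 - q1)) / (q3 - q1)
```

Both are zero for symmetric data and positive for right-skewed data.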
Combining test statistics and models in bootstrapped model rejection: it is a balancing act
2014-01-01
Background Model rejections lie at the heart of systems biology, since they provide conclusive statements: that the corresponding mechanistic assumptions do not serve as valid explanations for the experimental data. Rejections are usually done using e.g. the chi-square test (χ²) or the Durbin-Watson test (DW). Analytical formulas for the corresponding distributions rely on assumptions that typically are not fulfilled. This problem is partly alleviated by the usage of bootstrapping, a computationally heavy approach for calculating an empirical distribution. Bootstrapping also allows for a natural extension to the estimation of joint distributions, but this feature has so far been little exploited. Results We herein show that simplistic combinations of bootstrapped tests, like the max or min of the individual p-values, give inconsistent, i.e. overly conservative or liberal, results. A new two-dimensional (2D) approach based on parametric bootstrapping, on the other hand, is found to be both consistent and more powerful than the individual tests, when tested on static and dynamic examples where the truth is known. In the same examples, the best-performing test is a 2D χ² vs. χ² test, where the second χ²-value comes from an additional help model and its ability to describe bootstraps from the tested model. This superiority is lost if the help model is too simple, or too flexible. If a useful help model is found, the most powerful approach is the bootstrapped log-likelihood ratio (LHR). We show that this is because the LHR is one-dimensional, because the second dimension comes at a cost, and because the LHR has retained most of the crucial information in the 2D distribution. These approaches statistically resolve a previously published rejection example for the first time. Conclusions We have shown how to, and how not to, combine tests in a bootstrap setting, when the combination is advantageous, and when it is advantageous to include a second model. These results also provide a deeper
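The core bootstrap idea, building an empirical null distribution for a test statistic by simulating from the fitted model, can be sketched as follows (a toy Gaussian model and χ²-style statistic, not the paper's systems-biology models):

```python
import random

def chi2_stat(data, mu, sigma):
    """Sum of squared standardized residuals against the model (mu, sigma)."""
    return sum(((x - mu) / sigma) ** 2 for x in data)

def bootstrap_pvalue(data, mu, sigma, n_boot=2000, seed=0):
    """Parametric bootstrap: simulate datasets from the fitted model to build
    an empirical distribution of the statistic, then locate the observed value."""
    rng = random.Random(seed)
    observed = chi2_stat(data, mu, sigma)
    n = len(data)
    exceed = sum(
        chi2_stat([rng.gauss(mu, sigma) for _ in range(n)], mu, sigma) >= observed
        for _ in range(n_boot)
    )
    return exceed / n_boot
```

A joint (2D) version would record two statistics per bootstrap sample and compare the observed pair against the empirical 2D distribution.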
DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data
Harris, S.P.
1997-09-18
This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing HydragardTM sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing HydragardTM sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials, which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the cold chemical method or a modified fusion method. The current analytical method uses a HydragardTM sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up.
Testing of an erosion-based landform evolution model using objective statistics
NASA Astrophysics Data System (ADS)
Willgoose, G.; Hancock, G.; Kuczera, G.
2003-04-01
Landform evolution models are increasingly being used to assess the long-term safety of repositories of hazardous waste under the action of erosion. The timescales required for safe containment typically vary from 200 to 1000 years. While these models are based on observed physics, small variations in the physics modelled (within the range of currently accepted agricultural erosion models) result in significant differences in the predictions at 1000 years. Testing the validity of landform evolution models is therefore crucial. These tests must be quantitative, because decisions are being made based on the quantitative predictions of the model. These tests must also be objective. A difficulty in designing the tests is that we are unable to perform repeatable experiments over timescales of 200-1000 years. A range of experiments that indirectly address the issue of timescale will be discussed, including accelerated laboratory experiments, degraded mine landscapes and undisturbed natural landscapes. A statistical framework will be presented that addresses the unrepeatability of field data, where each field site is unique. The methodology involves developing error bands for model predictions using Monte-Carlo simulation, comparing these bands with observed field data, and then assessing whether the field data are likely to have come from the probability distribution of model predictions. The methodology is demonstrated using the first author's SIBERIA landform evolution model on an undisturbed field catchment in northern Australia at Tin Camp Creek. It is concluded that SIBERIA does a good job of modelling the observed geomorphology. The value of the methodology for designing experiments in earth science and environmental applications will also be discussed.
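The Monte-Carlo error-band methodology described above can be sketched generically (the model and parameter sampler below are hypothetical placeholders, not SIBERIA):

```python
import random

def monte_carlo_band(model, param_sampler, n_runs=1000, level=0.95, seed=1):
    """Run the model under parameters drawn from their uncertainty
    distribution and return the empirical (lower, upper) prediction band."""
    rng = random.Random(seed)
    outputs = sorted(model(param_sampler(rng)) for _ in range(n_runs))
    lo = outputs[int((1 - level) / 2 * (n_runs - 1))]
    hi = outputs[int((1 + level) / 2 * (n_runs - 1))]
    return lo, hi

def consistent(observation, band):
    """Is the field observation inside the model's prediction band?"""
    return band[0] <= observation <= band[1]

# Hypothetical example: predicted erosion depth proportional to an uncertain
# erodibility factor drawn from a uniform distribution.
band = monte_carlo_band(lambda k: 10.0 * k, lambda rng: rng.uniform(0.8, 1.2))
```

An observation inside the band is consistent with the model; one outside it counts as evidence against the model at roughly the chosen level.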
Statistical testing of the full-range leadership theory in nursing.
Kanste, Outi; Kääriäinen, Maria; Kyngäs, Helvi
2009-12-01
The aim of this study is to test statistically the structure of the full-range leadership theory in nursing. The data were gathered by postal questionnaires from nurses and nurse leaders working in healthcare organizations in Finland. A follow-up study was performed 1 year later. The sample consisted of 601 nurses and nurse leaders, and the follow-up study had 78 respondents. The theory was tested through structural equation modelling, standard regression analysis and two-way ANOVA. Rewarding transformational leadership seems to promote, and passive laissez-faire leadership to reduce, willingness to exert extra effort, perceptions of leader effectiveness and satisfaction with the leader. Active management-by-exception seems to reduce willingness to exert extra effort and perceptions of leader effectiveness. Rewarding transformational leadership remained a strong explanatory factor for all outcome variables measured 1 year later. The data supported the main structure of the full-range leadership theory, lending support to the universal nature of the theory. PMID:19702652
Penarrubia, Jorge; Walker, Matthew G.
2012-11-20
We introduce the Minimum Entropy Method, a simple statistical technique for constraining the Milky Way gravitational potential and simultaneously testing different gravity theories directly from 6D phase-space surveys and without adopting dynamical models. We demonstrate that orbital energy distributions that are separable (i.e., independent of position) have an associated entropy that increases under wrong assumptions about the gravitational potential and/or gravity theory. Of known objects, 'cold' tidal streams from low-mass progenitors follow orbital distributions that most nearly satisfy the condition of separability. Although the orbits of tidally stripped stars are perturbed by the progenitor's self-gravity, systematic variations of the energy distribution can be quantified in terms of the cross-entropy of individual tails, giving further sensitivity to theoretical biases in the host potential. The feasibility of using the Minimum Entropy Method to test a wide range of gravity theories is illustrated by evolving restricted N-body models in a Newtonian potential and examining the changes in entropy introduced by Dirac, MONDian, and f(R) gravity modifications.
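The principle behind the method, that energies computed under the wrong potential spread out and raise the entropy, can be illustrated with a simple histogram entropy estimator (a 1-D sketch under assumed units; the paper works with full 6D phase-space data):

```python
import math

def shannon_entropy(samples, n_bins=20):
    """Histogram estimate of the Shannon entropy of a 1-D sample."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0  # guard against zero range
    counts = [0] * n_bins
    for s in samples:
        counts[min(int((s - lo) / width), n_bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def orbital_energies(positions, velocities, potential):
    """E = v^2/2 + Phi(x) for each star, under an assumed potential Phi."""
    return [0.5 * v * v + potential(x) for x, v in zip(positions, velocities)]
```

Under the correct potential a cold stream's energies cluster tightly (low entropy); a wrong potential or gravity theory spreads them out.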
FLAGS: A Flexible and Adaptive Association Test for Gene Sets Using Summary Statistics.
Huang, Jianfei; Wang, Kai; Wei, Peng; Liu, Xiangtao; Liu, Xiaoming; Tan, Kai; Boerwinkle, Eric; Potash, James B; Han, Shizhong
2016-03-01
Genome-wide association studies (GWAS) have been widely used for identifying common variants associated with complex diseases. Despite remarkable success in uncovering many risk variants and providing novel insights into disease biology, genetic variants identified to date fail to explain the vast majority of the heritability for most complex diseases. One explanation is that there are still a large number of common variants that remain to be discovered, but their effect sizes are generally too small to be detected individually. Accordingly, gene set analysis of GWAS, which examines a group of functionally related genes, has been proposed as a complementary approach to single-marker analysis. Here, we propose a FLexible and Adaptive test for Gene Sets (FLAGS), using summary statistics. Extensive simulations showed that this method has an appropriate type I error rate and outperforms existing methods with increased power. As a proof of principle, through real data analyses of Crohn's disease GWAS data and bipolar disorder GWAS meta-analysis results, we demonstrated the superior performance of FLAGS over several state-of-the-art association tests for gene sets. Our method allows for the more powerful application of gene set analysis to complex diseases, which will have broad use given that GWAS summary results are increasingly publicly available. PMID:26773050
Test statistics for the identification of assembly neurons in parallel spike trains.
Picado Muiño, David; Borgelt, Christian
2015-01-01
In recent years numerous improvements have been made in multiple-electrode recordings (i.e., parallel spike-train recordings) and spike sorting, to the extent that nowadays it is possible to monitor the activity of up to hundreds of neurons simultaneously. Due to these improvements it is now potentially possible to identify assembly activity (roughly understood as significant synchronous spiking of a group of neurons) from these recordings, which, if it can be demonstrated reliably, would significantly improve our understanding of neural activity and neural coding. However, several methodological problems remain when trying to do so and, among them, a principal one is the combinatorial explosion that one faces when considering all potential neuronal assemblies, since in principle every subset of the recorded neurons constitutes a candidate set for an assembly. We present several statistical tests to identify assembly neurons (i.e., neurons that participate in a neuronal assembly) from parallel spike trains, with the aim of reducing the set of neurons to a relevant subset and in this way easing the task of identifying neuronal assemblies in further analyses. These tests improve on those introduced in the work by Berger et al. (2010), drawing on additional features like spike weight or pairwise overlap and on alternative ways to identify spike coincidences (e.g., by avoiding time binning, which tends to lose information). PMID:25866503
Statistical methods for the analysis of a screening test for chronic beryllium disease
Frome, E.L.; Neubert, R.L.; Smith, M.H.; Littlefield, L.G.; Colyer, S.P.
1994-10-01
The lymphocyte proliferation test (LPT) is a noninvasive screening procedure used to identify persons who may have chronic beryllium disease. A practical problem in the analysis of LPT well counts is the occurrence of outlying data values (approximately 7% of the time). A log-linear regression model is used to describe the expected well counts for each set of test conditions. The variance of the well counts is proportional to the square of the expected counts, and two resistant regression methods are used to estimate the parameters of interest. The first approach uses least absolute values (LAV) on the log of the well counts to estimate beryllium stimulation indices (SIs) and the coefficient of variation. The second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of the resistant regression methods is that it is not necessary to identify and delete outliers. These two new methods for the statistical analysis of the LPT data and the outlier rejection method that is currently being used are applied to 173 LPT assays. The authors strongly recommend the LAV method for routine analysis of the LPT.
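Least-absolute-values fitting has no closed form, but one common way to compute it (a sketch, not necessarily the authors' algorithm) is iteratively reweighted least squares, which down-weights outlying well counts automatically:

```python
import numpy as np

def lad_fit(X, y, n_iter=100, eps=1e-8):
    """Least-absolute-values (LAD/LAV) regression via iteratively
    reweighted least squares; resistant to outlying observations."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from ordinary LS
    for _ in range(n_iter):
        r = np.abs(y - X @ beta)
        w = np.sqrt(1.0 / np.maximum(r, eps))     # weight ~ 1 / |residual|
        beta = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]
    return beta
```

With an intercept-only design this reduces to the median, which is why such a fit shrugs off the roughly 7% of outlying well counts without explicit outlier rejection.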
A Tool Preference Choice Method for RNA Secondary Structure Prediction by SVM with Statistical Tests
Hor, Chiou-Yi; Yang, Chang-Biau; Chang, Chia-Hung; Tseng, Chiou-Ting; Chen, Hung-Hsin
2013-01-01
The prediction of RNA secondary structures has drawn much attention from both biologists and computer scientists. Many useful tools have been developed for this purpose, each with its individual strengths and weaknesses. Accordingly, based on support vector machines (SVM), we propose a tool choice method which integrates three prediction tools: pknotsRG, RNAStructure, and NUPACK. Our method first extracts features from the target RNA sequence and adopts two information-theoretic feature selection methods for feature ranking. We propose a method to combine feature selection and classifier fusion in an incremental manner. Our test data set contains 720 RNA sequences, where 225 pseudoknotted RNA sequences are obtained from PseudoBase, and 495 nested RNA sequences are obtained from RNA SSTRAND. The method serves as a preprocessing step in analyzing RNA sequences before the RNA secondary structure prediction tools are employed. In addition, the performance of various configurations is subjected to statistical tests to examine its significance. The best base-pair accuracy achieved is 75.5%, which is obtained by the proposed incremental method and is significantly higher than the 68.8% associated with the best single predictor, pknotsRG. PMID:23641141
Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.
2014-12-15
Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves
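The individuals control chart used for the leaf-position data can be sketched with the standard moving-range limits (constant 2.66 = 3/d2, with d2 = 1.128 for subgroups of two); the measurements below are hypothetical, not MLC data:

```python
def individuals_chart(values):
    """Individuals (X) chart limits from the average moving range:
    center +/- 2.66 * mean(|x[i+1] - x[i]|)."""
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

def out_of_control(values, lcl, ucl):
    """Indices of measurements falling outside the control limits."""
    return [i for i, v in enumerate(values) if not (lcl <= v <= ucl)]
```

Points beyond the limits are flagged for review, exactly in the spirit of the monthly physicist review described above.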
A statistical test on the reliability of the non-coevality of stars in binary systems
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2016-03-01
Aims: We develop a statistical test on the expected difference in age estimates of two coeval stars in detached double-lined eclipsing binary systems that is caused only by observational uncertainties. We focus on stars in the mass range [0.8; 1.6] M⊙, with an initial metallicity [Fe/H] from -0.55 to 0.55 dex, and on stars in the main-sequence phase. Methods: The ages were obtained by means of the SCEPtER technique, a maximum-likelihood procedure relying on a pre-computed grid of stellar models. The observational constraints used in the recovery procedure are stellar mass, radius, effective temperature, and metallicity [Fe/H]. To check the effect of observational uncertainties on the (non-)coevality assessment, the chosen observational constraints were subjected to a Gaussian perturbation before applying the SCEPtER code. We defined the statistic W, computed as the ratio of the absolute difference of the estimated ages of the two stars over the age of the older one. We determined the critical values of this statistic, above which coevality can be rejected, in dependence on the mass of the two stars, on the initial metallicity [Fe/H], and on the evolutionary stage of the primary star. Results: The median expected difference in the reconstructed age between the coeval stars of a binary system - caused by the observational uncertainties alone - shows a strong dependence on the evolutionary stage. It ranges from about 20% for an evolved primary star to about 75% for a near-ZAMS primary. The median difference also increases with the mass of the primary star, from 20% for 0.8 M⊙ stars to about 50% for 1.6 M⊙ stars. The reliability of these results was checked by repeating the process with a grid of stellar models computed by a different evolutionary code; the median difference in the critical values was only 0.01. We show that the W test is much more sensitive to age differences in the binary system components than the alternative approach of
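The W statistic defined above is straightforward to compute; the critical values in the paper depend on mass, metallicity, and evolutionary stage, so the threshold below is a hypothetical placeholder:

```python
def w_statistic(age1, age2):
    """W: absolute difference of estimated ages over the age of the older star."""
    return abs(age1 - age2) / max(age1, age2)

def reject_coevality(age1, age2, critical_w):
    """Reject coevality when W exceeds the tabulated critical value."""
    return w_statistic(age1, age2) > critical_w
```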
KURETZKI, Carlos Henrique; CAMPOS, Antônio Carlos Ligocki; MALAFAIA, Osvaldo; SOARES, Sandramara Scandelari Kusano de Paula; TENÓRIO, Sérgio Bernardo; TIMI, Jorge Rufino Ribas
2016-01-01
Background: Information technology is widely applied in healthcare. With regard to scientific research, the SINPE(c) - Integrated Electronic Protocols was created as a tool to support researchers, offering clinical data standardization. Until then, SINPE(c) lacked statistical tests obtained by automatic analysis. Aim: To add to SINPE(c) features for automatic execution of the main statistical methods used in medicine. Methods: The study was divided into four topics: checking the interest of users in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their stricto sensu master's and doctoral theses in a postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those obtained manually by a statistics professional experienced with this type of study. Results: There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher exact, and Student's t tests were considered tests frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. Conclusion: The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equivalent to the manual analysis, validating its use as a tool for medical research. PMID:27120732
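The four tests named in this abstract are all available in SciPy; the sketch below shows their standard invocations on hypothetical data (this is not the SINPE(c) implementation):

```python
from scipy import stats

# Hypothetical measurements from two study arms (illustration only).
group_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
group_b = [11.2, 11.5, 11.0, 11.4, 11.6, 11.1]

t_stat, t_p = stats.ttest_ind(group_a, group_b)            # Student's t test
u_stat, u_p = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")  # Mann-Whitney U test
table = [[20, 10], [5, 25]]                                # 2x2 contingency counts
chi2, chi_p, dof, expected = stats.chi2_contingency(table) # chi-square test
odds_ratio, fisher_p = stats.fisher_exact(table)           # Fisher exact test
```

Validating an automated pipeline then amounts to checking that its p-values match those from such reference invocations.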
Chlorine-36 data at Yucca Mountain: Statistical tests of conceptual models for unsaturated-zone flow
Campbell, K.; Wolfsberg, A.; Fabryka-Martin, J.; Sweetkind, D.
2003-01-01
An extensive set of chlorine-36 (36Cl) data has been collected in the Exploratory Studies Facility (ESF), an 8-km-long tunnel at Yucca Mountain, Nevada, for the purpose of developing and testing conceptual models of flow and transport in the unsaturated zone (UZ) at this site. At several locations, the measured values of 36Cl/Cl ratios for salts leached from rock samples are high enough to provide strong evidence that at least a small component of bomb-pulse 36Cl, fallout from atmospheric testing of nuclear devices in the 1950s and 1960s, was measured, implying that some fraction of the water traveled from the ground surface through 200-300 m of unsaturated rock to the level of the ESF during the last 50 years. These data are analyzed here using a formal statistical approach based on log-linear models to evaluate alternative conceptual models for the distribution of such fast flow paths. The most significant determinant of the presence of bomb-pulse 36Cl in a sample from the welded Topopah Spring unit (TSw) is the structural setting from which the sample was collected. Our analysis generally supports the conceptual model that a fault that cuts through the nonwelded Paintbrush tuff unit (PTn) that overlies the TSw is required in order for bomb-pulse 36Cl to be transmitted to the sample depth in less than 50 years. Away from PTn-cutting faults, the ages of water samples at the ESF appear to be a strong function of the thickness of the nonwelded tuff between the ground surface and the ESF, due to slow matrix flow in that unit. © 2002 Elsevier Science B.V. All rights reserved.
Statistical Risk Estimation for Communication System Design: Results of the HETE-2 Test Case
NASA Astrophysics Data System (ADS)
Babuscia, A.; Cheung, K.-M.
2014-05-01
The Statistical Risk Estimation (SRE) technique described in this article is a methodology to quantify the likelihood that the major design drivers of mass and power of a space system meet the spacecraft and mission requirements and constraints through the design and development lifecycle. The SRE approach addresses the long-standing challenges of small sample size and unclear evaluation path of a space system, and uses a combination of historical data and expert opinions to estimate risk. Although the methodology is applicable to the entire spacecraft, this article is focused on a specific subsystem: the communication subsystem. Using this approach, the communication system designers will be able to evaluate and to compare different communication architectures in a risk trade-off perspective. SRE was introduced in two previous papers. This article aims to present additional results of the methodology by adding a new test case from a university mission, the High-Energy Transient Experiment (HETE)-2. The results illustrate the application of SRE to estimate the risks of exceeding constraints in mass and power, hence providing crucial risk information to support a project's decision on requirements rescope and/or system redesign.
Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Dorssaeler, Alain Van; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne
2015-01-01
This data article describes a controlled, spiked proteomic dataset for which the “ground truth” of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tools parameters, but also testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in detail in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tools results for the detection of variant proteins with different absolute expression levels and fold change values. PMID:26862574
Longitudinal change detection in diffusion MRI using multivariate statistical testing on tensors.
Grigis, Antoine; Noblet, Vincent; Heitz, Fabrice; Blanc, Frédéric; de Sèze, Jérome; Kremer, Stéphane; Rumbach, Lucien; Armspach, Jean-Paul
2012-05-01
This paper presents a longitudinal change detection framework for detecting relevant modifications in diffusion MRI, with application to neuromyelitis optica (NMO) and multiple sclerosis (MS). The core problem is to identify image regions that are significantly different between two scans. The proposed method is based on multivariate statistical testing which was initially introduced for tensor population comparison. We use this method in the context of longitudinal change detection by considering several strategies to build sets of tensors characterizing the variability of each voxel. These strategies make use of the variability existing in the diffusion weighted images (thanks to a bootstrap procedure), or in the spatial neighborhood of the considered voxel, or a combination of both. Results on synthetic evolutions and on real data are presented. Interestingly, experiments on NMO patients highlight the ability of the proposed approach to detect changes in the normal-appearing white matter (according to conventional MRI) that are related to physical status outcome. Experiments on MS patients highlight the ability of the proposed approach to detect changes in evolving and non-evolving lesions (according to conventional MRI). These findings might open promising prospects for the follow-up of NMO and MS pathologies. PMID:22387171
ERIC Educational Resources Information Center
Jones, Andrew T.
2011-01-01
Practitioners often depend on item analysis to select items for exam forms and have a variety of options available to them. These include the point-biserial correlation, the agreement statistic, the B index, and the phi coefficient. Although research has demonstrated that these statistics can be useful for item selection, no research as of yet has…
The T(ea) Test: Scripted Stories Increase Statistical Method Selection Skills
ERIC Educational Resources Information Center
Hackathorn, Jana; Ashdown, Brien
2015-01-01
To teach statistics, teachers must attempt to overcome pedagogical obstacles, such as dread, anxiety, and boredom. There are many options available to teachers that facilitate a pedagogically conducive environment in the classroom. The current study examined the effectiveness of incorporating scripted stories and humor into statistical method…
ERIC Educational Resources Information Center
Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza
2014-01-01
This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
The Effects of Pre-Lecture Quizzes on Test Anxiety and Performance in a Statistics Course
ERIC Educational Resources Information Center
Brown, Michael J.; Tallon, Jennifer
2015-01-01
The purpose of our study was to examine the effects of pre-lecture quizzes in a statistics course. Students (N = 70) from 2 sections of an introductory statistics course served as participants in this study. One section completed pre-lecture quizzes whereas the other section did not. Completing pre-lecture quizzes was associated with improved exam…
Brown, Geoffrey W.; Sandstrom, Mary M.; Preston, Daniel N.; Pollard, Colin J.; Warner, Kirstin F.; Sorensen, Daniel N.; Remmers, Daniel L.; Phillips, Jason J.; Shelley, Timothy J.; Reyes, Jose A.; Hsu, Peter C.; Reynolds, John G.
2014-04-10
In this study, the Integrated Data Collection Analysis (IDCA) program has conducted a proficiency test for small-scale safety and thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results from this test for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Class 5 Type II standard. The material was tested as a well-characterized standard several times during the proficiency test to assess differences among participants and the range of results that may arise for well-behaved explosive materials.
NASA Astrophysics Data System (ADS)
Shiraishi, Maresuke; Hikage, Chiaki; Namba, Ryo; Namikawa, Toshiya; Hazumi, Masashi
2016-08-01
The B-mode polarization in the cosmic microwave background (CMB) anisotropies at large angular scales provides compelling evidence for the primordial gravitational waves (GWs). It is often stated that a discovery of the GWs establishes the quantum fluctuation of vacuum during the cosmic inflation. Since the GWs could also be generated by source fields, however, we need to check if a sizable signal exists due to such source fields before reaching a firm conclusion when the B mode is discovered. Source fields of particular types can generate non-Gaussianity (NG) in the GWs. Testing statistics of the B mode is a powerful way of detecting such NG. As a concrete example, we show a model in which a gauge field sources chiral GWs via a pseudoscalar coupling and forecast the detection significance at the future CMB satellite LiteBIRD. Effects of residual foregrounds and lensing B mode are both taken into account. We find the B-mode bispectrum "BBB" is in particular sensitive to the source-field NG, which is detectable at LiteBIRD with a >3 σ significance. Therefore the search for the BBB will be indispensable toward unambiguously establishing quantum fluctuation of vacuum when the B mode is discovered. We also introduce the Minkowski functional to detect the NG. While we find that the Minkowski functional is less efficient than the harmonic-space bispectrum estimator, it still serves as a useful cross-check. Finally, we also discuss the possibility of extracting clean information on parity violation of GWs and new types of parity-violating observables induced by lensing.
An objective statistical test for eccentricity forcing of Oligo-Miocene climate
NASA Astrophysics Data System (ADS)
Proistosescu, C.; Huybers, P.; Maloof, A. C.
2008-12-01
We seek a maximally objective test for the presence of orbital features in Oligocene and Miocene δ18O records from marine sediments. Changes in Earth's orbital eccentricity are thought to be an important control on the long term variability of climate during the Oligocene and Miocene Epochs. However, such an important control from eccentricity is surprising because eccentricity has relatively little influence on Earth's annual average insolation budget. Nevertheless, if significant eccentricity variability is present, it would provide important insight into the operation of the climate system at long timescales. Here we use previously published data, but using a chronology which is initially independent of orbital assumptions, to test for the presence of eccentricity period variability in the Oligocene/Miocene sediment records. In contrast to the sawtooth climate record of the Pleistocene, the Oligocene and Miocene climate record appears smooth and symmetric and does not reset itself every hundred thousand years. This smooth variation, as well as the time interval spanning many eccentricity periods, makes Oligocene and Miocene paleorecords very suitable for evaluating the importance of eccentricity forcing. First, we construct time scales depending only upon the ages of geomagnetic reversals with intervening ages linearly interpolated with depth. Such a single age-depth relationship is, however, too uncertain to assess whether orbital features are present. Thus, we construct a second depth-derived age-model by averaging ages across multiple sediment cores which have, at least partly, independent accumulation rate histories. But ages are still too uncertain to permit unambiguous detection of orbital variability. Thus we employ limited tuning assumptions and measure the degree by which orbital-period variability increases, using spectral power estimates. By tuning we know that we are biasing the record toward showing orbital variations, but we account for this bias in our
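The final step — measuring how much spectral power sits at an orbital period — can be illustrated with a minimal single-period periodogram. The estimator and the synthetic series below are simplifications assumed for illustration, not the study's actual spectral analysis:

```python
import math

def spectral_power(times, values, period):
    """Power at one trial period via projection onto sine and cosine
    (a minimal periodogram for an evenly sampled, demeaned series)."""
    w = 2 * math.pi / period
    mean = sum(values) / len(values)
    c = sum((v - mean) * math.cos(w * t) for t, v in zip(times, values))
    s = sum((v - mean) * math.sin(w * t) for t, v in zip(times, values))
    return (c * c + s * s) / len(values)

# synthetic d18O-like series with a pure 100 kyr eccentricity line:
# 2 kyr sampling over a 1 Myr span
times = [i * 2.0 for i in range(500)]
series = [math.sin(2 * math.pi * t / 100.0) for t in times]
p_100 = spectral_power(times, series, 100.0)   # on the orbital period
p_73 = spectral_power(times, series, 73.0)     # off-period comparison
```

Comparing on-period to off-period power before and after tuning is one way to quantify how much the tuning itself inflates the orbital signal.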
ERIC Educational Resources Information Center
Callamaras, Peter
1983-01-01
This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)
ERIC Educational Resources Information Center
Meyer, Donald L.
Bayesian statistical methodology and its possible uses in the behavioral sciences are discussed in relation to the solution of problems in both the use and teaching of fundamental statistical methods, including confidence intervals, significance tests, and sampling. The Bayesian model explains these statistical methods and offers a consistent…
Using the Δ3 statistic to test for missed levels in neutron resonance data
Mulhall, Declan
2009-03-31
The Δ3(L) statistic is studied as a possible tool to detect missing levels in the neutron resonance data of odd-A nuclei. A Δ3(L) analysis of neutron resonance data is compared with the results of a maximum likelihood method applied to the level spacing distribution. The Δ3(L) statistic compares favorably with the level spacing distribution as a tool to gauge the completeness of the data.
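A sketch of how the Dyson-Mehta Δ3(L) statistic separates a rigid spectrum from an uncorrelated one: fit a straight line to the level staircase N(E) over a window of length L and take the mean squared residual. The grid approximation and the unit-mean-spacing test sequences are illustrative choices, not the paper's implementation:

```python
import bisect
import random

def delta3(levels, L, x0, grid=400):
    """Grid approximation of Delta3(L): least-squares line through the
    level staircase N(E) on [x0, x0 + L], then the mean squared
    residual (approximating the defining integral)."""
    xs = [x0 + L * (i + 0.5) / grid for i in range(grid)]
    ns = [float(bisect.bisect_right(levels, x)) for x in xs]
    mx = sum(xs) / grid
    mn = sum(ns) / grid
    sxx = sum((x - mx) ** 2 for x in xs)
    sxn = sum((x - mx) * (n - mn) for x, n in zip(xs, ns))
    slope = sxn / sxx
    intercept = mn - slope * mx
    return sum((n - slope * x - intercept) ** 2
               for x, n in zip(xs, ns)) / grid

# rigid "picket fence" spectrum vs. uncorrelated (Poisson) levels,
# both with unit mean spacing
picket = [float(i) for i in range(200)]
rng = random.Random(7)
poisson_levels, e = [], 0.0
for _ in range(200):
    e += rng.expovariate(1.0)
    poisson_levels.append(e)
d_picket = delta3(picket, 20.0, 50.0)
d_poisson = delta3(poisson_levels, 20.0, 50.0)
```

For a picket fence Δ3 stays near 1/12 regardless of L, while for Poisson levels it grows roughly as L/15; missed levels shift real data between these benchmarks.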
Technology Transfer Automated Retrieval System (TEKTRAN)
Whether a required Salmonella test series is passed or failed depends not only on the presence of the bacteria, but also on the methods for taking the samples, the methods for culturing the samples, and the statistics associated with the sampling plan. A spreadsheet program was used to perform a Mo...
Technology Transfer Automated Retrieval System (TEKTRAN)
Whether a required Salmonella test series is passed or failed depends not only on the presence of the bacteria, but also on the methods for taking samples, the methods for culturing samples, and the statistics associated with the sampling plan. The pass-fail probabilities of the two-class attribute...
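The pass-fail arithmetic of a two-class attributes plan reduces to a binomial tail sum: the lot passes when at most c of n samples test positive. The parameter values below are hypothetical:

```python
from math import comb

def prob_pass(n, c, p):
    """Lot-acceptance probability for a two-class attributes plan:
    pass when at most c of the n samples test positive, given a
    per-sample detection probability p (illustrative model)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c + 1))

# e.g. 10 samples, zero positives tolerated, 10% per-sample detection
p_pass = prob_pass(10, 0, 0.1)
```

Plotting `prob_pass` against p gives the operating-characteristic curve that such sampling plans are judged by.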
Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F
2013-08-01
To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will
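The review's point that count data call for a GLM rather than ANOVA can be illustrated in its simplest case: a Poisson likelihood-ratio test for two groups, i.e., a one-factor count-data GLM with no blocks or covariates. The counts below are invented for illustration:

```python
import math

def poisson_lrt(counts_a, counts_b):
    """Likelihood-ratio statistic G2 for equal Poisson means in two
    groups; compare to chi-square with 1 df (3.84 at alpha = 0.05)."""
    def loglik(counts, lam):
        return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
                   for k in counts)
    la = sum(counts_a) / len(counts_a)
    lb = sum(counts_b) / len(counts_b)
    pooled = (sum(counts_a) + sum(counts_b)) / (len(counts_a) + len(counts_b))
    return 2 * (loglik(counts_a, la) + loglik(counts_b, lb)
                - loglik(counts_a, pooled) - loglik(counts_b, pooled))

# hypothetical NTO counts on control vs. GM plots
g2_diff = poisson_lrt([2, 3, 2, 4, 3], [9, 10, 8, 11, 12])
g2_same = poisson_lrt([3, 4, 3], [3, 4, 3])
```

A full field-trial analysis would add block effects and covariates, which is exactly where the review argues GLMs are needed over ANOVA.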
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
NASA Astrophysics Data System (ADS)
Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal
2013-04-01
The application of space borne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques to analyze any sort of natural phenomena involving movements of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among the end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information embraced in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series with erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure classifies the time series into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the
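A reduced sketch of the trend-classification idea, using BIC to choose among constant, linear, and quadratic fits to a time series. The paper's full six-class scheme (bilinear and discontinuous trends) and its actual characterization tests are not reproduced:

```python
import math

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations with
    Gaussian elimination (fine for the tiny degrees used here)."""
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, m))) / A[r][r]
    return coef

def classify_trend(ts, ys):
    """Pick 'uncorrelated', 'linear', or 'quadratic' by BIC
    (a reduced version of the paper's six-class scheme)."""
    n = len(ts)
    best = None
    for label, deg in [("uncorrelated", 0), ("linear", 1), ("quadratic", 2)]:
        coef = polyfit(ts, ys, deg)
        rss = sum((y - sum(c * t ** i for i, c in enumerate(coef))) ** 2
                  for t, y in zip(ts, ys))
        # small floor keeps log finite for near-perfect fits
        bic = n * math.log(rss / n + 1e-12) + (deg + 1) * math.log(n)
        if best is None or bic < best[0]:
            best = (bic, label)
    return best[1]

ts = [float(t) for t in range(20)]
```

The descriptive parameters the paper retrieves (velocities, break dates) would come from the winning model's coefficients.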
NASA Astrophysics Data System (ADS)
Hilborn, Robert C.
1997-04-01
The connection between the spin of particles and the permutation symmetry ("statistics") of multiparticle states lies at the heart of much of atomic, molecular, condensed matter, and nuclear physics. The spin-statistics theorem of relativistic quantum field theory seems to provide a theoretical basis for this connection. There are, however, loopholes (O. W. Greenberg, Phys. Rev. D 43, 4111 (1991).) that allow for a field theory of identical particles whose statistics interpolate smoothly between those of bosons and fermions. Thus, it is up to experiment to reveal how closely nature follows the usual spin-statistics connection. After reviewing experiments that provide stringent limits on possible violations of the spin-statistics connection for electrons, I shall describe recent analogous experiments for spin-0 particles (R. C. Hilborn and C. L. Yuca, Phys. Rev. Lett. 76, 2844 (1996).) using diode laser spectroscopy of the A-band of molecular oxygen near 760 nm. These experiments show that the probability of finding two ^16O nuclei (spin-0 particles) in an antisymmetric state is less than 1ppm. I shall also discuss proposals to test the spin-statistics connection for photons.
ERIC Educational Resources Information Center
Adams, David R.
1977-01-01
Discusses the application of the Kolmogorov-Smirnov two-sample tests, as an alternative to the Chi-square test, for survey research problems in business education and includes a computer program written for the convenience of researchers. The two-sample test is recommended for differentiating independent distributions. (MF)
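The two-sample Kolmogorov-Smirnov statistic recommended above is simply the largest gap between the two empirical distribution functions, e.g.:

```python
import bisect

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical distribution functions."""
    a, b = sorted(a), sorted(b)

    def ecdf(s, x):
        # fraction of sample s that is <= x
        return bisect.bisect_right(s, x) / len(s)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a + b)))
```

The statistic is then compared against a critical value depending on the two sample sizes; unlike chi-square, no binning of the survey responses is needed.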
Nomogram for Obtaining Z-Test Statistic from Kendall's S and Sample Size 10 to 50.
ERIC Educational Resources Information Center
Graney, Marshall J.
1979-01-01
Kendall's S is often used for measuring magnitude of bivariate association in social and behavioral research. This nomogram permits a research analyst to make rapid and accurate evaluation of the statistical significance of S without recourse to formulae or computations in all except borderline cases. (Author/CTM)
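The computation such a nomogram replaces is a one-line standardization of S by its null variance, the usual large-sample normal approximation (some texts also apply a continuity correction to S, omitted here):

```python
import math

def kendall_z(S, n):
    """Z statistic for Kendall's S under the null hypothesis of no
    association: standardize by the null variance n(n-1)(2n+5)/18.
    (No continuity correction; no tie correction.)"""
    var_s = n * (n - 1) * (2 * n + 5) / 18
    return S / math.sqrt(var_s)
```

For n = 10, var(S) = 125, so S = 25 gives z ≈ 2.24, significant at the two-sided 0.05 level, which is the kind of borderline case the abstract notes still warrants exact computation.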
The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups
ERIC Educational Resources Information Center
Pero-Cebollero, Maribel; Guardia-Olmos, Joan
2013-01-01
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…
Comment on a Wilcox Test Statistic for Comparing Means When Variances Are Unequal.
ERIC Educational Resources Information Center
Hsiung, Tung-Hsing; And Others
1994-01-01
The alternative proposed by Wilcox (1989) to the James second-order statistic for comparing population means when variances are heterogeneous can sometimes be invalid. The degree to which the procedure is invalid depends on differences in sample size, the expected values of the observations, and population variances. (SLD)
Basic Mathematics Test Predicts Statistics Achievement and Overall First Year Academic Success
ERIC Educational Resources Information Center
Fonteyne, Lot; De Fruyt, Filip; Dewulf, Nele; Duyck, Wouter; Erauw, Kris; Goeminne, Katy; Lammertyn, Jan; Marchant, Thierry; Moerkerke, Beatrijs; Oosterlinck, Tom; Rosseel, Yves
2015-01-01
In the psychology and educational science programs at Ghent University, only 36.1% of the new incoming students in 2011 and 2012 passed all exams. Despite availability of information, many students underestimate the scientific character of social science programs. Statistics courses are a major obstacle in this matter. Not all enrolling students…
Accuracy of Estimates and Statistical Power for Testing Mediation in Latent Growth Curve Modeling
ERIC Educational Resources Information Center
Cheong, JeeWon
2011-01-01
The latent growth curve modeling (LGCM) approach has been increasingly utilized to investigate longitudinal mediation. However, little is known about the accuracy of the estimates and statistical power when mediation is evaluated in the LGCM framework. A simulation study was conducted to address these issues under various conditions including…
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
ERIC Educational Resources Information Center
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
ERIC Educational Resources Information Center
American Council on Education, Washington, DC. General Educational Development Testing Service.
Data are presented for applicants taking the test of the General Educational Development Testing Service (GEDTS) under these categories: United States, states and territories, VA hospitals and GEDTS offices, Canada and provinces, and high school level GED testing at official centers from 1949-1973 for veterans, nonveterans, and unclassified…
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
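Of the four tests compared, Fisher's Exact Test is the one that is exact at these small sample sizes. A one-sided version for a 2x2 table is a short hypergeometric sum (the two-sided version needs a convention for "as extreme" and is omitted here):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]]: probability, with all margins fixed, of a
    table with cell a as large or larger than observed."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # hypergeometric probability of x successes in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    return sum(p_table(x) for x in range(a, min(row1, col1) + 1))
```

For the table [[3, 1], [1, 3]] this gives p = 17/70 ≈ 0.243, illustrating how little power is available at such sample sizes.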
ERIC Educational Resources Information Center
Reese, Lynda M.
This study extended prior Law School Admission Council (LSAC) research related to the item response theory (IRT) local item independence assumption into the realm of classical test theory. Initially, results from the Law School Admission Test (LSAT) and two other tests were investigated to determine the approximate state of local item independence…
Statistical Profiling of Academic Oral English Proficiency Based on an ITA Screening Test
ERIC Educational Resources Information Center
Choi, Ick Kyu
2013-01-01
At the University of California, Los Angeles, the Test of Oral Proficiency (TOP), an internally developed oral proficiency test, is administered to international teaching assistant (ITA) candidates to ensure an appropriate level of academic oral English proficiency. Test taker performances are rated live by two raters according to four subscales.…
Consistency in statistical moments as a test for bubble cloud clustering.
Weber, Thomas C; Lyons, Anthony P; Bradley, David L
2011-11-01
Frequency dependent measurements of attenuation and/or sound speed through clouds of gas bubbles in liquids are often inverted to find the bubble size distribution and the void fraction of gas. The inversions are often done using an effective medium theory as a forward model under the assumption that the bubble positions are Poisson distributed (i.e., statistically independent). Under circumstances in which single scattering does not adequately describe the pressure field, the assumption of independence in position can yield large errors when clustering is present, leading to errors in the inverted bubble size distribution. It is difficult, however, to determine the existence of clustering in bubble clouds without the use of specialized acoustic or optical imaging equipment. A method is described here in which the existence of bubble clustering can be identified by examining the consistency between the first two statistical moments of multiple frequency acoustic measurements. PMID:22088013
Numerical Model-Reality Intercomparison Tests Using Small-Sample Statistics.
NASA Astrophysics Data System (ADS)
Preisendorfer, Rudolph W.; Barnett, Tim P.
1983-08-01
When a numerical model's representation of a physical field is to be compared with a corresponding real observed field, it is usually the case that the numbers of realizations of model and observed field are relatively small, so that the natural procedure of producing histograms of pertinent statistics of the fields (e.g., means, variances) from the data sets themselves cannot usually be carried out. Also, it is not always safe to adopt assumptions of normality and independence of the data values. This prevents the confident use of classical statistical methods to make significance statements about the success or failure of the model's replication of the data. Here we suggest two techniques of determinable statistical power, in which small samples of spatially extensive physical fields can be made to blossom into workably large samples on which significance decisions can be based. We also introduce some new measures of location, spread and shape of multivariate data sets which may be used in conjunction with the two techniques. The result is a pair of new data intercomparison procedures which we illustrate using GCM simulations of the January sea-level pressure field and regional ocean model simulations of the near-shore velocity field of South America. We include with these procedures a method of determining the spatial and temporal locations of non-random errors between the model and data fields so that models can be improved accordingly.
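One generic way to obtain significance statements from small samples without normality or independence-of-statistic assumptions is a permutation test on the pooled realizations. This is a simplified scalar stand-in for the idea, not the specific field-intercomparison procedures of the paper:

```python
import random

def perm_test_mean_diff(model, obs, n_perm=5000, seed=1):
    """Permutation p-value for a model-vs-observation difference in
    means: relabel the pooled realizations many times and count how
    often the shuffled difference is at least as large as observed."""
    observed = abs(sum(model) / len(model) - sum(obs) / len(obs))
    pooled = list(model) + list(obs)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(model)], pooled[len(model):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# hypothetical scalar summaries of model vs. observed realizations
p_diff = perm_test_mean_diff([0.10, 0.20, 0.15, 0.12, 0.18, 0.11],
                             [1.10, 1.20, 1.15, 1.12, 1.18, 1.11])
p_same = perm_test_mean_diff([1.0] * 6, [1.0] * 6)
```

Applied to whole fields, the same resampling logic underlies the "blossoming" of small samples into reference distributions described above.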
NASA Technical Reports Server (NTRS)
Hughes, William O.; McNelis, Anne M.
2010-01-01
The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
Fujita, André; Takahashi, Daniel Y; Patriota, Alexandre G; Sato, João R
2014-12-10
Statistical inference of functional magnetic resonance imaging (fMRI) data is an important tool in neuroscience investigation. One major hypothesis in neuroscience is that the presence or not of a psychiatric disorder can be explained by the differences in how neurons cluster in the brain. Therefore, it is of interest to verify whether the properties of the clusters change between groups of patients and controls. The usual method to show group differences in brain imaging is to carry out a voxel-wise univariate analysis for a difference between the mean group responses using an appropriate test and to assemble the resulting 'significantly different voxels' into clusters, testing again at cluster level. In this approach, of course, the primary voxel-level test is blind to any cluster structure. Direct assessments of differences between groups at the cluster level seem to be missing in brain imaging. For this reason, we introduce a novel non-parametric statistical test called analysis of cluster structure variability (ANOCVA), which statistically tests whether two or more populations are equally clustered. The proposed method allows us to compare the clustering structure of multiple groups simultaneously and also to identify features that contribute to the differential clustering. We illustrate the performance of ANOCVA through simulations and an application to an fMRI dataset composed of children with attention deficit hyperactivity disorder (ADHD) and controls. Results show that there are several differences in the clustering structure of the brain between them. Furthermore, we identify some brain regions previously not described to be involved in the ADHD pathophysiology, generating new hypotheses to be tested. The proposed method is general enough to be applied to other types of datasets, not limited to fMRI, where comparison of clustering structures is of interest. PMID:25185759
ERIC Educational Resources Information Center
Woodruff, David; Wu, Yi-Fang
2012-01-01
The purpose of this paper is to illustrate alpha's robustness and usefulness, using actual and simulated educational test data. The sampling properties of alpha are compared with the sampling properties of several other reliability coefficients: Guttman's lambda[subscript 2], lambda[subscript 4], and lambda[subscript 6]; test-retest reliability;…
Hybrid Statistical Testing for Nuclear Material Accounting Data and/or Process Monitoring Data
Ticknor, Lawrence O.; Hamada, Michael Scott; Sprinkle, James K.; Burr, Thomas Lee
2015-04-14
The two tests employed in the hybrid testing scheme are Page’s cumulative sums for all streams within a Balance Period (maximum of the maximums and average of the maximums) and Crosier’s multivariate cumulative sum applied to incremental cumulative sums across Balance Periods. The role of residuals for both kinds of data is discussed.
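A minimal one-sided Page cumulative-sum scheme of the kind named above; the allowance k, decision threshold h, and restart-after-alarm behavior are common CUSUM conventions, not the tuned values of the hybrid scheme:

```python
def page_cusum(residuals, k=0.5, h=5.0):
    """One-sided Page CUSUM on a stream of residuals: accumulate
    s = max(0, s + x - k) and raise an alarm when s crosses h.
    k (allowance) and h (decision limit) are tuning choices."""
    s, alarms = 0.0, []
    for i, x in enumerate(residuals):
        s = max(0.0, s + x - k)
        if s > h:
            alarms.append(i)   # record alarm index
            s = 0.0            # restart after an alarm
    return alarms
```

In the hybrid scheme, statistics like the maximum of such per-stream CUSUMs within a balance period would then feed the multivariate (Crosier) stage across periods.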