Sample records for distribution test test

  1. 10 CFR 431.198 - Enforcement testing for distribution transformers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Enforcement testing for distribution transformers. 431.198... COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Compliance and Enforcement § 431.198 Enforcement testing for distribution transformers. (a) Test notice. Upon receiving information in writing...

  2. 10 CFR 431.198 - Enforcement testing for distribution transformers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Enforcement testing for distribution transformers. 431.198... COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Compliance and Enforcement § 431.198 Enforcement testing for distribution transformers. (a) Test notice. Upon receiving information in writing...

  3. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  4. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  5. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  6. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  7. 10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... distribution transformers. 431.193 Section 431.193 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Test Procedures § 431.193 Test procedures for measuring energy consumption of distribution transformers. The test...

  8. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
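
    The two-stage procedure discussed above is easy to reproduce numerically. The following sketch (Python with NumPy/SciPy; the sample size, nominal levels, and the choice of the Shapiro-Wilk pretest are illustrative assumptions, not the exact settings of Rochon and Kieser or of Schucany and Ng) estimates the conditional Type I error rate of the one-sample t-test among samples that pass a preliminary normality test, for exponential and uniform parent distributions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def conditional_t_level(sampler, true_mean, n=20, alpha=0.05, alpha_pre=0.05, n_sim=20000):
          # Type I error of the one-sample t-test, conditional on the sample first
          # passing a Shapiro-Wilk normality pretest (illustrative two-stage procedure).
          passed = rejected = 0
          for _ in range(n_sim):
              x = sampler(n)
              if stats.shapiro(x).pvalue > alpha_pre:      # pretest: sample "looks normal"
                  passed += 1
                  if stats.ttest_1samp(x, true_mean).pvalue < alpha:
                      rejected += 1
          return passed / n_sim, rejected / max(passed, 1)

      # H0 is true in both scenarios: we test against the actual population mean.
      for name, sampler, mu in [
          ("exponential(1)", lambda n: rng.exponential(1.0, n), 1.0),
          ("uniform(0,1)",   lambda n: rng.random(n),           0.5),
      ]:
          pass_rate, cond_level = conditional_t_level(sampler, mu)
          print(f"{name}: pretest pass rate {pass_rate:.2f}, conditional t-test level {cond_level:.3f}")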

  9. 10 CFR Appendix C to Subpart C of... - Sampling Plan for Enforcement Testing of Distribution Transformers

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Sampling Plan for Enforcement Testing of Distribution Transformers C Appendix C to Subpart C of Part 429 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION... Testing of Distribution Transformers (a) When testing distribution transformers, the number of units in...

  10. 10 CFR Appendix C to Subpart C of... - Sampling Plan for Enforcement Testing of Distribution Transformers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Sampling Plan for Enforcement Testing of Distribution Transformers C Appendix C to Subpart C of Part 429 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION... Testing of Distribution Transformers (a) When testing distribution transformers, the number of units in...

  11. 10 CFR Appendix C to Subpart C of... - Sampling Plan for Enforcement Testing of Distribution Transformers

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Sampling Plan for Enforcement Testing of Distribution Transformers C Appendix C to Subpart C of Part 429 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION... Testing of Distribution Transformers (a) When testing distribution transformers, the number of units in...

  12. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
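
    As a concrete, deliberately simplified illustration of the entropy idea, the sketch below combines Vasicek's spacing estimator of entropy with the closed-form entropy of a fitted Pareto distribution and calibrates the resulting Kullback-Leibler-style statistic by parametric bootstrap. The estimator, window width, and bootstrap calibration are assumptions made here for illustration; the exact statistic and critical values of the paper may differ.

      import numpy as np

      def vasicek_entropy(x, m=None):
          # Vasicek spacing estimator of differential entropy.
          x = np.sort(np.asarray(x, dtype=float))
          n = len(x)
          m = m or max(1, int(round(np.sqrt(n))))
          hi = x[np.minimum(np.arange(n) + m, n - 1)]
          lo = x[np.maximum(np.arange(n) - m, 0)]
          return np.mean(np.log(n * (hi - lo) / (2.0 * m)))

      def pareto_mle(x):
          xm = x.min()                              # scale (left endpoint)
          alpha = len(x) / np.log(x / xm).sum()     # shape
          return xm, alpha

      def pareto_entropy(xm, alpha):
          # Differential entropy of the Pareto(alpha, xm) distribution.
          return np.log(xm / alpha) + 1.0 / alpha + 1.0

      def entropy_gof_pareto(x, n_mc=2000, seed=0):
          # Statistic: entropy of the fitted (maximum entropy) Pareto minus the
          # nonparametric entropy estimate; p-value by parametric bootstrap.
          rng = np.random.default_rng(seed)
          n = len(x)
          xm, alpha = pareto_mle(x)
          t_obs = pareto_entropy(xm, alpha) - vasicek_entropy(x)
          t_null = np.empty(n_mc)
          for b in range(n_mc):
              sim = xm * rng.random(n) ** (-1.0 / alpha)   # Pareto draws by inversion
              t_null[b] = pareto_entropy(*pareto_mle(sim)) - vasicek_entropy(sim)
          return t_obs, np.mean(t_null >= t_obs)

      rng = np.random.default_rng(1)
      print(entropy_gof_pareto(1.0 + rng.pareto(2.0, 300)))      # Pareto data: large p expected
      print(entropy_gof_pareto(np.exp(rng.normal(0, 1, 300))))   # lognormal data: small p expected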

  13. 78 FR 64153 - Policy Statement on the Principles for Development and Distribution of Annual Stress Test Scenarios

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-28

    .... OCC-2012-0016] Policy Statement on the Principles for Development and Distribution of Annual Stress... in developing and distributing the stress test scenarios for the annual stress test required by the... by the Annual Stress Test final rule (Stress Test Rule) published on October 9, 2012. Under the...

  14. 77 FR 69553 - Policy Statement on the Principles for Development and Distribution of Annual Stress Test Scenarios

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-20

    ... Development and Distribution of Annual Stress Test Scenarios AGENCY: Federal Deposit Insurance Corporation... distributing the stress test scenarios for the annual stress tests required by the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 as implemented by the Annual Stress Test final rule (``Stress...
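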

  15. Using a Social Network Strategy to Distribute HIV Self-Test Kits to African American and Latino MSM.

    PubMed

    Lightfoot, Marguerita A; Campbell, Chadwick K; Moss, Nicholas; Treves-Kagan, Sarah; Agnew, Emily; Kang Dufour, Mi-Suk; Scott, Hyman; Sa'id, Aria M; Lippman, Sheri A

    2018-05-04

    Men who have sex with men (MSM) continue to be disproportionately impacted globally by the HIV epidemic. Studies suggest that HIV self-testing (HIVST) is highly acceptable among MSM. Social network strategies to increase testing are effective in reaching MSM, particularly MSM of color, who may not otherwise test. We tested a social network-based strategy to distribute HIVST kits to African American and Latino MSM. This study was conducted in Alameda County, California, a large, urban/suburban county with an HIV epidemic mirroring the national HIV epidemic. From January 2016 to March 2017, 30 AAMSM, LMSM, and transgender women were trained as peer recruiters and asked to distribute five self-test kits to MSM social network members and support those who test positive in linking to care. Testers completed an online survey following their test. We compared peer-distributed HIVST testing outcomes to outcomes from Alameda County's targeted, community-based HIV testing programs using chi-squared tests. Peers distributed HIVST to 143 social and sexual network members, of whom 110 completed the online survey. Compared to MSM who utilized the County's sponsored testing programs, individuals reached through the peer-based self-testing strategy were significantly more likely to have never tested for HIV (3.51% vs. 0.41%, p<0.01) and to report a positive test result (6.14% vs. 1.49%, p<0.01). Findings suggest that a network-based strategy for self-test distribution is a promising intervention to increase testing uptake and reduce undiagnosed infections among African American and Latino MSM.

  16. Project W-320 acceptance test report for AY-farm electrical distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, R.R.

    1998-04-02

    This Acceptance Test Procedure (ATP) has been prepared to demonstrate that the AY-Farm Electrical Distribution System functions as required by the design criteria. This test is divided into three parts to support the planned construction schedule: Section 8 tests Mini-Power Panel AY102-PPI and the EES; Section 9 tests the SSS support systems; Section 10 tests the SSS and the Multi-Pak Group Control Panel. This test does not include the operation of end-use components (loads) supplied from the distribution system. Tests of the end-use components (loads) will be performed by other W-320 ATPs.

  17. Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan

    2016-01-01

    The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
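
    A toy numerical illustration of the clustered-versus-distributed contrast (not one of the paper's aerospace test cases, and using only a one-factor quadratic response surface with an invented true response) is sketched below: when the fitted RSM model is not exactly correct, spreading single test points across many conditions tends to give a smaller overall prediction error than replicating points at a few conditions.

      import numpy as np

      rng = np.random.default_rng(0)
      true_f = lambda x: 1.0 + 2.0 * x - 1.5 * x**2 + 0.8 * np.sin(3 * np.pi * x)  # "unknown" response
      noise_sd, n_points = 0.3, 30

      def fit_quadratic(x, y):
          X = np.column_stack([np.ones_like(x), x, x**2])   # quadratic RSM design matrix
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return beta

      def mean_prediction_rmse(x_design, n_rep=2000):
          grid = np.linspace(0.0, 1.0, 101)
          errs = []
          for _ in range(n_rep):
              y = true_f(x_design) + rng.normal(0.0, noise_sd, x_design.size)
              b = fit_quadratic(x_design, y)
              pred = b[0] + b[1] * grid + b[2] * grid**2
              errs.append(np.sqrt(np.mean((pred - true_f(grid)) ** 2)))
          return np.mean(errs)

      clustered = np.repeat([0.0, 0.5, 1.0], n_points // 3)   # many replicates at few conditions
      distributed = np.linspace(0.0, 1.0, n_points)           # one point per condition
      print("clustered design, mean prediction RMSE:  ", round(mean_prediction_rmse(clustered), 3))
      print("distributed design, mean prediction RMSE:", round(mean_prediction_rmse(distributed), 3))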

  18. A note on the misuses of the variance test in meteorological studies

    NASA Astrophysics Data System (ADS)

    Hazra, Arnab; Bhattacharya, Sourabh; Banik, Pabitra; Bhattacharya, Sabyasachi

    2017-12-01

    Stochastic modeling of rainfall data is an important area in meteorology. The gamma distribution is a widely used probability model for non-zero rainfall. Typically the choice of the distribution for such meteorological studies is based on two goodness-of-fit tests—the Pearson's Chi-square test and the Kolmogorov-Smirnov test. Inspired by the index of dispersion introduced by Fisher (Statistical methods for research workers. Hafner Publishing Company Inc., New York, 1925), Mooley (Mon Weather Rev 101:160-176, 1973) proposed the variance test as a goodness-of-fit measure in this context and a number of researchers have implemented it since then. We show that the asymptotic distribution of the test statistic for the variance test is generally not comparable to any central Chi-square distribution and hence the test is erroneous. We also describe a method for checking the validity of the asymptotic distribution for a class of distributions. We implement the erroneous test on some simulated, as well as real datasets and demonstrate how it leads to some wrong conclusions.
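
    The paper's central point, that a statistic habitually referred to a chi-square distribution need not actually follow one, is easy to check by simulation. The sketch below applies Fisher's index of dispersion (the sum of squared deviations divided by the sample mean) to gamma-distributed samples and compares its simulated distribution with the nominal chi-square reference; the gamma parameters are invented, and the exact form of Mooley's variance test may differ, so this illustrates the validity-checking method rather than reproducing the paper's results.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def index_of_dispersion(x):
          # Fisher's index of dispersion, nominally referred to chi-square(n - 1).
          return np.sum((x - x.mean()) ** 2) / x.mean()

      n, n_sim = 50, 20000
      shape, scale = 0.8, 12.0      # gamma parameters of the kind used for non-zero rainfall
      stat = np.array([index_of_dispersion(rng.gamma(shape, scale, n)) for _ in range(n_sim)])

      crit = stats.chi2(n - 1).ppf(0.95)
      print("rejection rate at the nominal 5% chi-square cutoff:", np.mean(stat > crit))
      print("KS distance between the statistic and chi2(n-1):",
            stats.kstest(stat, stats.chi2(n - 1).cdf).statistic)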

  19. Echo Statistics of Aggregations of Scatterers in a Random Waveguide: Application to Biologic Sonar Clutter

    DTIC Science & Technology

    2012-09-01

    used in this paper to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that...
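
    For readers who want to reproduce this kind of comparison, recent versions of statsmodels provide a Lilliefors test (a Kolmogorov-Smirnov test with estimated parameters) for both normal and exponential families. The data below are simulated Rayleigh envelope amplitudes, since the report's echo data are not available; squaring the amplitudes gives intensities, which are the quantity expected to be exponential.

      import numpy as np
      from statsmodels.stats.diagnostic import lilliefors

      rng = np.random.default_rng(0)

      # Simulated Rayleigh-distributed echo amplitudes; intensity = amplitude**2 is exponential.
      amplitude = rng.rayleigh(scale=1.0, size=500)
      intensity = amplitude ** 2

      # Lilliefors goodness-of-fit: parameters of the reference family are estimated from the data.
      for name, sample in [("amplitude", amplitude), ("intensity", intensity)]:
          ks_stat, p_value = lilliefors(sample, dist="exp")
          print(f"{name} vs exponential: KS = {ks_stat:.3f}, p = {p_value:.3f}")

      # A crude histogram-based Kullback-Leibler distance between the two empirical densities.
      bins = np.linspace(0.0, max(amplitude.max(), intensity.max()), 40)
      p_hist, _ = np.histogram(amplitude, bins=bins, density=True)
      q_hist, _ = np.histogram(intensity, bins=bins, density=True)
      keep = (p_hist > 0) & (q_hist > 0)
      kl = np.sum(np.diff(bins)[keep] * p_hist[keep] * np.log(p_hist[keep] / q_hist[keep]))
      print("histogram KL(amplitude || intensity):", round(kl, 3))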

  20. Estimating Conditional Distributions of Scores on an Alternate Form of a Test. Research Report. ETS RR-15-18

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Chen, Haiwen H.

    2015-01-01

    Quantitative information about test score reliability can be presented in terms of the distribution of equated scores on an alternate form of the test for test takers with a given score on the form taken. In this paper, we describe a procedure for estimating that distribution, for any specified score on the test form taken, by estimating the joint…

  1. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.

  2. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  3. Adaptive linear rank tests for eQTL studies

    PubMed Central

    Szymczak, Silke; Scheinhardt, Markus O.; Zeller, Tanja; Wild, Philipp S.; Blankenberg, Stefan; Ziegler, Andreas

    2013-01-01

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal–Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. PMID:22933317
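
    The two-stage logic, estimate shape characteristics first and then pick a test, can be conveyed with a toy selector in the spirit of Hogg-type adaptive procedures. Everything below (the selector statistics, the cutoffs, and the candidate tests, which are ordinary SciPy tests rather than linear rank tests with specific score functions) is an illustrative assumption and is not the procedure compared by Szymczak et al.; a genuinely adaptive test must also compute its selectors in a way that preserves the overall significance level.

      import numpy as np
      from scipy import stats

      def hogg_selectors(z):
          # Rough skewness (q1) and tail-weight (q2) selectors from the pooled sample.
          z = np.sort(z)
          n = len(z)
          k05, k50 = max(1, int(0.05 * n)), max(1, int(0.50 * n))
          u05, l05 = z[-k05:].mean(), z[:k05].mean()        # means of the extreme 5% tails
          mid50 = z[(n - k50) // 2:(n + k50) // 2].mean()   # mean of the middle 50%
          u50, l50 = z[-k50:].mean(), z[:k50].mean()        # means of the upper/lower halves
          q1 = (u05 - mid50) / (mid50 - l05)                # much larger than 1: right skew
          q2 = (u05 - l05) / (u50 - l50)                    # large values: heavy tails
          return q1, q2

      def adaptive_two_sample_test(x, y, skew_cut=2.0, tail_cut=7.0):  # cutoffs chosen for illustration
          q1, q2 = hogg_selectors(np.concatenate([x, y]))
          if q2 > tail_cut:                                 # heavy tails: median-type test
              s, p, _, _ = stats.median_test(x, y)
              return "median test", p
          if q1 > skew_cut:                                 # marked skewness: rank-sum test
              return "Wilcoxon rank-sum", stats.mannwhitneyu(x, y, alternative="two-sided").pvalue
          return "Welch t-test", stats.ttest_ind(x, y, equal_var=False).pvalue

      rng = np.random.default_rng(0)
      x, y = rng.lognormal(0.0, 1.0, 60), rng.lognormal(0.3, 1.0, 60)
      print(adaptive_two_sample_test(x, y))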

  4. Adaptive linear rank tests for eQTL studies.

    PubMed

    Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas

    2013-02-10

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.

  5. A Review of Power Distribution Test Feeders in the United States and the Need for Synthetic Representative Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez

    Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks that are published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.

  6. A Review of Power Distribution Test Feeders in the United States and the Need for Synthetic Representative Networks

    DOE PAGES

    Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez; ...

    2017-11-18

    Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks that are published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.

  7. An accurate test for homogeneity of odds ratios based on Cochran's Q-statistic.

    PubMed

    Kulinskaya, Elena; Dollinger, Michael B

    2015-06-10

    A frequently used statistic for testing homogeneity in a meta-analysis of K independent studies is Cochran's Q. For a standard test of homogeneity the Q statistic is referred to a chi-square distribution with K-1 degrees of freedom. For the situation in which the effects of the studies are logarithms of odds ratios, the chi-square distribution is much too conservative for moderate size studies, although it may be asymptotically correct as the individual studies become large. Using a mixture of theoretical results and simulations, we provide formulas to estimate the shape and scale parameters of a gamma distribution to fit the distribution of Q. Simulation studies show that the gamma distribution is a good approximation to the distribution for Q. Use of the gamma distribution instead of the chi-square distribution for Q should eliminate inaccurate inferences in assessing homogeneity in a meta-analysis. (A computer program for implementing this test is provided.) This hypothesis test is competitive with the Breslow-Day test both in accuracy of level and in power.
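
    A quick way to see both the problem and the gamma remedy is to simulate Q under homogeneity for a few moderate-sized studies and compare chi-square and gamma reference distributions. The sketch below fits the gamma by simple moment matching of the simulated Q values rather than by the authors' analytic formulas, and the study sizes and event rates are invented, so it illustrates the idea rather than the published procedure.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      K, n, p = 6, 25, 0.2              # 6 homogeneous studies, 25 subjects per arm, event rate 0.2

      def cochran_q(y, v):
          # Cochran's Q for effect estimates y with estimated variances v.
          w = 1.0 / v
          ybar = np.sum(w * y) / np.sum(w)
          return np.sum(w * (y - ybar) ** 2)

      def simulate_q(n_sim=20000):
          qs = np.empty(n_sim)
          for i in range(n_sim):
              a = rng.binomial(n, p, K) + 0.5     # treatment-arm events (+0.5 continuity correction)
              c = rng.binomial(n, p, K) + 0.5     # control-arm events
              b, d = n - a + 1.0, n - c + 1.0     # non-events (each cell gets +0.5)
              y = np.log(a * d / (b * c))         # log odds ratios; the true common OR is 1
              v = 1 / a + 1 / b + 1 / c + 1 / d
              qs[i] = cochran_q(y, v)
          return qs

      q = simulate_q()
      shape = q.mean() ** 2 / q.var()             # moment-matched gamma reference
      scale = q.var() / q.mean()
      for name, ref in [("chi2(K-1)", stats.chi2(K - 1)), ("moment-matched gamma", stats.gamma(shape, scale=scale))]:
          print(f"{name}: 95th percentile {ref.ppf(0.95):.2f}, "
                f"actual exceedance probability {np.mean(q > ref.ppf(0.95)):.3f}")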

  8. Path Planning for Reduced Identifiability of Unmanned Surface Vehicles Conducting Intelligence, Surveillance, and Reconnaissance

    DTIC Science & Technology

    2017-05-22

    [Snippet contains only figure and table captions: feasibility test; Bellman's Principle and its validation; distributions at test points for simulated ISR and observed traffic; PDFs of observed and ISR traffic; adversary security states and hypothesis testing at test point #10.]

  9. Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio

    NASA Astrophysics Data System (ADS)

    Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.

    2017-12-01

    Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results that shows that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian distribution and moreover a lognormal distribution. Previously, statistical testing procedures which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test which incorporated the results of the former tests were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real-time. The analyzed 1-degree resolution data comes from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six hour forecast from the 0Z analysis. These results indicate the necessity of a Data Assimilation (DA) system to be able to properly use the lognormally-distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
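
    The screening machinery described here is straightforward to prototype. The sketch below runs Jarque-Bera, Shapiro-Wilk, and a chi-square goodness-of-fit test over a sliding window of a synthetic lognormal series and flags windows in which normality is rejected; the window length, step, composite rule, and synthetic data are illustrative stand-ins for the GFS forecast fields.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Synthetic "mixing ratio" series: lognormal, i.e. non-Gaussian but Gaussian after a log transform.
      series = np.exp(rng.normal(loc=0.0, scale=0.5, size=2000))

      def chi2_gof_normal(x, n_bins=10):
          # Chi-square GOF against a normal with mean/sd estimated from the window
          # (equal-probability bins; df approximately reduced for the 2 estimated parameters).
          cuts = stats.norm.ppf(np.linspace(0, 1, n_bins + 1)[1:-1], loc=x.mean(), scale=x.std(ddof=1))
          observed = np.bincount(np.searchsorted(cuts, x), minlength=n_bins)
          expected = np.full(n_bins, len(x) / n_bins)
          return stats.chi2.sf(np.sum((observed - expected) ** 2 / expected), df=n_bins - 3)

      def sliding_window_flags(x, window=200, step=50, alpha=0.05):
          flags = []
          for start in range(0, len(x) - window + 1, step):
              w = x[start:start + window]
              p_values = (stats.jarque_bera(w).pvalue, stats.shapiro(w).pvalue, chi2_gof_normal(w))
              flags.append(min(p_values) < alpha)   # composite rule (illustrative): any test rejects
          return np.array(flags)

      print("windows flagged non-Gaussian (raw series):", sliding_window_flags(series).mean())
      print("windows flagged non-Gaussian (log series):", sliding_window_flags(np.log(series)).mean())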

  10. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  12. Use of Pearson's Chi-Square for Testing Equality of Percentile Profiles across Multiple Populations.

    PubMed

    Johnson, William D; Beyl, Robbie A; Burton, Jeffrey H; Johnson, Callie M; Romer, Jacob E; Zhang, Lei

    2015-08-01

    In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within population distributions by testing the hypothesis that their corresponding respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as Chi-square with degrees of freedom dependent upon the number of percentiles tested and constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.
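
    One natural way to build such a test, sketched below, is to classify every observation by the interval between the pooled 10th, 50th, and 90th percentiles into which it falls and then apply Pearson's chi-square to the resulting groups-by-intervals table. The published statistic handles the constraints and degrees of freedom more carefully (for example, when profiles are compared to hypothesised values), so treat this as an approximation of the idea with invented data.

      import numpy as np
      from scipy import stats

      def percentile_profile_test(groups, percentiles=(10, 50, 90)):
          # Chi-square comparison of groups across the intervals cut by pooled percentiles.
          pooled = np.concatenate(groups)
          cuts = np.percentile(pooled, percentiles)
          # table[i, j] = number of observations of group i falling in interval j
          table = np.array([np.bincount(np.searchsorted(cuts, g), minlength=len(cuts) + 1)
                            for g in groups])
          chi2, p, dof, _ = stats.chi2_contingency(table)
          return chi2, p, dof

      rng = np.random.default_rng(0)
      groups = [rng.lognormal(0.0, 1.0, 400),      # four skewed, BMI-like samples (simulated)
                rng.lognormal(0.0, 1.0, 400),
                rng.lognormal(0.1, 1.0, 400),
                rng.lognormal(0.0, 1.3, 400)]
      chi2, p, dof = percentile_profile_test(groups)
      print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4g}")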

  13. SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Bri-Mathias; Palmintier, Bryan

    This presentation provides an overview of full-scale, high-quality, synthetic distribution system data set(s) for testing distribution automation algorithms, distributed control approaches, ADMS capabilities, and other emerging distribution technologies.

  14. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    PubMed

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarray (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  15. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.

  16. Derivation and Applicability of Asymptotic Results for Multiple Subtests Person-Fit Statistics

    PubMed Central

    Albers, Casper J.; Meijer, Rob R.; Tendeiro, Jorge N.

    2016-01-01

    In high-stakes testing, it is important to check the validity of individual test scores. Although a test may, in general, result in valid test scores for most test takers, for some test takers, test scores may not provide a good description of a test taker’s proficiency level. Person-fit statistics have been proposed to check the validity of individual test scores. In this study, the theoretical asymptotic sampling distribution of two person-fit statistics that can be used for tests that consist of multiple subtests is first discussed. Second, simulation study was conducted to investigate the applicability of this asymptotic theory for tests of finite length, in which the correlation between subtests and number of items in the subtests was varied. The authors showed that these distributions provide reasonable approximations, even for tests consisting of subtests of only 10 items each. These results have practical value because researchers do not have to rely on extensive simulation studies to simulate sampling distributions. PMID:29881053

  17. 78 FR 72534 - Policy Statement on the Principles for Development and Distribution of Annual Stress Test Scenarios

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-03

    ... Development and Distribution of Annual Stress Test Scenarios AGENCY: Federal Deposit Insurance Corporation... (``covered banks'') to conduct annual stress tests, report the results of such stress tests to the... summary of the results of the stress tests. On October 15, 2012, the FDIC published in the Federal...

  18. Test Design and Speededness

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2011-01-01

    A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…

  19. Measuring the Sensitivity of Single-locus “Neutrality Tests” Using a Direct Perturbation Approach

    PubMed Central

    Garrigan, Daniel; Lewontin, Richard; Wakeley, John

    2010-01-01

    A large number of statistical tests have been proposed to detect natural selection based on a sample of variation at a single genetic locus. These tests measure the deviation of the allelic frequency distribution observed within populations from the distribution expected under a set of assumptions that includes both neutral evolution and equilibrium population demography. The present study considers a new way to assess the statistical properties of these tests of selection, by their behavior in response to direct perturbations of the steady-state allelic frequency distribution, unconstrained by any particular nonequilibrium demographic scenario. Results from Monte Carlo computer simulations indicate that most tests of selection are more sensitive to perturbations of the allele frequency distribution that increase the variance in allele frequencies than to perturbations that decrease the variance. Simulations also demonstrate that it requires, on average, 4N generations (N is the diploid effective population size) for tests of selection to relax to their theoretical, steady-state distributions following different perturbations of the allele frequency distribution to its extremes. This relatively long relaxation time highlights the fact that these tests are not robust to violations of the other assumptions of the null model besides neutrality. Lastly, genetic variation arising under an example of a regularly cycling demographic scenario is simulated. Tests of selection performed on this last set of simulated data confirm the confounding nature of these tests for the inference of natural selection, under a demographic scenario that likely holds for many species. The utility of using empirical, genomic distributions of test statistics, instead of the theoretical steady-state distribution, is discussed as an alternative for improving the statistical inference of natural selection. PMID:19744997

  20. Sidewall Mach Number Distributions for the NASA Langley Transonic Dynamics Tunnel

    NASA Technical Reports Server (NTRS)

    Florance, James R.; Rivera, Jose A., Jr.

    2001-01-01

    The Transonic Dynamics Tunnel (TDT) was recalibrated due to the conversion of the heavy gas test medium from R-12 to R-134a. The objectives of the tests were to determine the relationship between the free-stream Mach number and the measured test section Mach number, and to quantify any necessary corrections. Other tests included the measurement of pressure distributions along the test-section walls, test-section centerline, at certain tunnel stations via a rake apparatus, and in the tunnel settling chamber. Wall boundary layer, turbulence, and flow angularity measurements were also performed. This paper discusses the determination of sidewall Mach number distributions.

  1. Space station data management system - A common GSE test interface for systems testing and verification

    NASA Technical Reports Server (NTRS)

    Martinez, Pedro A.; Dunn, Kevin W.

    1987-01-01

    This paper examines the fundamental problems and goals associated with test, verification, and flight-certification of man-rated distributed data systems. First, a summary of the characteristics of modern computer systems that affect the testing process is provided. Then, verification requirements are expressed in terms of an overall test philosophy for distributed computer systems. This test philosophy stems from previous experience that was gained with centralized systems (Apollo and the Space Shuttle), and deals directly with the new problems that verification of distributed systems may present. Finally, a description of potential hardware and software tools to help solve these problems is provided.

  2. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.

  3. Calibration of the 13- by 13-inch adaptive wall test section for the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Hill, Acquilla S.

    1990-01-01

    A 13 by 13 inch adaptive wall test section was installed in the 0.3-Meter Transonic Cryogenic Tunnel circuit. This new test section is configured for 2-D airfoil testing. It has four solid walls. The top and bottom walls are flexible and movable whereas the sidewalls are rigid and fixed. The wall adaptation strategy employed requires the test section wall shapes associated with uniform test section Mach number distributions. Calibration tests with the test section empty were conducted with the top and bottom walls linearly diverged to approach a uniform Mach number distribution. Pressure distributions were measured in the contraction cone, the test section, and the high speed diffuser at Mach numbers from 0.20 to 0.95 and Reynolds numbers from 10 to 100 x 10(exp 6) per foot.

  4. Detection of Person Misfit in Computerized Adaptive Tests with Polytomous Items.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    2002-01-01

    Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…

  5. Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, K. P.; Mather, B. A.; Pal, B. C.

    For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.

  6. Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders

    DOE PAGES

    Schneider, K. P.; Mather, B. A.; Pal, B. C.; ...

    2017-10-10

    For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.

  7. Promoting male partner HIV testing and safer sexual decision making through secondary distribution of self-tests by HIV-negative female sex workers and women receiving antenatal and post-partum care in Kenya: a cohort study.

    PubMed

    Thirumurthy, Harsha; Masters, Samuel H; Mavedzenge, Sue Napierala; Maman, Suzanne; Omanga, Eunice; Agot, Kawango

    2016-06-01

    Increased uptake of HIV testing by men in sub-Saharan Africa is essential for the success of combination prevention. Self-testing is an emerging approach with high acceptability, but little evidence exists on the best strategies for test distribution. We assessed an approach of providing multiple self-tests to women at high risk of HIV acquisition to promote partner HIV testing and to facilitate safer sexual decision making. In this cohort study, HIV-negative women aged 18-39 years were recruited at two sites in Kisumu, Kenya: a health facility with antenatal and post-partum clinics and a drop-in centre for female sex workers. Participants gave informed consent and were instructed on use of oral fluid based rapid HIV tests. Participants enrolled at the health facility received three self-tests and those at the drop-in centre received five self-tests. Structured interviews were conducted with participants at enrolment and over 3 months to determine how self-tests were used. Outcomes included the number of self-tests distributed by participants, the proportion of participants whose sexual partners used a self-test, couples testing, and sexual behaviour after self-testing. Between Jan 14, 2015, and March 13, 2015, 280 participants were enrolled (61 in antenatal care, 117 in post-partum care, and 102 female sex workers); follow-up interviews were completed for 265 (96%). Most participants with primary sexual partners distributed self-tests to partners: 53 (91%) of 58 participants in antenatal care, 91 (86%) of 106 in post-partum care, and 64 (75%) of 85 female sex workers. 82 (81%) of 101 female sex workers distributed more than one self-test to commercial sex clients. Among self-tests distributed to and used by primary sexual partners of participants, couples testing occurred in 27 (51%) of 53 in antenatal care, 62 (68%) of 91 from post-partum care, and 53 (83%) of 64 female sex workers. Among tests received by primary and non-primary sexual partners, two (4%) of 53 tests from participants in antenatal care, two (2%) of 91 in post-partum care, and 41 (14%) of 298 from female sex workers had positive results. Participants reported sexual intercourse with 235 (62%) of 380 sexual partners who tested HIV-negative, compared with eight (18%) of 45 who tested HIV-positive (p<0·0001); condoms were used in all eight intercourse events after positive results compared with 104 (44%) after negative results (p<0·0018). Four participants reported intimate partner violence as a result of self-test distribution: two in the post-partum care group and two female sex workers. No other adverse events were reported. Provision of multiple HIV self-tests to women at high risk of HIV infection was successful in promoting HIV testing among their sexual partners and in facilitating safer sexual decisions. This novel strategy warrants further consideration as countries develop self-testing policies and programmes. Bill & Melinda Gates Foundation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Multi-KW dc distribution system technology research study

    NASA Technical Reports Server (NTRS)

    Dawson, S. G.

    1978-01-01

    The Multi-KW DC Distribution System Technology Research Study is the third phase of the NASA/MSFC study program. The purpose of this contract was to complete the design of the integrated technology test facility, provide test planning, support test operations and evaluate test results. The subject of this study is a continuation of this contract. The purpose of this continuation is to study and analyze high voltage system safety, to determine optimum voltage levels versus power, to identify power distribution system components which require development for higher voltage systems and finally to determine what modifications must be made to the Power Distribution System Simulator (PDSS) to demonstrate 300 Vdc distribution capability.

  9. A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Colp, S. Mitchell

    2018-01-01

    Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…

  10. A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Christensen, Karl Bang; Kreiner, Svend

    2007-01-01

    Many statistical tests are designed to test the different assumptions of the Rasch model, but only a few are directed at detecting multidimensionality. The Martin-Löf test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…

  11. Using Heteroskedastic Ordered Probit Models to Recover Moments of Continuous Test Score Distributions from Coarsened Data

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D.

    2017-01-01

    Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…
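
    To make the estimation idea concrete, here is a deliberately simplified single-group sketch: if the proficiency cut scores are treated as known on a latent normal scale, the group's mean and standard deviation can be recovered from the category counts by maximum likelihood for an ordered-probit model. The full HETOP model of the article estimates cut points and all group parameters jointly, so this shows only the core likelihood, with invented counts and cut scores.

      import numpy as np
      from scipy import optimize, stats

      # Invented example: counts of students in 4 ordered proficiency categories,
      # with the 3 cut scores assumed known on a latent N(mu, sigma) scale.
      counts = np.array([120, 340, 410, 130])
      cuts = np.array([-1.0, 0.0, 1.2])

      def neg_log_likelihood(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)                      # keeps sigma positive
          edges = np.concatenate(([-np.inf], cuts, [np.inf]))
          probs = np.diff(stats.norm.cdf((edges - mu) / sigma))
          return -np.sum(counts * np.log(np.clip(probs, 1e-12, None)))

      result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
      print(f"estimated group mean = {mu_hat:.3f}, estimated group SD = {sigma_hat:.3f}")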

  12. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
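
    For reference, the classical Lord-Wingersky recursion for dichotomous items looks like the sketch below: given each item's correct-response probability at a fixed proficiency level, it builds the conditional distribution of the number-correct score one item at a time. The article's generalization to real-number item scores is not reproduced here, and the item probabilities are arbitrary illustrative values.

      import numpy as np

      def lord_wingersky(p):
          # Conditional distribution of the number-correct score given proficiency,
          # for dichotomous items with correct-response probabilities p.
          dist = np.array([1.0])                    # score distribution over zero items
          for p_i in p:
              new = np.zeros(len(dist) + 1)
              new[:-1] += dist * (1.0 - p_i)        # item answered incorrectly
              new[1:] += dist * p_i                 # item answered correctly
              dist = new
          return dist                               # dist[s] = P(summed score = s | theta)

      probs = [0.9, 0.75, 0.6, 0.5, 0.3]            # 5 items at some fixed theta (illustrative)
      score_dist = lord_wingersky(probs)
      print(np.round(score_dist, 4), "sum =", score_dist.sum())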

  13. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford’s Law

    PubMed Central

    López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador

    2018-01-01

    Objective: Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting list (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Design: Analysis of the frequency of Finnish and Spanish WL first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ2, mean absolute deviation and Kuiper tests. Setting/participants: Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures: Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results: WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ2 test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ2 test). Conclusions: Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
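
    The first-digit screening described in this abstract amounts to comparing observed leading-digit frequencies with Benford's logarithmic probabilities. A minimal sketch with synthetic counts follows, using Pearson's chi-square and the mean absolute deviation only (the Kuiper test applied in the paper is omitted, and the two synthetic data sets simply stand in for well-behaved and ill-behaved waiting-list figures).

      import numpy as np
      from scipy import stats

      def first_digits(values):
          v = np.abs(np.asarray(values, dtype=float))
          v = v[v > 0]
          return (v / 10 ** np.floor(np.log10(v))).astype(int)        # leading digit, 1..9

      def benford_tests(values):
          digits = first_digits(values)
          observed = np.bincount(digits, minlength=10)[1:10]
          expected_p = np.log10(1 + 1 / np.arange(1, 10))             # Newcomb-Benford probabilities
          chi2, p = stats.chisquare(observed, expected_p * observed.sum())
          mad = np.mean(np.abs(observed / observed.sum() - expected_p))
          return chi2, p, mad

      rng = np.random.default_rng(0)
      benford_like = np.exp(rng.uniform(np.log(10), np.log(100000), 5000))   # spans several orders of magnitude
      uniform_like = rng.uniform(100, 999, 5000)                             # does not follow Benford
      for name, data in [("Benford-like", benford_like), ("uniform 100-999", uniform_like)]:
          chi2, p, mad = benford_tests(data)
          print(f"{name}: chi-square = {chi2:.1f}, p = {p:.3g}, MAD = {mad:.4f}")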

  14. Location tests for biomarker studies: a comparison using simulations for the two-sample case.

    PubMed

    Scheinhardt, M O; Ziegler, A

    2013-01-01

    Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy tailed or heavy skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
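
    Re-running a slice of this kind of comparison is easy once one can sample from the g-and-k family, which is defined through its quantile function (the standard form with c = 0.8 is assumed below). The sketch compares the empirical Type I error and power of the Welch t-test and the Mann-Whitney U-test for a pure location shift under one skewed, heavy-tailed setting; the parameter values and replicate counts are illustrative, and the adaptive tests studied in the paper are not implemented.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def g_and_k_sample(n, a=0.0, b=1.0, g=0.0, k=0.0, c=0.8):
          # Sample from the g-and-k distribution via its quantile function.
          z = stats.norm.ppf(rng.random(n))
          return a + b * (1 + c * np.tanh(g * z / 2.0)) * z * (1 + z ** 2) ** k

      def rejection_rates(shift, g, k, n=30, alpha=0.05, n_sim=5000):
          rej_t = rej_u = 0
          for _ in range(n_sim):
              x = g_and_k_sample(n, g=g, k=k)
              y = g_and_k_sample(n, a=shift, g=g, k=k)
              rej_t += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha
              rej_u += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
          return rej_t / n_sim, rej_u / n_sim

      for label, shift in [("Type I error (no shift)", 0.0), ("power at shift 0.7", 0.7)]:
          t_rate, u_rate = rejection_rates(shift, g=0.5, k=0.3)      # skewed, heavy-tailed case
          print(f"{label}: Welch t {t_rate:.3f}, Mann-Whitney U {u_rate:.3f}")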

  15. Hawaiian Electric Advanced Inverter Test Plan - Result Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson; Nelson, Austin; Prabakar, Kumaraguru

    This presentation is intended to share the results of lab testing of five PV inverters with the Hawaiian Electric Companies and other stakeholders and interested parties. The tests included baseline testing of advanced inverter grid support functions, as well as distribution circuit-level tests to examine the impact of the PV inverters on simulated distribution feeders using power hardware-in-the-loop (PHIL) techniques.

  16. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  17. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  18. Model-Based Diagnosis in a Power Distribution Test-Bed

    NASA Technical Reports Server (NTRS)

    Scarl, E.; McCall, K.

    1998-01-01

    The Rodon model-based diagnosis shell was applied to a breadboard test-bed, modeling an automated power distribution system. The constraint-based modeling paradigm and diagnostic algorithm were found to adequately represent the selected set of test scenarios.

  19. Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models

    ERIC Educational Resources Information Center

    Williams, Jason; MacKinnon, David P.

    2008-01-01

    Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of the product of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…

  20. Approaches of using the beard testing method to obtain complete length distributions of the original samples

    USDA-ARS?s Scientific Manuscript database

    The fiber testing instruments such as HVI can rapidly measure fiber length by testing a tapered fiber beard of the sample. But these instruments that use the beard testing method only report a limited number of fiber length parameters instead of the complete length distribution that is important fo...

  1. 77 FR 68047 - Policy Statement on the Principles for Development and Distribution of Annual Stress Test Scenarios

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-15

    .... OCC-2012-0016] Policy Statement on the Principles for Development and Distribution of Annual Stress... and factors to be used by the OCC in developing and distributing the stress test scenarios for the annual stress test required by the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 as...

  2. Vector wind profile gust model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1981-01-01

    To enable development of a vector wind gust model suitable for orbital flight test operations and trade studies, hypotheses concerning the distributions of gust component variables were verified. Methods are presented for verifying the hypotheses that observed gust variables, including gust component magnitude, gust length, u range, and L range, are gamma distributed. Observed gust modulus has been drawn from a bivariate gamma distribution that can be approximated with a Weibull distribution. Zonal and meridional gust components are bivariate gamma distributed. An analytical method for testing for bivariate gamma distributed variables is presented. Two distributions for gust modulus are described and the results of extensive hypothesis testing of one of the distributions are presented. The validity of the gamma distribution for representation of gust component variables is established.
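
    As a rough illustration of the distribution-fitting step described above, the sketch below fits gamma and Weibull models to synthetic gust-magnitude data and checks the fits with Kolmogorov-Smirnov statistics. The data and parameter choices are assumptions; because the parameters are estimated from the same sample, the reported p-values are only approximate.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      gust_magnitude = rng.gamma(shape=2.0, scale=3.0, size=500)   # synthetic gust data

      # Fit candidate distributions (location fixed at zero for a magnitude variable).
      gamma_params = stats.gamma.fit(gust_magnitude, floc=0)
      weibull_params = stats.weibull_min.fit(gust_magnitude, floc=0)

      # KS statistics against the fitted models; p-values would normally be
      # recalibrated by simulation because the parameters were estimated here.
      print(stats.kstest(gust_magnitude, "gamma", args=gamma_params))
      print(stats.kstest(gust_magnitude, "weibull_min", args=weibull_params))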

  3. Pinocchio testing in the forensic analysis of waiting lists: using public waiting list data from Finland and Spain for testing Newcomb-Benford's Law.

    PubMed

    Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador

    2018-05-09

    Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting list (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ2, mean absolute deviation and Kuiper tests. Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ2 test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ2 test). Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signal the need for further testing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Reliability-based econometrics of aerospace structural systems: Design criteria and test options. Ph.D. Thesis - Georgia Inst. of Tech.

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1974-01-01

    The design criteria and test options for aerospace structural reliability were investigated. A decision methodology was developed for selecting a combination of structural tests and structural design factors. The decision method involves the use of Bayesian statistics and statistical decision theory. Procedures are discussed for obtaining and updating data-based probabilistic strength distributions for aerospace structures when test information is available and for obtaining subjective distributions when data are not available. The techniques used in developing the distributions are explained.

  5. Solvent Replacement for Super Corr-A Corrosion Preventive Compound (CPC)

    DTIC Science & Technology

    2011-08-18

    First Article Testing Results (table of requirements, test methods, and specifications referred to in the interim report)... Conclusions: no tested lubricants met all first article testing requirements; DuPont Vertrel SDG and Kyzen Cybersolv... Paul Hoth, Battelle, Hill AFB. Distribution Statement A: Approved for public release; distribution is unlimited.

  6. Geographically distributed hybrid testing & collaboration between geotechnical centrifuge and structures laboratories

    NASA Astrophysics Data System (ADS)

    Ojaghi, Mobin; Martínez, Ignacio Lamata; Dietz, Matt S.; Williams, Martin S.; Blakeborough, Anthony; Crewe, Adam J.; Taylor, Colin A.; Madabhushi, S. P. Gopal; Haigh, Stuart K.

    2018-01-01

    Distributed Hybrid Testing (DHT) is an experimental technique designed to capitalise on advances in modern networking infrastructure to overcome traditional laboratory capacity limitations. By coupling the heterogeneous test apparatus and computational resources of geographically distributed laboratories, DHT provides the means to take on complex, multi-disciplinary challenges with new forms of communication and collaboration. To introduce the opportunity and practicability afforded by DHT, here an exemplar multi-site test is addressed in which a dedicated fibre network and suite of custom software is used to connect the geotechnical centrifuge at the University of Cambridge with a variety of structural dynamics loading apparatus at the University of Oxford and the University of Bristol. While centrifuge time-scaling prevents real-time rates of loading in this test, such experiments may be used to gain valuable insights into physical phenomena, test procedure and accuracy. These and other related experiments have led to the development of the real-time DHT technique and the creation of a flexible framework that aims to facilitate future distributed tests within the UK and beyond. As a further example, a real-time DHT experiment between structural labs using this framework for testing across the Internet is also presented.

  7. Experimental evaluation of wall Mach number distributions of the octagonal test section proposed for NASA Lewis Research Center's altitude wind tunnel

    NASA Technical Reports Server (NTRS)

    Harrington, Douglas E.; Burley, Richard R.; Corban, Robert R.

    1986-01-01

    Wall Mach number distributions were determined over a range of test-section free-stream Mach numbers from 0.2 to 0.92. The test section was slotted and had a nominal porosity of 11 percent. Reentry flaps located at the test-section exit were varied from 0 (fully closed) to 9 (fully open) degrees. Flow was bled through the test-section slots by means of a plenum evacuation system (PES) and varied from 0 to 3 percent of tunnel flow. Variations in reentry flap angle or PES flow rate had little or no effect on the Mach number distributions in the first 70 percent of the test section. However, in the aft region of the test section, flap angle and PES flow rate had a major impact on the Mach number distributions. Optimum PES flow rates were nominally 2 to 2.5 percent with the flaps fully closed and less than 1 percent when the flaps were fully open. The standard deviation of the test-section wall Mach numbers at the optimum PES flow rates was 0.003 or less.

  8. Testing the anisotropy in the angular distribution of Fermi/GBM gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Tarnopolski, M.

    2017-12-01

    Gamma-ray bursts (GRBs) were confirmed to be of extragalactic origin due to their isotropic angular distribution, combined with the fact that they exhibited an intensity distribution that deviated strongly from the -3/2 power law. This finding was later confirmed with the first redshift, equal to at least z = 0.835, measured for GRB970508. Despite this result, the data from CGRO/BATSE and Swift/BAT indicate that long GRBs are indeed distributed isotropically, but the distribution of short GRBs is anisotropic. Fermi/GBM has detected 1669 GRBs to date, and their sky distribution is examined in this paper. A number of statistical tests are applied: nearest neighbour analysis, fractal dimension, dipole and quadrupole moments of the distribution function decomposed into spherical harmonics, binomial test and the two-point angular correlation function. Monte Carlo benchmark testing of each test is performed in order to evaluate its reliability. It is found that short GRBs are distributed anisotropically in the sky, and long ones have an isotropic distribution. The probability that these results are not a chance occurrence is equal to at least 99.98 per cent and 30.68 per cent for short and long GRBs, respectively. The cosmological context of this finding and its relation to large-scale structures is discussed.

  9. Materials Science Research Rack-1 Fire Suppressant Distribution Test Report

    NASA Technical Reports Server (NTRS)

    Wieland, P. O.

    2002-01-01

    Fire suppressant distribution testing was performed on the Materials Science Research Rack-1 (MSRR-1), a furnace facility payload that will be installed in the U.S. Lab module of the International Space Station. Unlike racks that were tested previously, the MSRR-1 uses the Active Rack Isolation System (ARIS) to reduce vibration on experiments, so the effects of ARIS on fire suppressant distribution were unknown. Two tests were performed to map the distribution of CO2 fire suppressant throughout a mockup of the MSRR-1 designed to have the same component volumes and flowpath restrictions as the flight rack. For the first test, the average maximum CO2 concentration for the rack was 60 percent, achieved within 45 s of discharge initiation, meeting the requirement to reach 50 percent throughout the rack within 1 min. For the second test, one of the experiment mockups was removed to provide a worst-case configuration, and the average maximum CO2 concentration for the rack was 58 percent. Comparing the results of this testing with results from previous testing leads to several general conclusions that can be used to evaluate future racks. The MSRR-1 will meet the requirements for fire suppressant distribution. Primary factors that affect the ability to meet the CO2 distribution requirements are the free air volume in the rack and the total area and distribution of openings in the rack shell. The length of the suppressant flowpath and degree of tortuousness has little correlation with CO2 concentration. The total area of holes in the rack shell could be significantly increased. The free air volume could be significantly increased. To ensure the highest maximum CO2 concentration, the PFE nozzle should be inserted to the stop on the nozzle.

  10. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ(2) test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  11. 242A Distributed Control System Year 2000 Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TEATS, M.C.

    1999-08-31

    This report documents acceptance test results for the 242-A Evaporator distributive control system upgrade to D/3 version 9.0-2 for year 2000 compliance. This report documents the test results obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure will document the initial testing and evaluation of the potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic such that it may be performed on any D/3™ (GSE Process Solutions, Inc.) distributed control system that runs with the VMS™ (Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test. These outages are expected to last about 10 minutes each.

  12. Efficient computation of significance levels for multiple associations in large studies of correlated data, including genomewide association studies.

    PubMed

    Dudbridge, Frank; Koeleman, Bobby P C

    2004-09-01

    Large exploratory studies, including candidate-gene-association testing, genomewide linkage-disequilibrium scans, and array-expression experiments, are becoming increasingly common. A serious problem for such studies is that statistical power is compromised by the need to control the false-positive rate for a large family of tests. Because multiple true associations are anticipated, methods have been proposed that combine evidence from the most significant tests, as a more powerful alternative to individually adjusted tests. The practical application of these methods is currently limited by a reliance on permutation testing to account for the correlated nature of single-nucleotide polymorphism (SNP)-association data. On a genomewide scale, this is both very time-consuming and impractical for repeated explorations with standard marker panels. Here, we alleviate these problems by fitting analytic distributions to the empirical distribution of combined evidence. We fit extreme-value distributions for fixed lengths of combined evidence and a beta distribution for the most significant length. An initial phase of permutation sampling is required to fit these distributions, but it can be completed more quickly than a simple permutation test and need be done only once for each panel of tests, after which the fitted parameters give a reusable calibration of the panel. Our approach is also a more efficient alternative to a standard permutation test. We demonstrate the accuracy of our approach and compare its efficiency with that of permutation tests on genomewide SNP data released by the International HapMap Consortium. The estimation of analytic distributions for combined evidence will allow these powerful methods to be applied more widely in large exploratory studies.
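
    A minimal sketch of the general idea (not the authors' implementation): run an initial permutation phase to sample the null distribution of a combined-evidence score, then fit an analytic distribution to it so later p-values can be read from the fitted curve. The score used here (sum of the k largest -log10 p-values) and the independence of the per-test p-values are simplifying assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n_tests, k, n_perm = 200, 10, 2000

      # Permutation phase: under the null the per-test p-values are uniform.
      null_scores = np.empty(n_perm)
      for b in range(n_perm):
          p = rng.uniform(size=n_tests)
          null_scores[b] = np.sort(-np.log10(p))[-k:].sum()   # combined evidence

      # Fit an extreme-value curve to the permutation sample; this reusable fit
      # replaces further permutation when assessing very significant results.
      gev_params = stats.genextreme.fit(null_scores)

      observed_score = 35.0                                   # hypothetical observed value
      print("tail p-value ~", stats.genextreme.sf(observed_score, *gev_params))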

  13. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers

    PubMed Central

    Han, Buhm; Kang, Hyun Min; Eskin, Eleazar

    2009-01-01

    With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
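
    The core MVN idea can be sketched in a few lines. This is a simplified stand-in, not the SLIDE algorithm, and the AR(1)-style marker correlation matrix is an assumption: simulate correlated null Z-scores and estimate the corrected p-value as the tail probability of the maximum absolute score.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy marker correlation matrix with an AR(1)-like structure (assumption).
      m = 50
      corr = 0.8 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

      # Monte Carlo estimate of P(max |Z| >= z_obs) under the correlated null.
      n_draws = 20000
      z = rng.multivariate_normal(mean=np.zeros(m), cov=corr, size=n_draws)
      max_abs_z = np.abs(z).max(axis=1)

      z_obs = 3.2                                   # most significant observed Z-score
      print("MVN-corrected p-value ~", (max_abs_z >= z_obs).mean())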

  14. Single-Cycle Versus Multicycle Proof Testing

    NASA Technical Reports Server (NTRS)

    Hudak, S. J., Jr.; Mcclung, R. C.; Bartlett, M. L.; Fitzgerald, J. H.; Russell, D. A.

    1992-01-01

    Report compares single-cycle with multiple-cycle proof tests of parts under mechanical stress. Objective of proof testing: to screen out gross manufacturing or material deficiencies and provide additional assurance of quality. Report concludes that changes in distribution of crack sizes during multicycle proof testing depend on initial distribution, number of cycles, relationship between resistance of material and elastic/plastic fracture-mechanics parameter, relationship between load control and displacement control, and magnitude of applied load or displacement. Whether single-cycle or multicycle testing is used depends on shape, material, and technique of fabrication of components tested.

  15. Efficient Blockwise Permutation Tests Preserving Exchangeability

    PubMed Central

    Zhou, Chunxiao; Zwilling, Chris E.; Calhoun, Vince D.; Wang, Michelle Y.

    2014-01-01

    In this paper, we present a new blockwise permutation test approach based on the moments of the test statistic. The method is of importance to neuroimaging studies. In order to preserve the exchangeability condition required in permutation tests, we divide the entire set of data into certain exchangeability blocks. In addition, computationally efficient moments-based permutation tests are performed by approximating the permutation distribution of the test statistic with the Pearson distribution series. This involves the calculation of the first four moments of the permutation distribution within each block and then over the entire set of data. The accuracy and efficiency of the proposed method are demonstrated through simulated experiment on the magnetic resonance imaging (MRI) brain data, specifically the multi-site voxel-based morphometry analysis from structural MRI (sMRI). PMID:25289113
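
    The sketch below keeps the key ingredient above (labels are permuted only within exchangeability blocks) but uses plain Monte Carlo permutation rather than the moments-based Pearson-series approximation proposed in the paper. Data, block structure, and the mean-difference statistic are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      def blockwise_permutation_test(y, group, block, n_perm=5000):
          """Two-group mean-difference test, permuting labels only within blocks."""
          y, group, block = map(np.asarray, (y, group, block))
          observed = y[group == 1].mean() - y[group == 0].mean()
          null = np.empty(n_perm)
          for b in range(n_perm):
              perm_group = group.copy()
              for blk in np.unique(block):
                  idx = np.where(block == blk)[0]
                  perm_group[idx] = rng.permutation(perm_group[idx])   # shuffle within block
              null[b] = y[perm_group == 1].mean() - y[perm_group == 0].mean()
          return (np.abs(null) >= abs(observed)).mean()

      # Example: 4 exchangeability blocks (e.g., scanner sites) of 10 subjects each.
      block = np.repeat(np.arange(4), 10)
      group = np.tile(np.r_[np.zeros(5, int), np.ones(5, int)], 4)
      y = rng.normal(size=40) + 0.8 * group
      print("p ~", blockwise_permutation_test(y, group, block))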

  16. Calculating p-values and their significances with the Energy Test for large datasets

    NASA Astrophysics Data System (ADS)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
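
    A small sketch in the same spirit, reduced to one dimension: use scipy's energy_distance as the two-sample statistic and build its small-sample null by permutation. The paper's contribution is that the large-sample null can be obtained by rescaling such a small-sample null rather than by further permutation; that rescaling step is not reproduced here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      def energy_permutation_p(x, y, n_perm=2000):
          """Permutation p-value for the (1-D) energy distance between two samples."""
          observed = stats.energy_distance(x, y)
          pooled = np.concatenate([x, y])
          n = len(x)
          null = np.empty(n_perm)
          for b in range(n_perm):
              perm = rng.permutation(pooled)
              null[b] = stats.energy_distance(perm[:n], perm[n:])
          return (null >= observed).mean()

      x = rng.normal(0.0, 1.0, size=200)
      y = rng.normal(0.2, 1.0, size=200)
      print("permutation p-value ~", energy_permutation_p(x, y))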

  17. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
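
    A minimal sketch of the procedure as described, for a normal model (the model choice, the equiprobable binning, and the sample are assumptions): compute the MLE from one bootstrap sample, form the bin counts from the original data under that MLE, and refer the Pearson statistic to a full chi-squared distribution.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      def bootstrap_pearson_test(data, n_bins=10):
          """Pearson chi-square GOF test for a normal model using a bootstrap-sample MLE."""
          data = np.asarray(data)
          boot = rng.choice(data, size=data.size, replace=True)    # one bootstrap sample
          mu, sigma = boot.mean(), boot.std()                      # MLE from the bootstrap sample

          # Probability integral transform with the bootstrap MLE, then equiprobable bins;
          # the counts come from the ORIGINAL data.
          u = stats.norm.cdf(data, loc=mu, scale=sigma)
          observed, _ = np.histogram(u, bins=np.linspace(0, 1, n_bins + 1))
          expected = np.full(n_bins, data.size / n_bins)

          chi2 = ((observed - expected) ** 2 / expected).sum()
          return chi2, stats.chi2.sf(chi2, df=n_bins - 1)          # full chi-squared df recovered

      sample = rng.normal(loc=2.0, scale=1.5, size=400)
      print(bootstrap_pearson_test(sample))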

  18. Combined Loads Test Fixture for Thermal-Structural Testing Aerospace Vehicle Panel Concepts

    NASA Technical Reports Server (NTRS)

    Fields, Roger A.; Richards, W. Lance; DeAngelis, Michael V.

    2004-01-01

    A structural test requirement of the National Aero-Space Plane (NASP) program has resulted in the design, fabrication, and implementation of a combined loads test fixture. Principal requirements for the fixture are testing a 4- by 4-ft hat-stiffened panel with combined axial (either tension or compression) and shear load at temperatures ranging from room temperature to 915 F, keeping the test panel stresses caused by the mechanical loads uniform, and thermal stresses caused by non-uniform panel temperatures minimized. The panel represents the side fuselage skin of an experimental aerospace vehicle, and was produced for the NASP program. A comprehensive mechanical loads test program using the new test fixture has been conducted on this panel from room temperature to 500 F. Measured data have been compared with finite-element analyses predictions, verifying that uniform load distributions were achieved by the fixture. The overall correlation of test data with analysis is excellent. The panel stress distributions and temperature distributions are very uniform and fulfill program requirements. This report provides details of an analytical and experimental validation of the combined loads test fixture. Because of its simple design, this unique test fixture can accommodate panels from a variety of aerospace vehicle designs.

  19. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the work load on the available resources.

  20. Determining irrigation distribution uniformity and efficiency for nurseries

    Treesearch

    R. Thomas Fernandez

    2010-01-01

    A simple method for testing the distribution uniformity of overhead irrigation systems is described. The procedure is described step-by-step along with an example. Other uses of distribution uniformity testing are presented, as well as common situations that affect distribution uniformity and how to alleviate them.

  1. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) distribution as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unidentified and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show the dependence on the shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
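
    The general Monte Carlo recipe for such critical values can be sketched as follows. The GEV is used as a stand-in for the GNO and ordinary maximum likelihood replaces L-moments (both are assumptions made only to keep the example short); the structure is what matters: simulate from the fitted model, re-estimate the parameters for every simulated sample, and take an empirical quantile of the resulting EDF statistics.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)

      def mc_ks_critical_value(data, n_sim=500, alpha=0.05):
          """Monte Carlo critical value of the KS statistic when parameters are re-estimated."""
          params = stats.genextreme.fit(data)
          sim_stats = np.empty(n_sim)
          for b in range(n_sim):
              sim = stats.genextreme.rvs(*params, size=data.size, random_state=rng)
              sim_params = stats.genextreme.fit(sim)               # re-estimate each time
              sim_stats[b] = stats.kstest(sim, "genextreme", args=sim_params).statistic
          observed = stats.kstest(data, "genextreme", args=params).statistic
          return observed, np.quantile(sim_stats, 1 - alpha)

      annual_maxima = stats.genextreme.rvs(c=-0.1, loc=100, scale=30,
                                           size=60, random_state=rng)
      print(mc_ks_critical_value(annual_maxima))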

  2. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

    In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support and the null distribution may depend on an ancillary statistic such as a table margin that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
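
    The classical EDF check mentioned above is a one-liner with scipy; the complementary idea of the paper (improbably small density values are themselves evidence against the model) is only caricatured here by an average log-density, which is an assumption rather than the authors' statistic.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)

      # Do these draws come from the claimed standard normal density?
      draws = rng.standard_t(df=5, size=300)          # actually heavier-tailed than claimed

      # Classical EDF-based check: Kolmogorov-Smirnov against the fully specified CDF.
      print(stats.kstest(draws, "norm", args=(0, 1)))

      # Crude density-based summary: average log-density under the claimed model.
      print("mean log-density under N(0,1):", stats.norm.logpdf(draws).mean())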

  4. Biodiesel Test Plan

    DTIC Science & Technology

    2014-07-01

    Biodiesel Test Plan. Report No. CG-D-07-14, July 2014; G. W. Johnson, et al., Research and Development Center (CG-926 R&DC), 1 Chelsea Street, New London, CT 06320. Distribution Statement A: Approved for Public Release; distribution is unlimited.

  5. An environmental testing facility for Space Station Freedom power management and distribution hardware

    NASA Technical Reports Server (NTRS)

    Jackola, Arthur S.; Hartjen, Gary L.

    1992-01-01

    The plans for a new test facility, including new environmental test systems, which are presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include Random Vibration in three axes, Thermal Vacuum, Thermal Cycling, and Thermal Burn-in, as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test, while maintaining the high standards required for a man-rated space program.

  6. A General Class of Signed Rank Tests for Clustered Data when the Cluster Size is Potentially Informative

    PubMed Central

    Datta, Somnath; Nevalainen, Jaakko; Oja, Hannu

    2012-01-01

    SUMMARY Rank based tests are alternatives to likelihood based tests popularized by their relative robustness and underlying elegant mathematical theory. There has been a surge in research activities in this area in recent years since a number of researchers are working to develop and extend rank based procedures to clustered dependent data which include situations with known correlation structures (e.g., as in mixed effects models) as well as more general forms of dependence. The purpose of this paper is to test the symmetry of a marginal distribution under clustered data. However, unlike most other papers in the area, we consider the possibility that the cluster size is a random variable whose distribution is dependent on the distribution of the variable of interest within a cluster. This situation typically arises when the clusters are defined in a natural way (e.g., not controlled by the experimenter or statistician) and in which the size of the cluster may carry information about the distribution of data values within a cluster. Under the scenario of an informative cluster size, attempts to use some form of variance adjusted sign or signed rank tests would fail since they would not maintain the correct size under the distribution of marginal symmetry. To overcome this difficulty Datta and Satten (2008; Biometrics, 64, 501–507) proposed a Wilcoxon type signed rank test based on the principle of within cluster resampling. In this paper we study this problem in more generality by introducing a class of valid tests employing a general score function. Asymptotic null distribution of these tests is obtained. A simulation study shows that a more general choice of the score function can sometimes result in greater power than the Datta and Satten test; furthermore, this development offers the user a wider choice. We illustrate our tests using a real data example on spinal cord injury patients. PMID:23074359

  7. A General Class of Signed Rank Tests for Clustered Data when the Cluster Size is Potentially Informative.

    PubMed

    Datta, Somnath; Nevalainen, Jaakko; Oja, Hannu

    2012-09-01

    Rank based tests are alternatives to likelihood based tests popularized by their relative robustness and underlying elegant mathematical theory. There has been a surge in research activities in this area in recent years since a number of researchers are working to develop and extend rank based procedures to clustered dependent data which include situations with known correlation structures (e.g., as in mixed effects models) as well as more general forms of dependence. The purpose of this paper is to test the symmetry of a marginal distribution under clustered data. However, unlike most other papers in the area, we consider the possibility that the cluster size is a random variable whose distribution is dependent on the distribution of the variable of interest within a cluster. This situation typically arises when the clusters are defined in a natural way (e.g., not controlled by the experimenter or statistician) and in which the size of the cluster may carry information about the distribution of data values within a cluster. Under the scenario of an informative cluster size, attempts to use some form of variance adjusted sign or signed rank tests would fail since they would not maintain the correct size under the distribution of marginal symmetry. To overcome this difficulty Datta and Satten (2008; Biometrics, 64, 501-507) proposed a Wilcoxon type signed rank test based on the principle of within cluster resampling. In this paper we study this problem in more generality by introducing a class of valid tests employing a general score function. Asymptotic null distribution of these tests is obtained. A simulation study shows that a more general choice of the score function can sometimes result in greater power than the Datta and Satten test; furthermore, this development offers the user a wider choice. We illustrate our tests using a real data example on spinal cord injury patients.

  8. Wilcoxon's signed-rank statistic: what null hypothesis and why it matters.

    PubMed

    Li, Heng; Johnson, Terri

    2014-01-01

    In statistical literature, the term 'signed-rank test' (or 'Wilcoxon signed-rank test') has been used to refer to two distinct tests: a test for symmetry of distribution and a test for the median of a symmetric distribution, sharing a common test statistic. To avoid potential ambiguity, we propose to refer to those two tests by different names, as 'test for symmetry based on signed-rank statistic' and 'test for median based on signed-rank statistic', respectively. The utility of such terminological differentiation should become evident through our discussion of how those tests connect and contrast with sign test and one-sample t-test. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
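
    In practice the common statistic is computed the same way regardless of which null is intended; the interpretation is what differs. A short sketch with synthetic paired differences (the data and the sign-test comparison are illustrative; scipy.stats.binomtest requires SciPy 1.7 or later):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(10)

      # Paired data: post-treatment minus pre-treatment differences.
      differences = rng.normal(loc=0.3, scale=1.0, size=40)

      # The same signed-rank statistic serves both "symmetry about 0" and, under an
      # assumed symmetric distribution, "median equal to 0".
      stat, p_value = stats.wilcoxon(differences)
      print(f"signed-rank statistic = {stat}, p = {p_value:.4f}")

      # For comparison: the sign test (binomial test on the signs of the differences).
      n_pos = int((differences > 0).sum())
      print("sign test p =", stats.binomtest(n_pos, n=differences.size, p=0.5).pvalue)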

  9. Probabilistic thermal-shock strength testing using infrared imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wereszczak, A.A.; Scheidt, R.A.; Ferber, M.K.

    1999-12-01

    A thermal-shock strength-testing technique has been developed that uses a high-resolution, high-temperature infrared camera to capture a specimen's surface temperature distribution at fracture. Aluminum nitride (AlN) substrates are thermally shocked to fracture to demonstrate the technique. The surface temperature distribution for each test and AlN's thermal expansion are used as input in a finite-element model to determine the thermal-shock strength for each specimen. An uncensored thermal-shock strength Weibull distribution is then determined. The test and analysis algorithm show promise as a means to characterize thermal shock strength of ceramic materials.

  10. Benford's law first significant digit and distribution distances for testing the reliability of financial reports in developing countries

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Ausloos, Marcel; Zhu, Tingting

    2018-02-01

    We discuss a common suspicion about reported financial data, in 10 industrial sectors of the 6 so called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law first significant digit and through distribution distances tests. It is shown that several visually anomalous data have to be a priori removed. Thereafter, the distributions much better follow the first digit significant law, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for distance tests. A few outliers are pointed out.

  11. Modification of Kolmogorov-Smirnov test for DNA content data analysis through distribution alignment.

    PubMed

    Huang, Shuguang; Yeo, Adeline A; Li, Shuyu Dan

    2007-10-01

    The Kolmogorov-Smirnov (K-S) test is a statistical method often used for comparing two distributions. In high-throughput screening (HTS) studies, such distributions usually arise from the phenotype of independent cell populations. However, the K-S test has been criticized for being overly sensitive in applications, and it often detects a statistically significant difference that is not biologically meaningful. One major reason is that there is a common phenomenon in HTS studies that systematic drifting exists among the distributions due to reasons such as instrument variation, plate edge effect, accidental difference in sample handling, etc. In particular, in high-content cellular imaging experiments, the location shift could be dramatic since some compounds themselves are fluorescent. This oversensitivity of the K-S test is particularly overpowered in cellular assays where the sample sizes are very big (usually several thousands). In this paper, a modified K-S test is proposed to deal with the nonspecific location-shift problem in HTS studies. Specifically, we propose that the distributions are "normalized" by density curve alignment before the K-S test is conducted. In applications to simulation data and real experimental data, the results show that the proposed method has improved specificity.
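
    A crude stand-in for the alignment step (median-centring each sample before the two-sample K-S test; the paper aligns full density curves, so this is a simplification) shows how removing a nonspecific location drift changes the verdict:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)

      control = rng.normal(loc=0.0, scale=1.0, size=5000)
      treated = rng.normal(loc=0.4, scale=1.0, size=5000)      # pure location drift

      # The standard two-sample K-S test is driven entirely by the nonspecific shift.
      print("raw K-S:     ", stats.ks_2samp(control, treated))

      # Crude alignment: remove each sample's median so only shape differences remain.
      print("aligned K-S: ", stats.ks_2samp(control - np.median(control),
                                            treated - np.median(treated)))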

  12. IEEE 342 Node Low Voltage Networked Test System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin P.; Phanivong, Phillippe K.; Lacroix, Jean-Sebastian

    The IEEE Distribution Test Feeders provide a benchmark for new algorithms to the distribution analyses community. The low voltage network test feeder represents a moderate size urban system that is unbalanced and highly networked. This is the first distribution test feeder developed by the IEEE that contains unbalanced networked components. The 342 node Low Voltage Networked Test System includes many elements that may be found in a networked system: multiple 13.2kV primary feeders, network protectors, a 120/208V grid network, and multiple 277/480V spot networks. This paper presents a brief review of the history of low voltage networks and how they evolved into the modern systems. This paper will then present a description of the 342 Node IEEE Low Voltage Network Test System and power flow results.

  13. Biostatistics Series Module 3: Comparing Groups: Numerical Variables.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is the Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
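
    As a quick reference, the tests named above map directly onto scipy.stats calls; the synthetic groups below are assumptions used only to make the snippet runnable.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(12)
      a = rng.normal(10, 2, size=30)            # group A
      b = rng.normal(11, 2, size=30)            # group B (independent of A)
      c = rng.normal(12, 2, size=30)            # group C
      a_followup = a + rng.normal(0.5, 1, 30)   # repeated measurement on group A

      print(stats.ttest_1samp(a, popmean=10))   # one-sample t-test
      print(stats.ttest_ind(a, b))              # unpaired (independent samples) t-test
      print(stats.ttest_rel(a, a_followup))     # paired t-test
      print(stats.f_oneway(a, b, c))            # one-way ANOVA

      # Nonparametric counterparts
      print(stats.mannwhitneyu(a, b, alternative="two-sided"))
      print(stats.wilcoxon(a, a_followup))
      print(stats.kruskal(a, b, c))
      print(stats.friedmanchisquare(a, a_followup, a + rng.normal(1, 1, 30)))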

  14. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  15. Tests for informative cluster size using a novel balanced bootstrap scheme.

    PubMed

    Nevalainen, Jaakko; Oja, Hannu; Datta, Somnath

    2017-07-20

    Clustered data are often encountered in biomedical studies, and to date, a number of approaches have been proposed to analyze such data. However, the phenomenon of informative cluster size (ICS) is a challenging problem, and its presence has an impact on the choice of a correct analysis methodology. For example, Dutta and Datta (2015, Biometrics) presented a number of marginal distributions that could be tested. Depending on the nature and degree of informativeness of the cluster size, these marginal distributions may differ, as do the choices of the appropriate test. In particular, they applied their new test to a periodontal data set where the plausibility of the informativeness was mentioned, but no formal test for the same was conducted. We propose bootstrap tests for testing the presence of ICS. A balanced bootstrap method is developed to successfully estimate the null distribution by merging the re-sampled observations with closely matching counterparts. Relying on the assumption of exchangeability within clusters, the proposed procedure performs well in simulations even with a small number of clusters, at different distributions and against different alternative hypotheses, thus making it an omnibus test. We also explain how to extend the ICS test to a regression setting and thereby enhancing its practical utility. The methodologies are illustrated using the periodontal data set mentioned earlier. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Distributions of Characteristic Roots in Multivariate Analysis

    DTIC Science & Technology

    1976-07-01

    studied by various authors, have been briefly discussed. Such distributional properties of four test criteria and a few less important ones which are...functions of the characteristic roots have further been discussed in view of the power comparisons made in connection with tests of three multivariate hypotheses. In addition...the one-sample case has also been considered in terms of distributional aspects of the characteristic roots and criteria for tests of two hypotheses on the

  17. Outlier detection in a new half-circular distribution

    NASA Astrophysics Data System (ADS)

    Rambli, Adzhar; Mohamed, Ibrahim Bin; Shimizu, Kunio; Khalidin, Nurliza

    2015-10-01

    In this paper, we use a discordancy test based on spacing theory to detect outliers in half-circular data. Up to now, numerous discordancy tests have been proposed to detect outliers in circular distributions, which are defined on [0,2π). However, some circular data lie within just half of this range. Therefore, first we introduce a new half-circular distribution developed using the inverse stereographic projection technique on a gamma distributed variable. Then, we develop a new discordancy test to detect single or multiple outliers in the half-circular data based on the spacing theory. We show the practical value of the test by applying it to an eye data set obtained from a glaucoma clinic at the University of Malaya Medical Centre, Malaysia.
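
    One standard inverse stereographic mapping sends a non-negative variable x to the half circle via theta = 2*arctan(x), which lies in [0, pi); whether this matches the authors' exact construction is an assumption, but it shows how a gamma variate induces a half-circular sample.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(13)

      # Gamma variate on [0, inf) mapped to [0, pi) by one inverse stereographic projection.
      x = stats.gamma.rvs(a=2.0, scale=1.0, size=1000, random_state=rng)
      theta = 2.0 * np.arctan(x)

      print("angle range:", theta.min(), theta.max())     # all angles lie in [0, pi)
      # Circular-style summary: mean resultant direction of the angles.
      print("mean direction:", np.arctan2(np.sin(theta).mean(), np.cos(theta).mean()))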

  18. Model-Driven Test Generation of Distributed Systems

    NASA Technical Reports Server (NTRS)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  19. Black swans or dragon-kings? A simple test for deviations from the power law

    NASA Astrophysics Data System (ADS)

    Janczura, J.; Weron, R.

    2012-05-01

    We develop a simple test for deviations from power-law tails, and indeed from the tails of any distribution. We use this test - which is based on the asymptotic properties of the empirical distribution function - to answer the question of whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or 'only' as black swans.

  20. Central Inertial and GPS Test Facility (CIGTF) Customer Handbook

    DTIC Science & Technology

    2007-08-01

    capabilities offer the customer a cost-effective means to evaluate their guidance and navigation systems. The 746 TS also manages the tri-service GPS... The 746 TS works very hard to provide its customers... Customer Handbook, August 2007, Holloman AFB, New Mexico. Distribution Statement A: Approved for public release; distribution is...

  1. Final Environmental Assessment for Low-Level Flight Testing, Evaluation, and Training, Edwards Air Force Base

    DTIC Science & Technology

    2005-05-01

    Title: Final Environmental Assessment for Low-Level Flight Testing, Evaluation, and Training, Edwards Air Force Base. Performing organization: Air Force Flight Test Center, Environmental Management Directorate, Edwards AFB, CA 93524. Distribution/Availability Statement: Approved for public release; distribution unlimited. Abstract: The U.S. Air Force Flight Test...

  2. A Performance Comparison on the Probability Plot Correlation Coefficient Test using Several Plotting Positions for GEV Distribution.

    NASA Astrophysics Data System (ADS)

    Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng

    2014-05-01

    Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of these goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than the others. In this study, we focus on PPCC tests for the GEV distribution, which is widely used. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte-Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte-Carlo Simulation. Acknowledgements: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
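
    A minimal sketch of a PPCC computation and a Monte Carlo critical value for the GEV. The Cunnane-type plotting position and the use of scipy's genextreme quantiles are stand-ins for the specific formulas compared in the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(14)

      def gev_ppcc(data, shape, plotting_alpha=0.4):
          """PPCC between ordered data and GEV quantiles at Cunnane-type plotting positions."""
          x = np.sort(np.asarray(data))
          n = x.size
          i = np.arange(1, n + 1)
          p = (i - plotting_alpha) / (n + 1 - 2 * plotting_alpha)   # plotting positions
          q = stats.genextreme.ppf(p, c=shape)                      # standard GEV quantiles
          return np.corrcoef(x, q)[0, 1]

      # Monte Carlo critical value for a given shape parameter and sample size.
      shape, n, alpha = -0.1, 50, 0.05
      sims = np.array([gev_ppcc(stats.genextreme.rvs(c=shape, size=n, random_state=rng), shape)
                       for _ in range(2000)])
      critical = np.quantile(sims, alpha)       # reject when the observed PPCC falls below this
      print(f"approximate {alpha:.0%} critical PPCC for n={n}, shape={shape}: {critical:.4f}")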

  3. Distributed practice. The more the merrier? A randomised bronchoscopy simulation study.

    PubMed

    Bjerrum, Anne Sofie; Eika, Berit; Charles, Peder; Hilberg, Ole

    2016-01-01

    The distribution of practice affects the acquisition of skills. Distributed practice has been shown to be more effective for skills acquisition than massed training. However, it remains unknown which is the most effective distributed practice schedule for learning bronchoscopy skills through simulation training. This study compares two distributed practice schedules: One-day distributed practice and weekly distributed practice. Twenty physicians in training were randomly assigned to one-day distributed or weekly distributed bronchoscopy simulation practice. Performance was assessed with a pre-test, a post-test after each practice session, and a 4-week retention test using previously validated simulator measures. Data were analysed with repeated measures ANOVA. No interaction was found between group and test (F(4,72) <1.68, p>0.16), except for the measure 'percent-segments-entered', and no main effect of group was found for any of the measures (F(1,72)< 0.87, p>0.36), which indicates that there was no difference between the learning curves of the one-day distributed practice schedule and the weekly distributed practice schedule. We found no difference in effectiveness of bronchoscopy skills acquisition between the one-day distributed practice and the weekly distributed practice. This finding suggests that the choice of bronchoscopy training practice may be guided by what best suits the clinical practice.

  4. Identifying Variations in Hydraulic Conductivity on the East River at Crested Butte, CO

    NASA Astrophysics Data System (ADS)

    Ulmer, K. N.; Malenda, H. F.; Singha, K.

    2016-12-01

    Slug tests are a widely used method to measure saturated hydraulic conductivity, or how easily water flows through an aquifer, by perturbing the piezometric surface and measuring the time the local groundwater table takes to re-equilibrate. Saturated hydraulic conductivity is crucial to calculating the speed and direction of groundwater movement. Therefore, it is important to document data variance from in situ slug tests. This study addresses two potential sources of data variability: different users and different types of slug used. To test for user variability, two individuals slugged the same six wells with water multiple times at a stream meander on the East River near Crested Butte, CO. To test for variations in type of slug test, multiple water and metal slug tests were performed at a single well in the same meander. The distributions of hydraulic conductivities of each test were then tested for variance using both the Kruskal-Wallis test and the Brown-Forsythe test. When comparing the hydraulic conductivity distributions gathered by the two individuals, we found that they were statistically similar. However, we found that the two types of slug tests produced hydraulic conductivity distributions for the same well that are statistically dissimilar. In conclusion, multiple people should be able to conduct slug tests without creating any considerable variations in the resulting hydraulic conductivity values, but only a single type of slug should be used for those tests.
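    As a brief illustration of the two comparisons named above, the sketch below applies the Kruskal-Wallis test and the Brown-Forsythe test (SciPy's Levene test with median centering) to hypothetical hydraulic conductivity samples; the values are invented for illustration and are not the study's data.

```python
# Sketch of the two statistical comparisons described above, on invented
# slug-test results; scipy's levene with center='median' is the Brown-Forsythe test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hydraulic conductivity estimates (m/day) from water slugs and metal slugs (illustrative)
k_water = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=12)
k_metal = rng.lognormal(mean=np.log(1.4), sigma=0.5, size=12)

h_stat, h_p = stats.kruskal(k_water, k_metal)                     # compares the distributions
bf_stat, bf_p = stats.levene(k_water, k_metal, center='median')   # Brown-Forsythe: compares spread

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {h_p:.3f}")
print(f"Brown-Forsythe: W = {bf_stat:.2f}, p = {bf_p:.3f}")
```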

  5. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    NASA Astrophysics Data System (ADS)

    Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.

    2011-01-01

    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte-Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.

  6. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different from (even contradictory to) that of the Wald test for testing the equality of the success probabilities in the control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  7. Testicular distribution and toxicity of a novel LTA4H inhibitor in rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, P.D., E-mail: pward4@its.jnj.com; La, D.

    JNJ 40929837, a novel leukotriene A4 hydrolase inhibitor in drug development, was reported to induce testicular toxicity in rats. The mechanism of toxicity was considered to be rodent specific and not relevant to humans. To further investigate this finding in rats, the distribution and toxicokinetics of JNJ 40929837 and its two metabolites, M1 and M2, were investigated. A quantitative whole body autoradiography study showed preferential distribution and retention of JNJ 40929837-derived radioactivity in the testes consistent with the observed site of toxicity. Subsequent studies with unlabeled JNJ 40929837 showed different metabolite profiles between the plasma and testes. Following a single oral 50 mg/kg dose of JNJ 40929837, M2 was the primary metabolite in plasma whereas M1 was the primary metabolite in testes. The exposure of M1 was 386-fold higher in the testes compared to plasma whereas M2 had limited exposure in testes. Furthermore, the Tmax of M1 was 48 h in testes suggesting a large accumulation potential of this metabolite in testes compared to plasma. Following six months of repeated daily oral dosing, M1 accumulated approximately five-fold in the testes whereas the parent did not accumulate. These results indicate that the toxicokinetic profiles of JNJ 40929837 and its two metabolites in testes are markedly different compared to plasma and support the importance of understanding the toxicokinetic profiles of compounds and their metabolites in organs/tissues where toxicity is observed. - Highlights: • JNJ 40929837-derived radioactivity preferentially distributed into testes • Primary metabolite flip-flop in plasma and testes • The primary metabolite in testes accumulated 5-fold but not parent.

  8. Comparison of Accuracy Between 13C- and 14C-Urea Breath Testing: Is an Indeterminate-Results Category Still Needed?

    PubMed

    Charest, Mathieu; Bélair, Marc-André

    2017-06-01

    Helicobacter pylori infection is the leading cause of peptic ulcer disease. The purpose of this study was, first, to assess the difference in the distribution of negative versus positive results between the older 14C-urea breath test and the newer 13C-urea breath test and, second, to determine whether use of an indeterminate-results category is still meaningful and what type of results should trigger repeated testing. Methods: A retrospective survey was performed of all consecutive patients referred to our service for urea breath testing. We analyzed 562 patients who had undergone testing with 14C-urea and 454 patients who had undergone testing with 13C-urea. Results: In comparison with the wide distribution of negative 14C results, negative 13C results were distributed farther from the cutoff and were grouped more tightly around the mean negative value. Distribution analysis of the negative results for 13C testing, compared with those for 14C testing, revealed a statistically significant difference between the two. Within the 13C group, only 1 patient could have been classified as having indeterminate results using the same indeterminate zone as was used for the 14C group. This is significantly less frequent than what was found for the 14C group. Discussion: Borderline-negative results do occur with 13C-urea breath testing, although less frequently than with 14C-urea breath testing, and we will be carefully monitoring differences falling between 3.0 and 3.5 %Δ. 13C-urea breath testing is safe and simple for the patient and, in most cases, provides clearer positive or negative results for the clinician. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  9. EPA flow reference method testing and analysis: Data report -- Pennsylvania Electric Company, G.P.U. Genco Homer City Station. Volume 1: Test description and appendix A (data distribution package)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-09-01

    This report describes the test site, equipment, and procedures and presents the data obtained during field testing at G.P.U. Genco Homer City Station, August 19-24, 1997. This was the third of three field tests that the US Environmental Protection Agency (EPA) conducted in 1997 as part of a major study to evaluate potential improvements to Method 3, EPA's test method for measuring flue gas volumetric flow in stacks. The report also includes a Data Distribution Package, the official, complete repository of the results obtained at the test site.

  10. Aerodynamic performance and pressure distributions for a NASA SC(2)-0714 airfoil tested in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Jenkins, Renaldo V.; Hill, Acquilla S.; Ray, Edward J.

    1988-01-01

    This report presents in graphic and tabular forms the aerodynamic coefficient and surface pressure distribution data for a NASA SC(2)-0714 airfoil tested in the Langley 0.3-Meter Transonic Cryogenic Tunnel. The test was another in a series of tests involved in the joint NASA/U.S. Industry Advanced Technology Airfoil Tests program. This 14% thick supercritical airfoil was tested at Mach numbers from 0.6 to 0.76 and angles of attack from -2.0 to 6.0 degrees. The test Reynolds numbers were 4 million, 6 million, 10 million, 15 million, 30 million, 40 million, and 45 million.

  11. Developing a Hypothetical Learning Trajectory for the Sampling Distribution of the Sample Means

    NASA Astrophysics Data System (ADS)

    Syafriandi

    2018-04-01

    Special types of probability distribution are sampling distributions that are important in hypothesis testing. The concept of a sampling distribution may well be the key concept in understanding how inferential procedures work. In this paper, we will design a hypothetical learning trajectory (HLT) for the sampling distribution of the sample mean, and we will discuss how the sampling distribution is used in hypothesis testing.
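    A minimal simulation of the kind of activity such a learning trajectory might build on: drawing repeated samples from a skewed population and comparing the spread of the sample means with the standard error predicted by the central limit theorem. The population, sample size, and number of replications are illustrative assumptions.

```python
# Simulate the sampling distribution of the sample mean (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
population = rng.exponential(scale=10.0, size=100_000)   # a skewed population
n = 30                                                   # sample size
means = np.array([rng.choice(population, size=n).mean() for _ in range(5_000)])

print(f"population mean      = {population.mean():.2f}")
print(f"mean of sample means = {means.mean():.2f}")
print(f"SE predicted by CLT  = {population.std() / np.sqrt(n):.2f}")
print(f"SD of sample means   = {means.std():.2f}")
```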

  12. Real-time high speed generator system emulation with hardware-in-the-loop application

    NASA Astrophysics Data System (ADS)

    Stroupe, Nicholas

    The emerging emphasis on, and benefits of, distributed generation on smaller scale networks have prompted much attention and research in this field. The growth of research in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, a simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed by using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system, used to perform various tests on controls and stability under the non-linear load environment expected from Navy weaponry. The test bed can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. In this thesis, the test bed is utilized to perform and validate this newly developed method of generator system emulation. The dynamics of a high speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed, which can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the drawbacks of needing physical generator units. Some of these advantages are safety, reduced cost, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and steps necessary to achieve it, and demonstrates real results and verification of numerical values in real time. The final goal of this thesis is to introduce this new idea and show that it is in fact obtainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.

  13. Notes on power of normality tests of error terms in regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Střelec, Luboš

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow exact inferences. As a consequence, normally distributed stochastic errors are necessary to avoid misleading inferences, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
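    A minimal sketch of checking the normality of regression error terms, assuming a simple linear model; the Shapiro-Wilk and Jarque-Bera tests stand in for the robust RT-class tests discussed in the paper, which are not reproduced here.

```python
# Fit a simple linear regression and test the residuals for normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=200)   # heavy-tailed (non-normal) errors

slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (intercept + slope * x)

sw_stat, sw_p = stats.shapiro(residuals)
jb_stat, jb_p = stats.jarque_bera(residuals)
print(f"Shapiro-Wilk p = {sw_p:.4f}, Jarque-Bera p = {jb_p:.4f}")
# small p-values flag non-normal errors, warning that t/F-based inference may be unreliable
```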

  14. The Space Station Module Power Management and Distribution automation test bed

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.

    1991-01-01

    The Space Station Module Power Management And Distribution (SSM/PMAD) automation test bed project was begun at NASA/Marshall Space Flight Center (MSFC) in the mid-1980s to develop an autonomous, user-supportive power management and distribution test bed simulating the Space Station Freedom Hab/Lab modules. As the test bed has matured, many new technologies and projects have been added. The author focuses on three primary areas. The first area is the overall accomplishments of the test bed itself. These include a much-improved user interface, a more efficient expert system scheduler, improved communication among the three expert systems, and initial work on adding intermediate levels of autonomy. The second area is the addition of a more realistic power source to the SSM/PMAD test bed; this project is called the Large Autonomous Spacecraft Electrical Power System (LASEPS). The third area is the completion of a virtual link between the SSM/PMAD test bed at MSFC and the Autonomous Power Expert at Lewis Research Center.

  15. Using R to Simulate Permutation Distributions for Some Elementary Experimental Designs

    ERIC Educational Resources Information Center

    Eudey, T. Lynn; Kerr, Joshua D.; Trumbo, Bruce E.

    2010-01-01

    Null distributions of permutation tests for two-sample, paired, and block designs are simulated using the R statistical programming language. For each design and type of data, permutation tests are compared with standard normal-theory and nonparametric tests. These examples (often using real data) provide for classroom discussion use of metrics…
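    The article itself works in R; the sketch below shows the same idea for the two-sample design in Python, with invented data: the group labels are repeatedly shuffled to build the permutation null distribution of the mean difference.

```python
# A two-sample permutation test analogous to the simulations described above.
import numpy as np

def perm_test_mean_diff(a, b, n_perm=10_000, rng=None):
    """Two-sided permutation p-value for the difference in group means."""
    if rng is None:
        rng = np.random.default_rng()
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                                   # permute group assignment
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        count += abs(diff) >= abs(observed)
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
group1 = rng.normal(10.0, 2.0, size=15)
group2 = rng.normal(11.5, 2.0, size=15)
diff, p = perm_test_mean_diff(group1, group2, rng=rng)
print(f"observed difference = {diff:.2f}, permutation p = {p:.4f}")
```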

  16. Pressure distributions from high Reynolds number transonic tests of an NACA 0012 airfoil in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Ladson, Charles L.; Hill, Acquilla S.; Johnson, William G., Jr.

    1987-01-01

    Tests were conducted in the 2-D test section of the Langley 0.3-meter Transonic Cryogenic Tunnel on a NACA 0012 airfoil to obtain aerodynamic data as a part of the Advanced Technology Airfoil Test (ATAT) program. The test program covered a Mach number range of 0.30 to 0.82 and a Reynolds number range of 3.0 × 10^6 to 45.0 × 10^6. The stagnation pressure was varied between 1.2 and 6.0 atmospheres and the stagnation temperature was varied between 300 K and 90 K to obtain these test conditions. Tabulated pressure distributions and integrated force and moment coefficients are presented as well as plots of the surface pressure distributions. The data are presented uncorrected for wall interference effects and without analysis.

  17. Comparison of theoretical and flight-measured local flow aerodynamics for a low-aspect-ratio fin

    NASA Technical Reports Server (NTRS)

    Johnson, J. B.; Sandlin, D. R.

    1984-01-01

    Flight test and theoretical aerodynamic data were obtained for a flight test fixture mounted on the underside of an F-104G aircraft. The theoretical data were generated using two codes: a two-dimensional transonic code called Code H, and a three-dimensional subsonic and supersonic code called wing-body. Pressure distributions generated by the codes for the flight test fixture, as well as boundary layer displacement thicknesses generated by the two-dimensional code, were compared to the flight test data. The two-dimensional code pressure distributions compared well except at the minimum pressure point and trailing edge. Shock locations compared well except at high transonic speeds. The three-dimensional code pressure distributions compared well except at the trailing edge of the flight test fixture. The two-dimensional code does not predict the displacement thickness of the flight test fixture well.

  18. TTCN-3 Based Conformance Testing of Mobile Broadcast Business Management System in 3G Networks

    NASA Astrophysics Data System (ADS)

    Wang, Zhiliang; Yin, Xia; Xiang, Yang; Zhu, Ruiping; Gao, Shirui; Wu, Xin; Liu, Shijian; Gao, Song; Zhou, Li; Li, Peng

    Mobile broadcast service is one of the most important emerging services in 3G networks. To better operate and manage mobile broadcast services, a mobile broadcast business management system (MBBMS) should be designed and developed. Such a system, with its distributed nature, complicated XML data, and security mechanisms, faces many testing challenges. In this paper, we study the conformance testing methodology of MBBMS and design and implement an MBBMS protocol conformance testing tool based on TTCN-3, a standardized test description language that can be used in black-box testing of reactive and distributed systems. In this methodology and testing tool, we present a semi-automatic XML test data generation method for TTCN-3 test suites and use an HMSC model to aid test suite design. In addition, we propose an integrated testing method for the hierarchical MBBMS security architecture. This testing tool has been used in industrial-level testing.

  19. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible hazard function shapes, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
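    For readers working in a modern environment, the sketch below shows SciPy/NumPy equivalents of several of the quantities these Fortran routines compute; it is an illustrative mapping, not the USGS library itself.

```python
# Modern equivalents of several distribution routines described above (illustrative only).
import numpy as np
from scipy import stats

q = stats.norm.ppf(0.99)                  # Gaussian (normal) quantile
p = stats.chi2.cdf(3.84, df=1)            # chi-square CDF
g = stats.gamma.ppf(0.95, a=2.0)          # gamma quantile
w = stats.weibull_min.cdf(1.5, c=1.2)     # Weibull CDF
t = stats.t.sf(2.0, df=10)                # Student's t upper tail probability
p3 = stats.pearson3.ppf(0.99, skew=0.5)   # Pearson Type III quantile

rng = np.random.default_rng(5)
uniform_draws = rng.uniform(size=1000)    # uniform random numbers
normal_draws = rng.normal(size=1000)      # normal random numbers
print(q, p, g, w, t, p3)
```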

  1. Nondestructive measurement of the refractive index distribution of a glass molded lens by two-wavelength wavefronts.

    PubMed

    Sugimoto, Tomohiro

    2016-10-01

    This paper presents a nondestructive and non-exact-index-matching method for measuring the refractive index distribution of a glass molded lens with high refractivity. The method measures two-wavelength wavefronts of a test lens immersed in a liquid with a refractive index dispersion different from that of the test lens and calculates the refractive index distribution by eliminating the refractive index distribution error caused by the shape error of the test lens. The estimated uncertainties of the refractive index distributions of test lenses with nd ≈ 1.77 and nd ≈ 1.85 were 1.9 × 10⁻⁵ RMS and 2.4 × 10⁻⁵ RMS, respectively. I validated the proposed method by evaluating the agreement between the estimated uncertainties and experimental values.

  2. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.
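    Two of the simpler approaches reviewed above are easy to sketch on hypothetical partially paired data: a paired t-test restricted to the complete pairs and a two-sample t-test that uses all observations but ignores the pairing. The better-performing methods (optimal pooled t-test, modified maximum likelihood estimator) follow the article and are not reproduced here; all data below are invented.

```python
# Baseline analyses of partially paired data (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_pairs, n_only_x, n_only_y = 20, 10, 8
x_paired = rng.normal(5.0, 1.0, n_pairs)
y_paired = x_paired + rng.normal(0.3, 0.8, n_pairs)   # correlated with x
x_only = rng.normal(5.0, 1.0, n_only_x)               # y missing for these subjects
y_only = rng.normal(5.3, 1.0, n_only_y)               # x missing for these subjects

# paired t-test: uses only the complete pairs
t_paired, p_paired = stats.ttest_rel(x_paired, y_paired)
# two-sample t-test: uses all observations but ignores the pairing
t_pooled, p_pooled = stats.ttest_ind(np.r_[x_paired, x_only], np.r_[y_paired, y_only])

print(f"paired t on complete pairs: p = {p_paired:.3f}")
print(f"two-sample t on all data:   p = {p_pooled:.3f}")
```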

  3. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect a monotonic trend. Maximum likelihood estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess how well the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. The return level estimate, i.e. the return amount that is expected to be exceeded on average once every T time periods, starts to appear within the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
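    A minimal sketch of the stationary part of this workflow, using SciPy's genextreme: fit the GEV to block maxima by maximum likelihood and read off a T-period return level. The data are simulated placeholders, and the non-stationary models and Sherman's test are not reproduced here.

```python
# Fit a GEV to block maxima and estimate a return level (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
yearly_max_return = stats.genextreme.rvs(-0.05, loc=0.08, scale=0.04, size=40, random_state=rng)

shape, loc, scale = stats.genextreme.fit(yearly_max_return)   # MLE (SciPy's shape sign convention)
T = 50                                                        # return period
return_level = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
print(f"estimated {T}-period return level: {return_level:.3f}")
```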

  4. A method for developing design diagrams for ceramic and glass materials using fatigue data

    NASA Technical Reports Server (NTRS)

    Heslin, T. M.; Magida, M. B.; Forrest, K. A.

    1986-01-01

    The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress that is parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution, and using either median value or homologous ratio analysis of the test results.

  5. Sequential Testing of Hypotheses Concerning the Reliability of a System Modeled by a Two-Parameter Weibull Distribution.

    DTIC Science & Technology

    1981-12-01

    Only title page fragments were captured: a thesis (AFIT/GOR/MA/81D-8) by 2nd Lt Philippe A. Lussier, USAF, presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, concerning sequential testing of hypotheses about the reliability of a system modeled by a two-parameter Weibull distribution.

  6. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
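    A minimal sketch of a truncated normal time-to-failure model using SciPy's truncnorm, with invented parameter values; reliability at time t is simply the survival function of the truncated distribution.

```python
# Truncated normal time-to-failure model (illustrative parameter values).
from scipy import stats

mu, sigma = 1000.0, 200.0        # untruncated mean and std of time to failure (hours)
lower, upper = 0.0, 2000.0       # truncation bounds
a, b = (lower - mu) / sigma, (upper - mu) / sigma   # standardized bounds required by scipy
ttf = stats.truncnorm(a, b, loc=mu, scale=sigma)

t = 800.0
print(f"reliability at {t:.0f} h: R(t) = {ttf.sf(t):.3f}")
print(f"mean time to failure:    {ttf.mean():.1f} h")
```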

  7. Distributing Radiant Heat in Insulation Tests

    NASA Technical Reports Server (NTRS)

    Freitag, H. J.; Reyes, A. R.; Ammerman, M. C.

    1986-01-01

    Thermally radiating blanket of stepped thickness distributes heat over insulation sample during thermal vacuum testing. Woven of silicon carbide fibers, blanket spreads heat from quartz lamps evenly over insulation sample. Because of fewer blanket layers toward periphery of sample, more heat initially penetrates there for more uniform heat distribution.

  8. Sequential Computerized Mastery Tests--Three Simulation Studies

    ERIC Educational Resources Information Center

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
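    The 3-parameter logistic IRT model mentioned above is compact enough to write out; the sketch below uses hypothetical item parameters and simulates a single response, purely for illustration of the model form.

```python
# 3-parameter logistic (3PL) item response model (illustrative parameters).
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response: guessing c plus a logistic term."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(8)
theta = 0.5                      # examinee ability
a, b, c = 1.2, 0.0, 0.2          # discrimination, difficulty, guessing
p = p_correct_3pl(theta, a, b, c)
response = rng.random() < p      # simulate one item response
print(f"P(correct) = {p:.3f}, simulated response = {int(response)}")
```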

  9. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
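    A sketch of the maximally selected McNemar statistic as described above: small absolute differences are discarded at increasing data-defined thresholds, McNemar's chi-square is computed at each threshold, and the maximum is taken. The critical values come from the authors' tables or simulation, not from the ordinary chi-square distribution; the data below are invented.

```python
# Maximally selected McNemar (MM) statistic, per the description above.
import numpy as np

def max_selected_mcnemar(diffs):
    d = np.asarray(diffs, float)
    thresholds = np.sort(np.unique(np.abs(d[d != 0])))
    best = 0.0
    for t in thresholds:
        kept = d[np.abs(d) >= t]                 # discard observations below the threshold
        n_pos, n_neg = np.sum(kept > 0), np.sum(kept < 0)
        if n_pos + n_neg > 0:
            stat = (n_pos - n_neg) ** 2 / (n_pos + n_neg)   # McNemar chi-square
            best = max(best, stat)
    return best

rng = np.random.default_rng(9)
pre_post_diff = rng.normal(0.2, 1.0, size=60)     # illustrative pre/post differences
print(f"MM statistic = {max_selected_mcnemar(pre_post_diff):.2f}")
```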

  10. Distributed practice. The more the merrier? A randomised bronchoscopy simulation study

    PubMed Central

    Bjerrum, Anne Sofie; Eika, Berit; Charles, Peder; Hilberg, Ole

    2016-01-01

    Introduction The distribution of practice affects the acquisition of skills. Distributed practice has been shown to be more effective for skills acquisition than massed training. However, it remains unknown which is the most effective distributed practice schedule for learning bronchoscopy skills through simulation training. This study compares two distributed practice schedules: One-day distributed practice and weekly distributed practice. Method Twenty physicians in training were randomly assigned to one-day distributed or weekly distributed bronchoscopy simulation practice. Performance was assessed with a pre-test, a post-test after each practice session, and a 4-week retention test using previously validated simulator measures. Data were analysed with repeated measures ANOVA. Results No interaction was found between group and test (F(4,72) <1.68, p>0.16), except for the measure ‘percent-segments-entered’, and no main effect of group was found for any of the measures (F(1,72)< 0.87, p>0.36), which indicates that there was no difference between the learning curves of the one-day distributed practice schedule and the weekly distributed practice schedule. Discussion We found no difference in effectiveness of bronchoscopy skills acquisition between the one-day distributed practice and the weekly distributed practice. This finding suggests that the choice of bronchoscopy training practice may be guided by what best suits the clinical practice. PMID:27172423

  13. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens [Interlaboratory round robin study on axial tensile properties of SiC/SiC tubes]

    DOE PAGES

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.; ...

    2018-04-19

    An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773–13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.

  14. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  15. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  16. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  17. SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution

    Science.gov Websites

    NREL's SMART-DS (Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios) project provides synthetic distribution system models and scenarios, including a statistical summary of U.S. distribution systems and high spatial and temporal resolution solar data. (Only website navigation fragments were captured.)

  18. Best Statistical Distribution of flood variables for Johor River in Malaysia

    NASA Astrophysics Data System (ADS)

    Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.

    2012-12-01

    A complex flood event is always characterized by a few characteristics, such as flood peak, flood volume, and flood duration, which may be mutually correlated. This study explored the statistical distributions of peak flow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years and analysed based on the water year (July-June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. The Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peak flow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggests that GEV is the best for peak flow. The results of this research can be used to improve flood frequency analysis. (Figure: Comparison between the Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the cumulative distribution function of peak flow.)
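    A minimal sketch of the model-comparison step: fit several candidate distributions to peak flows and compare Kolmogorov-Smirnov statistics. The data are simulated placeholders, the candidate set is a subset of the five distributions above (Log Pearson is omitted for brevity), and the KS p-values are only approximate when parameters are estimated from the same sample.

```python
# Fit candidate distributions to peak flows and compare KS statistics (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
peakflow = stats.genextreme.rvs(-0.1, loc=300, scale=80, size=45, random_state=rng)

candidates = {
    "GEV": stats.genextreme,
    "Generalized Pareto": stats.genpareto,
    "Log-normal": stats.lognorm,
    "Normal": stats.norm,
}
for name, dist in candidates.items():
    params = dist.fit(peakflow)                        # maximum likelihood fit
    ks = stats.kstest(peakflow, dist.cdf, args=params)
    print(f"{name:20s} KS D = {ks.statistic:.3f}  p = {ks.pvalue:.3f}")
```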

  19. A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.

    PubMed

    Lione, G; Gonthier, P

    2016-01-01

    The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. A user-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; type I error < 0.05). A biological validation on the spatial distribution of spores of two fungal pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between the plantation density and the distribution of the infection of Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to the plantation density. The MDT could be used to analyze the spatial distribution of plant diseases both in agricultural and natural ecosystems.
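    A generic permutation test in the spirit of the MDT can be sketched as follows: compare the mean pairwise distance among infected trees with its permutation distribution obtained by shuffling the infection labels over all tree positions. This is a simplified illustration under invented coordinates and labels, not the authors' software.

```python
# Permutation test for spatial clustering of infected trees (simplified sketch).
import numpy as np
from scipy.spatial.distance import pdist

def mean_distance_pvalue(coords, infected, n_perm=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    labels = np.asarray(infected, bool)
    observed = pdist(coords[labels]).mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)                       # shuffle infection status
        count += pdist(coords[perm]).mean() <= observed      # one-sided: clustering
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(11)
coords = rng.uniform(0, 100, size=(80, 2))                   # tree positions (m), illustrative
infected = rng.random(80) < 0.25                             # hypothetical infection status
obs, p = mean_distance_pvalue(coords, infected, rng=rng)
print(f"mean distance among infected trees = {obs:.1f} m, permutation p = {p:.3f}")
```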

  20. Slow test charge response in a dusty plasma with Kappa distributed electrons and ions

    NASA Astrophysics Data System (ADS)

    Ali, S.; Eliasson, B.

    2017-08-01

    The electrostatic potential around a slowly moving test charge is studied in a dusty plasma where the ions and electrons follow a power-law Kappa distribution in velocity space. A test charge moving with a speed much smaller than the dust thermal speed gives rise to a short-scale Debye-Hückel potential as well as a long-range far-field potential decreasing as the inverse cube of the distance to the test charge along the propagation direction. The potentials are significantly modified in the presence of high-energy tails, modeled by lower spectral indices in the ion and electron Kappa distribution functions. Plasma parameters relevant to laboratory dusty plasmas are discussed.
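    As a schematic summary of the two contributions described above (not the paper's exact expressions), the near-field part has the screened Debye-Hückel form with an effective screening length that shrinks as the spectral index κ decreases, while the far-field part falls off as the inverse cube of the distance along the direction of motion:

```latex
% Schematic forms only; q_T is the test charge, \lambda_{eff} the kappa-dependent
% effective screening length, v_T the test charge speed, v_{td} the dust thermal speed.
\begin{align}
  \phi_{\mathrm{DH}}(r) &\simeq \frac{q_T}{4\pi\varepsilon_0\, r}
      \exp\!\left(-\frac{r}{\lambda_{\mathrm{eff}}}\right), &
  \phi_{\mathrm{far}}(r) &\propto \frac{1}{r^{3}}
      \qquad (v_T \ll v_{td}).
\end{align}
```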

  1. Effect of Multiaxial Loading on Crack Growth. Volume 2. Compilation of Experimental Data

    DTIC Science & Technology

    1978-12-01

    Only report documentation fragments were captured: the volume, prepared by Northrop Corporation, Aircraft Group, compiles experimental data including stress distributions in the 2024-T351 cruciform specimen (in the center and along the X-axis) and tensile test results for 7075-T7351 and 2024-T351 aluminum alloys.

  2. Results of pressure distribution tests of a 0.010-scale space shuttle orbiter model (61-0) in the NASA/ARC 3.5-foot hypersonic wind tunnel (test OH38), volume 1

    NASA Technical Reports Server (NTRS)

    Dye, W. H.; Polek, T.

    1975-01-01

    Test results are presented of hypersonic pressure distributions at simulated atmospheric entry conditions. Pressure data were obtained at Mach numbers of 7.4 and 10.4 and Reynolds numbers of 3.0 and 6.5 million per foot. Data are presented in both plotted and tabulated form. Photographs of the wind tunnel apparatus and test configurations are provided.

  3. Slow crack growth test method for polyethylene gas pipes. Volume 1. Topical report, December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leis, B.; Ahmad, J.; Forte, T.

    1992-12-01

    In spite of the excellent performance record of polyethylene (PE) pipes used for gas distribution, a small number of leaks occur in distribution systems each year because of slow growth of cracks through pipe walls. The Slow Crack Growth (SCG) test has been developed as a key element in a methodology for assessing the ability of polyethylene gas distribution systems to resist such leaks. This topical report describes work conducted in the first part of the research, directed at the initial development of the SCG test, including a critical evaluation of the applicability of the SCG test as an element in a PE gas pipe system performance methodology. Results of extensive experiments and analysis are reported. The results show that the SCG test should be very useful in performance assessment.

  4. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.

  5. Technology Solutions Case Study: Ventilation System Effectiveness and Tested Indoor Air Quality Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Rudd and D. Bergey

    Ventilation system effectiveness testing was conducted at two unoccupied, single-family, detached lab homes at the University of Texas - Tyler. Five ventilation system tests were conducted with various whole-building ventilation systems. Multizone fan pressurization testing characterized building and zone enclosure leakage. PFT testing showed multizone air change rates and interzonal airflow filtration. Indoor air recirculation by a central air distribution system can help improve the exhaust ventilation system by way of air mixing and filtration. In contrast, the supply and balanced ventilation systems showed that there is a significant benefit to drawing outside air from a known outside location, and filtering and distributing that air. Compared to the exhaust systems, the CFIS and ERV systems showed better ventilation air distribution and lower concentrations of particulates, formaldehyde and other VOCs.

  6. Effectiveness of motor sequential learning according to practice schedules in healthy adults; distributed practice versus massed practice

    PubMed Central

    Kwon, Yong Hyun; Kwon, Jung Won; Lee, Myoung Hee

    2015-01-01

    [Purpose] The purpose of the current study was to compare the effectiveness of motor sequential learning according to two different types of practice schedules, a distributed practice schedule (two 12-hour inter-trial intervals) and a massed practice schedule (two 10-minute inter-trial intervals), using a serial reaction time (SRT) task. [Subjects and Methods] Thirty healthy subjects were recruited and then randomly and evenly assigned to either the distributed practice group or the massed practice group. All subjects performed three consecutive sessions of the SRT task following one of the two different types of practice schedules. Distributed practice was scheduled with two 12-hour inter-session intervals including sleeping time, whereas massed practice was administered with two 10-minute inter-session intervals. Response time (RT) and response accuracy (RA) were measured at pre-test, mid-test, and post-test. [Results] For RT, univariate analysis demonstrated significant main effects in the within-group comparison of the three tests as well as for the interaction effect of two groups × three tests, whereas the between-group comparison showed no significant effect. The results for RA showed no significant differences in either the between-group comparison or the interaction effect of two groups × three tests, whereas the within-group comparison of the three tests showed a significant main effect. [Conclusion] Distributed practice led to enhancement of motor skill acquisition at the first inter-session interval as well as at the second inter-session interval the following day, compared to massed practice. Consequently, the results of this study suggest that a distributed practice schedule can enhance the effectiveness of motor sequential learning in one-day as well as two-day learning formats compared to massed practice. PMID:25931727

  7. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Treesearch

    Steve P. Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...

  8. The Dynamics of the Evolution of the Black-White Test Score Gap

    ERIC Educational Resources Information Center

    Sohn, Kitae

    2012-01-01

    We apply a quantile version of the Oaxaca-Blinder decomposition to estimate the counterfactual distribution of the test scores of Black students. In the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K), we find that the gap initially appears only at the top of the distribution of test scores. As children age, however,…

  9. Real Time Cockpit Resource Management (CRM) Training

    DTIC Science & Technology

    2010-10-01

    Only report fragments were captured: tables of pretest/posttest learning scores and gain scores for pilots and sensor operators across the five Spiral 1 classes. Small Business Innovation Research (SBIR) Phase II report; Distribution A, approved for public release, distribution unlimited.

  10. K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents

    ERIC Educational Resources Information Center

    Gwanyama, Philip Wagala

    2005-01-01

    The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…

  11. The Influence of an NCLB Accountability Plan on the Distribution of Student Test Score Gains

    ERIC Educational Resources Information Center

    Springer, Matthew G.

    2008-01-01

    Previous research on the effect of accountability programs on the distribution of student test score gains is decidedly mixed. This study examines the issue by estimating an educational production function in which test score gains are a function of the incentives schools have to focus instruction on below-proficient students. NCLB's threat of…

  12. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 different types regarding body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratio and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
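
    The record above gives no code; the Python sketch below only illustrates the general idea behind item (iii), assessing a P value empirically when a count table is too sparse to trust the asymptotic distribution. The table, the plain Pearson chi-square statistic, and the resampling scheme are simplified stand-ins for the power divergence statistics used in the study.

      import numpy as np

      rng = np.random.default_rng(2)
      # hypothetical sparse count table: rows = body-size type, columns = marker alleles
      table = np.array([[5, 1, 0, 2],
                        [1, 3, 2, 0]])

      def chi2_stat(counts):
          # Pearson chi-square statistic for independence; empty expected cells contribute zero
          row = counts.sum(axis=1, keepdims=True)
          col = counts.sum(axis=0, keepdims=True)
          expected = row @ col / counts.sum()
          with np.errstate(invalid="ignore", divide="ignore"):
              terms = np.where(expected > 0, (counts - expected) ** 2 / expected, 0.0)
          return terms.sum()

      chi2_obs = chi2_stat(table)

      # Resample whole tables under independence and use the empirical null distribution
      n = table.sum()
      p_null = (table.sum(axis=1, keepdims=True) @ table.sum(axis=0, keepdims=True) / n**2).ravel()
      null_stats = np.array([chi2_stat(rng.multinomial(n, p_null).reshape(table.shape))
                             for _ in range(5000)])
      p_emp = np.mean(null_stats >= chi2_obs)
      print(f"chi-square = {chi2_obs:.2f}, empirical p = {p_emp:.3f}")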

  13. Evolution of A Distributed Live, Virtual, Constructive Environment for Human in the Loop Unmanned Aircraft Testing

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Otto, Neil M.

    2017-01-01

    NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human in the loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is the interaction of the unmanned aircraft pilot with the display of detect and avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure supporting simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging the use of existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.

  15. Analysis of field size distributions, LACIE test sites 5029, 5033, and 5039, Anhwei Province, People's Republic of China

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1976-01-01

    A study was made of the field size distributions for LACIE test sites 5029, 5033, and 5039, People's Republic of China. Field lengths and widths were measured from LANDSAT imagery, and field area was statistically modeled. Field size parameters have log-normal or Poisson frequency distributions. These were normalized to the Gaussian distribution and theoretical population curves were made. When compared to fields in other areas of the same country measured in the previous study, field lengths and widths in the three LACIE test sites were 2 to 3 times smaller and areas were smaller by an order of magnitude.

  16. The Probability of Obtaining Two Statistically Different Test Scores as a Test Index

    ERIC Educational Resources Information Center

    Muller, Jorg M.

    2006-01-01

    A new test index is defined as the probability of obtaining two randomly selected test scores that are statistically different (PDTS). After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…

  17. Comparing Different Fault Identification Algorithms in Distributed Power System

    NASA Astrophysics Data System (ADS)

    Alkaabi, Salim

    A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increases, distributed power generation was introduced to the power system. Faults may occur in the power system at any time and in different locations. These faults cause huge damage to the system, as they might lead to full failure of the power system. Using distributed generation in the power system made it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with different amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms have been tested and compared while different amounts of power are injected from distributed generators. The algorithms were tested on the IEEE 34 node test feeder using MATLAB, and the results were compared to determine when these algorithms might fail and how reliable these methods are.

  18. Spacecraft thermal balance testing using infrared sources

    NASA Technical Reports Server (NTRS)

    Tan, G. B. T.; Walker, J. B.

    1982-01-01

    A thermal balance test (controlled flux intensity) on a simple black dummy spacecraft using IR lamps was performed and evaluated, the latter being aimed specifically at thermal mathematical model (TMM) verification. For reference purposes the model was also subjected to a solar simulation test (SST). The results show that the temperature distributions measured during IR testing for two different model attitudes under steady state conditions are reproducible with a TMM. The TMM test data correlation is not as accurate for IRT as for SST. Using the standard deviation of the temperature difference distribution (analysis minus test) the SST data correlation is better by a factor of 1.8 to 2.5. The lower figure applies to the measured and the higher to the computer-generated IR flux intensity distribution. Techniques of lamp power control are presented. A continuing work program is described which is aimed at quantifying the differences between solar simulation and infrared techniques for a model representing the thermal radiating surfaces of a large communications spacecraft.

  19. Bayes Factor Covariance Testing in Item Response Models.

    PubMed

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.

  20. Bug Distribution and Statistical Pattern Classification.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    1987-01-01

    The rule space model permits measurement of cognitive skill acquisition and error diagnosis. Further discussion introduces Bayesian hypothesis testing and bug distribution. An illustration involves an artificial intelligence approach to testing fractions and arithmetic. (Author/GDC)

  1. Development and implementation of a quality assurance program for a hormonal contraceptive implant.

    PubMed

    Owen, Derek H; Jenkins, David; Cancel, Aida; Carter, Eli; Dorflinger, Laneta; Spieler, Jeff; Steiner, Markus J

    2013-04-01

    The importance of the distribution of safe, effective and cost-effective pharmaceutical products in resource-constrained countries is the subject of increasing attention. FHI 360 has developed a program aimed at evaluating the quality of a contraceptive implant manufactured in China, while the product is being registered in an increasing number of countries and distributed by international procurement agencies. The program consists of (1) independent product testing; (2) ongoing evaluation of the manufacturing facility through audits and inspections; and (3) post-marketing surveillance. This article focuses on the laboratory testing of the product. The various test methods were chosen from the following test method compendia: the United States Pharmacopeia (USP), the British Pharmacopoeia (BP), the International Organization for Standardization (ISO), and the American Society for Testing and Materials (ASTM), or from lot release tests mandated by Chinese regulatory requirements. Each manufactured lot is independently tested prior to its distribution to countries supported by this program. In addition, a more detailed annual testing program includes evaluation of the active ingredient (levonorgestrel), the final product and the packaging material. Over the first 4 years of this 5-year project, all tested lots met the established quality criteria. The quality assurance program developed for this contraceptive implant has helped ensure that a safe product was being introduced into developing country family planning programs. This program provides a template for establishing quality assurance programs for other cost-effective pharmaceutical products that have not yet received stringent regulatory approval and are being distributed in resource-poor settings. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Statistical tests of peaks and periodicities in the observed redshift distribution of quasi-stellar objects

    NASA Astrophysics Data System (ADS)

    Duari, Debiprosad; Gupta, Patrick D.; Narlikar, Jayant V.

    1992-01-01

    An overview of statistical tests of peaks and periodicities in the redshift distribution of quasi-stellar objects is presented. The tests include the power-spectrum analysis carried out by Burbidge and O'Dell (1972), the generalized Rayleigh test, the Kolmogorov-Smirnov test, and the 'comb-tooth' test. The tests reveal moderate to strong evidence for periodicities of 0.0565 and 0.0127-0.0129. The confidence level of the periodicity of 0.0565 in fact marginally increases when redshifts are transformed to the Galactocentric frame. The same periodicity, first noticed in 1968, persists to date with a QSO population that has since grown about 30 times its original size. The prima facie evidence for periodicities in ln(1 + z) is found to be of no great significance.

  3. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. II. Goodness-of-fit tests

    Treesearch

    Rafal Podlaski; Francis .A. Roesch

    2013-01-01

    The goals of this study are (1) to analyse the accuracy of the approximation of empirical distributions of diameter at breast height (dbh) using two-component mixtures of either the Weibull distribution or the gamma distribution in two-cohort stands, and (2) to discuss the procedure of choosing goodness-of-fit tests. The study plots were...

  4. Distributed Leadership and High-Stakes Testing: Examining the Relationship between Distributed Leadership and LEAP Scores

    ERIC Educational Resources Information Center

    Boudreaux, Wilbert

    2011-01-01

    Educational stakeholders are aware that school administration has become an incredibly intricate dynamic that is too complex for principals to handle alone. Test-driven accountability has made the already daunting task of school administration even more challenging. Distributed leadership presents an opportunity to explore increased leadership…

  5. Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models

    ERIC Educational Resources Information Center

    Chun, So Yeon; Shapiro, Alexander

    2009-01-01

    The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…

  6. Bias in Mental Testing.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    The first eight chapters of this book introduce the topic of test bias. The basic issues involved in criticisms of mental tests and arguments about test bias include: (1) variety of tests and test items; (2) scaling of scores and the form of the distribution of abilities in the population; (3) quantification of subpopulation differences; (4)…

  7. Multiple comparisons permutation test for image based data mining in radiotherapy.

    PubMed

    Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel

    2013-12-23

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy.
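
    A bare-bones Python sketch of a max-statistic permutation test of this kind is shown below. The voxel-wise statistic (difference in mean dose) and all data are illustrative assumptions; the paper's own Tmax definition and imaging pipeline are not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      # hypothetical 10x10 dose maps: 20 patients with the outcome, 30 without
      dose_event = rng.normal(52.0, 5.0, size=(20, 10, 10))
      dose_no_event = rng.normal(48.0, 5.0, size=(30, 10, 10))

      def t_max(a, b):
          # summarise the voxel-wise group differences by their maximum absolute value
          return np.abs(a.mean(axis=0) - b.mean(axis=0)).max()

      t_obs = t_max(dose_event, dose_no_event)
      pooled = np.concatenate([dose_event, dose_no_event], axis=0)
      n_event = dose_event.shape[0]

      n_perm = 5000
      t_null = np.empty(n_perm)
      for i in range(n_perm):
          idx = rng.permutation(pooled.shape[0])          # shuffle outcome labels
          t_null[i] = t_max(pooled[idx[:n_event]], pooled[idx[n_event:]])

      # Using the maximum over voxels yields a single p-value adjusted for multiple comparisons
      p_adjusted = (1 + np.sum(t_null >= t_obs)) / (1 + n_perm)
      print(f"Tmax = {t_obs:.2f}, adjusted p = {p_adjusted:.4f}")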

  8. Test Market Media Relations as a Pilot Test Component in a Nationwide Class Action Settlement Distribution.

    ERIC Educational Resources Information Center

    Pellecchia, Michael

    Results of a pilot test for a public relations campaign to assist in the distribution of funds from the settlement of a nationwide class action suit brought by tenants against the Department of Housing and Urban Development (HUD) are presented in this report. The first chapter presents the background of the case, noting that tenants of Section 236…

  9. The Risk of Adverse Impact in Selections Based on a Test with Known Effect Size

    ERIC Educational Resources Information Center

    De Corte, Wilfried; Lievens, Filip

    2005-01-01

    The authors derive the exact sampling distribution function of the adverse impact (AI) ratio for single-stage, top-down selections using tests with known effect sizes. Subsequently, it is shown how this distribution function can be used to determine the risk that a future selection decision on the basis of such tests will result in an outcome that…

  10. Fidelity and Validity in Distributed Interactive Simulation: Questions and Answers

    DTIC Science & Technology

    1992-11-01

    Record excerpt (report documentation fragments): the report discusses how distributed interactive simulation may revolutionize future work in (a) collective training, (b) the development and evaluation of tactical concepts and doctrine, (c) system test and evaluation, and (d) … exercises. Subject terms: distributed interactive simulation, simulation, training, test and evaluation, simulator fidelity.

  11. Full-field fabric stress mapping by micro Raman spectroscopy in a yarn push-out test.

    PubMed

    Lei, Z K; Qin, F Y; Fang, Q C; Bai, R X; Qiu, W; Chen, X

    2018-02-01

    The full-field stress distribution of a two-dimensional plain fabric was mapped using micro Raman spectroscopy (MRS) through a novel yarn push-out test, simulating a quasi-static projectile impact on the fabric. The stress-strain relationship for a single yarn was established using a digital image correlation method in a single-yarn tensile test. The relationship between Raman peak shift and aramid Kevlar 49 yarn stress was established using MRS in a single-yarn tensile test. An out-of-plane loading test was conducted on an aramid Kevlar 49 plain fabric, and the yarn stress was measured using MRS. From the full-field fabric stress distribution, it can be observed that there is a cross-shaped distribution of high yarn stress; this result would be helpful in further studies on load transfer on a fabric during a projectile impact.

  12. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    A life test sampling plan is a technique that consists of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products through experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  13. Equivalence Testing of Complex Particle Size Distribution Profiles Based on Earth Mover's Distance.

    PubMed

    Hu, Meng; Jiang, Xiaohui; Absar, Mohammad; Choi, Stephanie; Kozak, Darby; Shen, Meiyu; Weng, Yu-Ting; Zhao, Liang; Lionberger, Robert

    2018-04-12

    Particle size distribution (PSD) is an important property of particulates in drug products. In the evaluation of generic drug products formulated as suspensions, emulsions, and liposomes, the PSD comparisons between a test product and the branded product can provide useful information regarding in vitro and in vivo performance. Historically, the FDA has recommended the population bioequivalence (PBE) statistical approach to compare the PSD descriptors D50 and SPAN from test and reference products to support product equivalence. In this study, the earth mover's distance (EMD) is proposed as a new metric for comparing PSDs, particularly when the PSD profile exhibits a complex distribution (e.g., multiple peaks) that is not accurately described by the D50 and SPAN descriptors. EMD is a statistical metric that measures the discrepancy (distance) between size distribution profiles without a prior assumption of the distribution. PBE is then adopted to perform a statistical test to establish equivalence based on the calculated EMD distances. Simulations show that the proposed EMD-based approach is effective in comparing test and reference profiles for equivalence testing and is superior to commonly used distance measures, e.g., Euclidean and Kolmogorov-Smirnov distances. The proposed approach was demonstrated by evaluating equivalence of cyclosporine ophthalmic emulsion PSDs that were manufactured under different conditions. Our results show that the proposed approach can effectively pass an equivalent product (e.g., reference product against itself) and reject an inequivalent product (e.g., reference product against negative control), thus suggesting its usefulness in supporting bioequivalence determination of a test product against a reference product when both possess multimodal PSDs.
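
    As a small illustration of the distance itself (not of the full PBE equivalence procedure, which is omitted here), the Python sketch below computes the earth mover's distance between two hypothetical bimodal PSD profiles using SciPy's one-dimensional Wasserstein distance.

      import numpy as np
      from scipy.stats import wasserstein_distance

      # hypothetical size grid (micrometres) and two bimodal volume-fraction profiles
      sizes = np.linspace(0.1, 10.0, 200)
      ref = 0.6 * np.exp(-0.5 * ((sizes - 2.0) / 0.4) ** 2) + 0.4 * np.exp(-0.5 * ((sizes - 6.0) / 0.8) ** 2)
      test = 0.5 * np.exp(-0.5 * ((sizes - 2.2) / 0.4) ** 2) + 0.5 * np.exp(-0.5 * ((sizes - 6.1) / 0.8) ** 2)
      ref /= ref.sum()
      test /= test.sum()

      # EMD: the minimal "work" needed to reshape one profile into the other along the size axis
      emd = wasserstein_distance(sizes, sizes, u_weights=test, v_weights=ref)
      print(f"EMD between test and reference PSD profiles: {emd:.4f}")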

  14. Reliability of provocative tests of motion sickness susceptibility

    NASA Technical Reports Server (NTRS)

    Calkins, D. S.; Reschke, M. F.; Kennedy, R. S.; Dunlop, W. P.

    1987-01-01

    Test-retest reliability values were derived from motion sickness susceptibility scores obtained from two successive exposures to each of three tests: (1) Coriolis sickness sensitivity test; (2) staircase velocity movement test; and (3) parabolic flight static chair test. The reliability of the three tests ranged from 0.70 to 0.88. Normalizing values from predictors with skewed distributions improved the reliability.
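
    For readers who want the arithmetic, a minimal Python sketch of test-retest reliability with and without a normalizing transform is given below; the susceptibility scores are simulated, and the choice of a log transform is only an assumption about what "normalizing" involved.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      susceptibility = rng.lognormal(mean=1.0, sigma=0.5, size=40)     # latent trait (hypothetical)
      score_1 = susceptibility * rng.lognormal(0.0, 0.2, size=40)      # first exposure
      score_2 = susceptibility * rng.lognormal(0.0, 0.2, size=40)      # second exposure

      r_raw, _ = stats.pearsonr(score_1, score_2)                      # raw, skewed scores
      r_log, _ = stats.pearsonr(np.log(score_1), np.log(score_2))      # after log-normalizing
      print(f"test-retest r (raw) = {r_raw:.2f}, after log transform = {r_log:.2f}")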

  15. Accreditation status and geographic location of outpatient vascular testing facilities among Medicare beneficiaries: the VALUE (Vascular Accreditation, Location & Utilization Evaluation) study.

    PubMed

    Rundek, Tatjana; Brown, Scott C; Wang, Kefeng; Dong, Chuanhui; Farrell, Mary Beth; Heller, Gary V; Gornik, Heather L; Hutchisson, Marge; Needleman, Laurence; Benenati, James F; Jaff, Michael R; Meier, George H; Perese, Susana; Bendick, Phillip; Hamburg, Naomi M; Lohr, Joann M; LaPerna, Lucy; Leers, Steven A; Lilly, Michael P; Tegeler, Charles; Alexandrov, Andrei V; Katanick, Sandra L

    2014-10-01

    There is limited information on the accreditation status and geographic distribution of vascular testing facilities in the US. The Centers for Medicare & Medicaid Services (CMS) provide reimbursement to facilities regardless of accreditation status. The aims were to: (1) identify the proportion of Intersocietal Accreditation Commission (IAC) accredited vascular testing facilities in a 5% random national sample of Medicare beneficiaries receiving outpatient vascular testing services; (2) describe the geographic distribution of these facilities. The VALUE (Vascular Accreditation, Location & Utilization Evaluation) Study examines the proportion of IAC accredited facilities providing vascular testing procedures nationally, and the geographic distribution and utilization of these facilities. The data set containing all facilities that billed Medicare for outpatient vascular testing services in 2011 (5% CMS Outpatient Limited Data Set (LDS) file) was examined, and locations of outpatient vascular testing facilities were obtained from the 2011 CMS/Medicare Provider of Services (POS) file. Of 13,462 total vascular testing facilities billing Medicare for vascular testing procedures in a 5% random Outpatient LDS for the US in 2011, 13% (n=1730) of facilities were IAC accredited. The percentage of IAC accredited vascular testing facilities in the LDS file varied significantly by US region, p<0.0001: 26%, 12%, 11%, and 7% for the Northeast, South, Midwest, and Western regions, respectively. Findings suggest that the proportion of outpatient vascular testing facilities that are IAC accredited is low and varies by region. Increasing the number of accredited vascular testing facilities to improve test quality is a hypothesis that should be tested in future research. © The Author(s) 2014.

  16. Full-scale Force and Pressure-distribution Tests on a Tapered U.S.A. 45 Airfoil

    NASA Technical Reports Server (NTRS)

    Parsons, John F

    1935-01-01

    This report presents the results of force and pressure-distribution tests on a 2:1 tapered USA 45 airfoil as determined in the full-scale wind tunnel. The airfoil has a constant-chord center section and rounded tips and is tapered in thickness from 18 percent at the root to 9 percent at the tip. Force tests were made throughout a Reynolds Number range of approximately 2,000,000 to 8,000,000 providing data on the scale effect in addition to the conventional characteristics. Pressure-distribution data were obtained from tests at a Reynolds Number of approximately 4,000,000. The aerodynamic characteristics given by the usual dimensionless coefficients are presented graphically.

  17. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    Record excerpt (report section fragments): fire tests were recorded by cameras installed around the test pan, including an underwater GoPro video camera that recorded the fire from below the layer of fuel and a GoPro camera with a wide-angle lens; two ¼-in thick stainless steel test pans were used. Distribution A: Approved for public release; distribution unlimited.

  18. Quiet Clean Short-haul Experimental Engine (QCSEE) under-the-wing engine composite fan blade design report

    NASA Technical Reports Server (NTRS)

    Ravenhall, R.; Salemme, C. T.

    1977-01-01

    A total of 38 quiet clean short haul experimental engine under the wing composite fan blades were manufactured for various component tests, process and tooling, checkout, and use in the QCSEE UTW engine. The component tests included frequency characterization, strain distribution, bench fatigue, platform static load, whirligig high cycle fatigue, whirligig low cycle fatigue, whirligig strain distribution, and whirligig over-speed. All tests were successfully completed. All blades planned for use in the engine were subjected to and passed a whirligig proof spin test.

  19. Zonation in the deep benthic megafauna : Application of a general test.

    PubMed

    Gardiner, Frederick P; Haedrich, Richard L

    1978-01-01

    A test based on Maxwell-Boltzmann statistics, instead of the formerly suggested but inappropriate Bose-Einstein statistics (Pielou and Routledge, 1976), examines the distribution of the boundaries of species' ranges distributed along a gradient, and indicates whether they are random or clustered (zoned). The test is most useful as a preliminary to the application of more instructive but less statistically rigorous methods such as cluster analysis. The test indicates zonation is marked in the deep benthic megafauna living between 200 and 3000 m, but below 3000 m little zonation may be found.

  20. The Application of Hardware in the Loop Testing for Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Thomas, George L.; Culley, Dennis E.; Brand, Alex

    2016-01-01

    The essence of a distributed control system is the modular partitioning of control function across a hardware implementation. This type of control architecture requires embedding electronics in a multitude of control element nodes for the execution of those functions, and their integration as a unified system. As the field of distributed aeropropulsion control moves toward reality, questions about building and validating these systems remain. This paper focuses on the development of hardware-in-the-loop (HIL) test techniques for distributed aero engine control, and the application of HIL testing as it pertains to potential advanced engine control applications that may now be possible due to the intelligent capability embedded in the nodes.

  1. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  2. Creating Composite Age Groups to Smooth Percentile Rank Distributions of Small Samples

    ERIC Educational Resources Information Center

    Lopez, Francesca; Olson, Amy; Bansal, Naveen

    2011-01-01

    Individually administered tests are often normed on small samples, a process that may result in irregularities within and across various age or grade distributions. Test users often smooth distributions guided by Thurstone assumptions (normality and linearity) to result in norms that adhere to assumptions made about how the data should look. Test…

  3. Privacy-preserving Kruskal-Wallis test.

    PubMed

    Guo, Suxin; Zhong, Sheng; Zhang, Aidong

    2013-10-01

    Statistical tests are powerful tools for data analysis. Kruskal-Wallis test is a non-parametric statistical test that evaluates whether two or more samples are drawn from the same distribution. It is commonly used in various areas. But sometimes, the use of the method is impeded by privacy issues raised in fields such as biomedical research and clinical data analysis because of the confidential information contained in the data. In this work, we give a privacy-preserving solution for the Kruskal-Wallis test which enables two or more parties to coordinately perform the test on the union of their data without compromising their data privacy. To the best of our knowledge, this is the first work that solves the privacy issues in the use of the Kruskal-Wallis test on distributed data. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
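
    The cryptographic protocol itself is not shown in the record; for orientation, the plain (non-private) Kruskal-Wallis computation that the protocol ultimately reproduces looks like the following Python sketch, with hypothetical data held by three parties.

      from scipy import stats

      site_a = [4.1, 5.3, 2.8, 6.0, 4.7]   # data held by party A (hypothetical)
      site_b = [3.9, 4.2, 5.1, 3.3]        # data held by party B
      site_c = [6.2, 5.8, 7.0, 6.5, 5.9]   # data held by party C

      # Kruskal-Wallis H test: are the three samples drawn from the same distribution?
      h_stat, p_value = stats.kruskal(site_a, site_b, site_c)
      print(f"H = {h_stat:.3f}, p = {p_value:.4f}")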

  4. An Empirical Comparison of Two-Stage and Pyramidal Adaptive Ability Testing.

    ERIC Educational Resources Information Center

    Larkin, Kevin C.; Weiss, David J.

    A 15-stage pyramidal test and a 40-item two-stage test were constructed and administered by computer to 111 college undergraduates. The two-stage test was found to utilize a smaller proportion of its potential score range than the pyramidal test. Score distributions for both tests were positively skewed but not significantly different from the…

  7. Damage Processes in a Quasi-Isotropic Composite Short Beam Under Three- Point Loading

    DTIC Science & Technology

    1992-01-01

    Record excerpt: damage in a three-point bend test is investigated for a composite with a quasi-isotropic layup; failure is found to initiate in a region near the point of … (truncated in the record). Published in Composites Technology & Research, Winter 1991; copyright American Society for Testing and Materials, Philadelphia, PA.

  8. Unit Under Test Simulator Feasibility Study.

    DTIC Science & Technology

    1980-06-01

    Record excerpt (fragments): issues discussed range from interlocking connectors to conceptual differences such as octopus types of cables, and the validity of the IA description to the UUT simulator. The excerpt also contains reference-list fragments, including Ring, S. J., "Automatic Testing Via a Distributed Intelligence Processing System," Autotestcon 77, 2-4 November 1977, pp. 89-98, and Ring, S. J., "A Distributed Intelligence Automatic Test System for PATRIOT," IEEE Trans. Aerospace and Electronic Systems, 1977.

  9. Tests of Fit for Asymmetric Laplace Distributions with Applications on Financial Data

    NASA Astrophysics Data System (ADS)

    Fragiadakis, Kostas; Meintanis, Simos G.

    2008-11-01

    New goodness-of-fit tests for the family of asymmetric Laplace distributions are constructed. The proposed tests are based on a weighted integral incorporating the empirical characteristic function of suitably standardized data, and can be written in a closed form appropriate for computer implementation. Monte Carlo results show that the new procedures are competitive with classical goodness-of-fit methods. Applications with financial data are also included.

  10. Test Protocol for Room-to-Room Distribution of Outside Air by Residential Ventilation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barley, C. D.; Anderson, R.; Hendron, B.

    2007-12-01

    This test and analysis protocol has been developed as a practical approach for measuring outside air distribution in homes. It has been used successfully in field tests and has led to significant insights on ventilation design issues. Performance advantages of more sophisticated ventilation systems over simpler, less-costly designs have been verified, and specific problems, such as airflow short-circuiting, have been identified.

  11. Robust multivariate nonparametric tests for detection of two-sample location shift in clinical trials

    PubMed Central

    Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo

    2018-01-01

    This article presents and investigates performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
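
    To make the main ingredient concrete, the Python sketch below runs a permutation test built on the univariate Hodges-Lehmann shift estimator; the multivariate versions, the scaling, and the extended U statistic described in the abstract are not reproduced, and the data are simulated.

      import numpy as np

      rng = np.random.default_rng(5)
      control = rng.normal(0.0, 1.0, size=30)        # hypothetical control arm
      treated = rng.normal(0.5, 1.0, size=30)        # hypothetical intervention arm

      def hodges_lehmann_shift(a, b):
          # median of all pairwise differences b_j - a_i
          return np.median(np.subtract.outer(b, a))

      obs = abs(hodges_lehmann_shift(control, treated))
      pooled = np.concatenate([control, treated])

      n_perm = 5000
      null = np.empty(n_perm)
      for i in range(n_perm):
          idx = rng.permutation(pooled.size)          # reshuffle group labels
          null[i] = abs(hodges_lehmann_shift(pooled[idx[:control.size]], pooled[idx[control.size:]]))
      p_value = (1 + np.sum(null >= obs)) / (1 + n_perm)
      print(f"HL shift = {hodges_lehmann_shift(control, treated):.2f}, permutation p = {p_value:.4f}")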

  12. Comparative analysis through probability distributions of a data set

    NASA Astrophysics Data System (ADS)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution and to compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to rank the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
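
    A compact Python sketch of this fit-and-rank workflow is given below; the candidate families, the data, and the use of the Kolmogorov-Smirnov distance as the ranking criterion are illustrative choices, and the p-values reported by kstest are only approximate here because the parameters are estimated from the same data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      data = rng.gamma(shape=2.5, scale=1.8, size=500)      # hypothetical data set

      candidates = {
          "gamma": stats.gamma,
          "lognorm": stats.lognorm,
          "weibull_min": stats.weibull_min,
      }

      fits = []
      for name, dist in candidates.items():
          params = dist.fit(data)                           # maximum-likelihood fit
          d, p = stats.kstest(data, dist(*params).cdf)      # K-S distance to the fitted model
          fits.append((d, p, name))

      # Smaller distance = closer fit; Anderson-Darling or chi-square statistics could be
      # ranked in exactly the same way.
      for d, p, name in sorted(fits):
          print(f"{name:12s} K-S D = {d:.4f} (approx. p = {p:.3f})")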

  13. Real time testing of intelligent relays for synchronous distributed generation islanding detection

    NASA Astrophysics Data System (ADS)

    Zhuang, Davy

    As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders, gives rise to integration problems involving protection and unintentional islanding. Distributed generators need to be islanded for safety reasons when disconnected or isolated from the main feeder as distributed generator islanding may create hazards to utility and third-party personnel, and possibly damage the distribution system infrastructure, including the distributed generators. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay, against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology is developed to assess the performance of these intelligent relays on a real time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay, by running a large number of tests, reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms frequency, voltage and rate of change of frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.

  14. The retest distribution of the visual field summary index mean deviation is close to normal.

    PubMed

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilks test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilks normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
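
    A small Python sketch of the two checks mentioned above (a Shapiro-Wilk test and a bootstrap confidence interval for excess kurtosis) is shown below, applied to simulated MD values rather than the study's field series.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      md_values = rng.normal(-1.0, 0.6, size=40)     # hypothetical repeated MD values (dB), one eye

      w_stat, p_value = stats.shapiro(md_values)     # departure from normality
      boot_kurtosis = np.array([
          stats.kurtosis(rng.choice(md_values, size=md_values.size, replace=True))
          for _ in range(5000)
      ])
      ci_low, ci_high = np.percentile(boot_kurtosis, [2.5, 97.5])
      # A normal distribution has excess kurtosis 0; check whether 0 lies inside the interval
      print(f"Shapiro-Wilk p = {p_value:.3f}; 95% bootstrap CI for excess kurtosis: [{ci_low:.2f}, {ci_high:.2f}]")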

  15. [Reference values for the blood coagulation tests in Mexico: usefulness of the pooled plasma from blood donors].

    PubMed

    Calzada-Contreras, Adriana; Moreno-Hernández, Manuel; Castillo-Torres, Noemi Patricia; Souto-Rosillo, Guadalupe; Hernández-Juárez, Jesús; Ricardo-Moreno, María Tania; Sánchez-Fernández, Maria Guadalupe de Jesús; García-González, América; Majluf-Cruz, Abraham

    2012-01-01

    The blood coagulation system maintains the blood in a liquid state; bleeding and thrombosis are the manifestations of its malfunction. The blood coagulation laboratory evaluates the physiology of this system. The aim of this study was to establish both the reference values for several tests performed at the blood coagulation laboratory and the utility of the pooled plasma for performing these assays. MATERIAL AND METHODS: In this descriptive, cross-sectional, randomized study, we collected plasma from Mexican Mestizos. Each pooled plasma was prepared with the plasma from at least 20 blood donors. We performed screening and special tests, and the Levey-Jennings graphs were built and interpreted after each pass. Results of the tests were analyzed and their distribution was established using the Kolmogorov-Smirnov test. To establish the reference values we used 95% confidence intervals. We collected 72 pooled plasmas. The distribution for the PT, APTT, and TT tests was abnormal. Although the PT test showed a bimodal distribution, it was normal for factor VII. The reference values for the hemostatic, anticoagulant, and fibrinolytic factors were different from those suggested by the manufacturers. We established the reference values for the blood coagulation tests in the adult Mexican population. We have shown that the pooled plasma must be used for the screening tests. We suggest that each clinical laboratory should establish its own reference values (at least for the screening tests). To reach this objective, we encourage the use of the pooled plasma.

  16. HIFiRE Direct-Connect Rig (HDCR) Phase I Ground Test Results from the NASA Langley Arc-Heated Scramjet Test Facility

    NASA Technical Reports Server (NTRS)

    Hass, Neal E.; Cabell, Karen F.; Storch, Andrea M.

    2010-01-01

    The initial phase of hydrocarbon-fueled ground tests supporting Flight 2 of the Hypersonic International Flight Research Experiment (HIFiRE) Program has been conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF). The HIFiRE Program, an Air Force-led international cooperative program, includes eight different flight test experiments designed to target specific challenges of hypersonic flight. The second of the eight planned flight experiments is a hydrocarbon-fueled scramjet flight test intended to demonstrate dual-mode to scramjet-mode operation and verify the scramjet performance prediction and design tools. A performance goal is the achievement of a combusted fuel equivalence ratio greater than 0.7 while in scramjet mode. The ground test rig, designated the HIFiRE Direct Connect Rig (HDCR), is a full-scale, heat sink, direct-connect ground test article that duplicates both the flowpath lines and the instrumentation layout of the isolator and combustor portion of the flight test hardware. The primary objectives of the HDCR Phase I tests are to verify the operability of the HIFiRE isolator/combustor across the Mach 6.0-8.0 flight regime and to establish a fuel distribution schedule to ensure a successful mode transition prior to the HIFiRE payload Critical Design Review. Although the phase I test plans include testing over the Mach 6 to 8 flight simulation range, only Mach 6 testing will be reported in this paper. Experimental results presented here include flowpath surface pressure, temperature, and heat flux distributions that demonstrate the operation of the flowpath over a small range of test conditions around the nominal Mach 6 simulation, as well as a range of fuel equivalence ratios and fuel injection distributions. Both ethylene and a mixture of ethylene and methane (planned for flight) were tested. Maximum back pressure and flameholding limits, as well as a baseline fuel schedule that covers the Mach 5.84-6.5 test space, have been identified.

  17. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that transformation methods are better than elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are at normalizing it. Specifically, the transformation with parameter lambda = -1 leads to the best results.
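
    The Python sketch below (using simulated reaction times) shows only the lambda = -1 case, i.e. a reciprocal transformation, and checks skewness and Shapiro-Wilk normality before and after; the paper's full set of elimination and transformation procedures is not reproduced.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      # Ex-Gaussian reaction times (ms): normal component plus exponential tail
      rt = rng.normal(400.0, 40.0, size=300) + rng.exponential(150.0, size=300)

      # reciprocal (lambda = -1) transform, sign-flipped so larger values remain slower responses
      rt_inverse = -1000.0 / rt

      for label, x in [("raw RT", rt), ("-1000/RT", rt_inverse)]:
          w, p = stats.shapiro(x)
          print(f"{label:10s} skew = {stats.skew(x):+.2f}  Shapiro-Wilk W = {w:.3f} (p = {p:.3g})")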

  18. Extending Multivariate Distance Matrix Regression with an Effect Size Measure and the Asymptotic Null Distribution of the Test Statistic

    PubMed Central

    McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.

    2017-01-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957
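
    For context, the permutation version of MDMR whose computational burden the paper aims to relieve can be sketched in a few lines of Python; the pseudo-F form, the Euclidean distance, and the simulated data below are standard illustrative choices, not the authors' code.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(9)
      n, p = 60, 5
      Y = rng.normal(size=(n, p))                                   # outcome profiles (hypothetical)
      X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])    # intercept plus two predictors
      m = X.shape[1]

      D = squareform(pdist(Y, metric="euclidean"))                  # pairwise dissimilarities
      J = np.eye(n) - np.ones((n, n)) / n                           # centring matrix
      G = -0.5 * J @ (D ** 2) @ J                                   # Gower-centred inner products
      H = X @ np.linalg.pinv(X.T @ X) @ X.T                         # hat matrix

      def pseudo_f(g):
          num = np.trace(H @ g @ H) / (m - 1)
          den = np.trace((np.eye(n) - H) @ g @ (np.eye(n) - H)) / (n - m)
          return num / den

      f_obs = pseudo_f(G)
      n_perm = 999
      f_null = np.empty(n_perm)
      for i in range(n_perm):
          perm = rng.permutation(n)                                 # permute the distance matrix
          f_null[i] = pseudo_f(G[np.ix_(perm, perm)])
      p_value = (1 + np.sum(f_null >= f_obs)) / (1 + n_perm)
      print(f"pseudo-F = {f_obs:.3f}, permutation p = {p_value:.3f}")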

  20. A distributed programming environment for Ada

    NASA Technical Reports Server (NTRS)

    Brennan, Peter; Mcdonnell, Tom; Mcfarland, Gregory; Timmins, Lawrence J.; Litke, John D.

    1986-01-01

    Despite considerable commercial exploitation of fault tolerance systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.

  1. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    EPA Pesticide Factsheets

    This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers. ASCE, Reston, VA, USA (2016).

  2. A nonparametric smoothing method for assessing GEE models with longitudinal binary data.

    PubMed

    Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu

    2008-09-30

    Studies involving longitudinal binary responses are widely applied in health and biomedical sciences research and frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on the nonparametric smoothing approach for assessing the adequacy of GEE fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed by simulation studies. The testing procedure is demonstrated with two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.

  3. Multipath interference test method for distributed amplifiers

    NASA Astrophysics Data System (ADS)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The base-band power spectrum peak that appears at the FSK frequency deviation can be converted into an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the length of fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  4. PrismTech Data Distribution Service Java API Evaluation

    NASA Technical Reports Server (NTRS)

    Riggs, Cortney

    2008-01-01

    My internship duties with Launch Control Systems required me to start performance testing of the Object Management Group's (OMG) Data Distribution Service (DDS) specification implementation by PrismTech Limited through the Java programming language application programming interface (API). DDS is a networking middleware for Real-Time Data Distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only for a data throughput test. I have designed the testing applications to perform all performance tests when time is allowed. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate that data can be sent across the network. Performance rates are better on Linux platforms than AIX and Sun platforms. Compared to the previous C++ programming language API, the performance evaluation also shows the language differences for the implementation. The Java API of the DDS has a lower throughput performance than the C++ API.

  5. Conditional Tests for Localizing Trait Genes

    PubMed Central

    Di, Yanming; Thompson, Elizabeth A.

    2009-01-01

    Background/Aims With pedigree data, genetic linkage can be detected using inheritance vector tests, which explore the discrepancy between the posterior distribution of the inheritance vectors given observed trait values and the prior distribution of the inheritance vectors. In this paper, we propose conditional inheritance vector tests for linkage localization. These conditional tests can also be used to detect additional linkage signals in the presence of previously detected causal genes. Methods For linkage localization, we propose to perform inheritance vector tests conditioning on the inheritance vectors at two positions bounding a test region. We can detect additional linkage signals by conducting a further conditional test in a region with no previously detected genes. We use randomized p values to extend the marginal and conditional tests when the inheritance vectors cannot be completely determined from genetic marker data. Results We conduct simulation studies to compare and contrast the marginal and the conditional tests and to demonstrate that randomized p values can capture both the significance and the uncertainty in the test results. Conclusions The simulation results demonstrate that the proposed conditional tests provide useful localization information, and with informative marker data, the uncertainty in randomized marginal and conditional test results is small. PMID:19439976

  6. Distributions of Mutational Effects and the Estimation of Directional Selection in Divergent Lineages of Arabidopsis thaliana.

    PubMed

    Park, Briton; Rutter, Matthew T; Fenster, Charles B; Symonds, V Vaughan; Ungerer, Mark C; Townsend, Jeffrey P

    2017-08-01

    Mutations are crucial to evolution, providing the ultimate source of variation on which natural selection acts. Because of this central role, the distribution of mutational effects on quantitative traits is a key component of any inference regarding historical selection on phenotypic traits. In this paper, we expand on a previously developed test for selection that could be conducted assuming a Gaussian mutation effect distribution by developing approaches to also incorporate any of a family of heavy-tailed Laplace distributions of mutational effects. We apply the test to detect directional natural selection on five traits along the divergence of the Columbia and Landsberg lineages of Arabidopsis thaliana, constituting the first test for natural selection in any organism using quantitative trait locus and mutation accumulation data to quantify the intensity of directional selection on a phenotypic trait. We demonstrate that the results of the test for selection can depend on the mutation effect distribution specified. Using the distributions exhibiting the best fit to mutation accumulation data, we infer that natural directional selection caused divergence in the rosette diameter and trichome density traits of the Columbia and Landsberg lineages. Copyright © 2017 by the Genetics Society of America.

  7. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
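    As a hedged illustration of representing the MTTF as a summation of mean times between successive component failures (the symbols k and λ1, ..., λk are assumed notation, not quantities defined in the paper), suppose the system fails when all k components have failed and that, after the j-th failure, each of the k - j surviving exponential components fails at rate λ_{j+1}. Since the minimum of m independent exponentials with rate λ is exponential with rate mλ,

        \mathrm{MTTF} \;=\; \sum_{j=0}^{k-1} \mathbb{E}\!\left[T_{j+1} - T_j\right]
                      \;=\; \frac{1}{k\,\lambda_1} + \frac{1}{(k-1)\,\lambda_2} + \cdots + \frac{1}{\lambda_k},

    where T_j is the time of the j-th component failure and T_0 = 0. The Weibull case treated in the paper does not generally reduce to such a closed form, because the Weibull distribution lacks the memoryless property.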

  8. Reliability demonstration test for load-sharing systems with exponential and Weibull components.

    PubMed

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.

  9. Fisher's method of combining dependent statistics using generalizations of the gamma distribution with applications to genetic pleiotropic associations.

    PubMed

    Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang

    2014-04-01

    A classical approach to combine independent test statistics is Fisher's combination of p-values, which follows the χ2 distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
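    A minimal sketch of the classical Fisher combination of independent p-values discussed above (the gamma-family adjustments for dependent statistics proposed in the paper are not reproduced here), assuming SciPy is available:

        import numpy as np
        from scipy import stats

        def fisher_combination(p_values):
            """Combine independent p-values with Fisher's method.

            The statistic -2 * sum(log p_i) follows a chi-squared
            distribution with 2k degrees of freedom under the global null.
            """
            p = np.asarray(p_values, dtype=float)
            statistic = -2.0 * np.sum(np.log(p))
            combined_p = stats.chi2.sf(statistic, df=2 * len(p))
            return statistic, combined_p

        # Example: three independent tests
        stat, p_comb = fisher_combination([0.04, 0.20, 0.11])
        print(stat, p_comb)

    SciPy's stats.combine_pvalues performs the same calculation for the independent case; for dependent statistics, the gamma-based reference distributions studied in the paper would replace the chi-squared calibration.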

  10. GENERIC VERIFICATION PROTOCOL: DISTRIBUTED GENERATION AND COMBINED HEAT AND POWER FIELD TESTING PROTOCOL

    EPA Science Inventory

    This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...

  11. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Treesearch

    Steve Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.
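    As a rough, hedged companion to the normal-theory bounds described above (not the authors' procedure), a percentile bootstrap interval for a ratio of two coefficients of variation can be sketched as follows; the sample sizes and simulated populations are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def cv(x):
            """Sample coefficient of variation: standard deviation / mean."""
            return np.std(x, ddof=1) / np.mean(x)

        def bootstrap_cv_ratio_ci(x, y, n_boot=5000, alpha=0.05):
            """Percentile bootstrap confidence interval for CV(x) / CV(y)."""
            ratios = np.empty(n_boot)
            for b in range(n_boot):
                xb = rng.choice(x, size=len(x), replace=True)
                yb = rng.choice(y, size=len(y), replace=True)
                ratios[b] = cv(xb) / cv(yb)
            return np.quantile(ratios, [alpha / 2, 1 - alpha / 2])

        x = rng.normal(10.0, 2.0, size=40)   # simulated population 1
        y = rng.normal(12.0, 2.5, size=40)   # simulated population 2
        print(cv(x) / cv(y), bootstrap_cv_ratio_ci(x, y))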

  12. Transformation of arbitrary distributions to the normal distribution with application to EEG test-retest reliability.

    PubMed

    van Albada, S J; Robinson, P A

    2007-04-15

    Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
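    A common rank-based way to push an arbitrary continuous sample toward normality, offered here only as a simplified stand-in for the exact distribution-function mapping described above, is the inverse-normal transform of the empirical CDF; the sketch below assumes SciPy and uses a synthetic lognormal sample:

        import numpy as np
        from scipy import stats

        def rank_inverse_normal(x):
            """Map data toward normality via ranks.

            Each value is replaced by the standard normal quantile of its
            (slightly shrunken) empirical CDF position, so the transformed
            sample is close to N(0, 1) for continuous data.
            """
            x = np.asarray(x, dtype=float)
            ranks = stats.rankdata(x)          # ranks 1 .. n
            u = (ranks - 0.5) / len(x)         # keep strictly inside (0, 1)
            return stats.norm.ppf(u)

        skewed = stats.lognorm.rvs(s=1.0, size=500, random_state=1)
        z = rank_inverse_normal(skewed)
        print(stats.skew(z), stats.kurtosis(z))   # both near 0 after transform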

  13. Application of ideal pressure distribution in development process of automobile seats.

    PubMed

    Kilincsoy, U; Wagner, A; Vink, P; Bubb, H

    2016-07-19

    In designing a car seat, the ideal pressure distribution is important, as the seat is the largest contact surface between the human and the car. Because obstacles hinder a more general application of the ideal pressure distribution in seating design, multidimensional measuring techniques combined with extensive user tests are necessary. The objective of this study is to apply and integrate knowledge about the ideal pressure distribution into the seat design process of a car manufacturer in an efficient way. The ideal pressure distribution was combined with pressure measurement, in this case pressure mats. In order to integrate this theoretical knowledge of seating comfort into the seat development process of a car manufacturer, a special user interface was defined and developed. Mapping the measured pressure distribution in real time, accurately scaled to actual seats during test setups, led directly to design implications for seat design even during the test situation. Detailed analysis of the subjects' feedback was correlated with objective measurements of the subjects' pressure distribution in real time, so existing seating characteristics were taken into account as well. A user interface can incorporate theoretical and validated 'state of the art' models of comfort. Consequently, this information can reduce extensive testing and lead to more detailed results in a shorter time period.

  14. HammerCloud: A Stress Testing System for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo

    2011-12-01

    Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).

  15. Standardized UXO Technology Demonstration Site Scoring Record No. 945

    DTIC Science & Technology

    2017-07-01

    DISTRIBUTION LIST ATEC Project No. 2011-DT-ATC-DODSP-F0292 Note: A copy of this test report has been posted to the Versatile Information Systems...Directorate July 2017 Report Produced by: U.S. Army Aberdeen Test Center Aberdeen Proving Ground, MD 21005-5059 Report Produced for: Strategic...U.S. Army Test and Evaluation Command Aberdeen Proving Ground, MD 21005-5001 Distribution Unlimited, July 2017. The use of a trade name or the

  16. Autonomy Community of Interest (COI) Test and Evaluation, Verification and Validation (TEVV) Working Group: Technology Investment Strategy 2015-2018

    DTIC Science & Technology

    2015-05-01

    Evaluation Center of Excellence SUAS Small Unmanned Aircraft System SUT System under Test T&E Test and Evaluation TARDEC Tank Automotive Research...17 Distribution A: Distribution Unlimited 2 Background In the past decade, unmanned systems have significantly impacted warfare...environments at a speed and scale beyond manned capability. However, current unmanned systems operate with minimal autonomy. To meet warfighter needs and

  17. Formal Process Modeling to Improve Human Decision-Making in Test and Evaluation Acoustic Range Control

    DTIC Science & Technology

    2017-09-01

    AVAILABILITY STATEMENT Approved for public release. Distribution is unlimited. 12b. DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) Test and...ambiguities and identify high -value decision points? This thesis explores how formalization of these experience-based decisions as a process model...representing a T&E event may reveal high -value decision nodes where certain decisions carry more weight or potential for impacts to a successful test. The

  18. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
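    The following is only a generic sketch of permutation inference for a single quantile regression coefficient (it is not the rank score or double permutation procedures evaluated above), assuming statsmodels and pandas are available and using simulated data:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 100
        df = pd.DataFrame({"x": rng.uniform(0, 10, n)})
        df["y"] = 1.0 + 0.5 * df["x"] + rng.standard_normal(n)

        def quantile_slope(data, q):
            """Slope of y ~ x from a linear quantile regression at quantile q."""
            return smf.quantreg("y ~ x", data).fit(q=q).params["x"]

        q = 0.9
        observed = quantile_slope(df, q)

        # Permutation null: shuffle x to break any x-y association, then refit.
        n_perm = 200
        perm = np.empty(n_perm)
        shuffled = df.copy()
        for b in range(n_perm):
            shuffled["x"] = rng.permutation(df["x"].to_numpy())
            perm[b] = quantile_slope(shuffled, q)

        p_value = (np.sum(np.abs(perm) >= abs(observed)) + 1) / (n_perm + 1)
        print(observed, p_value)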

  19. Wind tunnel tests of rotor blade sections with replications of ice formations accreted in hover

    NASA Technical Reports Server (NTRS)

    Lee, J. D.; Berger, J. H.; Mcdonald, T. J.

    1986-01-01

    Full scale reproductions of ice accretions molded during the documentation of a hover test program were fabricated by means of epoxy castings and used for a wind tunnel test program. Surface static pressure distributions were recorded and used to evaluate lift and pitching moment increments while drag was determined by wake surveys. Through the range of the tests, corresponding to those conditions encountered in hover and in flat pitch, integration of the pressure distributions showed negligible changes in lift and in pitching moment, but the drag was significantly increased.

  20. Representing Color Ensembles.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  1. Asymptotic Distribution of the Likelihood Ratio Test Statistic for Sphericity of Complex Multivariate Normal Distribution.

    DTIC Science & Technology

    1981-08-01

    RATIO TEST STATISTIC FOR SPHERICITY OF COMPLEX MULTIVARIATE NORMAL DISTRIBUTION* C. Fang P. R. Krishnaiah B. N. Nagarsenker** August 1981 Technical...and their applications in time series, the reader is referred to Krishnaiah (1976). Motivated by the applications in the area of inference on multiple...for practical purposes. Here, we note that Krishnaiah, Lee and Chang (1976) approximated the null distribution of certain power of the likeli

  2. Statistical homogeneity tests applied to large data sets from high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Trusina, J.; Franc, J.; Kůs, V.

    2017-12-01

    Homogeneity tests are used in high energy physics for the verification of simulated Monte Carlo samples, it means if they have the same distribution as a measured data from particle detector. Kolmogorov-Smirnov, χ 2, and Anderson-Darling tests are the most used techniques to assess the samples’ homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data has. One way of the homogeneity testing is through the binning. If we do not want to lose any information, we can apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present the results based on numerical analysis which focuses on estimations of the type-I error and power of the test. Finally, we present application of our homogeneity tests to data from the experiment DØ in Fermilab.
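    A minimal sketch of a weighted two-sample Kolmogorov-Smirnov-type statistic built from weighted empirical distribution functions (the asymptotic calibration of its p-value, which is the subject of the paper, is not reproduced here); the event weights and sample sizes are illustrative:

        import numpy as np

        def weighted_ecdf(values, weights, grid):
            """Weighted empirical CDF of `values` evaluated on `grid`."""
            order = np.argsort(values)
            v, w = values[order], weights[order]
            cum = np.cumsum(w) / np.sum(w)
            # For each grid point, total normalized weight at or below it.
            idx = np.searchsorted(v, grid, side="right")
            return np.concatenate(([0.0], cum))[idx]

        def weighted_ks_statistic(x, wx, y, wy):
            """Sup distance between two weighted ECDFs (KS-type statistic)."""
            grid = np.sort(np.concatenate([x, y]))
            return np.max(np.abs(weighted_ecdf(x, wx, grid) - weighted_ecdf(y, wy, grid)))

        rng = np.random.default_rng(7)
        mc = rng.normal(0.0, 1.0, 2000)          # simulated (MC) sample
        w_mc = rng.uniform(0.5, 1.5, 2000)       # per-event MC weights
        data = rng.normal(0.05, 1.0, 1500)       # "measured" sample
        w_data = np.ones_like(data)              # unit weights for data
        print(weighted_ks_statistic(mc, w_mc, data, w_data))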

  3. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.

  4. 75 FR 70753 - Market Test Involving Greeting Cards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... businesses will produce and distribute pre-approved envelopes according to specific design requirements which... will produce and distribute pre-approved envelopes with specific design requirements that will be... a market test beginning on or about January 1, 2011, of an experimental market dominant product...

  5. Random local temporal structure of category fluency responses.

    PubMed

    Meyer, David J; Messer, Jason; Singh, Tanya; Thomas, Peter J; Woyczynski, Wojbor A; Kaye, Jeffrey; Lerner, Alan J

    2012-04-01

    The Category Fluency Test (CFT) provides a sensitive measurement of cognitive capabilities in humans related to retrieval from semantic memory. In particular, it is widely used to assess progress of cognitive impairment in patients with dementia. Previous research shows that, in the first approximation, the intensity of tested individuals' responses within a standard 60-s test period decays exponentially with time, with faster decay rates for more cognitively impaired patients. Such decay rate can then be viewed as a global (macro) diagnostic parameter of each test. In the present paper we focus on the statistical properties of the properly de-trended time intervals between consecutive responses (inter-call times) in the Category Fluency Test. In a sense, those properties reflect the local (micro) structure of the response generation process. We find that a good approximation for the distribution of the de-trended inter-call times is provided by the Weibull Distribution, a probability distribution that appears naturally in this context as a distribution of a minimum of independent random quantities and is the standard tool in industrial reliability theory. This insight leads us to a new interpretation of the concept of "navigating a semantic space" via patient responses.
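    A minimal sketch of fitting a two-parameter Weibull distribution to (already de-trended) inter-call times with SciPy; the variable names and the synthetic data are illustrative only, not the study's measurements:

        import numpy as np
        from scipy import stats

        # Placeholder for de-trended inter-call times (seconds); synthetic here.
        inter_call_times = stats.weibull_min.rvs(c=1.3, scale=4.0, size=60,
                                                 random_state=3)

        # Fit shape (c) and scale with location fixed at zero (two-parameter form).
        shape, loc, scale = stats.weibull_min.fit(inter_call_times, floc=0)
        print("shape:", shape, "scale:", scale)

        # Goodness of fit against the fitted Weibull.
        ks_stat, p_val = stats.kstest(inter_call_times, "weibull_min",
                                      args=(shape, loc, scale))
        print("KS statistic:", ks_stat, "p-value:", p_val)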

  6. Research study on multi-KW-DC distribution system

    NASA Technical Reports Server (NTRS)

    Berkery, E. A.; Krausz, A.

    1975-01-01

    A detailed definition of the HVDC test facility and the equipment required to implement the test program are provided. The basic elements of the test facility are illustrated, and consist of: the power source, conventional and digital supervision and control equipment, power distribution harness and simulated loads. The regulated dc power supplies provide steady-state power up to 36 KW at 120 VDC. Power for simulated line faults will be obtained from two banks of 90 ampere-hour lead-acid batteries. The relative merits of conventional and multiplexed power control will be demonstrated by the Supervision and Monitor Unit (SMU) and the Automatically Controlled Electrical Systems (ACES) hardware. The distribution harness is supported by a metal duct which is bonded to all component structures and functions as the system ground plane. The load banks contain passive resistance and reactance loads, solid state power controllers and active pulse width modulated loads. The HVDC test facility is designed to simulate a power distribution system for large aerospace vehicles.

  7. Prediction of Mean and Design Fatigue Lives of Self Compacting Concrete Beams in Flexure

    NASA Astrophysics Data System (ADS)

    Goel, S.; Singh, S. P.; Singh, P.; Kaushik, S. K.

    2012-02-01

    In this paper, results of an investigation conducted to study the flexural fatigue characteristics of self compacting concrete (SCC) beams are presented. An experimental programme was planned in which approximately 60 SCC beam specimens of size 100 × 100 × 500 mm were tested under flexural fatigue loading. Approximately 45 static flexural tests were also conducted to facilitate the fatigue testing. The flexural fatigue and static flexural strength tests were conducted on a 100 kN servo-controlled actuator. The fatigue life data thus obtained have been used to establish the probability distributions of fatigue life of SCC using the two-parameter Weibull distribution. The parameters of the Weibull distribution have been obtained by different methods of analysis. Using the distribution parameters, the mean and design fatigue lives of SCC have been estimated and compared with those of normally vibrated concrete (NVC), the data for which have been taken from the literature. It has been observed that SCC exhibits higher mean and design fatigue lives compared to NVC.
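    Once the Weibull shape and scale parameters have been estimated, the mean life and a design life at a chosen survival probability follow from standard closed-form expressions; the sketch below uses assumed parameter values, not the estimates reported in the paper:

        import math

        def weibull_mean_life(shape, scale):
            """Mean of a two-parameter Weibull: scale * Gamma(1 + 1/shape)."""
            return scale * math.gamma(1.0 + 1.0 / shape)

        def weibull_design_life(shape, scale, survival=0.9):
            """Life at which the survival probability equals `survival`:
            t = scale * (-ln(survival)) ** (1 / shape)."""
            return scale * (-math.log(survival)) ** (1.0 / shape)

        beta, eta = 1.8, 2.0e5   # assumed shape and scale (cycles), illustrative
        print("mean fatigue life:", weibull_mean_life(beta, eta))
        print("design life (90% survival):", weibull_design_life(beta, eta))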

  8. Multiple comparisons permutation test for image based data mining in radiotherapy

    PubMed Central

    2013-01-01

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy. PMID:24365155
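    A minimal sketch of a max-statistic permutation test comparing voxel-wise dose between two outcome groups; the specific form of Tmax in the paper may differ (here it is taken, for illustration, as the maximum absolute voxel-wise two-sample t statistic), and the data are synthetic:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        n_a, n_b, n_vox = 20, 25, 500               # patients per group, voxels
        group_a = rng.normal(50, 5, (n_a, n_vox))   # e.g. dose maps, with toxicity
        group_b = rng.normal(50, 5, (n_b, n_vox))   # e.g. dose maps, without toxicity
        group_b[:, :50] += 3.0                      # planted difference region

        def t_max(a, b):
            """Maximum absolute voxel-wise two-sample t statistic."""
            t, _ = stats.ttest_ind(a, b, axis=0)
            return np.max(np.abs(t))

        observed = t_max(group_a, group_b)

        pooled = np.vstack([group_a, group_b])
        n_perm = 500
        null = np.empty(n_perm)
        for i in range(n_perm):
            perm = rng.permutation(pooled.shape[0])
            null[i] = t_max(pooled[perm[:n_a]], pooled[perm[n_a:]])

        # Adjusted p-value controlling family-wise error across all voxels.
        p_adj = (np.sum(null >= observed) + 1) / (n_perm + 1)
        print(observed, p_adj)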

  9. Test results management and distributed cognition in electronic health record-enabled primary care.

    PubMed

    Smith, Michael W; Hughes, Ashley M; Brown, Charnetta; Russo And, Elise; Giardina, Traber D; Mehta, Praveen; Singh, Hardeep

    2018-06-01

    Managing abnormal test results in primary care involves coordination across various settings. This study identifies how primary care teams manage test results in a large, computerized healthcare system in order to inform health information technology requirements for test results management and other distributed healthcare services. At five US Veterans Health Administration facilities, we interviewed 37 primary care team members, including 16 primary care providers, 12 registered nurses, and 9 licensed practical nurses. We performed content analysis using a distributed cognition approach, identifying patterns of information transmission across people and artifacts (e.g. electronic health records). Results illustrate challenges (e.g. information overload) as well as strategies used to overcome challenges. Various communication paths were used. Some team members served as intermediaries, processing information before relaying it. Artifacts were used as memory aids. Health information technology should address the risks of distributed work by supporting awareness of team and task status for reliable management of results.

  10. Kolmogorov-Smirnov test for spatially correlated data

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of the bootstrap is done by drawing from the empirical sample with replacement presuming independence. The generalization consists of preparing resamples with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
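    For reference, the classical (independence-assuming) versions of both Kolmogorov-Smirnov problems are available in SciPy; the spatially correlated bootstrap calibration described above would replace the p-values these calls return, and the lognormal parameters below are illustrative:

        from scipy import stats

        sample_a = stats.lognorm.rvs(s=0.8, size=200, random_state=6)
        sample_b = stats.lognorm.rvs(s=0.8, size=150, random_state=7)

        # Two-sample test: can the two samples be regarded as indistinguishable?
        d_two, p_two = stats.ks_2samp(sample_a, sample_b)

        # One-sample test against a hypothesized lognormal distribution.
        d_one, p_one = stats.kstest(sample_a, "lognorm", args=(0.8,))
        print(d_two, p_two, d_one, p_one)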

  11. Developing and Testing Simulated Occupational Experiences for Distributive Education Students in Rural Communities: Volume III: Training Plans: Final Report.

    ERIC Educational Resources Information Center

    Virginia Polytechnic Inst. and State Univ., Blacksburg.

    Volume 3 of a three volume final report presents prototype job training plans developed as part of a research project which pilot tested a distributive education program for rural schools utilizing a retail store simulation plan. The plans are for 15 entry-level and 15 career-level jobs in seven categories of distributive business (department…

  12. When "t"-Tests or Wilcoxon-Mann-Whitney Tests Won't Do

    ERIC Educational Resources Information Center

    McElduff, Fiona; Cortina-Borja, Mario; Chan, Shun-Kai; Wade, Angie

    2010-01-01

    "t"-Tests are widely used by researchers to compare the average values of a numeric outcome between two groups. If there are doubts about the suitability of the data for the requirements of a "t"-test, most notably the distribution being non-normal, the Wilcoxon-Mann-Whitney test may be used instead. However, although often…

  13. Children Becoming More Intelligent: Can the Flynn Effect Be Generalized to Other Child Intelligence Tests?

    ERIC Educational Resources Information Center

    Resing, Wilma C. M.; Tunteler, Erika

    2007-01-01

    In this article, time effects on intelligence test scores have been investigated. In particular, we examined whether the "Flynn effect" is manifest in children from the middle and higher IQ distribution range, measured with a child intelligence test based on information processing principles--the Leiden Diagnostic Test. The test was administered…

  14. Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.

    PubMed

    Kobayashi, Katsumi

    2005-05-01

    The data obtained from toxicity studies are examined for homogeneity of variance but, usually, they are not examined for normal distribution. In this study, I examined the measured items of a carcinogenicity/chronic toxicity study with rats for both homogeneity of variance and normal distribution. It was observed that many hematology and biochemistry items showed a non-normal distribution. For testing the normal distribution of data obtained from toxicity studies, the data of the concurrent control group may be examined, and for data that show a non-normal distribution, robust non-parametric tests may be applied.
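    A minimal sketch of the kind of screening suggested above, checking a control-group variable for normality (Shapiro-Wilk) and two groups for homogeneity of variance (Levene); the synthetic measurements stand in for real study data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        control = rng.lognormal(mean=1.0, sigma=0.4, size=50)   # skewed item
        treated = rng.lognormal(mean=1.1, sigma=0.4, size=50)

        w_stat, p_normal = stats.shapiro(control)        # normality of controls
        lev_stat, p_var = stats.levene(control, treated)  # homogeneity of variance
        print("Shapiro-Wilk p:", p_normal, "Levene p:", p_var)

        # If normality is rejected, fall back to a rank-based comparison.
        if p_normal < 0.05:
            u_stat, p_mw = stats.mannwhitneyu(control, treated)
            print("Mann-Whitney p:", p_mw)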

  15. Sodium-sulfur battery flight experiment definition study

    NASA Technical Reports Server (NTRS)

    Chang, Rebecca; Minck, Robert

    1990-01-01

    Sodium-sulfur batteries are considered to be one of the most likely battery systems for space applications. Compared with the Ni-H2 or Ni-Cd battery systems, Na-S batteries offer a mass reduction by a factor of 2 to 4, representing significant launch cost savings or increased payload mass capabilities. The Na-S battery operates at between 300 and 400 C, using liquid sodium and sulfur/polysulfide electrodes and a solid ceramic electrolyte; the transport of the electrode materials to the surface of the electrolyte is through wicking/capillary forces. This paper describes five tests identified for the Na-S battery flight experiment definition study: the cell characterization test, the reactant distribution test, the current/temperature distribution test, the freeze/thaw test, and the multicell LEO test. A schematic diagram of the Na-S cell is included.

  16. Determining solid-fluid interface temperature distribution during phase change of cryogenic propellants using transient thermal modeling

    NASA Astrophysics Data System (ADS)

    Bellur, K.; Médici, E. F.; Hermanson, J. C.; Choi, C. K.; Allen, J. S.

    2018-04-01

    Control of boil-off of cryogenic propellants is a continuing technical challenge for long duration space missions. Predicting phase change rates of cryogenic liquids requires an accurate estimation of solid-fluid interface temperature distributions in regions where a contact line or a thin liquid film exists. This paper describes a methodology to predict inner wall temperature gradients with and without evaporation using discrete temperature measurements on the outer wall of a container. Phase change experiments with liquid hydrogen and methane in cylindrical test cells of various materials and sizes were conducted at the Neutron Imaging Facility at the National Institute of Standards and Technology. Two types of tests were conducted. The first type of testing involved thermal cycling of an evacuated cell (dry) and the second involved controlled phase change with cryogenic liquids (wet). During both types of tests, temperatures were measured using Si-diode sensors mounted on the exterior surface of the test cells. Heat is transferred to the test cell by conduction through a helium exchange gas and through the cryostat sample holder. Thermal conduction through the sample holder is shown to be the dominant mode, with the rate of heat transfer limited by six independent contact resistances. An iterative methodology is employed to determine contact resistances between the various components of the cryostat stick insert, test cell and lid using the dry test data. After the contact resistances are established, inner wall temperature distributions during wet tests are calculated.

  17. Analysis of shear test method for composite laminates

    NASA Technical Reports Server (NTRS)

    Bergner, H. W., Jr.; Davis, J. G., Jr.; Herakovich, C. T.

    1977-01-01

    An elastic plane stress finite element analysis of the stress distributions in four flat test specimens for in-plane shear response of composite materials subjected to mechanical or thermal loads is presented. The shear test specimens investigated include: slotted coupon, cross beam, Iosipescu, and rail shear. Results are presented in the form of normalized shear contour plots for all three in-plane stress components. It is shown that the cross beam, Iosipescu, and rail shear specimens have stress distributions which are more than adequate for determining linear shear behavior of composite materials. Laminate properties, core effects, and fixture configurations are among the factors which were found to influence the stress distributions.

  18. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    PubMed

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  19. Modeling the Extremely Lightweight Zerodur Mirror (ELZM) Thermal Soak Test

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip

    2017-01-01

    Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.

  20. Building America Case Study: Ventilation System Effectiveness and Tested Indoor Air Quality Impacts, Tyler, Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ventilation system effectiveness testing was conducted at two unoccupied, single-family, detached lab homes at the University of Texas - Tyler. Five ventilation system tests were conducted with various whole-building ventilation systems. Multizone fan pressurization testing characterized building and zone enclosure leakage. PFT testing showed multizone air change rates and interzonal airflow filtration. Indoor air recirculation by a central air distribution system can help improve the exhaust ventilation system by way of air mixing and filtration. In contrast, the supply and balanced ventilation systems showed that there is a significant benefit to drawing outside air from a known outside location, and filtering and distributing that air. Compared to the Exhaust systems, the CFIS and ERV systems showed better ventilation air distribution and lower concentrations of particulates, formaldehyde and other VOCs. System improvement percentages were estimated based on four System Factor Categories: Balance, Distribution, Outside Air Source, and Recirculation Filtration. Recommended System Factors could be applied to reduce ventilation fan airflow rates relative to ASHRAE Standard 62.2 to save energy and reduce moisture control risk in humid climates. HVAC energy savings were predicted to be 8-10%, or $50-$75/year. Cumulative particle counts for six particle sizes, and formaldehyde and other Top 20 VOC concentrations were measured in multiple zones. The testing showed that single-point exhaust ventilation was inferior as a whole-house ventilation strategy.

  1. Modeling the Extremely Lightweight Zerodur Mirror (ELZM) thermal soak test

    NASA Astrophysics Data System (ADS)

    Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip

    2017-09-01

    Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.

  2. Intersocietal Accreditation Commission Accreditation Status of Outpatient Cerebrovascular Testing Facilities Among Medicare Beneficiaries: The VALUE Study.

    PubMed

    Brown, Scott C; Wang, Kefeng; Dong, Chuanhui; Farrell, Mary Beth; Heller, Gary V; Gornik, Heather L; Hutchisson, Marge; Needleman, Laurence; Benenati, James F; Jaff, Michael R; Meier, George H; Perese, Susana; Bendick, Phillip; Hamburg, Naomi M; Lohr, Joann M; LaPerna, Lucy; Leers, Steven A; Lilly, Michael P; Tegeler, Charles; Katanick, Sandra L; Alexandrov, Andrei V; Siddiqui, Adnan H; Rundek, Tatjana

    2016-09-01

    Accreditation of cerebrovascular ultrasound laboratories by the Intersocietal Accreditation Commission (IAC) and equivalent organizations is supported by the Joint Commission certification of stroke centers. Limited information exists on the accreditation status and geographic distribution of cerebrovascular testing facilities in the United States. Our study objectives were to identify the proportion of IAC-accredited outpatient cerebrovascular testing facilities used by Medicare beneficiaries, describe their geographic distribution, and identify variations in cerebrovascular testing procedure types and volumes by accreditation status. As part of the VALUE (Vascular Accreditation, Location, and Utilization Evaluation) Study, we examined the proportion of IAC-accredited facilities that conducted cerebrovascular testing in a 5% Centers for Medicare and Medicaid Services random Outpatient Limited Data Set in 2011 and investigated their geographic distribution using geocoding. Among 7327 outpatient facilities billing Medicare for cerebrovascular testing, only 22% (1640) were IAC accredited. The proportion of IAC-accredited cerebrovascular testing facilities varied by region (χ2[3] = 177.1; P < .0001), with 29%, 15%, 13%, and 10% located in the Northeast, South, Midwest, and West, respectively. However, of the total number of cerebrovascular outpatient procedures conducted in 2011 (38,555), 40% (15,410) were conducted in IAC-accredited facilities. Most cerebrovascular testing procedures were carotid duplex, with 40% of them conducted in IAC-accredited facilities. The proportion of facilities conducting outpatient cerebrovascular testing accredited by the IAC is low and varies by region. The growing number of certified stroke centers should be accompanied by more accredited outpatient vascular testing facilities, which could potentially improve the quality of stroke care.

  3. Laboratory evaluation of the Sequoia Scientific LISST-ABS acoustic backscatter sediment sensor

    USGS Publications Warehouse

    Snazelle, Teri T.

    2017-12-18

    Sequoia Scientific’s LISST-ABS is an acoustic backscatter sensor designed to measure suspended-sediment concentration at a point source. Three LISST-ABS were evaluated at the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF). Serial numbers 6010, 6039, and 6058 were assessed for accuracy in solutions with varying particle-size distributions and for the effect of temperature on sensor accuracy. Certified sediment samples composed of different ranges of particle size were purchased from Powder Technology Inc. These sediment samples were 30–80-micron (µm) Arizona Test Dust; less than 22-µm ISO 12103-1, A1 Ultrafine Test Dust; and 149-µm MIL-STD 810E Silica Dust. The sensor was able to accurately measure suspended-sediment concentration when calibrated with sediment of the same particle-size distribution as the measured. Overall testing demonstrated that sensors calibrated with finer sized sediments overdetect sediment concentrations with coarser sized sediments, and sensors calibrated with coarser sized sediments do not detect increases in sediment concentrations from small and fine sediments. These test results are not unexpected for an acoustic-backscatter device and stress the need for using accurate site-specific particle-size distributions during sensor calibration. When calibrated for ultrafine dust with a less than 22-µm particle size (silt) and with the Arizona Test Dust with a 30–80-µm range, the data from sensor 6039 were biased high when fractions of the coarser (149-µm) Silica Dust were added. Data from sensor 6058 showed similar results with an elevated response to coarser material when calibrated with a finer particle-size distribution and a lack of detection when subjected to finer particle-size sediment. Sensor 6010 was also tested for the effect of dissimilar particle size during the calibration and showed little effect. Subsequent testing revealed problems with this sensor, including an inadequate temperature compensation, making this data questionable. The sensor was replaced by Sequoia Scientific with serial number 6039. Results from the extended temperature testing showed proper temperature compensation for sensor 6039, and results from the dissimilar calibration/testing particle-size distribution closely corroborated the results from sensor 6058.

  4. 21 CFR 211.165 - Testing and release for distribution.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 4 2012-04-01 2012-04-01 false Testing and release for distribution. 211.165 Section 211.165 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Laboratory...

  5. 21 CFR 211.165 - Testing and release for distribution.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 4 2013-04-01 2013-04-01 false Testing and release for distribution. 211.165 Section 211.165 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Laboratory...

  6. 21 CFR 211.165 - Testing and release for distribution.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Testing and release for distribution. 211.165 Section 211.165 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Laboratory...

  7. 21 CFR 211.165 - Testing and release for distribution.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 4 2014-04-01 2014-04-01 false Testing and release for distribution. 211.165 Section 211.165 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS Laboratory...

  8. Estimation of Reliability Coefficients Using the Test Information Function and Its Modifications.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1994-01-01

    The reliability coefficient is predicted from the test information function (TIF) or two modified TIF formulas and a specific trait distribution. Examples illustrate the variability of the reliability coefficient across different trait distributions, and results are compared with empirical reliability coefficients. (SLD)

  9. A comparison of likelihood ratio tests and Rao's score test for three separable covariance matrix structures.

    PubMed

    Filipiak, Katarzyna; Klein, Daniel; Roy, Anuradha

    2017-01-01

    The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of any separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ2 distribution. The tests are implemented on a real dataset from medical studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Ventilation System Effectiveness and Tested Indoor Air Quality Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudd, Armin; Bergey, Daniel

    In this project, Building America research team Building Science Corporation tested the effectiveness of ventilation systems at two unoccupied, single-family, detached lab homes at the University of Texas - Tyler. Five ventilation system tests were conducted with various whole-building ventilation systems. Multizone fan pressurization testing characterized building and zone enclosure leakage. PFT testing showed multizone air change rates and interzonal airflow. Cumulative particle counts for six particle sizes, and formaldehyde and other Top 20 VOC concentrations were measured in multiple zones. The testing showed that single-point exhaust ventilation was inferior as a whole-house ventilation strategy. This was because the source of outside air was not direct from outside, the ventilation air was not distributed, and no provision existed for air filtration. Indoor air recirculation by a central air distribution system can help improve the exhaust ventilation system by way of air mixing and filtration. In contrast, the supply and balanced ventilation systems showed that there is a significant benefit to drawing outside air from a known outside location, and filtering and distributing that air. Compared to the exhaust systems, the CFIS and ERV systems showed better ventilation air distribution and lower concentrations of particulates, formaldehyde and other VOCs. System improvement percentages were estimated based on four system factor categories: balance, distribution, outside air source, and recirculation filtration. Recommended system factors could be applied to reduce ventilation fan airflow rates relative to ASHRAE Standard 62.2 to save energy and reduce moisture control risk in humid climates. HVAC energy savings were predicted to be 8-10%, or $50-$75/year.

  11. Ventilation System Effectiveness and Tested Indoor Air Quality Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudd, Armin; Bergey, Daniel

    Ventilation system effectiveness testing was conducted at two unoccupied, single-family, detached lab homes at the University of Texas - Tyler. Five ventilation system tests were conducted with various whole-building ventilation systems. Multizone fan pressurization testing characterized building and zone enclosure leakage. PFT testing showed multizone air change rates and interzonal airflow. Cumulative particle counts for six particle sizes, and formaldehyde and other Top 20 VOC concentrations were measured in multiple zones. The testing showed that single-point exhaust ventilation was inferior as a whole-house ventilation strategy. It was inferior because the source of outside air was not direct from outside, the ventilation air was not distributed, and no provision existed for air filtration. Indoor air recirculation by a central air distribution system can help improve the exhaust ventilation system by way of air mixing and filtration. In contrast, the supply and balanced ventilation systems showed that there is a significant benefit to drawing outside air from a known outside location, and filtering and distributing that air. Compared to the Exhaust systems, the CFIS and ERV systems showed better ventilation air distribution and lower concentrations of particulates, formaldehyde and other VOCs. System improvement percentages were estimated based on four System Factor Categories: Balance, Distribution, Outside Air Source, and Recirculation Filtration. Recommended System Factors could be applied to reduce ventilation fan airflow rates relative to ASHRAE Standard 62.2 to save energy and reduce moisture control risk in humid climates. HVAC energy savings were predicted to be 8-10%, or $50-$75/year.

  12. Diagnosis of growth hormone deficiency by using the arginine provocative test: is it possible to shorten testing time without altering validity?

    PubMed

    Galluzzi, Fiorella; Quaranta, Maria Rita; Salti, Roberto; Stagi, Stefano; Nanni, Laura; Seminara, Salvatore

    2009-01-01

    The arginine test is used for the diagnosis of growth hormone deficiency (GHD), but its duration is not uniform and varies from 180 to 90 min. To standardize this test, evaluating the possibility to shorten it to 90 min, we investigated the response of GH to the arginine test in 208 children evaluated for short stature (height less than -2 SD); 67 were diagnosed with idiopathic short stature (ISS) and 141 with GHD. We calculated the frequency distribution of the GH peaks to arginine in GHD and in ISS at various times and the percentage of GH peaks to arginine before and after 90 min in all and in ISS children. The GH peak distribution varied between 30 and 120 min, even though the vast majority of peaks occurred between 30 and 90 min. There was no significant difference (p > 0.05) in the peak distribution between ISS and GHD children. The percentages of GH peaks within 90 min were 95.2% in all children and 100% in ISS. The arginine test can be administered for only 90 min without significantly changing its validity, in order to reduce the discomfort of patients and the cost of the test. Copyright 2009 S. Karger AG, Basel.

  13. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level=0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2) while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
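
    As a rough illustration of the transformation step evaluated above (not the authors' simulation code), the following Python sketch applies a maximum-likelihood Box-Cox transformation and a simple percentile Winsorization to a skewed, chi-square-distributed phenotype and compares skewness and kurtosis; the sample size and the 5th/95th-percentile Winsorizing cut points are illustrative assumptions.

      # Hedged sketch: compare Box-Cox transformation and Winsorizing on a skewed phenotype.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      pheno = rng.chisquare(df=2, size=500)        # skewed phenotype (chi-square, 2 df)

      # Box-Cox requires strictly positive data; scipy estimates lambda by maximum likelihood.
      transformed, lam = stats.boxcox(pheno)
      print(f"estimated Box-Cox lambda: {lam:.3f}")

      # Simple symmetric Winsorizing at the 5th/95th percentiles (one common, assumed choice).
      lo, hi = np.percentile(pheno, [5, 95])
      winsorized = np.clip(pheno, lo, hi)

      for name, x in [("raw", pheno), ("winsorized", winsorized), ("box-cox", transformed)]:
          print(f"{name:>10}: skew = {stats.skew(x):+.2f}, kurtosis = {stats.kurtosis(x):+.2f}")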

  14. Flight Investigation of the Cooling Characteristics of a Two-Row Radial Engine Installation. 2 - Cooling-Air Pressure Recovery and Pressure Distribution

    DTIC Science & Technology

    1946-07-01

    good distribution of cooling air, as well as minimum drag for the installation. The fact that these tests showed that the front recovery decreased...installations on engine cooling-air distribution indicates that good correlation of the cooling results of like engines in different installations...tests indicate that an important consideration in the design of cowlings and cowl flaps should be the obtaining of good distribution of cooling air

  15. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
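
    The likelihood the program maximizes can be sketched compactly: each failure contributes the Weibull density and each suspension (type-I censored test) contributes the survival function. The Python snippet below is an assumed re-implementation of that idea on toy data, not the Fortran program itself.

      # Hedged sketch: two-parameter Weibull ML fit with type-I (right) censoring.
      import numpy as np
      from scipy.optimize import minimize

      def weibull_negloglik(params, t, failed):
          """Negative log-likelihood; `failed` is 1 for failures, 0 for suspensions."""
          k, lam = params                                        # shape, scale
          if k <= 0 or lam <= 0:
              return np.inf
          z = t / lam
          log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # log density
          log_S = -z**k                                          # log survival function
          return -np.sum(failed * log_f + (1 - failed) * log_S)

      # Toy fatigue lives (cycles); the last two specimens are suspensions.
      t = np.array([1.2e5, 2.3e5, 3.1e5, 4.8e5, 5.0e5, 5.0e5])
      failed = np.array([1, 1, 1, 1, 0, 0])

      res = minimize(weibull_negloglik, x0=[1.5, 3e5], args=(t, failed), method="Nelder-Mead")
      k_hat, lam_hat = res.x
      print(f"shape = {k_hat:.2f}, scale = {lam_hat:.3g}")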

  16. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
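
    For readers who want the same quantities without the Fortran library, equivalent distribution functions are available today in scipy.stats; the short sketch below is an assumed modern counterpart and is not part of the USGS report.

      # Illustrative sketch: quantiles and tail probabilities from scipy.stats.
      from scipy import stats

      q = 0.99                                   # non-exceedance probability
      print(stats.norm.ppf(q))                   # Gaussian (normal) quantile
      print(stats.pearson3.ppf(q, skew=0.5))     # Pearson Type III quantile
      print(stats.gamma.ppf(q, a=2.0))           # gamma quantile
      print(stats.weibull_min.ppf(q, c=1.5))     # Weibull quantile

      # Distributions of common test statistics, e.g. Kolmogorov-Smirnov D, Student's t, Snedecor F.
      print(stats.kstwobign.sf(1.36))            # asymptotic K-S tail probability
      print(stats.t.sf(2.0, df=10))              # one-sided t tail probability
      print(stats.f.sf(3.0, dfn=2, dfd=20))      # F tail probability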

  17. Kernel Equating Under the Non-Equivalent Groups With Covariates Design

    PubMed Central

    Bränberg, Kenny

    2015-01-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests. PMID:29881012
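
    The core equating step can be sketched compactly: continuize each discrete score distribution with a Gaussian kernel and map form-X scores through e(x) = F_Y^{-1}(F_X(x)). The Python sketch below covers only the simple equivalent-groups case with an assumed bandwidth and score range; the post-stratification on covariates that defines the NEC design is not reproduced.

      # Very simplified kernel-equating sketch (equivalent-groups case, no covariates).
      import numpy as np
      from scipy.stats import norm

      def kernel_cdf(points, probs, h, grid):
          """Gaussian-kernel-smoothed CDF of a discrete score distribution."""
          return np.sum(probs[None, :] * norm.cdf((grid[:, None] - points[None, :]) / h), axis=1)

      def equate(scores, px, py, h=0.6):
          grid = np.linspace(scores.min() - 4 * h, scores.max() + 4 * h, 2001)
          Fx = kernel_cdf(scores, px, h, grid)
          Fy = kernel_cdf(scores, py, h, grid)
          Fx_at_scores = np.interp(scores, grid, Fx)
          return np.interp(Fx_at_scores, Fy, grid)   # e(x) = F_Y^{-1}(F_X(x))

      scores = np.arange(0, 21, dtype=float)         # a hypothetical 0-20 point test
      rng = np.random.default_rng(1)
      px = rng.dirichlet(np.ones(21))                # form X score probabilities (simulated)
      py = rng.dirichlet(np.ones(21))                # form Y score probabilities (simulated)
      print(np.round(equate(scores, px, py), 2))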

  18. Kernel Equating Under the Non-Equivalent Groups With Covariates Design.

    PubMed

    Wiberg, Marie; Bränberg, Kenny

    2015-07-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests.

  19. Pressure distributions from subsonic tests of an advanced laminar-flow-control wing with leading- and trailing-edge flaps

    NASA Technical Reports Server (NTRS)

    Applin, Zachary T.; Gentry, Garl L., Jr.

    1988-01-01

    An unswept, semispan wing model equipped with full-span leading- and trailing-edge flaps was tested in the Langley 14- by 22-Foot Subsonic Tunnel to determine the effect of high-lift components on the aerodynamics of an advanced laminar-flow-control (LFC) airfoil section. Chordwise pressure distributions near the midsemispan were measured for four configurations: cruise, trailing-edge flap only, and trailing-edge flap with a leading-edge Krueger flap of either 0.10 or 0.12 chord. Part 1 of this report (under separate cover) presents a representative sample of the plotted pressure distribution data for each configuration tested. Part 2 presents the entire set of plotted and tabulated pressure distribution data. The data are presented without analysis.

  20. 49 CFR 178.1055 - Stacking test.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and no loss of contents during the test or after removal of the test load. ... to a uniformly distributed superimposed test load that is four times the design type maximum gross weight for a period of at least twenty-four hours. (2) For all Flexible Bulk Containers, the load must be...

  1. 49 CFR 178.1055 - Stacking test.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and no loss of contents during the test or after removal of the test load. ... to a uniformly distributed superimposed test load that is four times the design type maximum gross weight for a period of at least twenty-four hours. (2) For all Flexible Bulk Containers, the load must be...

  2. 40 CFR 63.11092 - What testing and monitoring requirements must I meet?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false What testing and monitoring... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Testing and Monitoring Requirements § 63.11092 What testing and monitoring requirements must I meet? (a) Each owner or operator of a bulk...

  3. 40 CFR 63.11092 - What testing and monitoring requirements must I meet?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false What testing and monitoring... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Testing and Monitoring Requirements § 63.11092 What testing and monitoring requirements must I meet? (a) Each owner or operator of a bulk...

  4. 40 CFR 63.11092 - What testing and monitoring requirements must I meet?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false What testing and monitoring... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Testing and Monitoring Requirements § 63.11092 What testing and monitoring requirements must I meet? (a) Each owner or operator of a bulk...

  5. 40 CFR 63.11092 - What testing and monitoring requirements must I meet?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false What testing and monitoring... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Testing and Monitoring Requirements § 63.11092 What testing and monitoring requirements must I meet? (a) Each owner or operator subject to the...

  6. 40 CFR 63.11092 - What testing and monitoring requirements must I meet?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false What testing and monitoring... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Testing and Monitoring Requirements § 63.11092 What testing and monitoring requirements must I meet? (a) Each owner or operator of a bulk...

  7. Getting the Help We Need

    ERIC Educational Resources Information Center

    Haertel, Edward

    2013-01-01

    In validating uses of testing, it is helpful to distinguish those that rely directly on the information provided by scores or score distributions ("direct" uses and consequences) versus those that instead capitalize on the motivational effects of testing, or use testing and test reporting to shape public opinion ("indirect" uses and consequences).…

  8. 39 CFR 501.9 - Demonstration or test Postage Evidencing Systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MANUFACTURE AND DISTRIBUTE POSTAGE EVIDENCING SYSTEMS § 501.9 Demonstration or test Postage Evidencing Systems. (a) A demonstration or test postage evidencing system is any system that produces an image that... 39 Postal Service 1 2010-07-01 2010-07-01 false Demonstration or test Postage Evidencing Systems...

  9. Modeling Reliability Growth in Accelerated Stress Testing

    DTIC Science & Technology

    2013-12-01

    Modeling Reliability Growth in Accelerated Stress Testing. Dissertation by Jason K. Freels, AFIT-ENS-DS-13-D-02. Distribution unlimited.

  10. Rescuing Computerized Testing by Breaking Zipf's Law.

    ERIC Educational Resources Information Center

    Wainer, Howard

    2000-01-01

    Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…

  11. Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening.

    PubMed

    Ong, Jia Hoong; Burnham, Denis; Escudero, Paola

    2015-01-01

    This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During Training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories that should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants' auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to at Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution.

  12. Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening

    PubMed Central

    Ong, Jia Hoong; Burnham, Denis; Escudero, Paola

    2015-01-01

    This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During Training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories that should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants’ auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to at Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution. PMID:26214002

  13. Optimum structural design based on reliability and proof-load testing

    NASA Technical Reports Server (NTRS)

    Shinozuka, M.; Yang, J. N.

    1969-01-01

    A proof-load test eliminates structures with strength less than the proof load and improves the reliability value used in analysis. It truncates the distribution function of strength at the proof load, thereby alleviating the need to verify a fitted distribution function at the lower tail, where data are usually nonexistent.
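
    A small numerical sketch makes the truncation argument concrete: conditioning the strength distribution on survival of the proof load removes the weak lower tail and raises the computed reliability. All distributions and values below are illustrative assumptions, not quantities from the paper.

      # Hedged sketch: reliability before and after a proof-load screen.
      import numpy as np
      from scipy import stats

      strength = stats.norm(loc=100.0, scale=10.0)   # assumed strength model (arbitrary units)
      load = stats.norm(loc=70.0, scale=15.0)        # assumed service-load model
      proof = 90.0                                   # proof load applied to every unit

      grid = np.linspace(load.ppf(1e-6), load.ppf(1 - 1e-6), 4001)
      dx = grid[1] - grid[0]
      w = load.pdf(grid)

      # Reliability = P(strength > load), integrating over the load distribution.
      r_before = np.sum(w * strength.sf(grid)) * dx

      # Proof-test survivors have strength > proof: truncate (condition) the strength model there.
      cond_sf = np.where(grid < proof, 1.0, strength.sf(grid) / strength.sf(proof))
      r_after = np.sum(w * cond_sf) * dx

      print(f"reliability without proof test:      {r_before:.4f}")
      print(f"reliability after proof-load screen: {r_after:.4f}")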

  14. Distribution Tables and Private Tests: The Failure of Middle School Reform in Japan.

    ERIC Educational Resources Information Center

    LeTendre, Gerald K.

    1994-01-01

    In November 1992, Japanese Ministry of Education declared middle school teachers could no longer use distribution tables produced by private testing companies to predetermine high school students' curricula. Failure to implement reform stems from structural and cultural roots. By presorting students and molding their expectations, traditional…

  15. Ecology of the Nevada Test Site. I. Geographic and ecologic distributions of the vascular flora (annotated checklist)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beatley, J C

    1965-04-01

    A checklist of vascular plants of the Nevada Test Site is presented for use in studies of plant ecology. Data on the occurrence and distribution of plant species are included. Collections were made from both undisturbed and disturbed sites.

  16. 49 CFR 178.812 - Top lift test.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...

  17. 49 CFR 178.812 - Top lift test.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...

  18. 49 CFR 178.812 - Top lift test.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...

  19. 49 CFR 178.812 - Top lift test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...

  20. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  1. USING PARTIAL LEAST SQUARES REGRESSION TO OBTAIN COTTON FIBER LENGTH DISTRIBUTIONS FROM THE BEARD TESTING METHOD

    USDA-ARS?s Scientific Manuscript database

    The beard testing method for measuring cotton fiber length is based on the fibrogram theory. However, in the instrumental implementations, the engineering complexity alters the original fiber length distribution observed by the instrument. This causes challenges in obtaining the entire original le...

  2. Bayesian inference for disease prevalence using negative binomial group testing

    PubMed Central

    Pritchard, Nicholas A.; Tebbs, Joshua M.

    2011-01-01

    Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
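
    The paper derives closed-form posterior quantities; as a hedged illustration of the same setup, the sketch below approximates the posterior of prevalence on a grid when pools of size k are tested until r positive pools have been observed (negative binomial sampling). The beta prior and the data values are assumptions made for the example.

      # Hedged sketch: grid posterior for prevalence under negative binomial group testing.
      import numpy as np
      from scipy import stats

      k = 10          # pool size
      r = 5           # positive pools required (inverse-sampling stopping rule)
      y = 42          # negative pools observed before the r-th positive pool

      p = np.linspace(1e-6, 0.2, 2000)             # prevalence grid
      dp = p[1] - p[0]
      theta = 1.0 - (1.0 - p) ** k                 # P(a pool tests positive)

      prior = stats.beta.pdf(p, a=1.0, b=9.0)      # weakly informative prior (assumed)
      lik = theta**r * (1.0 - theta) ** y          # negative binomial kernel in theta
      post = prior * lik
      post /= np.sum(post) * dp                    # normalize on the grid

      mean = np.sum(p * post) * dp
      cdf = np.cumsum(post) * dp
      ci = (p[np.searchsorted(cdf, 0.025)], p[np.searchsorted(cdf, 0.975)])
      print(f"posterior mean prevalence: {mean:.4f}, 95% credible interval: ({ci[0]:.4f}, {ci[1]:.4f})")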

  3. Largo hot water system long range thermal performance test report, addendum

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The test procedure used and the test results obtained during the long range thermal performance tests of the LARGO Solar Hot Water System under natural environmental conditions are presented. Objectives of these tests were to determine the amount of energy collected, the amount of power required for system operation, system efficiency, temperature distribution, and system performance degradation.

  4. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    PubMed

    Xu, Maoqi; Chen, Liang

    2018-01-01

    Individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. ETICS: the international software engineering service for the grid

    NASA Astrophysics Data System (ADS)

    Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.

    2008-07-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  6. Intratumor distribution and test-retest comparisons of physiological parameters quantified by dynamic contrast-enhanced MRI in rat U251 glioma.

    PubMed

    Aryal, Madhava P; Nagaraja, Tavarekere N; Brown, Stephen L; Lu, Mei; Bagher-Ebadian, Hassan; Ding, Guangliang; Panda, Swayamprava; Keenan, Kelly; Cabral, Glauber; Mikkelsen, Tom; Ewing, James R

    2014-10-01

    The distribution of dynamic contrast-enhanced MRI (DCE-MRI) parametric estimates in a rat U251 glioma model was analyzed. Using Magnevist as contrast agent (CA), 17 nude rats implanted with U251 cerebral glioma were studied by DCE-MRI twice in a 24 h interval. A data-driven analysis selected one of three models to estimate either (1) plasma volume (vp), (2) vp and forward volume transfer constant (K(trans)) or (3) vp, K(trans) and interstitial volume fraction (ve), constituting Models 1, 2 and 3, respectively. CA distribution volume (VD) was estimated in Model 3 regions by Logan plots. Regions of interest (ROIs) were selected by model. In the Model 3 ROI, descriptors of parameter distributions--mean, median, variance and skewness--were calculated and compared between the two time points for repeatability. All distributions of parametric estimates in Model 3 ROIs were positively skewed. Test-retest differences between population summaries for any parameter were not significant (p ≥ 0.10; Wilcoxon signed-rank and paired t tests). These and similar measures of parametric distribution and test-retest variance from other tumor models can be used to inform the choice of biomarkers that best summarize tumor status and treatment effects. Copyright © 2014 John Wiley & Sons, Ltd.
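
    The test-retest comparison itself reduces to paired tests on the session summaries. The sketch below illustrates the idea with simulated Ktrans-like values and the two tests named in the abstract (Wilcoxon signed-rank and paired t); the data are stand-ins, not the study's measurements.

      # Hedged sketch: paired comparison of day-1 and day-2 parameter summaries.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      ktrans_day1 = rng.lognormal(mean=-3.0, sigma=0.4, size=17)              # 17 animals (assumed values)
      ktrans_day2 = ktrans_day1 * rng.lognormal(mean=0.0, sigma=0.1, size=17) # correlated retest values

      w_stat, w_p = stats.wilcoxon(ktrans_day1, ktrans_day2)
      t_stat, t_p = stats.ttest_rel(ktrans_day1, ktrans_day2)
      print(f"Wilcoxon signed-rank p = {w_p:.3f}, paired t p = {t_p:.3f}")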

  7. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  8. Mass spectrometric gas composition measurements associated with jet interaction tests in a high-enthalpy wind tunnel

    NASA Technical Reports Server (NTRS)

    Lewis, B. W.; Brown, K. G.; Wood, G. M., Jr.; Puster, R. L.; Paulin, P. A.; Fishel, C. E.; Ellerbe, D. A.

    1986-01-01

    Knowledge of test gas composition is important in wind-tunnel experiments measuring aerothermodynamic interactions. This paper describes measurements made by sampling the top of the test section during runs of the Langley 7-Inch High-Temperature Tunnel. The tests were conducted to determine the mixing of gas injected from a flat-plate model into a combustion-heated hypervelocity test stream and to monitor the CO2 produced in the combustion. The Mass Spectrometric (MS) measurements yield the mole fraction of N2 or He and CO2 reaching the sample inlets. The data obtained for several tunnel run conditions are related to the pressures measured in the tunnel test section and at the MS ionizer inlet. The apparent distributions of injected gas species and tunnel gas (CO2) are discussed relative to the sampling techniques. The measurements provided significant real-time data for the distribution of injected gases in the test section. The jet N2 diffused readily from the test stream, but the jet He was mostly entrained. The amounts of CO2 and Ar diffusing upward in the test section for several run conditions indicated the variability of the combustion-gas test-stream composition.

  9. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of the NBD skewness and kurtosis levels based on the values of the first two statistical moments of the distribution, shows rapid increase of these upper moments levels. However, the observed catalogue values of skewness and kurtosis are rising even faster. This means that for small time intervals, the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore for small time intervals, we propose using empirical number distributions appropriately smoothed for testing forecasted earthquake numbers.
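
    The Poisson-versus-NBD comparison can be illustrated with a short sketch: fit both models to a vector of per-interval counts (here by the method of moments) and compare skewness and kurtosis, as described above. The counts below are simulated overdispersed data, not catalogue numbers.

      # Hedged sketch: Poisson vs. negative binomial description of event counts.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      counts = rng.negative_binomial(n=2.0, p=0.2, size=400)   # simulated overdispersed counts

      mean, var = counts.mean(), counts.var(ddof=1)
      print(f"mean = {mean:.2f}, variance = {var:.2f} (a Poisson model would force variance = mean)")

      # Method-of-moments NBD fit: var = mean + mean^2 / r  =>  r = mean^2 / (var - mean), p = mean / var.
      r = mean**2 / (var - mean)
      p = mean / var
      print(f"NBD moment estimates: r = {r:.2f}, p = {p:.2f}")

      print("empirical skewness/kurtosis:", stats.skew(counts), stats.kurtosis(counts))
      print("Poisson skewness/kurtosis:  ", stats.poisson.stats(mean, moments="sk"))
      print("NBD skewness/kurtosis:      ", stats.nbinom.stats(r, p, moments="sk"))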

  10. Flight Investigation of the Cooling Characteristics of a Two-row Radial Engine Installation III : Engine Temperature Distribution

    NASA Technical Reports Server (NTRS)

    Rennak, Robert M; Messing, Wesley E; Morgan, James E

    1946-01-01

    The temperature distribution of a two-row radial engine in a twin-engine airplane has been investigated in a series of flight tests. The test engine was operated over a wide range of conditions at density altitudes of 5000 and 20,000 feet; quantitative results are presented showing the effects of flight and engine variables upon average engine temperature and over-all temperature spread. Discussions of the effect of the variables on the shape of the temperature patterns and on the temperature distribution of individual cylinders are also included. The results indicate that, for the tests conducted, the temperature distribution patterns were chiefly determined by the fuel-air ratio and cooling-air distributions. It was possible to calculate individual cylinder temperature, on the assumption of equal power distribution among cylinders, to within an average of plus or minus 14 degrees F. of the actual temperature. A considerable change occurred in either the temperature spread or the temperature pattern when a change was made in the angle of the thrust axis, the average engine fuel-air ratio, the engine speed, the power, or the blower ratio. Smaller effects on the temperature pattern were noticed with a change in cowl-flap opening and altitude. In most of the tests, a change in conditions affected the temperature of the barrels less than that of the heads. The variation of flight and engine variables had a negligible effect on the temperature distributions of the individual cylinders. (author)

  11. Distribution of Spiked Drugs between Milk Fat, Skim Milk, Whey, Curd, and Milk Protein Fractions: Expansion of Partitioning Models.

    PubMed

    Lupton, Sara J; Shappell, Nancy W; Shelver, Weilin L; Hakk, Heldur

    2018-01-10

    The distributions of eight drugs (acetaminophen, acetylsalicylic acid/salicylic acid, ciprofloxacin, clarithromycin, flunixin, phenylbutazone, praziquantel, and thiamphenicol) were determined in milk products (skim milk, milk fat, curd, whey, and whey protein) and used to expand a previous model (from 7 drugs to 15 drugs) for predicting drug distribution. Phenylbutazone and praziquantel were found to distribute with the lipid and curd phases (≥50%). Flunixin distribution was lower but similar in direction (12% in milk fat, 39% in curd). Acetaminophen, ciprofloxacin, and praziquantel preferentially associated with casein proteins, whereas thiamphenicol and clarithromycin associated preferentially with whey proteins. Regression analyses for log [milk fat]/[skim milk] and log [curd]/[whey] had r^2 values of 0.63 and 0.67, respectively, with p < 0.001 for 15 drugs (7 previously tested and 8 currently tested). The robustness of the distribution model was enhanced by doubling the number of drugs originally tested.

  12. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data and the more skewed the distribution then the transformation methods are more effective in normalizing such data. Specifically, transformation with parameter lambda -1 leads to the best results. PMID:25709588
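
    The best-performing procedure reported above, a power transformation with lambda = -1 (a reciprocal transform), can be sketched as follows; the Ex-Gaussian parameters and the Shapiro-Wilk normality check are illustrative assumptions rather than the paper's exact simulation design.

      # Hedged sketch: reciprocal (lambda = -1) transform of simulated Ex-Gaussian reaction times.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      # Ex-Gaussian RTs: Gaussian component (mu, sigma) plus exponential component (tau).
      rt = rng.normal(400.0, 40.0, size=1000) + rng.exponential(150.0, size=1000)

      inv_rt = -1.0 / rt            # lambda = -1 power transform; the minus sign preserves ordering

      for name, x in [("raw RT", rt), ("1/RT (lambda = -1)", inv_rt)]:
          stat, p = stats.shapiro(x[:500])          # Shapiro-Wilk on a subsample
          print(f"{name:>18}: skew = {stats.skew(x):+.2f}, Shapiro-Wilk p = {p:.3g}")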

  13. 77 FR 13329 - Pandemic Influenza Vaccines-Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-06

    ... Secretary must consider the desirability of encouraging the design, development, clinical testing or... manufacture, testing, development, distribution, administration, or use of one or more Covered Countermeasures... in the design, development, clinical testing, investigation or manufacturing of a Covered...

  14. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
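
    As a rough companion to the abstract, the sketch below implements a plain (unstandardized) likelihood ratio test for equality of several log-normal means by numerical optimization; the small-sample standardization that defines the SLRT and the variance-estimate-recovery confidence interval are not reproduced, and the data are simulated.

      # Hedged sketch: ordinary LRT for equal log-normal means, H0: mu_i + sigma_i^2/2 is common.
      import numpy as np
      from scipy import stats, optimize

      def group_loglik(y, mu, sigma):
          return np.sum(stats.norm.logpdf(y, loc=mu, scale=sigma))

      def lrt_lognormal_means(groups):
          logs = [np.log(g) for g in groups]
          # Unrestricted MLE: per-group sample mean and ML standard deviation of the log data.
          l1 = sum(group_loglik(y, y.mean(), y.std()) for y in logs)

          # Restricted MLE: common log-normal mean exp(eta), so mu_i = eta - sigma_i^2 / 2.
          def negloglik(params):
              eta, log_sigmas = params[0], params[1:]
              sigmas = np.exp(log_sigmas)
              return -sum(group_loglik(y, eta - s**2 / 2.0, s) for y, s in zip(logs, sigmas))

          x0 = np.concatenate([[np.mean([y.mean() for y in logs])],
                               [np.log(y.std()) for y in logs]])
          res = optimize.minimize(negloglik, x0, method="Nelder-Mead")
          stat = max(2.0 * (l1 + res.fun), 0.0)      # res.fun = -l0
          return stat, stats.chi2.sf(stat, df=len(groups) - 1)

      rng = np.random.default_rng(5)
      groups = [rng.lognormal(1.0, 0.5, 30), rng.lognormal(1.1, 0.7, 25), rng.lognormal(0.9, 0.6, 35)]
      print(lrt_lognormal_means(groups))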

  15. On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.

    PubMed

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
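
    A generic Monte Carlo power calculation for the Kruskal-Wallis test is easy to sketch and conveys the flavor of the problem, although it is not the pilot-study-based method proposed in the paper; the group distributions, shifts, and sample sizes below are assumptions.

      # Hedged sketch: simulation-based power of the Kruskal-Wallis test for shifted exponentials.
      import numpy as np
      from scipy import stats

      def kw_power(n_per_group, shifts, n_sim=2000, alpha=0.05, seed=0):
          rng = np.random.default_rng(seed)
          rejections = 0
          for _ in range(n_sim):
              groups = [rng.exponential(1.0, n_per_group) + d for d in shifts]
              _, p = stats.kruskal(*groups)
              rejections += p < alpha
          return rejections / n_sim

      for n in (10, 20, 40):
          print(n, kw_power(n, shifts=(0.0, 0.3, 0.6)))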

  16. Automation of the space station core module power management and distribution system

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1988-01-01

    Under the Advanced Development Program for Space Station, Marshall Space Flight Center has been developing advanced automation applications for the Power Management and Distribution (PMAD) system inside the Space Station modules for the past three years. The Space Station Module Power Management and Distribution System (SSM/PMAD) test bed features three artificial intelligence (AI) systems coupled with conventional automation software functioning in an autonomous or closed-loop fashion. The AI systems in the test bed include a baseline scheduler/dynamic rescheduler (LES), a load shedding management system (LPLMS), and a fault recovery and management expert system (FRAMES). This test bed will be part of the NASA Systems Autonomy Demonstration for 1990 featuring cooperating expert systems in various Space Station subsystem test beds. It is concluded that advanced automation technology involving AI approaches is sufficiently mature to begin applying the technology to current and planned spacecraft applications including the Space Station.

  17. Interfacial stress state present in a 'thin-slice' fibre push-out test

    NASA Technical Reports Server (NTRS)

    Kallas, M. N.; Koss, D. A.; Hahn, H. T.; Hellmann, J. R.

    1992-01-01

    An analysis of the stress distributions along the fiber-matrix interface in a 'thin-slice' fiber push-out test is presented for selected test geometries. For the small specimen thicknesses often required to displace large-diameter fibers with high interfacial shear strengths, finite element analysis indicates that large bending stresses may be present. The magnitude of these stresses and their spatial distribution can be very sensitive to the test configuration. For certain test geometries, the specimen configuration itself may alter the interfacial failure process from one which initiates due to a maximum in shear stress near the top surface adjacent to the indentor, to one which involves mixed mode crack growth up from the bottom surface and/or yielding within the matrix near the interface.

  18. Posttest examination results of recent TREAT tests on metal fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, J.W.; Wright, A.E.; Bauer, T.H.

    A series of in-reactor transient tests is underway to study the characteristics of metal-alloy fuel during transient-overpower-without-scram conditions. The initial tests focused on determining the margin to cladding breach and the axial fuel motions that would mitigate the power excursion. The tests were conducted in flowing-sodium loops with uranium-5% fissium EBR-II Mark-II driver fuel elements in the TREAT facility. Posttest examination of the tests evaluated fuel elongation in intact pins and postfailure fuel motion. Microscopic examination of the intact pins studied the nature and extent of fuel/cladding interaction, fuel melt fraction and mass distribution, and distribution of porosity. Eutectic penetration and failure of the cladding were also examined in the failed pins.

  19. Design, fabrication and test of graphite/polyimide composite joints and attachments for advanced aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Skoumal, D. E.

    1980-01-01

    Bonded and bolted designs are presented for each of four major attachment types. Prepreg processing problems are discussed and quality control data are given for lots 2W4604, 2W4632 and 2W4643. Preliminary design allowables test results for tension tests and compression tests of laminates are included. The final small specimen test matrix is defined and the configuration of symmetric step-lap joint specimens are shown. Finite element modeling studies of a double lap joint were performed to evaluate the number of elements required through the adhesive thickness to assess effects of various joint parameters on stress distributions. Results of finite element analyses assessing the effect of an adhesive fillet on the stress distribution in a double lap joint are examined.

  20. Atmospheric Probe Model: Construction and Wind Tunnel Tests

    NASA Technical Reports Server (NTRS)

    Vogel, Jerald M.

    1998-01-01

    The material contained in this document represents a summary of the results of a low speed wind tunnel test program to determine the performance of an atmospheric probe at low speed. The probe configuration tested consists of a 2/3 scale model constructed from a combination of hard maple wood and aluminum stock. The model design includes approximately 130 surface static pressure taps. Additional hardware incorporated in the baseline model provides a mechanism for simulating external and internal trailing edge split flaps for probe flow control. Test matrix parameters include probe side slip angle, external/internal split flap deflection angle, and trip strip applications. Test output database includes surface pressure distributions on both inner and outer annular wings and probe center line velocity distributions from forward probe to aft probe locations.

  1. Republic P-47G Thunderbolt Undergoes Ground Testing

    NASA Image and Video Library

    1945-06-21

    A Republic P-47G Thunderbolt is tested with a large blower on the hangar apron at the National Advisory Committee for Aeronautics (NACA) Aircraft Engine Research Laboratory in Cleveland, Ohio. The blower could produce air velocities up to 250 miles per hour. This was strong enough to simulate take-off power and eliminated the need to risk flights with untried engines. The Republic P-47G was loaned to the laboratory to test NACA modifications to the Pratt & Whitney R-2800 engine’s cooling system at higher altitudes. The ground-based tests, seen here, were used to map the engine’s normal operating parameters. The P-47G then underwent an extensive flight test program to study temperature distribution among the engine’s 18 cylinders and develop methods to improve that distribution.

  2. Nondestructive detection and measurement of hydrogen embrittlement

    DOEpatents

    Alex, Franklin; Byrne, Joseph Gerald

    1977-01-01

    A nondestructive system and method for the determination of the presence and extent of hydrogen embrittlement in metals, alloys, and other crystalline structures subject thereto. Positron annihilation characteristics of the positron-electron annihilation within the tested material provide unique energy distribution curves for each type of material tested at each respective stage of hydrogen embrittlement. Gamma radiation resulting from such annihilation events is detected and statistically summarized by appropriate instrumentation to reveal the variations of electron activity within the tested material caused by hydrogen embrittlement therein. Such data from controlled tests provides a direct indication of the relative stages of hydrogen embrittlement in the form of unique energy distribution curves which may be utilized as calibration curves for future comparison with field tests to give on-site indication of progressive stages of hydrogen embrittlement.

  3. NASA Constellation Distributed Simulation Middleware Trade Study

    NASA Technical Reports Server (NTRS)

    Hasan, David; Bowman, James D.; Fisher, Nancy; Cutts, Dannie; Cures, Edwin Z.

    2008-01-01

    This paper presents the results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  4. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
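
    One simple permutation scheme for the indirect effect a*b is sketched below: shuffle the mediator to break its links to both X and Y, recompute a*b, and compare the observed value with the permutation distribution. This is an illustrative variant, not necessarily the exact tests or confidence intervals evaluated in the study.

      # Hedged sketch: permutation test for the indirect effect a*b in a single-mediator model.
      import numpy as np

      def indirect_effect(x, m, y):
          a = np.polyfit(x, m, 1)[0]                       # slope of M on X
          design = np.column_stack([np.ones_like(x), x, m])
          b = np.linalg.lstsq(design, y, rcond=None)[0][2] # partial slope of Y on M
          return a * b

      def permutation_p(x, m, y, n_perm=5000, seed=0):
          rng = np.random.default_rng(seed)
          observed = indirect_effect(x, m, y)
          null = np.array([indirect_effect(x, rng.permutation(m), y) for _ in range(n_perm)])
          return observed, np.mean(np.abs(null) >= abs(observed))

      rng = np.random.default_rng(42)
      x = rng.normal(size=200)
      m = 0.4 * x + rng.normal(size=200)                   # a path
      y = 0.3 * m + 0.1 * x + rng.normal(size=200)         # b and c' paths
      print(permutation_p(x, m, y))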

  5. Statistical inference methods for two crossing survival curves: a comparison of methods.

    PubMed

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
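
    The baseline against which the adaptive procedures are compared is the ordinary log-rank test; the sketch below computes it directly on simulated data whose survival curves cross (the adaptive Neyman smooth tests and the two-stage procedure themselves are not reproduced here).

      # Hedged sketch: two-sample log-rank test on simulated crossing survival curves.
      import numpy as np
      from scipy import stats

      def logrank(time, event, group):
          """Two-sample log-rank chi-square statistic and p-value."""
          obs = exp = var = 0.0
          for t in np.unique(time[event == 1]):
              at_risk = time >= t
              n = at_risk.sum()
              n1 = (at_risk & (group == 1)).sum()
              d = ((time == t) & (event == 1)).sum()
              d1 = ((time == t) & (event == 1) & (group == 1)).sum()
              obs += d1
              exp += d * n1 / n
              if n > 1:
                  var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
          chi2 = (obs - exp) ** 2 / var
          return chi2, stats.chi2.sf(chi2, df=1)

      rng = np.random.default_rng(8)
      # Crossing hazards: group 0 tends to fail early, group 1 late (Weibull shapes below and above 1).
      t0 = rng.weibull(0.7, 150) * 2.0
      t1 = rng.weibull(1.8, 150) * 1.5
      time = np.concatenate([t0, t1])
      event = np.ones_like(time, dtype=int)                # no censoring in this toy example
      group = np.concatenate([np.zeros(150, dtype=int), np.ones(150, dtype=int)])
      print(logrank(time, event, group))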

  6. Statistical Inference Methods for Two Crossing Survival Curves: A Comparison of Methods

    PubMed Central

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman’s smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér—von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman’s smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests. PMID:25615624

  7. A Nonparametric Framework for Comparing Trends and Gaps across Tests

    ERIC Educational Resources Information Center

    Ho, Andrew Dean

    2009-01-01

    Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then…

  8. 46 CFR 58.16-19 - Tests.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 2 2012-10-01 2012-10-01 false Tests. 58.16-19 Section 58.16-19 Shipping COAST GUARD... SYSTEMS Liquefied Petroleum Gases for Cooking and Heating § 58.16-19 Tests. (a) Installation. (1) After... tests prescribed in paragraph (a)(1) of this section, the distribution tubing shall be connected to the...

  9. 46 CFR 58.16-19 - Tests.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 2 2013-10-01 2013-10-01 false Tests. 58.16-19 Section 58.16-19 Shipping COAST GUARD... SYSTEMS Liquefied Petroleum Gases for Cooking and Heating § 58.16-19 Tests. (a) Installation. (1) After... tests prescribed in paragraph (a)(1) of this section, the distribution tubing shall be connected to the...

  10. 46 CFR 58.16-19 - Tests.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 2 2014-10-01 2014-10-01 false Tests. 58.16-19 Section 58.16-19 Shipping COAST GUARD... SYSTEMS Liquefied Petroleum Gases for Cooking and Heating § 58.16-19 Tests. (a) Installation. (1) After... tests prescribed in paragraph (a)(1) of this section, the distribution tubing shall be connected to the...

  11. 46 CFR 58.16-19 - Tests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 2 2011-10-01 2011-10-01 false Tests. 58.16-19 Section 58.16-19 Shipping COAST GUARD... SYSTEMS Liquefied Petroleum Gases for Cooking and Heating § 58.16-19 Tests. (a) Installation. (1) After... tests prescribed in paragraph (a)(1) of this section, the distribution tubing shall be connected to the...

  12. 46 CFR 58.16-19 - Tests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Tests. 58.16-19 Section 58.16-19 Shipping COAST GUARD... SYSTEMS Liquefied Petroleum Gases for Cooking and Heating § 58.16-19 Tests. (a) Installation. (1) After... tests prescribed in paragraph (a)(1) of this section, the distribution tubing shall be connected to the...

  13. 16 CFR 1210.4 - Test protocol.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... live within the United States. (4) The age and sex distribution of each 100-child panel shall be: (i... recorded for each child in the 100-child test panel: (1) Sex (male or female). (2) Date of birth (month... STANDARD FOR CIGARETTE LIGHTERS Requirements for Child Resistance § 1210.4 Test protocol. (a) Child test...

  14. 16 CFR § 1212.4 - Test protocol.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) The children for the test panel shall live within the United States. (4) The age and sex distribution... child in the 100-child test panel: (1) Sex (male or female). (2) Date of birth (month, day, year). (3... STANDARD FOR MULTI-PURPOSE LIGHTERS Requirements for Child-Resistance § 1212.4 Test protocol. (a) Child...

  15. 16 CFR 1210.4 - Test protocol.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... live within the United States. (4) The age and sex distribution of each 100-child panel shall be: (i... recorded for each child in the 100-child test panel: (1) Sex (male or female). (2) Date of birth (month... STANDARD FOR CIGARETTE LIGHTERS Requirements for Child Resistance § 1210.4 Test protocol. (a) Child test...

  16. 16 CFR 1212.4 - Test protocol.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) The children for the test panel shall live within the United States. (4) The age and sex distribution... child in the 100-child test panel: (1) Sex (male or female). (2) Date of birth (month, day, year). (3... STANDARD FOR MULTI-PURPOSE LIGHTERS Requirements for Child-Resistance § 1212.4 Test protocol. (a) Child...

  17. 16 CFR § 1210.4 - Test protocol.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... live within the United States. (4) The age and sex distribution of each 100-child panel shall be: (i... recorded for each child in the 100-child test panel: (1) Sex (male or female). (2) Date of birth (month... STANDARD FOR CIGARETTE LIGHTERS Requirements for Child Resistance § 1210.4 Test protocol. (a) Child test...

  18. 16 CFR 1212.4 - Test protocol.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) The children for the test panel shall live within the United States. (4) The age and sex distribution... child in the 100-child test panel: (1) Sex (male or female). (2) Date of birth (month, day, year). (3... STANDARD FOR MULTI-PURPOSE LIGHTERS Requirements for Child-Resistance § 1212.4 Test protocol. (a) Child...

  19. Multiple-Choice Test Bias Due to Answering Strategy Variation.

    ERIC Educational Resources Information Center

    Frary, Robert B.; Giles, Mary B.

    This paper describes the development and investigation of a new approach to determining the existence of bias in multiple-choice test scores. Previous work in this area has concentrated almost exclusively on bias attributable to specific test items or to differences in test score distributions across racial or ethnic groups. In contrast, the…

  20. Observed-Score Equating as a Test Assembly Problem.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Luecht, Richard M.

    1998-01-01

    Derives a set of linear conditions of item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admissions Test (LSAT). (SLD)

  1. Dtest Testing Software

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
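
    The scanning-and-dispatching behaviour described above can be sketched in a few lines of Python; this is not the dtest source, and the directory pattern, config-file name, and file format are invented for illustration.

      import fnmatch
      import os
      import subprocess
      from multiprocessing import Pool

      def find_test_dirs(root, pattern="test_*"):
          """Walk root and collect directories whose names match the pattern."""
          hits = []
          for dirpath, dirnames, _ in os.walk(root):
              hits.extend(os.path.join(dirpath, d)
                          for d in dirnames if fnmatch.fnmatch(d, pattern))
          return hits

      def run_tests_in(test_dir, config_name="TESTCONFIG"):
          """Run one shell command per non-comment line of a simple config file."""
          config = os.path.join(test_dir, config_name)
          if not os.path.isfile(config):
              return test_dir, None
          codes = []
          with open(config) as fh:
              for line in fh:
                  cmd = line.strip()
                  if cmd and not cmd.startswith("#"):
                      codes.append(subprocess.run(cmd, shell=True, cwd=test_dir).returncode)
          return test_dir, codes

      if __name__ == "__main__":
          with Pool() as pool:                   # distribute tests over CPU cores
              for d, codes in pool.map(run_tests_in, find_test_dirs(".")):
                  print(d, "PASS" if codes and all(c == 0 for c in codes) else codes)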

  2. Using Patterns of Summed Scores in Paper-and-Pencil Tests and Computer-Adaptive Tests to Detect Misfitting Item Score Patterns

    ERIC Educational Resources Information Center

    Meijer, Rob R.

    2004-01-01

    Two new methods have been proposed to determine unexpected sum scores on sub-tests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method where the probability for each score combination was calculated using a…
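
    The kind of conservative tail probability described (sketched here with SciPy's hypergeometric distribution; the exact statistic in the paper may be defined differently) asks how surprising a testlet score would be if the examinee's correct answers were spread at random over the whole test:

      from scipy.stats import hypergeom

      def testlet_tail_prob(n_items, total_correct, testlet_size, testlet_correct):
          """P(testlet score >= observed) when total_correct answers are placed
          at random among n_items (hypergeometric model)."""
          return hypergeom.sf(testlet_correct - 1, n_items, total_correct, testlet_size)

      # Example: 12 correct on a 60-item test, 6 of them on an 8-item testlet.
      print(testlet_tail_prob(60, 12, 8, 6))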

  3. 10 CFR Appendix A to Subpart K of... - Uniform Test Method for Measuring the Energy Consumption of Distribution Transformers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Uniform Test Method is used to test more than one unit of a basic model to determine the efficiency of... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2014-01-01 2014-01-01 false Uniform Test Method for Measuring the Energy Consumption...

  4. The Influence of Drug Testing and Benefit-Based Distribution of Opioid Substitution Therapy on Drug Abstinence.

    PubMed

    Gabrovec, Branko

    2015-01-01

    The objective of our research was to discover whether the new approach to urine drug testing has a positive effect on users' abstinence, users' treatment, and their cooperation, while remaining user-friendly, and whether this approach is more cost-effective. The centers are focused on providing high-quality treatment within a cost-efficient program. In this study, we focus on the influence of drug testing and benefit-based distribution of opioid substitution therapy (BBDOST) on drug abstinence. The purpose of this study was to find any possible positive effect of modified distribution of the therapy and illicit drug testing on the number of users who are abstinent from illicit drugs and users who are not abstinent from illicit drugs as well as the users' opinion on BBDOST and testing. We are also interested in a difference in abstinence rates between those on BBDOST and those not receiving BBDOST. In 2010, the method of drug testing at the center was changed (less frequent and random drug testing) to enable its users faster access to BBDOST (take-home therapy). It was found that the number of drug-abstinent program participants has increased from initial 44.5% (2010) to 54.1% (2014). According to the program participants, the new method allows them to achieve and maintain abstinence from drugs more easily. In addition, they are also satisfied with the modified way of drug testing. This opinion does not change with age, gender, and acquired benefits.

  5. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…

  6. Distributing an Online Catalog on CD-ROM...The University of Illinois Experience.

    ERIC Educational Resources Information Center

    Watson, Paula D.; Golden, Gary A.

    1987-01-01

    Description of the planning of a project designed to test the feasibility of distributing a statewide union catalog database on optical disk discusses the relationship of the project's goals to those of statewide library development; dealing with vendors in a volatile, high technology industry; and plans for testing and evaluation. (EM)

  7. 10 CFR 429.70 - Alternative methods for determining energy efficiency or energy use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... of commercial HVAC and WH equipment, distribution transformers, and central air conditioners and heat... overrate the efficiency of a basic model. For each basic model of distribution transformer that has a... voltage at which the transformer is rated to operate. (b) Testing. Testing for each covered product or...

  8. Engineering a Multi-Purpose Test Collection for Web Retrieval Experiments.

    ERIC Educational Resources Information Center

    Bailey, Peter; Craswell, Nick; Hawking, David

    2003-01-01

    Describes a test collection that was developed as a multi-purpose testbed for experiments on the Web in distributed information retrieval, hyperlink algorithms, and conventional ad hoc retrieval. Discusses inter-server connectivity, integrity of server holdings, inclusion of documents related to a wide spread of likely queries, and distribution of…

  9. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.

  10. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
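
    The paper's central point, that accuracy depends on the underlying sample distribution even when the relationship between the two methods is fixed, is easy to reproduce numerically; the cutoff, noise level, and variable names below are illustrative only.

      import numpy as np

      rng = np.random.default_rng(0)

      def accuracy(gold, rapid, cutoff=200.0):
          """Fraction of cases the rapid method classifies the same way as the
          gold standard around a hypothetical 200 mg/dL cutoff."""
          return float(np.mean((rapid > cutoff) == (gold > cutoff)))

      noise = rng.normal(0.0, 15.0, 5000)            # same method error in both samples
      gold_narrow = rng.normal(200.0, 10.0, 5000)    # values bunched near the cutoff
      gold_wide = rng.normal(200.0, 60.0, 5000)      # values spread far from the cutoff

      print(accuracy(gold_narrow, gold_narrow + noise))   # markedly lower accuracy
      print(accuracy(gold_wide, gold_wide + noise))       # markedly higher accuracy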

  11. [Development of a microenvironment test chamber for airborne microbe research].

    PubMed

    Zhan, Ningbo; Chen, Feng; Du, Yaohua; Cheng, Zhi; Li, Chenyu; Wu, Jinlong; Wu, Taihu

    2017-10-01

    Airborne microbes are one of the most important indicators of environmental cleanliness. However, the particular requirements of clean operating environments and controlled experimental environments often limit airborne microbe research. This paper describes the design and implementation of a microenvironment test chamber for airborne microbe research under normal test conditions. Numerical simulation with Fluent showed that airborne microbes were evenly dispersed in the upper part of the test chamber and had a bottom-up concentration growth distribution. Based on the simulation results, a verification experiment was carried out using five sampling points at different spatial positions in the test chamber. Experimental results showed that the average particle concentration at every sampling point reached 10^7 counts/m^3 after 5 minutes of dispersing Staphylococcus aureus, and all sampling points showed a consistent concentration distribution. The airborne microbe concentration in the upper chamber was slightly higher than in the middle chamber, which in turn was slightly higher than in the bottom chamber. These results are consistent with the numerical simulation and demonstrate that the system is well suited to airborne microbe research.

  12. Fundamentals of Research Data and Variables: The Devil Is in the Details.

    PubMed

    Vetter, Thomas R

    2017-10-01

    Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is comprised of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, and the ratios of the values on the measurement scale make sense. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess if a set of data have a normal distribution (display "normality"). The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess for data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
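
    A minimal sketch of the normality checks and the transformation remedy mentioned above, using SciPy (the simulated skewed data and parameter values are illustrative):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      data = rng.gamma(shape=2.0, scale=3.0, size=200)   # skewed example data

      # Quantitative checks: Shapiro-Wilk and Kolmogorov-Smirnov
      w_stat, w_p = stats.shapiro(data)
      ks_stat, ks_p = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))
      print(f"Shapiro-Wilk p = {w_p:.4f}, Kolmogorov-Smirnov p = {ks_p:.4f}")

      # Graphical check: correlation of the Q-Q (probability) plot
      (_, _), (slope, intercept, r) = stats.probplot(data, dist="norm")
      print(f"Q-Q plot correlation r = {r:.3f}")

      # One remedy noted in the text: transform all the data values
      log_data = np.log(data)
      print(f"Shapiro-Wilk p after log transform = {stats.shapiro(log_data)[1]:.4f}")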

  13. The malaria testing and treatment market in Kinshasa, Democratic Republic of the Congo, 2013.

    PubMed

    Mpanya, Godéfroid; Tshefu, Antoinette; Likwela, Joris Losimba

    2017-02-28

    The Democratic Republic of Congo (DRC) is one of the two leading contributors to the global burden of disease due to malaria. This paper describes the malaria testing and treatment market in the nation's capital province of Kinshasa, including availability of malaria testing and treatment and relative anti-malarial market share for the public and private sector. A malaria medicine outlet survey was conducted in Kinshasa province in 2013. Stratified multi-stage sampling was used to select areas for the survey. Within sampled areas, all outlets with the potential to sell or distribute anti-malarials in the public and private sector were screened for eligibility. Among outlets with anti-malarials or malaria rapid diagnostic tests (RDT) in stock, a full audit of all available products was conducted. Information collected included product information (e.g. active ingredients, brand name), amount reportedly distributed to patients in the past week, and retail price. In total, 3364 outlets were screened for inclusion across Kinshasa and 1118 outlets were eligible for the study. Among all screened outlets in the private sector, only about one in ten (12.1%) were stocking quality-assured Artemisinin-based Combination Therapy (ACT) medicines. Among all screened public sector facilities, 24.5% had both confirmatory testing and quality-assured ACT available, and 20.2% had sulfadoxine-pyrimethamine (SP) available for intermittent preventive therapy during pregnancy (IPTp). The private sector distributed the majority of anti-malarials in Kinshasa (96.7%), typically through drug stores (89.1% of the total anti-malarial market). Non-artemisinin therapies were the most commonly distributed anti-malarial (50.1% of the total market), followed by non quality-assured ACT medicines (38.5%). The median price of an adult quality-assured ACT was $6.59, more expensive than non quality-assured ACT ($3.71) and SP ($0.44). Confirmatory testing was largely not available in the private sector (1.1%). While the vast majority of anti-malarial medicines distributed to patients in Kinshasa province are sold within the private sector, availability of malaria testing and appropriate treatment for malaria is alarmingly low. There is a critical need to improve access to confirmatory testing and quality-assured ACT in the private sector. Widespread availability and distribution of non quality-assured ACT and non-artemisinin therapies must be addressed to ensure effective malaria case management.

  14. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  15. The Skillings-Mack test (Friedman test when there are missing data).

    PubMed

    Chatfield, Mark; Mander, Adrian

    2009-04-01

    The Skillings-Mack statistic (Skillings and Mack, 1981, Technometrics 23: 171-177) is a general Friedman-type statistic that can be used in almost any block design with an arbitrary missing-data structure. The missing data can be either missing by design, for example, an incomplete block design, or missing completely at random. The Skillings-Mack test is equivalent to the Friedman test when there are no missing data in a balanced complete block design, and the Skillings-Mack test is equivalent to the test suggested in Durbin (1951, British Journal of Psychology, Statistical Section 4: 85-90) for a balanced incomplete block design. The Friedman test was implemented in Stata by Goldstein (1991, Stata Technical Bulletin 3: 26-27) and further developed in Goldstein (2005, Stata Journal 5: 285). This article introduces the skilmack command, which performs the Skillings-Mack test. The skilmack command is also useful when there are many ties or equal ranks (N.B. the Friedman statistic compared with the chi-squared distribution will give a conservative result), as well as for small samples; appropriate results can be obtained by simulating the distribution of the test statistic under the null hypothesis.
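
    For complete balanced blocks the equivalent Friedman test is available directly in SciPy; a small sketch (the data are invented) is shown below, with the understanding that the Skillings-Mack statistic, not this one, is what handles missing cells:

      import numpy as np
      from scipy import stats

      # Rows are blocks (subjects), columns are treatments; no missing cells.
      blocks = np.array([[8.0, 6.5, 7.2],
                         [5.1, 4.9, 6.0],
                         [7.3, 6.8, 7.9],
                         [6.2, 5.5, 6.6]])

      stat, p = stats.friedmanchisquare(*blocks.T)   # one argument per treatment
      print(f"Friedman chi-squared = {stat:.3f}, p = {p:.4f}")

      # With missing cells, many ties, or small samples, the text instead
      # recommends the Skillings-Mack statistic and/or simulating the null
      # distribution of the test statistic.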

  16. Middleware Trade Study for NASA Domain

    NASA Technical Reports Server (NTRS)

    Bowman, Dan

    2007-01-01

    This presentation summarizes preliminary results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are: the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  17. Statistical analysis of multivariate atmospheric variables. [cloud cover]

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  18. Variation in healthcare services for specialist genetic testing and implications for planning genetic services: the example of inherited retinal dystrophy in the English NHS.

    PubMed

    Harrison, Mark; Birch, Stephen; Eden, Martin; Ramsden, Simon; Farragher, Tracey; Payne, Katherine; Hall, Georgina; Black, Graeme Cm

    2015-04-01

    This study aims to identify and quantify the extent of current variation in service provision of a genetic testing service for dominant and X-linked retinal dystrophies in the English National Health Service (NHS). National audit data (all test requests and results (n = 1839) issued between 2003 and 2011) and survey of English regional genetic testing services were used. Age- and gender-adjusted standardised testing rates were calculated using indirect standardisation, and survey responses were transcribed verbatim and data collated and summarised. The cumulative incidence rate of testing in England was 4.5 per 100,000 population for males and 2.6 per 100,000 population for females. The standardised testing rate (STR) varied widely between regions of England, being particularly low in the North-east (STR 0.485), with half as many tests as expected based on the size and demographic distribution of the population and high in the South-east (STR 1.355), with 36 % more tests than expected. Substantial and significantly different rates of testing were found between regional populations. Specific policy mechanisms to promote, monitor and evaluate the regional distribution of access to genetic and genomic testing are required. However, commissioners will require information on the scope and role of genetic services and the population at risk of the conditions for which patients are tested.
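
    The standardised testing rate reported here comes from indirect standardisation: apply national age- and sex-specific testing rates to the region's population to get an expected count, then divide the observed count by it. A toy calculation (all numbers invented) looks like this:

      import numpy as np

      national_rate = np.array([6.0, 4.0, 1.5]) / 100_000   # tests per person, by age band
      region_pop = np.array([300_000, 500_000, 200_000])    # regional population, by age band
      region_tests_observed = 38

      expected = float(np.sum(national_rate * region_pop))  # tests expected from national rates
      str_value = region_tests_observed / expected          # standardised testing rate
      print(f"expected = {expected:.1f}, STR = {str_value:.2f}")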

  19. The SSM/PMAD automated test bed project

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.

    1991-01-01

    The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).

  20. Description and calibration of the Langley unitary plan wind tunnel

    NASA Technical Reports Server (NTRS)

    Jackson, C. M., Jr.; Corlett, W. A.; Monta, W. J.

    1981-01-01

    The two test sections of the Langley Unitary Plan Wind Tunnel were calibrated over the operating Mach number range from 1.47 to 4.63. The results of the calibration are presented along with a description of the facility and its operational capability. The calibrations include Mach number and flow angularity distributions in both test sections at selected Mach numbers and tunnel stagnation pressures. Calibration data are also presented on turbulence, test-section boundary layer characteristics, moisture effects, blockage, and stagnation-temperature distributions. The facility is described in detail, including dimensions and capacities where appropriate, and examples of special test capabilities are presented. The operating parameters are fully defined and the power consumption characteristics are discussed.

  1. Operating condition and geometry effects on low-frequency afterburner combustion instability in a turbofan at altitude

    NASA Technical Reports Server (NTRS)

    Cullom, R. R.; Johnsen, R. L.

    1979-01-01

    Three afterburner configurations were tested in a low-bypass-ratio turbofan engine to determine the effect of various fuel distributions, inlet conditions, flameholder geometry, and fuel injection location on combustion instability. Tests were conducted at simulated flight conditions of Mach 0.75 and 1.3 at altitudes from 11,580 to 14,020 m (38,000 to 46,000 ft). In these tests combustion instability with frequency from 28 to 90 Hz and peak-to-peak pressure amplitude up to 46.5 percent of the afterburner inlet total pressure level was encountered. Combustion instability was suppressed in these tests by varying the fuel distribution in the afterburner.

  2. 16 CFR 1212.14 - Qualification testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Qualification testing. 1212.14 Section 1212.14 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS...) Testing. Before any manufacturer or importer of multi-purpose lighters distributes multi-purpose lighters...

  3. Space power distribution system technology. Volume 3: Test facility design

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Cannady, M. D.; Cassinelli, J. E.; Farber, B. F.; Lurie, C.; Fleck, G. W.; Lepisto, J. W.; Messner, A.; Ritterman, P. F.

    1983-01-01

    The AMPS test facility is a major tool in the attainment of more economical space power. The ultimate goals of the test facility, its primary functional requirements and conceptual design, and the major equipment it contains are discussed.

  4. Batch Tests To Determine Activity Distribution and Kinetic Parameters for Acetate Utilization in Expanded-Bed Anaerobic Reactors

    PubMed Central

    Fox, Peter; Suidan, Makram T.

    1990-01-01

    Batch tests to measure maximum acetate utilization rates were used to determine the distribution of acetate utilizers in expanded-bed sand and expanded-bed granular activated carbon (GAC) reactors. The reactors were fed a mixture of acetate and 3-ethylphenol, and they contained the same predominant aceticlastic methanogen, Methanothrix sp. Batch tests were performed both on the entire reactor contents and with media removed from the reactors. Results indicated that activity was evenly distributed within the GAC reactors, whereas in the sand reactor a sludge blanket on top of the sand bed contained approximately 50% of the activity. The Monod half-velocity constant (Ks) for the acetate-utilizing methanogens in two expanded-bed GAC reactors was searched for by combining steady-state results with batch test data. All parameters necessary to develop a model with Monod kinetics were experimentally determined except for Ks. However, Ks was a function of the effluent 3-ethylphenol concentration, and batch test results demonstrated that maximum acetate utilization rates were not a function of the effluent 3-ethylphenol concentration. Addition of a competitive inhibition term into the Monod expression predicted the dependence of Ks on the effluent 3-ethylphenol concentration. A two-parameter search determined a Ks of 8.99 mg of acetate per liter and a Ki of 2.41 mg of 3-ethylphenol per liter. Model predictions were in agreement with experimental observations for all effluent 3-ethylphenol concentrations. Batch tests measured the activity for a specific substrate and determined the distribution of activity in the reactor. The use of steady-state data in conjunction with batch test results reduced the number of unknown kinetic parameters and thereby reduced the uncertainty in the results and the assumptions made. PMID:16348175

  5. Batch tests to determine activity distribution and kinetic parameters for acetate utilization in expanded-bed anaerobic reactors.

    PubMed

    Fox, P; Suidan, M T

    1990-04-01

    Batch tests to measure maximum acetate utilization rates were used to determine the distribution of acetate utilizers in expanded-bed sand and expanded-bed granular activated carbon (GAC) reactors. The reactors were fed a mixture of acetate and 3-ethylphenol, and they contained the same predominant aceticlastic methanogen, Methanothrix sp. Batch tests were performed both on the entire reactor contents and with media removed from the reactors. Results indicated that activity was evenly distributed within the GAC reactors, whereas in the sand reactor a sludge blanket on top of the sand bed contained approximately 50% of the activity. The Monod half-velocity constant (K(s)) for the acetate-utilizing methanogens in two expanded-bed GAC reactors was searched for by combining steady-state results with batch test data. All parameters necessary to develop a model with Monod kinetics were experimentally determined except for K(s). However, K(s) was a function of the effluent 3-ethylphenol concentration, and batch test results demonstrated that maximum acetate utilization rates were not a function of the effluent 3-ethylphenol concentration. Addition of a competitive inhibition term into the Monod expression predicted the dependence of K(s) on the effluent 3-ethylphenol concentration. A two-parameter search determined a K(s) of 8.99 mg of acetate per liter and a K(i) of 2.41 mg of 3-ethylphenol per liter. Model predictions were in agreement with experimental observations for all effluent 3-ethylphenol concentrations. Batch tests measured the activity for a specific substrate and determined the distribution of activity in the reactor. The use of steady-state data in conjunction with batch test results reduced the number of unknown kinetic parameters and thereby reduced the uncertainty in the results and the assumptions made.
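
    The two-parameter search described in both versions of this record amounts to fitting the competitive-inhibition form of the Monod expression, in which the apparent half-velocity constant grows as Ks*(1 + I/Ki) with inhibitor concentration I. A sketch with SciPy follows; the rate data are invented, and the starting values are simply placed near the reported estimates.

      import numpy as np
      from scipy.optimize import curve_fit

      def monod_competitive(X, Vmax, Ks, Ki):
          """Monod rate with a competitive-inhibition term."""
          S, I = X                                  # acetate and 3-ethylphenol concentrations
          return Vmax * S / (Ks * (1.0 + I / Ki) + S)

      # Illustrative data only (mg/L and relative rate units), not the reactor measurements.
      S = np.array([5.0, 10.0, 20.0, 40.0, 5.0, 10.0, 20.0, 40.0])
      I = np.array([0.0, 0.0, 0.0, 0.0, 2.0, 2.0, 2.0, 2.0])
      rate = np.array([0.70, 1.05, 1.38, 1.60, 0.45, 0.78, 1.15, 1.42])

      (Vmax, Ks, Ki), _ = curve_fit(monod_competitive, (S, I), rate, p0=[2.0, 9.0, 2.4])
      print(f"Vmax = {Vmax:.2f}, Ks = {Ks:.2f} mg acetate/L, Ki = {Ki:.2f} mg 3-ethylphenol/L")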

  6. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, regional analysis of this parameter is important. However, the ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method performed similarly for regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution types of the parameters that affect ET0 can affect the distribution of reference evapotranspiration.
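
    The PPCC test used above has a compact definition: fit a candidate distribution, compute its quantiles at plotting positions, and correlate them with the sorted data; the distribution with the highest correlation wins. A sketch with SciPy (synthetic stand-in data and Gringorten plotting positions, both assumptions of this illustration):

      import numpy as np
      from scipy import stats

      def ppcc(data, dist):
          """Probability plot correlation coefficient for a fitted candidate distribution."""
          x = np.sort(np.asarray(data, dtype=float))
          n = len(x)
          pp = (np.arange(1, n + 1) - 0.44) / (n + 0.12)   # Gringorten plotting positions
          params = dist.fit(x)
          return np.corrcoef(x, dist.ppf(pp, *params))[0, 1]

      rng = np.random.default_rng(2)
      annual_et0 = rng.gamma(shape=40.0, scale=30.0, size=55)   # stand-in for station ET0 (mm)

      for name, dist in [("Pearson III", stats.pearson3),
                         ("normal", stats.norm),
                         ("Gumbel", stats.gumbel_r)]:
          print(f"{name:11s} PPCC = {ppcc(annual_et0, dist):.4f}")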

  7. Electrical Subsystems Flight Test Handbook

    DTIC Science & Technology

    1984-01-01

    The available record text consists of a distribution notice (release of the handbook to the public at large, or by DDC to the National Technical Information Service) and table-of-contents excerpts covering abnormal mode, emergency mode, instrumentation, test information sheets, integration with the flight test program, data measurement, analysis and evaluation, references, and example appendices (test information sheet; test plan safety review).

  8. Liquid Rocket Engine Testing

    DTIC Science & Technology

    2016-10-21

    Briefing charts for a presentation on liquid rocket engine testing, given at the SFTE Symposium on 21 October 2016 by Capt Jake Robertson, USAF AFRL (distribution unlimited, PA Clearance 16493). The charts note that engines and their components are extensively static-tested during development.

  9. When Does Testing Enhance Retention? A Distribution-Based Interpretation of Retrieval as a Memory Modifier

    ERIC Educational Resources Information Center

    Halamish, Vered; Bjork, Robert A.

    2011-01-01

    Tests, as learning events, can enhance subsequent recall more than do additional study opportunities, even without feedback. Such advantages of testing tend to appear, however, only at long retention intervals and/or when criterion tests stress recall, rather than recognition, processes. We propose that the interaction of the benefits of testing…

  10. 40 CFR 86.230-94 - Test sequence: general requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... testing. (2) The ambient temperature reported shall be a simple average of the test cell temperatures... cell temperature shall be 20 °F±3 °F (−7 °C±1.7 °C) when measured in accordance with paragraph (e)(2... approximately level during all phases of the test sequence to prevent abnormal fuel distribution. (e) Engine...

  11. Evaluation of distributed gas cooling of pressurized PAFC for utility power generation

    NASA Technical Reports Server (NTRS)

    Farooque, M.; Hooper, M.; Maru, H.

    1981-01-01

    A proof-of-concept test for a gas-cooled pressurized phosphoric acid fuel cell is described. After initial feasibility studies in short stacks, two 10 kW stacks are tested. Progress includes: (1) completion of design of the test stations with a recirculating gas cooling loop; (2) atmospheric testing of the baseline stack.

  12. Pressure distributions obtained on a 0.10-scale model of the Space Shuttle Orbiter's forebody in the Ames Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Siemers, P. M., III; Henry, M. W.

    1986-01-01

    Pressure distribution test data obtained on a 0.10-scale model of the forward fuselage of the Space Shuttle Orbiter are presented without analysis. The tests were completed in the Ames Unitary Plan Wind Tunnel (UPWT). The UPWT tests were conducted in two different test sections operating in the continuous mode, the 8 x 7 feet and 9 x 7 feet test sections. Each test section has its own Mach number range, 1.6 to 2.5 and 2.5 to 3.5 for the 9 x 7 feet and 8 x 7 feet test section, respectively. The test Reynolds number ranged from 1.6 to 2.5 x 10^6 per foot and 0.6 to 2.0 x 10^6 per foot, respectively. The tests were conducted in support of the development of the Shuttle Entry Air Data System (SEADS). In addition to modeling the 20 SEADS orifices, the wind-tunnel model was also instrumented with orifices to match Development Flight Instrumentation (DFI) port locations that existed on the Space Shuttle Columbia (OV-102) during the Orbiter Flight test program. This DFI simulation has provided a means for comparisons between reentry flight pressure data and wind-tunnel and computational data.

  13. Interdigital Capacitance Local Non-Destructive Examination of Nuclear Power Plant Cable for Aging Management Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glass, Samuel W.; Fifield, Leonard S.; Bowler, Nicola

    This Pacific Northwest National Laboratory milestone report describes progress to date on the investigation of non-destructive test methods focusing on local cable insulation and jacket testing using an interdigital capacitance (IDC) approach. Earlier studies have assessed a number of non-destructive examination (NDE) methods for bulk, distributed, and local cable tests. A typical test strategy is to perform bulk assessments of the cable response using dielectric spectroscopy, tan delta, or partial discharge, followed by distributed tests such as time domain reflectometry or frequency domain reflectometry to identify the most likely defect location, followed by a local test that can include visual inspection, indenter modulus tests, Fourier transform infrared spectroscopy (FTIR), or Fourier transform near-infrared spectroscopy (FTNIR). If a cable is covered with an overlaying jacket, the jacket's condition is likely to be more severely degraded than the underlying insulation. None of the above local test approaches can be used to evaluate insulation beneath a cable jacket. Since the jacket's function is neither structural nor electrical, a degraded jacket may not have any significance regarding the cable's performance or suitability for service. IDC measurements offer a promising alternative or complement to these local test approaches, including the possibility of testing insulation beneath an overlaying jacket.

  14. An Ignition Torch Based on Photoignition of Carbon Nanotubes at Elevated Pressure (Briefing Charts)

    DTIC Science & Technology

    2016-01-04

    The record text consists of briefing-chart captions: a 10 mg low-pressure ignition torch shown as it ignites a fuel spray; PITCH used to ignite subscale test rockets at 130 K and approximately 35 atm; and high-pressure PITCH applied to an H2/O2 subscale rocket injector, with a high-pressure chamber for testing the subscale injector connected to a high-pressure test combustion chamber via a 20 cm extension tube (OD = 6 mm). Distribution Statement A: approved for public release; distribution is unlimited.

  15. A Study of the Application of the Lognormal and Gamma Distributions to Corrective Maintenance Repair Time Data.

    DTIC Science & Technology

    1982-10-01

    For the lognormal methods, the test methods sometimes give different results. The K-S test and the chi-square... significant difference among the three test methods. A previous study has been done using 24 data sets of electronic systems and equipment, using only the W... are suitable descriptors for corrective maintenance repair times, and to estimate the difference caused in assuming an exponential distribution for...
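
    The comparison described in the title, fitting lognormal and gamma models to repair times and checking them with goodness-of-fit tests such as the K-S test, can be sketched directly with SciPy (the repair-time data below are simulated, not the report's data sets):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      repair_hours = rng.lognormal(mean=0.5, sigma=0.8, size=60)   # stand-in repair times

      # Fit both candidate distributions (location fixed at zero for repair times).
      ln_params = stats.lognorm.fit(repair_hours, floc=0)
      ga_params = stats.gamma.fit(repair_hours, floc=0)

      # Kolmogorov-Smirnov goodness-of-fit test against each fitted distribution.
      for name, dist_name, params in [("lognormal", "lognorm", ln_params),
                                      ("gamma", "gamma", ga_params)]:
          d_stat, p = stats.kstest(repair_hours, dist_name, args=params)
          print(f"{name:9s} K-S D = {d_stat:.3f}, p = {p:.3f}")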

  16. Impact Testing of Explosives and Propellants

    DTIC Science & Technology

    1992-06-01

    in order to better understand the test results. The physical behavior of the drop weight impact test can be modeled on a computer with all of the... spring elements of the simplified impact machine.

  17. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.
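
    The distribution being inflated here has a simple closed form up to its normalizing constant: P(X = k) is proportional to lambda**k / (k!)**nu, with an extra point mass at zero. A small sketch (parameter values are illustrative, and the normalizing sum is truncated):

      import numpy as np
      from scipy.special import gammaln

      def cmp_pmf(k, lam, nu, kmax=200):
          """Conway-Maxwell Poisson pmf with a truncated normalizing sum."""
          j = np.arange(kmax + 1)
          log_terms = j * np.log(lam) - nu * gammaln(j + 1)
          log_z = np.logaddexp.reduce(log_terms)          # log of the normalizing constant
          return np.exp(k * np.log(lam) - nu * gammaln(k + 1) - log_z)

      def zicmp_pmf(k, pi, lam, nu):
          """Zero-inflated CMP: extra probability mass pi at zero."""
          base = cmp_pmf(np.asarray(k), lam, nu)
          return np.where(np.asarray(k) == 0, pi + (1 - pi) * base, (1 - pi) * base)

      print(zicmp_pmf(np.arange(6), pi=0.2, lam=1.5, nu=0.8).round(4))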

  18. GPR-Based Water Leak Models in Water Distribution Systems

    PubMed Central

    Ayala-Cabrera, David; Herrera, Manuel; Izquierdo, Joaquín; Ocaña-Levario, Silvia J.; Pérez-García, Rafael

    2013-01-01

    This paper addresses the problem of leakage in water distribution systems through the use of ground penetrating radar (GPR) as a nondestructive method. Laboratory tests are performed to extract features of water leakage from the obtained GPR images. Moreover, a test in a real-world urban system under real conditions is performed. Feature extraction is performed by interpreting GPR images with the support of a pre-processing methodology based on an appropriate combination of statistical methods and multi-agent systems. The results of these tests are presented, interpreted, analyzed and discussed in this paper.

  19. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
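
    The rank-based inverse normal transformation used for one of the traits can be written in a few lines; the Blom offset below is one common choice and is an assumption of this sketch rather than something specified in the abstract.

      import numpy as np
      from scipy import stats

      def rank_inverse_normal(x, offset=0.375):
          """Map ranks to standard normal quantiles (Blom offset by default)."""
          x = np.asarray(x, dtype=float)
          ranks = stats.rankdata(x)                        # average ranks for ties
          p = (ranks - offset) / (len(x) - 2 * offset + 1)
          return stats.norm.ppf(p)

      rng = np.random.default_rng(4)
      gamma_trait = rng.gamma(shape=1.0, scale=2.0, size=1000)    # skewed trait
      print(stats.skew(gamma_trait), stats.skew(rank_inverse_normal(gamma_trait)))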

  20. Modeled ground water age distributions

    USGS Publications Warehouse

    Woolfenden, Linda R.; Ginn, Timothy R.

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.

  1. A telemedicine model for integrating point-of-care testing into a distributed health-care environment.

    PubMed

    Villalar, J L; Arredondo, M T; Meneu, T; Traver, V; Cabrera, M F; Guillen, S; Del Pozo, F

    2002-01-01

    Centralized testing demands costly laboratories, which are inefficient and may provide poor services. Recent advances make it feasible to move clinical testing nearer to patients and the requesting physicians, thus reducing the time to treatment. Internet technologies can be used to create a virtual laboratory information system in a distributed health-care environment. This allows clinical testing to be transferred to a cooperative scheme of several point-of-care testing (POCT) nodes. Two pilot virtual laboratories were established, one in Italy (AUSL Modena) and one in Greece (Athens Medical Centre). They were constructed on a three-layer model to allow both technical and clinical verification. Different POCT devices were connected. The pilot sites produced good preliminary results in relation to user acceptance, efficiency, convenience and costs. Decentralized laboratories can be expected to become cost-effective.

  2. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra and Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  3. Aerodynamic characteristics of the National Launch System (NLS) 1 1/2 stage launch vehicle

    NASA Technical Reports Server (NTRS)

    Springer, A. M.; Pokora, D. C.

    1994-01-01

    The National Aeronautics and Space Administration (NASA) is studying ways of assuring more reliable and cost-effective means to space. One launch system studied was the NLS, which included the 1 1/2 stage vehicle. This document encompasses the aerodynamic characteristics of the 1 1/2 stage vehicle. To support the detailed configuration definition, two wind tunnel tests were conducted in the NASA Marshall Space Flight Center's 14x14-Inch Trisonic Wind Tunnel during 1992. The tests were a static stability and a pressure test, each utilizing 0.004 scale models. The static stability test resulted in the forces and moments acting on the vehicle. The aerodynamics for the reference configuration with and without feedlines and an evaluation of three proposed engine shroud configurations were also determined. The pressure test resulted in pressure distributions over the reference vehicle with and without feedlines including the reference engine shrouds. These pressure distributions were integrated and balanced to the static stability coefficients resulting in distributed aerodynamic loads on the vehicle. The wind tunnel tests covered a Mach range of 0.60 to 4.96. These ascent flight aerodynamic characteristics provide the basis for trajectory and performance analysis, loads determination, and guidance and control evaluation.

  4. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    ERIC Educational Resources Information Center

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
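
    The randomization logic behind such an activity is short enough to show directly: compute the observed F statistic, then repeatedly shuffle the group labels and recompute it to build a null reference distribution. The group sizes, effect sizes, and replication count below are illustrative.

      import numpy as np
      from scipy.stats import f_oneway

      rng = np.random.default_rng(5)
      groups = [rng.normal(10.0, 2.0, 15), rng.normal(11.0, 2.0, 15), rng.normal(10.5, 2.0, 15)]

      observed_f = f_oneway(*groups).statistic
      pooled = np.concatenate(groups)
      sizes = [len(g) for g in groups]

      null_f = []
      for _ in range(5000):                      # randomization null distribution
          rng.shuffle(pooled)
          pieces = np.split(pooled, np.cumsum(sizes)[:-1])
          null_f.append(f_oneway(*pieces).statistic)

      p_value = np.mean(np.array(null_f) >= observed_f)
      print(f"observed F = {observed_f:.2f}, randomization p = {p_value:.4f}")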

  5. Using Response Times for Item Selection in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2008-01-01

    Response times on items can be used to improve item selection in adaptive testing provided that a probabilistic model for their distribution is available. In this research, the author used a hierarchical modeling framework with separate first-level models for the responses and response times and a second-level model for the distribution of the…

  6. THE DISTRIBUTION OF CHLORPYRIFOS FOLLOWING A CRACK AND CREVICE TYPE APPLICATION IN THE U.S. EPA INDOOR AIR QUALITY TEST HOUSE

    EPA Science Inventory

    A study was conducted in the U.S. EPA Indoor Air Quality Test House to determine the spatial and temporal distribution of chlorpyrifos following a professional crack and crevice application in the kitchen. Following the application, measurements were made in the kitchen, den a...

  7. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  8. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  9. Testing the Race Model Inequality in Redundant Stimuli with Variable Onset Asynchrony

    ERIC Educational Resources Information Center

    Gondan, Matthias

    2009-01-01

    In speeded response tasks with redundant signals, parallel processing of the signals is tested by the race model inequality. This inequality states that given a race of two signals, the cumulative distribution of response times for redundant stimuli never exceeds the sum of the cumulative distributions of response times for the single-modality…

  10. Space shuttle: Heat transfer rate distributions on McDonnell-Douglas delta wing orbiter determined by phase-change paint technique for nominal Mach number of 8

    NASA Technical Reports Server (NTRS)

    Matthews, R. K.; Martindale, W. R.; Warmbrod, J. D.

    1972-01-01

    The results are reported of the phase-change paint tests conducted at Mach 8, to determine the aerodynamic heat transfer distributions on the McDonnell Douglas delta wing orbiter. Model details, test conditions, and reduced heat transfer data are presented.

  11. Pseudo Bayes Estimates for Test Score Distributions and Chained Equipercentile Equating. Research Report. ETS RR-09-47

    ERIC Educational Resources Information Center

    Moses, Tim; Oh, Hyeonjoo J.

    2009-01-01

    Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…

  12. Preliminary results from the White Sands Missile Range sonic boom propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.; Devilbiss, David W.

    1992-01-01

    Sonic boom bow shock amplitude and rise time statistics from a recent sonic boom propagation experiment are presented. Distributions of bow shock overpressure and rise time measured under different atmospheric turbulence conditions for the same test aircraft are quite different. The peak overpressure distributions are skewed positively, indicating a tendency for positive deviations from the mean to be larger than negative deviations. Standard deviations of overpressure distributions measured under moderate turbulence were 40 percent larger than those measured under low turbulence. As turbulence increased, the difference between the median and the mean increased, indicating increased positive overpressure deviations. The effect of turbulence was more readily seen in the rise time distributions. Under moderate turbulence conditions, the rise time distribution means were larger by a factor of 4 and the standard deviations were larger by a factor of 3 from the low turbulence values. These distribution changes resulted in a transition from a peaked appearance of the rise time distribution for the morning to a flattened appearance for the afternoon rise time distributions. The sonic boom propagation experiment consisted of flying three types of aircraft supersonically over a ground-based microphone array with concurrent measurements of turbulence and other meteorological data. The test aircraft were a T-38, an F-15, and an F-111, and they were flown at speeds of Mach 1.2 to 1.3, 30,000 feet above a 16 element, linear microphone array with an inter-element spacing of 200 ft. In two weeks of testing, 57 supersonic passes of the test aircraft were flown from early morning to late afternoon.

  13. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    PubMed

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.

  14. Voltage stress effects on microcircuit accelerated life test failure rates

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1976-01-01

    The applicability of Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface related failure data resulted in bimodal failure distributions comprising two lognormal distributions; a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
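
    The Arrhenius relation referenced in this record is commonly reduced to an acceleration factor between a use temperature and a stress temperature. The sketch below is a generic illustration, not the report's analysis; the activation energy and temperatures are hypothetical placeholder values.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K


def arrhenius_acceleration_factor(t_use_c, t_stress_c, ea_ev):
    """Acceleration factor between a stress and a use junction temperature.

    AF = exp((Ea / k) * (1/T_use - 1/T_stress)), temperatures in kelvin.
    """
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))


# Hypothetical example: 125 C use junction temperature vs. a 250 C life test,
# with an assumed activation energy of 1.0 eV (illustrative only).
print(arrhenius_acceleration_factor(125.0, 250.0, 1.0))
```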

  15. Estimation of stress distribution in ferromagnetic tensile specimens using low cost eddy current stress measurement system and BP neural network.

    PubMed

    Li, Jianwei; Zhang, Weimin; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi

    2017-01-01

    Estimation of the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing technology is a promising method in this field because of its advantages of safety, no need of coupling agent, etc. In order to reduce the cost of eddy current stress measurement system, and obtain the stress distribution in ferromagnetic materials without scanning, a low cost eddy current stress measurement system based on Archimedes spiral planar coil was established, and a method based on BP neural network to obtain the stress distribution using the stress of several discrete test points was proposed. To verify the performance of the developed test system and the validity of the proposed method, experiment was implemented using structural steel (Q235) specimens. Standard curves of sensors at each test point were achieved, the calibrated data were used to establish the BP neural network model for approximating the stress variation on the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with the established model. The results show that there is a good linear relationship between the change of signal modulus and the stress in most elastic range of the specimen, and the established system can detect the change in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve is well consonant with the theoretical analysis result. At last, possible causes and improving methods of problems appeared in the results were discussed. This research has important significance for reducing the cost of eddy current stress measurement system, and advancing the engineering application of eddy current stress testing.

  16. Estimation of stress distribution in ferromagnetic tensile specimens using low cost eddy current stress measurement system and BP neural network

    PubMed Central

    Li, Jianwei; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi

    2017-01-01

    Estimation of the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing technology is a promising method in this field because of its advantages of safety, no need of coupling agent, etc. In order to reduce the cost of eddy current stress measurement system, and obtain the stress distribution in ferromagnetic materials without scanning, a low cost eddy current stress measurement system based on Archimedes spiral planar coil was established, and a method based on BP neural network to obtain the stress distribution using the stress of several discrete test points was proposed. To verify the performance of the developed test system and the validity of the proposed method, experiment was implemented using structural steel (Q235) specimens. Standard curves of sensors at each test point were achieved, the calibrated data were used to establish the BP neural network model for approximating the stress variation on the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with the established model. The results show that there is a good linear relationship between the change of signal modulus and the stress in most elastic range of the specimen, and the established system can detect the change in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve is well consonant with the theoretical analysis result. At last, possible causes and improving methods of problems appeared in the results were discussed. This research has important significance for reducing the cost of eddy current stress measurement system, and advancing the engineering application of eddy current stress testing. PMID:29145500

  17. A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies

    PubMed Central

    Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.

    2018-01-01

    Divergence date estimates are central to understand evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and well known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit test. In the strict clock case, the method consists in using the one-sample Kolmogorov-Smirnov (KS) test to directly test if the phylogeny is clock-like, in other words, if it follows a Poisson law. The ECD is computed from the discretized branch lengths and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and pseudo-replication we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
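
    The strict-clock procedure described above boils down to a one-sample KS test of discretized branch lengths against a Poisson law whose rate is the mean branch length, plus a two-sample KS test for the relaxed-clock comparison. The sketch below uses simulated data and scipy; it is an approximation of the described procedure (a KS test applied to discrete counts is itself approximate), not the paper's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical discretized branch lengths (substitution counts) pooled from
# an ensemble of Bayesian trees; simulated here for illustration.
branch_counts = rng.poisson(lam=3.0, size=500)

# Strict clock: test whether the counts follow a Poisson law with rate equal
# to the average branch length over the ensemble.
lam_hat = branch_counts.mean()
ks_stat, p_value = stats.kstest(branch_counts, stats.poisson(lam_hat).cdf)
print(f"one-sample KS vs Poisson: D={ks_stat:.3f}, p={p_value:.3f}")

# Relaxed-clock comparison: two-sample KS test between branch-length samples
# from clock-constrained and unconstrained ensembles (both simulated here).
clock_lengths = rng.gamma(shape=2.0, scale=1.0, size=400)
free_lengths = rng.gamma(shape=2.0, scale=1.2, size=400)
print(stats.ks_2samp(clock_lengths, free_lengths))
```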

  18. Statistics, Handle with Care: Detecting Multiple Model Components with the Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Protassov, Rostislav; van Dyk, David A.; Connors, Alanna; Kashyap, Vinay L.; Siemiginowska, Aneta

    2002-05-01

    The likelihood ratio test (LRT) and the related F-test, popularized in astrophysics by Eadie and coworkers in 1971, Bevington in 1969, Lampton, Margon, & Bowyer, in 1976, Cash in 1979, and Avni in 1978, do not (even asymptotically) adhere to their nominal χ2 and F-distributions in many statistical tests common in astrophysics, thereby casting many marginal line or source detections and nondetections into doubt. Although the above authors illustrate the many legitimate uses of these statistics, in some important cases it can be impossible to compute the correct false positive rate. For example, it has become common practice to use the LRT or the F-test to detect a line in a spectral model or a source above background despite the lack of certain required regularity conditions. (These applications were not originally suggested by Cash or by Bevington.) In these and other settings that involve testing a hypothesis that is on the boundary of the parameter space, contrary to common practice, the nominal χ2 distribution for the LRT or the F-distribution for the F-test should not be used. In this paper, we characterize an important class of problems in which the LRT and the F-test fail and illustrate this nonstandard behavior. We briefly sketch several possible acceptable alternatives, focusing on Bayesian posterior predictive probability values. We present this method in some detail since it is a simple, robust, and intuitive approach. This alternative method is illustrated using the gamma-ray burst of 1997 May 8 (GRB 970508) to investigate the presence of an Fe K emission line during the initial phase of the observation. There are many legitimate uses of the LRT and the F-test in astrophysics, and even when these tests are inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). Nevertheless, there are numerous cases of the inappropriate use of the LRT and similar tests in the literature, bringing substantive scientific results into question.

  19. Effect of afterbody geometry on aerodynamic characteristics of isolated nonaxisymmetric afterbodies at transonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Bangert, Linda S.; Carson, George T., Jr.

    1992-01-01

    A parametric study was conducted in the Langley 16-Foot Transonic Tunnel on an isolated nonaxisymmetric fuselage model that simulates a twin-engine fighter. The effects of aft-end closure distribution (top/bottom nozzle-flap boattail angle versus nozzle-sidewall boattail angle) and afterbody and nozzle corner treatment (sharp or radius) were investigated. Four different closure distributions with three different corner radii were tested. Tests were conducted over a range of Mach numbers from 0.40 to 1.25 and over a range of angles of attack from -3 to 9 degrees. Solid plume simulators were used to simulate the jet exhaust. For a given closure distribution in the range of Mach numbers tested, the sharp-corner nozzles generally had the highest drag, and the 2-in. corner-radius nozzles generally had the lowest drag. The effect of closure distribution on afterbody drag was highly dependent on configuration and flight condition.

  20. Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing a Hybrid-Electric Integrated Systems Testbed (HEIST) Testbed as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid electric and distributed electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid electric and distributed electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.

  1. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian). The most important properties for testing with Newtonian slurries are the Archimedes number distribution and the particle concentration. For some test objectives, the shear strength is important. In the testing to collect data for CFD V and V and CFD comparison, the liquid density and liquid viscosity are important. In the high temperature testing, the liquid density and liquid viscosity are important. The Archimedes number distribution combines effects of particle size distribution, solid-liquid density difference, and kinematic viscosity. The most important properties for testing with non-Newtonian slurries are the slurry yield stress, the slurry consistency, and the shear strength. The solid-liquid density difference and the particle size are also important. It is also important to match multiple properties within the same simulant to achieve behavior representative of the waste. Other properties such as particle shape, concentration, surface charge, and size distribution breadth, as well as slurry cohesiveness and adhesiveness, liquid pH and ionic strength also influence the simulant properties either directly or through other physical properties such as yield stress.

  2. Risk analysis in cohort studies with heterogeneous strata. A global chi2-test for dose-response relationship, generalizing the Mantel-Haenszel procedure.

    PubMed

    Ahlborn, W; Tuz, H J; Uberla, K

    1990-03-01

    In cohort studies the Mantel-Haenszel estimator ORMH is computed from sample data and is used as a point estimator of relative risk. Test-based confidence intervals are estimated with the help of the asymptotic chi-squared distributed MH-statistic chi 2MHS. The Mantel-extension-chi-squared is used as a test statistic for a dose-response relationship. Both test statistics--the Mantel-Haenszel-chi as well as the Mantel-extension-chi--assume homogeneity of risk across strata, which is rarely present. Also an extended nonparametric statistic, proposed by Terpstra, which is based on the Mann-Whitney-statistics assumes homogeneity of risk across strata. We have earlier defined four risk measures RRkj (k = 1,2,...,4) in the population and considered their estimates and the corresponding asymptotic distributions. In order to overcome the homogeneity assumption we use the delta-method to get "test-based" confidence intervals. Because the four risk measures RRkj are presented as functions of four weights gik we give, consequently, the asymptotic variances of these risk estimators also as functions of the weights gik in a closed form. Approximations to these variances are given. For testing a dose-response relationship we propose a new class of chi 2(1)-distributed global measures Gk and the corresponding global chi 2-test. In contrast to the Mantel-extension-chi homogeneity of risk across strata must not be assumed. These global test statistics are of the Wald type for composite hypotheses.(ABSTRACT TRUNCATED AT 250 WORDS)
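
    For context, the classical Mantel-Haenszel pooled odds ratio that this paper generalizes can be computed directly from the stratum-level 2x2 tables. The sketch below implements only that standard estimator with hypothetical counts; it is not the authors' global chi2 test.

```python
import numpy as np


def mantel_haenszel_or(strata):
    """Classical Mantel-Haenszel pooled odds ratio.

    strata: iterable of 2x2 tables [[a, b], [c, d]], one per stratum,
    with rows = exposure groups and columns = outcome (case/non-case).
    OR_MH = sum(a_i * d_i / n_i) / sum(b_i * c_i / n_i).
    """
    num, den = 0.0, 0.0
    for table in strata:
        (a, b), (c, d) = table
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den


# Hypothetical data: two strata (e.g., one dose group vs. control in each).
tables = [np.array([[12, 88], [5, 95]]), np.array([[20, 80], [8, 92]])]
print(mantel_haenszel_or(tables))
```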

  3. Hypervelocity impact testing of the Space Station utility distribution system carrier

    NASA Technical Reports Server (NTRS)

    Lazaroff, Scott

    1993-01-01

    A two-phase, joint JSC and McDonnell Douglas Aerospace-Huntington Beach hypervelocity impact (HVI) test program was initiated to develop an improved understanding of how meteoroid and orbital debris (M/OD) impacts affect the Space Station Freedom (SSF) avionic and fluid lines routed in the Utility Distribution System (UDS) carrier. This report documents the first phase of the test program, which covers nonpowered avionic line segment and pressurized fluid line segment HVI testing. From these tests, a better estimate of avionic line failures is approximately 15 per year, and this could drop to around 1 or 2 failures per year (depending upon the results of the second-phase testing of the powered avionic line at White Sands). For the fluid lines, the initial McDonnell Douglas analysis calculated 1 to 2 line failures over a 30 year period. The data obtained from these tests indicate the number of predicted fluid line failures increased slightly to as many as 3 in the first 10 years and up to 15 for the entire 30 year life of SSF.

  4. Scramjet Tests in a Shock Tunnel at Flight Mach 7, 10, and 15 Conditions

    NASA Technical Reports Server (NTRS)

    Rogers, R. C.; Shih, A. T.; Tsai, C.-Y.; Foelsche, R. O.

    2001-01-01

    Tests of the Hyper-X scramjet engine flowpath have been conducted in the HYPULSE shock tunnel at conditions duplicating the stagnation enthalpy at flight Mach 7, 10, and 15. For the tests at Mach 7 and 10, HYPULSE was operated as a reflected-shock tunnel; at the Mach 15 condition, HYPULSE was operated as a shock-expansion tunnel. The test conditions matched the stagnation enthalpy of a scramjet engine on an aerospace vehicle accelerating through the atmosphere along a 1000 psf dynamic pressure trajectory. Test parameter variation included fuel equivalence ratios from lean (0.8) to rich (1.5+); fuel composition from pure hydrogen to mixtures of 2% and 5% silane in hydrogen by volume; and inflow pressure and Mach number, varied by changing the scramjet model mounting angle in the HYPULSE test chamber. Data sources were wall pressures and heat flux distributions and schlieren and fuel plume imaging in the combustor/nozzle sections. Data are presented for calibration of the facility nozzles and the scramjet engine model. Comparisons of pressure distributions and flowpath streamtube performance estimates are made for the three Mach numbers tested.

  5. Using technology to support HIV self-testing among MSM.

    PubMed

    LeGrand, Sara; Muessig, Kathryn E; Horvath, Keith J; Rosengren, Anna L; Hightow-Weidman, Lisa B

    2017-09-01

    Technology-based HIV self-testing (HST) interventions have the potential to improve access to HIV testing among gay, bisexual, and other MSM, as well as address concerns about HST use, including challenges with linkage to appropriate follow-up services. This review examines studies that use technology-based platforms to increase or improve the experience of HST among MSM. Seven published studies and eight funded studies were included in this review. Comprehensive prevention interventions with free HST kit distribution and interventions that provide free HST kits and support the HST process address a greater number of barriers (e.g., access, correct use of testing kits, and correct interpretation of results) than studies that only distribute free HST kits through technology-based platforms. By addressing HIV-testing barriers and specific HST concerns, these interventions address a critical need to improve first time and repeat testing rates among MSM. Additional research is needed to determine the efficacy of recent formative HST interventions. If proven efficacious, scale-up of these strategies has the potential to increase HIV testing among MSM via expanded HST uptake.

  6. A test for selection employing quantitative trait locus and mutation accumulation data.

    PubMed

    Rice, Daniel P; Townsend, Jeffrey P

    2012-04-01

    Evolutionary biologists attribute much of the phenotypic diversity observed in nature to the action of natural selection. However, for many phenotypic traits, especially quantitative phenotypic traits, it has been challenging to test for the historical action of selection. An important challenge for biologists studying quantitative traits, therefore, is to distinguish between traits that have evolved under the influence of strong selection and those that have evolved neutrally. Most existing tests for selection employ molecular data, but selection also leaves a mark on the genetic architecture underlying a trait. In particular, the distribution of quantitative trait locus (QTL) effect sizes and the distribution of mutational effects together provide information regarding the history of selection. Despite the increasing availability of QTL and mutation accumulation data, such data have not yet been effectively exploited for this purpose. We present a model of the evolution of QTL and employ it to formulate a test for historical selection. To provide a baseline for neutral evolution of the trait, we estimate the distribution of mutational effects from mutation accumulation experiments. We then apply a maximum-likelihood-based method of inference to estimate the range of selection strengths under which such a distribution of mutations could generate the observed QTL. Our test thus represents the first integration of population genetic theory and QTL data to measure the historical influence of selection.

  7. Score Distributions of the Balance Outcome Measure for Elder Rehabilitation (BOOMER) in Community-Dwelling Older Adults With Vertebral Fracture.

    PubMed

    Brown, Zachary M; Gibbs, Jenna C; Adachi, Jonathan D; Ashe, Maureen C; Hill, Keith D; Kendler, David L; Khan, Aliya; Papaioannou, Alexandra; Prasad, Sadhana; Wark, John D; Giangregorio, Lora M

    2017-11-28

    We sought to evaluate the Balance Outcome Measure for Elder Rehabilitation (BOOMER) in community-dwelling women 65 years and older with vertebral fracture and to describe score distributions and potential ceiling and floor effects. This was a secondary data analysis of baseline data from the Build Better Bones with Exercise randomized controlled trial using the BOOMER. A total of 141 women with osteoporosis and radiographically confirmed vertebral fracture were included. Concurrent validity and internal consistency were assessed in comparison to the Short Physical Performance Battery (SPPB). Normality and ceiling/floor effects of total BOOMER scores and component test items were also assessed. Exploratory analyses of assistive aid use and falls history were performed. Tests for concurrent validity demonstrated moderate correlation between total BOOMER and SPPB scores. The BOOMER component tests showed modest internal consistency. Substantial ceiling effect and nonnormal score distributions were present among overall sample and those not using assistive aids for total BOOMER scores, although scores were normally distributed for those using assistive aids. The static standing with eyes closed test demonstrated the greatest ceiling effects of the component tests, with 92% of participants achieving a maximal score. While the BOOMER compares well with the SPPB in community-dwelling women with vertebral fractures, researchers or clinicians considering using the BOOMER in similar or higher-functioning populations should be aware of the potential for ceiling effects.

  8. Near-exact distributions for the block equicorrelation and equivariance likelihood ratio test statistic

    NASA Astrophysics Data System (ADS)

    Coelho, Carlos A.; Marques, Filipe J.

    2013-09-01

    In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test, or its single-block version, may find applications in many areas such as psychology, education, medicine, and genetics, and such tests are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.

  9. A note on the SG(m) test

    NASA Astrophysics Data System (ADS)

    López, Fernando A.; Matilla-García, Mariano; Mur, Jesús; Páez, Antonio; Ruiz, Manuel

    2016-01-01

    López et al. (Reg Sci Urban Econ 40(2-3):106-115, 2010) introduce a nonparametric test of spatial dependence, called SG(m). The test is claimed to be consistent and asymptotically Chi-square distributed. Elsinger (Reg Sci Urban Econ 43(5):838-840, 2013) raises doubts about the two properties. Using a particular counterexample, he shows that the asymptotic distribution of the SG(m) test may be far from the Chi-square family; the property of consistency is also questioned. In this note, the authors want to clarify the properties of the SG(m) test. We argue that the cause of the conflict is in the specification of the symbolization map. The discrepancies can be solved by adjusting some of the definitions made in the original paper. Moreover, we introduce a permutational bootstrapped version of the SG(m) test, which is powerful and robust to the underlying statistical assumptions. This bootstrapped version may be very useful in an applied context.

  10. Experimental investigation of wall shock cancellation and reduction of wall interference in transonic testing

    NASA Technical Reports Server (NTRS)

    Ferri, A.; Roffe, G.

    1975-01-01

    A series of experiments was performed to evaluate the effectiveness of a three-dimensional land and groove wall geometry and a variable permeability distribution to reduce the interference produced by the porous walls of a supercritical transonic test section. The three-dimensional wall geometry was found to diffuse the pressure perturbations caused by small local mismatches in wall porosity, permitting the use of a relatively coarse wall porosity control to reduce or eliminate wall interference effects. The wall porosity distribution required was found to be a sensitive function of Mach number, requiring that the Mach number repeatability characteristics of the test apparatus be quite good. The effectiveness of a variable porosity wall is greatest in the upstream region of the test section where the pressure differences across the wall are largest. An effective variable porosity wall in the downstream region of the test section requires the use of a slightly convergent test section geometry.

  11. The Measurement of Pressure Through Tubes in Pressure Distribution Tests

    NASA Technical Reports Server (NTRS)

    Hemke, Paul E

    1928-01-01

    The tests described in this report were made to determine the error caused by using small tubes to connect orifices on the surface of aircraft to central pressure capsules in making pressure distribution tests. Aluminum tubes of 3/16-inch inside diameter were used to determine this error. Lengths from 20 feet to 226 feet and pressures whose maxima varied from 2 inches to 140 inches of water were used. Single-pressure impulses for which the time of rise of pressure from zero to a maximum varied from 0.25 second to 3 seconds were investigated. The results show that the pressure recorded at the capsule on the far end of the tube lags behind the pressure at the orifice end and experiences also a change in magnitude. For the values used in these tests the time lag and pressure change vary principally with the time of rise of pressure from zero to a maximum and the tube length. Curves are constructed showing the time lag and pressure change. Empirical formulas are also given for computing the time lag. Analysis of pressure distribution tests made on airplanes in flight shows that the recorded pressures are slightly higher than the pressures at the orifice and that the time lag is negligible. The apparent increase in pressure is usually within the experimental error, but in the case of the modern pursuit type of airplane the pressure increase may be 5 per cent. For pressure-distribution tests on airships the analysis shows that the time lag and pressure change may be neglected.

  12. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    PubMed

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
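
    The battery of checks described above (Shapiro-Wilk, D'Agostino skewness, Anscombe-Glynn kurtosis, Box-Cox transformation) maps onto standard scipy routines. The sketch below uses synthetic MUAC values purely for illustration; it is not the study's analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
muac_mm = rng.normal(loc=145.0, scale=12.0, size=600)  # synthetic MUAC, mm

# Overall departure from normality (Shapiro-Wilk).
print("Shapiro-Wilk:", stats.shapiro(muac_mm))

# Skewness (D'Agostino) and kurtosis (Anscombe-Glynn) tests.
print("skewness test:", stats.skewtest(muac_mm))
print("kurtosis test:", stats.kurtosistest(muac_mm))

# Box-Cox power transformation toward normality (requires positive data).
transformed, lmbda = stats.boxcox(muac_mm)
print("Box-Cox lambda:", lmbda, "Shapiro after:", stats.shapiro(transformed))
```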

  13. Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.

    ERIC Educational Resources Information Center

    Parshall, Cynthia G.; Kromrey, Jeffrey D.

    1996-01-01

    Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
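
    The four tests compared in this study are available off the shelf in scipy. A sketch for a single hypothetical 2x2 table:

```python
from scipy import stats

# Hypothetical small-sample 2x2 contingency table.
table = [[8, 2],
         [3, 7]]

# (1) Pearson's chi-square (no continuity correction).
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print("Pearson chi-square:", chi2, p)

# (2) Chi-square with Yates's continuity correction.
chi2_y, p_y, _, _ = stats.chi2_contingency(table, correction=True)
print("Yates-corrected:", chi2_y, p_y)

# (3) Likelihood ratio (G) test.
g, p_g, _, _ = stats.chi2_contingency(table, correction=False,
                                       lambda_="log-likelihood")
print("likelihood ratio:", g, p_g)

# (4) Fisher's exact test (2x2 only).
odds, p_f = stats.fisher_exact(table)
print("Fisher exact:", odds, p_f)
```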

  14. 40 Years of AEDC Development, Evolution and Application of Numerical Simulations for Integrated Test and Evaluation of Turbine Engine Turbomachinery Operability Issues

    DTIC Science & Technology

    2010-03-01

    Distribution is unlimited. The report provides pretest prediction and posttest assessment of the aircraft test matrix to optimize wind tunnel inlet testing, covering 40 years of AEDC development and application of numerical simulations for turbine engine turbomachinery operability issues.

  15. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…
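
    Bakir's ANOMR-based procedure is not available in scipy; as a stand-in, the rank-based Fligner-Killeen test is a standard distribution-free check of equal scale across groups. The sketch below uses hypothetical GPA-like samples and is offered only as a related, readily available alternative, not the proposed test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical GPA samples from three academic majors.
major_a = rng.normal(3.1, 0.30, size=40)
major_b = rng.normal(3.0, 0.45, size=35)
major_c = rng.normal(3.2, 0.30, size=50)

# Fligner-Killeen test: nonparametric test of equal scale (variances).
stat, p_value = stats.fligner(major_a, major_b, major_c)
print(f"Fligner-Killeen: stat={stat:.3f}, p={p_value:.3f}")
```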

  16. Effect of carbide distribution on rolling-element fatigue life of AMS 5749

    NASA Technical Reports Server (NTRS)

    Parker, R. J.; Bamberger, E. N.

    1983-01-01

    Endurance tests with ball bearings made of corrosion resistant bearing steel which resulted in fatigue lives much lower than were predicted are discussed. Metallurgical analysis revealed an undesirable carbide distribution in the races. It was shown in accelerated fatigue tests in the RC rig that large, banded carbides can reduce rolling element fatigue life by a factor of approximately four. The early spalling failures on the bearing raceways are attributed to the large carbide size and banded distribution.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozturk, Fahrettin; Toros, Serkan; Evis, Zafer

    In this study, the diametral strength test of sintered hydroxyapatite was simulated by the finite element software, ABAQUS/Standard. Stress distributions on diametral test sample were determined. The effect of sintering temperature on stress distribution of hydroxyapatite was studied. It was concluded that high sintering temperatures did not reduce the stress on hydroxyapatite. It had a negative effect on stress distribution of hydroxyapatite after 1300 deg. C. In addition to the porosity, other factors (sintering temperature, presence of phases and the degree of crystallinity) affect the diametral strength of the hydroxyapatite.

  18. Performance of concrete members subjected to large hydrocarbon pool fires

    DOE PAGES

    Zwiers, Renata I.; Morgan, Bruce J.

    1989-01-01

    The authors discuss an investigation to determine analytically if the performance of concrete beams and columns in a hydrocarbon pool test fire would differ significantly from their performance in a standard test fire. The investigation consisted of a finite element analysis to obtain temperature distributions in typical cross sections, a comparison of the resulting temperature distribution in the cross section, and a strength analysis of a beam based on temperature distribution data. Results of the investigation are reported.

  19. NEAT: an efficient network enrichment analysis test.

    PubMed

    Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C

    2016-09-05

    Network enrichment analysis is a powerful method, which allows one to integrate gene enrichment analysis with the information on relationships between genes that is provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks; they can be computationally slow and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
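
    The null model behind NEAT is the hypergeometric distribution. The sketch below shows a plain hypergeometric over-representation p-value with hypothetical gene counts; it ignores the network structure that NEAT adds, so it is not the R package's statistic.

```python
from scipy import stats


def hypergeom_enrichment_pvalue(n_universe, n_annotated, n_selected, n_overlap):
    """P(overlap >= observed) when drawing n_selected genes from a universe
    of n_universe genes, n_annotated of which carry the annotation."""
    # sf(k - 1) = P(X >= k) for the hypergeometric distribution.
    return stats.hypergeom.sf(n_overlap - 1, n_universe, n_annotated, n_selected)


# Hypothetical numbers: 6000-gene universe, 300-gene functional set,
# 150 genes of interest, 18 of them in the functional set.
print(hypergeom_enrichment_pvalue(6000, 300, 150, 18))
```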

  20. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
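
    One common route to the multivariate extension described here is to compare squared Mahalanobis distances against a chi-square distribution with p degrees of freedom. The sketch below uses synthetic data and a KS test merely as a convenient goodness-of-fit check; it illustrates the idea, not the report's tabulated procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
p = 3
data = rng.multivariate_normal(mean=np.zeros(p), cov=np.eye(p), size=200)

# Squared Mahalanobis distances of each observation from the sample mean.
centered = data - data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

# Under multivariate normality, d2 is approximately chi-square with p d.o.f.
print(stats.kstest(d2, stats.chi2(df=p).cdf))
```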

  1. Adjusted regression trend test for a multicenter clinical trial.

    PubMed

    Quan, H; Capizzi, T

    1999-06-01

    Studies using a series of increasing doses of a compound, including a zero dose control, are often conducted to study the effect of the compound on the response of interest. For a one-way design, Tukey et al. (1985, Biometrics 41, 295-301) suggested assessing trend by examining the slopes of regression lines under arithmetic, ordinal, and arithmetic-logarithmic dose scalings. They reported the smallest p-value for the three significance tests on the three slopes for safety assessments. Capizzi et al. (1992, Biometrical Journal 34, 275-289) suggested an adjusted trend test, which adjusts the p-value using a trivariate t-distribution, the joint distribution of the three slope estimators. In this paper, we propose an adjusted regression trend test suitable for two-way designs, particularly for multicenter clinical trials. In a step-down fashion, the proposed trend test can be applied to a multicenter clinical trial to compare each dose with the control. This sequential procedure is a closed testing procedure for a trend alternative. Therefore, it adjusts p-values and maintains experimentwise error rate. Simulation results show that the step-down trend test is overall more powerful than a step-down least significant difference test.
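
    The Tukey-style screen that this paper adjusts can be sketched as three simple regressions of response on dose under arithmetic, ordinal, and arithmetic-logarithmic scalings, reporting the smallest p-value. The sketch below uses synthetic data, handles the zero-dose control with log(dose + 1) as an assumption, and omits both the trivariate-t adjustment and the multicenter extension.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

doses = np.array([0.0, 1.0, 2.0, 4.0])  # includes zero-dose control
response = np.concatenate([rng.normal(10 + 0.6 * d, 1.0, size=12) for d in doses])
dose_per_subject = np.repeat(doses, 12)

# Three dose scalings: arithmetic, ordinal, and arithmetic-logarithmic
# (the control dose is handled here via log(dose + 1), an assumption).
scalings = {
    "arithmetic": dose_per_subject,
    "ordinal": np.repeat(np.arange(len(doses)), 12).astype(float),
    "arith-log": np.log(dose_per_subject + 1.0),
}

p_values = {name: stats.linregress(x, response).pvalue for name, x in scalings.items()}
print(p_values, "smallest p:", min(p_values.values()))
```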

  2. Pressure Distribution Tests on a Series of Clark Y Biplane Cellules with Special Reference to Stability

    NASA Technical Reports Server (NTRS)

    Noyes, Richard W

    1933-01-01

    The pressure distribution data discussed in this report represents the results of part of an investigation conducted on the factors affecting the aerodynamic safety of airplanes. The present tests were made on semispan, circular-tipped Clark Y airfoil models mounted in the conventional manner on a separation plane. Pressure readings were made simultaneously at all test orifices at each of 20 angles of attack between -8 degrees and +90 degrees. The results of the tests on each wing arrangement are compared on the bases of maximum normal force coefficient, lateral stability at a low rate of roll, and relative longitudinal stability. Tabular data are also presented giving the center of pressure location of each wing.

  3. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical proofs relating to the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
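
    The kind of distribution screening reported here can be illustrated by fitting normal and log-normal candidates to a strength sample and comparing goodness of fit. The strengths below are synthetic, and the KS comparison is only one of several reasonable checks, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
peak_strength_mpa = rng.lognormal(mean=np.log(120.0), sigma=0.08, size=20)

# Candidate 1: normal distribution.
mu, sigma = stats.norm.fit(peak_strength_mpa)
ks_norm = stats.kstest(peak_strength_mpa, stats.norm(mu, sigma).cdf)

# Candidate 2: log-normal distribution (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(peak_strength_mpa, floc=0)
ks_lognorm = stats.kstest(peak_strength_mpa, stats.lognorm(shape, loc, scale).cdf)

print("normal fit:", ks_norm)
print("log-normal fit:", ks_lognorm)
print("coefficient of variation:",
      peak_strength_mpa.std(ddof=1) / peak_strength_mpa.mean())
```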

  4. Effect of Bimodal Grain Size Distribution on Scatter in Toughness

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Debalay; Strangwood, Martin; Davis, Claire

    2009-04-01

    Blunt-notch tests were performed at -160 °C to investigate the effect of a bimodal ferrite grain size distribution in steel on cleavage fracture toughness, by comparing local fracture stress values for heat-treated microstructures with uniformly fine, uniformly coarse, and bimodal grain structures. An analysis of fracture stress values indicates that bimodality can have a significant effect on toughness by generating high scatter in the fracture test results. Local cleavage fracture values were related to grain size distributions and it was shown that the largest grains in the microstructure, with an area percent greater than approximately 4 pct, gave rise to cleavage initiation. In the case of the bimodal grain size distribution, the large grains from both the “fine grain” and “coarse grain” population initiate cleavage; this spread in grain size values resulted in higher scatter in the fracture stress than in the unimodal distributions. The notch-bend test results have been used to explain the difference in scatter in the Charpy energies for the unimodal and bimodal ferrite grain size distributions of thermomechanically controlled rolled (TMCR) steel, in which the bimodal distribution showed higher scatter in the Charpy impact transition (IT) region.

  5. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.

  6. Distribution of Model-based Multipoint Heterogeneity Lod Scores

    PubMed Central

    Xing, Chao; Morris, Nathan; Xing, Guan

    2011-01-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ2 approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution, (1/2)χ₀² + (1/2)χ₁² (an equal mixture of a point mass at zero and a χ² distribution with one degree of freedom), which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. PMID:21104892
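
    The limiting distribution quoted above yields a simple p-value recipe for a likelihood-ratio statistic: half a point mass at zero plus half a chi-square with one degree of freedom. A minimal sketch:

```python
from scipy import stats


def mixture_pvalue(lr_statistic):
    """P-value under the (1/2)*chi2_0 + (1/2)*chi2_1 limiting distribution."""
    if lr_statistic <= 0:
        return 1.0
    return 0.5 * stats.chi2.sf(lr_statistic, df=1)


# Example: a likelihood-ratio statistic of 3.84 (illustrative value only).
print(mixture_pvalue(3.84))
```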

  7. Experimental study of flow distribution and pressure loss with circumferential inlet and outlet manifolds

    NASA Technical Reports Server (NTRS)

    Dittrich, R. T.

    1972-01-01

    Water flow tests with circumferential inlet and outlet manifolds were conducted to determine factors affecting fluid distribution and pressure losses. Various orifice sizes and manifold geometries were tested over a range of flow velocities. With inlet manifolds, flow distribution was related directly to orifice discharge coefficients. A correlation indicated that nonuniform distribution resulted when the velocity head ratio at the orifice was not in the range of constant discharge coefficient. With outlet manifolds, nonuniform flow was related to static pressure variations along the manifold. Outlet manifolds had appreciably greater pressure losses than comparable inlet manifolds.

  8. Load flow and state estimation algorithms for three-phase unbalanced power distribution systems

    NASA Astrophysics Data System (ADS)

    Madvesh, Chiranjeevi

    Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
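
    The WLS estimator referenced above reduces, for a linearized measurement model z ≈ Hx + e with diagonal weight matrix W, to solving the normal equations. The sketch below is a generic numpy illustration with hypothetical matrices, not the three-phase unbalanced feeder formulation developed in this research.

```python
import numpy as np


def wls_state_estimate(H, z, weights):
    """Weighted-least-squares estimate x_hat = (H^T W H)^{-1} H^T W z."""
    W = np.diag(weights)
    gain = H.T @ W @ H
    return np.linalg.solve(gain, H.T @ W @ z)


# Hypothetical 4-measurement, 2-state example; weights are inverse
# measurement variances.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
x_true = np.array([1.02, 0.35])
z = H @ x_true + np.array([0.01, -0.02, 0.015, 0.0])
weights = np.array([100.0, 100.0, 50.0, 50.0])
print(wls_state_estimate(H, z, weights))
```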

  9. Distributional fold change test – a statistical approach for detecting differential expression in microarray experiments

    PubMed Central

    2012-01-01

    Background: Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. At the same time, the fold change by itself provides valuable information and it is important to find unambiguous ways of using this information in expression data treatment. Results: A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three dimensional feature space composed of average expression level, average difference of gene expression and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating curves and tested on 11 data sets from Gene Omnibus Database with independently verified differentially expressed genes and compared with the t-test and shrinkage t-test. Overall the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its elevation was most prominent in the low range of differentially expressed features, typical for formalin-fixed paraffin-embedded sample sets. Conclusions: The distributional fold change test is an effective method for finding and ranking differentially expressed probesets on microarrays. The application of this test is advantageous to data sets using formalin-fixed paraffin-embedded samples or other systems where degradation effects diminish the applicability of correlation adjusted methods to the whole feature set. PMID:23122055

  10. A versatile test for equality of two survival functions based on weighted differences of Kaplan-Meier curves.

    PubMed

    Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J

    2015-12-10

    With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives with a caution of a minor inflation of the type I error rate when the sample size is small or the number of observed events is small. The survival data from a recent cancer comparative study are utilized for illustrating the implementation of the process. Copyright © 2015 John Wiley & Sons, Ltd.
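
    The baseline procedure discussed above, the logrank test, is available in the lifelines package. The sketch below runs it on synthetic censored samples; the authors' adaptively weighted Kaplan-Meier statistic itself is not implemented here.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)

# Synthetic event times with administrative censoring at t = 15 for two arms.
t_control = rng.exponential(scale=10.0, size=80)
t_treated = rng.exponential(scale=13.0, size=80)
censor_time = 15.0
e_control = (t_control <= censor_time).astype(int)  # 1 = event observed
e_treated = (t_treated <= censor_time).astype(int)
t_control = np.minimum(t_control, censor_time)
t_treated = np.minimum(t_treated, censor_time)

result = logrank_test(t_control, t_treated,
                      event_observed_A=e_control, event_observed_B=e_treated)
print(result.test_statistic, result.p_value)
```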

  11. Analysis of early life influences on cognitive development in childhood using multilevel ordinal models

    PubMed Central

    Li, Leah

    2012-01-01

    Summary Studies of cognitive development in children are often based on tests designed for specific ages. Examination of the changes of these scores over time may not be meaningful. This paper investigates the influence of early life factors on cognitive development using maths and reading test scores at ages 7, 11, and 16 years in a British birth cohort born in 1958. The distributions of these test scores differ between ages; for example, 20% of participants scored the top mark in the reading test at age 7, and the distribution of reading scores at age 16 is heavily skewed. In this paper, we group participants into 5 ordered categories, with approximately 20% in each category, according to their test scores at each age. Multilevel models for a repeated ordinal outcome are applied to relate the ordinal scale of maths and reading ability to early life factors. PMID:22661923
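
    The quintile grouping described above is easy to reproduce; the following Python sketch (synthetic scores and hypothetical column names, not the study's data) assigns each participant to one of five ordered categories of roughly 20% at each age.

        # Hedged sketch: group hypothetical test scores at each age into five ordered categories.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        scores = pd.DataFrame({
            "maths_age7": rng.normal(50, 10, 500),
            "maths_age11": rng.normal(55, 12, 500),
            "maths_age16": rng.normal(60, 15, 500),
        })
        # qcut assigns ordered quintile categories 1 (lowest) to 5 (highest) within each age.
        ordinal = scores.apply(lambda col: pd.qcut(col, 5, labels=[1, 2, 3, 4, 5]))
        print(ordinal.head())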

  12. Description and evaluation of an interference assessment for a slotted-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1991-01-01

    A wind-tunnel interference assessment method applicable to test sections with discrete finite-length wall slots is described. The method is based on high order panel method technology and uses mixed boundary conditions to satisfy both the tunnel geometry and wall pressure distributions measured in the slotted-wall region. Both the test model and its sting support system are represented by distributed singularities. The method yields interference corrections to the model test data as well as surveys through the interference field at arbitrary locations. These results include the equivalent of tunnel Mach calibration, longitudinal pressure gradient, tunnel flow angularity, wall interference, and an inviscid form of sting interference. Alternative results which omit the direct contribution of the sting are also produced. The method was applied to the National Transonic Facility at NASA Langley Research Center for both tunnel calibration tests and tests of two models of subsonic transport configurations.

  13. Distribution of indoor radon concentrations in Pennsylvania, 1990-2007

    USGS Publications Warehouse

    Gross, Eliza L.

    2013-01-01

    Median indoor radon concentrations aggregated according to geologic units and hydrogeologic settings are useful for drawing general conclusions about the occurrence of indoor radon in specific geologic units and hydrogeologic settings, but the associated data and maps have limitations. The aggregated indoor radon data have testing and spatial accuracy limitations due to lack of available information regarding testing conditions and the imprecision of geocoded test locations. In addition, the associated data describing geologic units and hydrogeologic settings have spatial and interpretation accuracy limitations, which are a result of using statewide data to define conditions at test locations and geologic data that represent a broad interpretation of geologic units across the State. As a result, indoor air radon concentration distributions are not proposed for use in predicting individual concentrations at specific sites nor for use as a decision-making tool for property owners to decide whether to test for indoor radon concentrations at specific property locations.

  14. Pedagogical Implications of Score Distribution Pattern and Learner Satisfaction in an Intensive TOEIC Course

    ERIC Educational Resources Information Center

    Kang, Che Chang

    2014-01-01

    The study aimed at investigating TOEIC score distribution patterns and learner satisfaction in an intensive TOEIC course and drew implications for pedagogical practice. A one-group pre-test post-test experiment and a survey on learner satisfaction were conducted on Taiwanese college EFL students (n = 50) in a case study. Results showed that the…

  15. Comparison of Program Effects: The Use of Mastery Scores.

    ERIC Educational Resources Information Center

    Yeh, Jennie P.; Moy, Raymond

    The setting of a cut-off score on a mastery test usually involves a consideration of one or more of the following elements: (1) the distribution of observed test scores; (2) the type of mastery criterion used; (3) the level of acceptable risks of mis-classification; (4) the loss of functions of mis-classifications; and (5) the distribution of true…

  16. l[subscript z] Person-Fit Index to Identify Misfit Students with Achievement Test Data

    ERIC Educational Resources Information Center

    Seo, Dong Gi; Weiss, David J.

    2013-01-01

    The usefulness of the l[subscript z] person-fit index was investigated with achievement test data from 20 exams given to more than 3,200 college students. Results for three methods of estimating θ showed that the distributions of l[subscript z] were not consistent with its theoretical distribution, resulting in general overfit to the item response…

  17. Monopropellant Thruster Development Using a Family of Micro Reactors

    DTIC Science & Technology

    2017-02-17

    Briefing charts from the In-Space Propulsion Branch (AFRL/RQRS). Distribution A: Approved for Public Release; Distribution Unlimited. PA# 17061. Outline topics include: The Air Force Research Lab; Monopropellants for In-Space Propulsion; Near-Term Monopropellant Thruster Challenges; and Supporting Test Requirements.

  18. Developing a Methodology for Risk-Informed Trade-Space Analysis in Acquisition

    DTIC Science & Technology

    2015-01-01

    Excerpts include figure listings (6.10 and 6.11, Research, Development, Test, and Evaluation Cost Distribution under technology mitigation of the upgrade alternative) and a discussion of courses of action, or risk-mitigation behaviors, which take place in the event that the technology is not developed by the milestone date.

  19. A system for measuring the pulse height distribution of ultrafast photomultipliers

    NASA Technical Reports Server (NTRS)

    Abshire, J. B.

    1977-01-01

    A system for measuring the pulse height distribution of gigahertz bandwidth photomultipliers was developed. This system uses a sampling oscilloscope as a sample-hold circuit and has a bandwidth of 12 gigahertz. Test results are given for a static crossed-field photomultiplier tested with a demonstration system. Calculations on system amplitude resolution capabilities are included for currently available system components.

  20. U.S.: proposed federal legislation to allow condom distribution and HIV testing in prison.

    PubMed

    Dolinsky, Anna

    2007-05-01

    Representative Barbara Lee (D-CA) is reintroducing legislation in the U.S. House of Representatives that would require federal correctional facilities to allow community organizations to distribute condoms and provide voluntary counselling and testing for HIV and STDs for inmates. The bill has been referred to the House Judiciary Committee's Subcommittee on Crime, Terrorism, and Homeland Security.

  1. Wind Tunnel Test of an RPV with Shape-Change Control Effector and Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Raney, David L.; Cabell, Randolph H.; Sloan, Adam R.; Barnwell, William G.; Lion, S. Todd; Hautamaki, Bret A.

    2004-01-01

    A variety of novel control effector concepts have recently emerged that may enable new approaches to flight control. In particular, the potential exists to shift the composition of the typical aircraft control effector suite from a small number of high authority, specialized devices (rudder, aileron, elevator, flaps), toward larger numbers of smaller, less specialized, distributed device arrays. The concept envisions effector and sensor networks composed of relatively small high-bandwidth devices able to simultaneously perform a variety of control functions using feedback from disparate data sources. To investigate this concept, a remotely piloted flight vehicle has been equipped with an array of 24 trailing edge shape-change effectors and associated pressure measurements. The vehicle, called the Multifunctional Effector and Sensor Array (MESA) testbed, was recently tested in NASA Langley's 12-ft Low Speed wind tunnel to characterize its stability properties, control authorities, and distributed pressure sensitivities for use in a dynamic simulation prior to flight testing. Another objective was to implement and evaluate a scheme for actively controlling the spanwise pressure distribution using the shape-change array. This report describes the MESA testbed, design of the pressure distribution controller, and results of the wind tunnel test.

  2. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

    Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10⁻⁶ m to 70 × 10⁻⁶ m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
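
    For orientation, the settling relation that underlies hydrometer and pipette analysis in the laminar regime is Stokes' law; the short Python sketch below (material properties are generic assumptions, and it is not part of the cited DEM model) evaluates it across the particle sizes quoted above.

        # Hedged illustration: Stokes settling velocity of a sphere in water.
        def stokes_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
            """Terminal settling velocity (m/s) of a sphere of diameter d (m) in laminar flow."""
            return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

        for d in (2.5e-6, 10e-6, 70e-6):
            print(f"d = {d * 1e6:5.1f} um  ->  v = {stokes_velocity(d):.3e} m/s")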

  3. Bell Test over Extremely High-Loss Channels: Towards Distributing Entangled Photon Pairs between Earth and the Moon

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; Li, Yu-Huai; Zou, Wen-Jie; Li, Zheng-Ping; Shen, Qi; Liao, Sheng-Kai; Ren, Ji-Gang; Yin, Juan; Chen, Yu-Ao; Peng, Cheng-Zhi; Pan, Jian-Wei

    2018-04-01

    Quantum entanglement was termed "spooky action at a distance" in the well-known paper by Einstein, Podolsky, and Rosen. Entanglement is expected to be distributed over longer and longer distances in both practical applications and fundamental research into the principles of nature. Here, we present a proposal for distributing entangled photon pairs between Earth and the Moon using a Lagrangian point at a distance of 1.28 light seconds. One of the most fascinating features of this long-distance distribution of entanglement is as follows: one can perform the Bell test with humans supplying the random measurement settings and recording the results while still maintaining spacelike intervals. To realize a proof-of-principle experiment, we develop an entangled photon source with a 1 GHz generation rate, about 2 orders of magnitude higher than previous results. Violation of Bell's inequality was observed under a total simulated loss of 103 dB with measurement settings chosen by two experimenters. This demonstrates the feasibility of such a long-distance Bell test over extremely high-loss channels, paving the way for one of the ultimate tests of the foundations of quantum mechanics.

  4. 47 CFR 74.13 - Equipment tests.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Equipment tests. 74.13 Section 74.13 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES EXPERIMENTAL RADIO, AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES General; Rules Applicable to All Services in Part 74 § 74.13 Equipment tests. (...

  5. 30 CFR 75.821 - Testing, examination and maintenance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Underground High-Voltage Distribution High-Voltage Longwalls § 75.821 Testing, examination and maintenance. (a) At least once every 7 days, a... must test and examine each unit of high-voltage longwall equipment and circuits to determine that...

  6. 45 CFR 78.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... provided that such abstinence is documented by the results of periodic urine drug testing conducted during that period; and provided further that such drug testing is conducted using an immunoassay test approved by the Food and Drug Administration for commercial distribution or, in the case of a State offense...

  7. 45 CFR 78.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... provided that such abstinence is documented by the results of periodic urine drug testing conducted during that period; and provided further that such drug testing is conducted using an immunoassay test approved by the Food and Drug Administration for commercial distribution or, in the case of a State offense...

  8. 45 CFR 78.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... provided that such abstinence is documented by the results of periodic urine drug testing conducted during that period; and provided further that such drug testing is conducted using an immunoassay test approved by the Food and Drug Administration for commercial distribution or, in the case of a State offense...

  9. 45 CFR 78.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... provided that such abstinence is documented by the results of periodic urine drug testing conducted during that period; and provided further that such drug testing is conducted using an immunoassay test approved by the Food and Drug Administration for commercial distribution or, in the case of a State offense...

  10. 45 CFR 78.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... provided that such abstinence is documented by the results of periodic urine drug testing conducted during that period; and provided further that such drug testing is conducted using an immunoassay test approved by the Food and Drug Administration for commercial distribution or, in the case of a State offense...

  11. Statistics of sampling for microbiological testing of foodborne pathogens

    USDA-ARS?s Scientific Manuscript database

    Despite the many recent advances in protocols for testing for pathogens in foods, a number of challenges still exist. For example, the microbiological safety of food cannot be completely ensured by testing because microorganisms are not evenly distributed throughout the food. Therefore, since it i...

  12. Development of Drop/Shock Test in Microelectronics and Impact Dynamic Analysis for Uniform Board Response

    NASA Astrophysics Data System (ADS)

    Kallolimath, Sharan Chandrashekar

    For the past several years, many researchers have been developing and improving board-level drop test procedures and specifications to quantify the solder-joint reliability performance of consumer electronics products. Predictive finite element analysis (FEA) using simulation software has become a widely accepted verification method that can reduce the time and cost of the physical test process. However, because of testing and metrological limitations, it is not only difficult to simulate the exact drop condition and capture critical measurement data but also tedious to calibrate the system to improve test methods. Moreover, several important and ever-changing factors, such as board flexural rigidity, damping, drop height, and drop orientation, result in a non-uniform stress/strain distribution throughout the test board. In addition, one of the most challenging tasks is to quantify uniform stress and strain distribution throughout the test board and identify critical failure factors. The major contributions of this work lie in four aspects of the drop test in electronics, as follows. First, an analytical FEA model was developed to study the board natural frequencies and system responses with consideration of dynamic stiffness, the damping behavior of the material, and the effect of the impact loading condition. An approach to finding the key parameters that affect stress and strain distributions under the predominant mode responses was proposed and verified against theoretical solutions. The Input-G method was adopted to study board response behavior, and the cut-boundary interpolation method was used to analyze local-model solder-joint stresses with the development of a global/local FEA model in ANSYS software. Second, a no-ring phenomenon during the drop test was identified theoretically when the test board was modeled as both a discrete system and a continuous system. Numerical analysis was then conducted by the FEA method for the detailed geometry of attached chips with solder joints. No-ring test conditions were proposed and verified for the currently used JEDEC standard. The significance of impact loading parameters, such as pulse magnitude, pulse duration, and pulse shape, and of board dynamic parameters, such as linear hysteretic damping and dynamic stiffness, is discussed. Third, Kirchhoff plate theory with the principle of minimum potential energy was adopted to develop an FEA formulation that accounts for material hysteretic damping for the currently used JEDEC board test and for the proposed no-ring response test condition. Fourth, a hexagonal symmetric board model was proposed to address uniform stress and strain distribution throughout the test board and to identify the critical failure factors. Dynamic stresses and strains of the hexagonal board model were then compared with the standard JEDEC board for both the standard and the proposed no-ring test conditions. Overall, this line of research demonstrates that advanced FEA techniques can provide useful insights into the optimal design of drop tests in microelectronics.
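
    As a toy companion to the Input-G idea discussed above, the Python sketch below integrates a single-degree-of-freedom board idealization under a half-sine base-acceleration pulse; the natural frequency, damping ratio, pulse peak and duration are assumptions for illustration, and this is not the dissertation's FEA model.

        # Hedged sketch: SDOF response to a half-sine shock pulse (base excitation).
        import numpy as np
        from scipy.integrate import solve_ivp

        fn, zeta = 200.0, 0.05              # assumed natural frequency (Hz) and damping ratio
        wn = 2 * np.pi * fn
        A, T = 1500 * 9.81, 0.5e-3          # assumed pulse peak (1500 g) and duration (0.5 ms)

        def base_accel(t):
            return A * np.sin(np.pi * t / T) if t < T else 0.0

        def rhs(t, y):
            # y = [relative displacement z, relative velocity]; z'' + 2*zeta*wn*z' + wn^2*z = -a_base(t)
            z, zdot = y
            return [zdot, -2 * zeta * wn * zdot - wn**2 * z - base_accel(t)]

        sol = solve_ivp(rhs, (0.0, 0.01), [0.0, 0.0], max_step=1e-5)
        print("peak relative displacement (m):", np.max(np.abs(sol.y[0])))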

  13. Development of an Agent-Based Model to Investigate the Impact of HIV Self-Testing Programs on Men Who Have Sex With Men in Atlanta and Seattle.

    PubMed

    Luo, Wei; Katz, David A; Hamilton, Deven T; McKenney, Jennie; Jenness, Samuel M; Goodreau, Steven M; Stekler, Joanne D; Rosenberg, Eli S; Sullivan, Patrick S; Cassels, Susan

    2018-06-29

    In the United States HIV epidemic, men who have sex with men (MSM) remain the most profoundly affected group. Prevention science is increasingly being organized around HIV testing as a launch point into an HIV prevention continuum for MSM who are not living with HIV and into an HIV care continuum for MSM who are living with HIV. Increasing the frequency of HIV testing among MSM might decrease future HIV infections by linking men who are living with HIV to antiretroviral care, resulting in viral suppression. Distributing HIV self-test (HIVST) kits is a strategy aimed at increasing HIV testing. Our previous modeling work suggests that the impact of HIV self-tests on transmission dynamics will depend not only on the frequency of tests and testers' behaviors but also on the epidemiological and testing characteristics of the population. The objective of our study was to develop an agent-based model to inform public health strategies for promoting safe and effective HIV self-tests to decrease the HIV incidence among MSM in Atlanta, GA, and Seattle, WA, cities representing profoundly different epidemiological settings. We adapted and extended a network- and agent-based stochastic simulation model of HIV transmission dynamics that was developed and parameterized to investigate racial disparities in HIV prevalence among MSM in Atlanta. The extension comprised several activities: adding a new set of model parameters for Seattle MSM; adding new parameters for tester types (ie, regular, risk-based, opportunistic-only, or never testers); adding parameters for simplified pre-exposure prophylaxis uptake following negative results for HIV tests; and developing a conceptual framework for the ways in which the provision of HIV self-tests might change testing behaviors. We derived city-specific parameters from previous cohort and cross-sectional studies on MSM in Atlanta and Seattle. Each simulated population comprised 10,000 MSM, with target HIV prevalences of 28% and 11% in Atlanta and Seattle, respectively. Previous studies provided sufficient data to estimate the model parameters representing nuanced HIV testing patterns and HIV self-test distribution. We calibrated the models to simulate the epidemics representing Atlanta and Seattle, including matching the expected stable HIV prevalence. The revised model facilitated the estimation of changes in 10-year HIV incidence based on counterfactual scenarios of HIV self-test distribution strategies and their impact on testing behaviors. We demonstrated that the extension of an existing agent-based HIV transmission model was sufficient to simulate the HIV epidemics among MSM in Atlanta and Seattle, to accommodate a more nuanced depiction of HIV testing behaviors than previous models, and to serve as a platform to investigate how HIV self-tests might impact testing and HIV transmission patterns among MSM in Atlanta and Seattle. In our future studies, we will use the model to test how different HIV self-test distribution strategies might affect HIV incidence among MSM. ©Wei Luo, David A Katz, Deven T Hamilton, Jennie McKenney, Samuel M Jenness, Steven M Goodreau, Joanne D Stekler, Eli S Rosenberg, Patrick S Sullivan, Susan Cassels. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 29.06.2018.

  14. Long microwave delay fiber-optic link for radar testing

    NASA Astrophysics Data System (ADS)

    Newberg, I. L.; Gee, C. M.; Thurmond, G. D.; Yen, H. W.

    1990-05-01

    A long fiberoptic delay line is used as a radar repeater to improve radar testing capabilities. The first known generation of a 152-microsecond delayed ideal target at X-band (10 GHz) frequencies, with the phase stability and signal-to-noise ratio (SNR) needed for testing modern high-resolution Doppler radars, is demonstrated with a 31.6-km experimental externally modulated fiberoptic link using a distributed-feedback (DFB) laser. The test application, link configuration, and link testing are discussed.

  15. Structural Qualification Testing of the WindSat Payload Using Sine Bursts Near Structural Resonance

    NASA Technical Reports Server (NTRS)

    Pontius, Jim; Barnes, Donald; Broduer, Steve (Technical Monitor)

    2001-01-01

    Sine burst tests are often used for structural qualification of space flight hardware. In most instances, the driving frequency of the shaker is specified far below the structure's first resonant mode, such that the entire test article sees uniform acceleration. For large structures, this limits qualification testing to lower parts of the structure, or else it over-tests the lower structure to achieve qualification of the upper structure. The WindSat payload, a 10.5-foot-tall graphite/epoxy, titanium, and aluminum radiometer, experiences accelerations at the six-foot-diameter reflector nearly four times those at the spacecraft interface. Due to the size of the payload, the number of bonded joints, and the lightweight reflector support structure design and construction, using static pull testing to qualify all of the bonded joints in the upper structure would result in large, expensive, and extensive test fixturing. Sine burst testing near the first two structural resonant modes was performed on the WindSat payload to achieve the correct load factor distribution up the stack for structural qualification. This presentation discusses how finite element method (FEM) sine burst predictions were used in conjunction with low-level random and sine burst tests to achieve the correct qualification test load factor distribution on the WindSat payload. Also presented is the risk mitigation approach for using the uncorrelated FEM in this procedure.

  16. Synthesis and quality control of fluorodeoxyglucose and performance assessment of Siemens MicroFocus 220 small animal PET scanner

    NASA Astrophysics Data System (ADS)

    Phaterpekar, Siddhesh Nitin

    The scope of this article is to cover the synthesis and quality control procedures involved in the production of fludeoxyglucose (18F-FDG). The article also describes the cyclotron production of the 18F radioisotope and gives a brief overview of the operation and working of a fixed-energy medical cyclotron. The quality control procedures for FDG involve radiochemical and radionuclidic purity tests, pH tests, chemical purity tests, sterility tests, and endotoxin tests. Each of these procedures was carried out for multiple batches of FDG, with a passing rate of 95% among 20 batches. The article also covers the quality assurance steps for the Siemens MicroPET Focus 220 scanner using a Jaszczak phantom. We carried out spatial resolution tests on the scanner, obtaining an average transaxial resolution of 1.775 mm at a 2-3 mm offset. Tests also involved detector efficiency, blank-scan sinograms, and transmission sinograms. A series of radioactivity distribution tests was also carried out on a uniform phantom, quantifying the variations in radioactivity and uniformity using cylindrical ROIs in the transverse region of the final image. The purpose of these quality control tests is to make sure the manufactured FDG is biocompatible with the human body. Quality assurance tests are carried out on PET scanners to ensure efficient performance and to make sure that the quality of the acquired images reflects the radioactivity distribution in the subject of interest.

  17. Thermal-Structural Analysis of PICA Tiles for Solar Tower Test

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Empey, Daniel M.; Squire, Thomas H.

    2009-01-01

    Thermal protection materials used in spacecraft heatshields are subjected to severe thermal and mechanical loading environments during re-entry into the Earth's atmosphere. In order to investigate the reliability of PICA tiles in the presence of high thermal gradients as well as mechanical loads, the authors designed and conducted solar-tower tests. This paper presents the design and analysis work for this test series. Coupled non-linear thermal-mechanical finite element analyses were conducted to estimate in-depth temperature distributions and stress contours for various cases. The first set of analyses, performed on an isolated PICA tile, showed that stresses generated during the tests were below the PICA allowable limit and should not lead to any catastrophic failure during the test. The test results were consistent with analytical predictions. The temperature distribution and magnitude of the measured strains were also consistent with predicted values. The second test series is designed to test arrayed PICA tiles with various gap-filler materials. A nonlinear contact method is used to model the complex geometry with various tiles. The analyses for these coupons predict the stress contours in PICA and inside the gap fillers. Suitable mechanical loads for this architecture will be predicted, which can be applied during the test to exceed the allowable limits and demonstrate failure modes. Thermocouple and strain-gauge data obtained from the solar tower tests will be used for subsequent analyses and validation of the FEM models.

  18. Power of tests of normality for detecting contaminated normal samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thode, H.C. Jr.; Smith, L.A.; Finch, S.J.

    1981-01-01

    Seventeen tests of normality or goodness of fit were evaluated for power at detecting a contaminated normal sample. This study used 1000 replications each of samples of size 12, 17, 25, 33, 50, and 100 from six different contaminated normal distributions. The kurtosis test was the most powerful over all sample sizes and contaminations. The Hogg and weighted Kolmogorov-Smirnov tests were second. The Kolmogorov-Smirnov, chi-squared, Anderson-Darling, and Cramer-von-Mises tests had very low power at detecting contaminated normal random variables. Tables of the power of the tests and the power curves of certain tests are given.
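
    A scaled-down version of such a power study is straightforward to reproduce; the Python sketch below (not the original study's code; the contamination fraction, scale and test selection are assumptions) estimates power at alpha = 0.05 for a kurtosis test, the Shapiro-Wilk test and the Anderson-Darling test against a contaminated normal sample.

        # Hedged sketch: Monte Carlo power of a few normality tests against a contaminated normal.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, reps, alpha = 50, 1000, 0.05
        eps, scale = 0.10, 5.0          # assumed: 10% contamination with 5x standard deviation

        rejections = {"kurtosis": 0, "shapiro": 0, "anderson": 0}
        for _ in range(reps):
            contaminated = rng.random(n) < eps
            x = np.where(contaminated, rng.normal(0.0, scale, n), rng.normal(0.0, 1.0, n))
            if stats.kurtosistest(x).pvalue < alpha:
                rejections["kurtosis"] += 1
            if stats.shapiro(x).pvalue < alpha:
                rejections["shapiro"] += 1
            ad = stats.anderson(x)
            if ad.statistic > ad.critical_values[2]:   # 5% critical value for the normal case
                rejections["anderson"] += 1

        print({name: count / reps for name, count in rejections.items()})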

  19. Pressure-Distribution Measurements on the Tail Surfaces of a Rotating Model of the Design BFW - M31

    NASA Technical Reports Server (NTRS)

    Kohler, M.; Mautz, W.

    1949-01-01

    In order to obtain insight into the flow conditions on tail surfaces on airplanes during spins, pressure-distribution measurements were performed on a rotating model of the design BFW-M31. For the time being, the tests were made for only one angle of attack (alpha = 60 degrees) and various angles of yaw and rudder angles. The results of these measurements are given; the construction of the model, and the test arrangement used are described. Measurements to be performed later and alterations planned in the test arrangement are pointed out.

  20. Test Results From a Simulated High-Voltage Lunar Power Transmission Line

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur; Hervol, David

    2008-01-01

    The Alternator Test Unit (ATU) in the Lunar Power System Facility (LPSF) located at the NASA Glenn Research Center (GRC) in Cleveland, Ohio was modified to simulate high-voltage transmission capability. The testbed simulated a 1 km transmission cable length from the ATU to the LPSF using resistors and inductors installed between the distribution transformers. Power factor correction circuitry was used to compensate for the reactance of the distribution system to improve the overall power factor. This test demonstrated that a permanent magnet alternator can successfully provide high-frequency ac power to a lunar facility located at a distance.

  1. Test Results from a Simulated High Voltage Lunar Power Transmission Line

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur; Hervol, David

    2008-01-01

    The Alternator Test Unit (ATU) in the Lunar Power System Facility (LPSF) located at the NASA Glenn Research Center (GRC) in Cleveland, OH was modified to simulate high voltage transmission capability. The testbed simulated a 1 km transmission cable length from the ATU to the LPSF using resistors and inductors installed between the distribution transformers. Power factor correction circuitry was used to compensate for the reactance of the distribution system to improve the overall power factor. This test demonstrated that a permanent magnet alternator can successfully provide high frequency AC power to a lunar facility located at a distance.
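
    The sizing logic behind power factor correction of the kind mentioned in these two reports can be illustrated in a few lines of Python; the load, initial power factor and target power factor below are assumptions, not values from the NASA testbed.

        # Hedged illustration: shunt reactive compensation needed to raise the power factor.
        import math

        P = 10e3             # assumed real power delivered, W
        pf_initial = 0.70    # assumed uncorrected power factor (lagging)
        pf_target = 0.95     # desired power factor

        phi1, phi2 = math.acos(pf_initial), math.acos(pf_target)
        q_comp = P * (math.tan(phi1) - math.tan(phi2))   # required compensating reactive power
        print(f"Shunt compensation needed: {q_comp / 1e3:.2f} kVAr")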

  2. Wind tunnel pressure distribution tests on a series of biplane wing models Part II : effects of changes in decalage, dihedral, sweepback and overhang

    NASA Technical Reports Server (NTRS)

    Knight, Montgomery; Noyes, Richard W

    1929-01-01

    This preliminary report furnishes information on the changes in the forces on each wing of a biplane cellule when the decalage, dihedral, sweepback and overhang are separately varied. The data were obtained from pressure distribution tests made in the Atmospheric Wind Tunnel of the Langley Memorial Aeronautical Laboratory. Since each test was carried up to 90 degree angle of attack, the results may be used in the study of stalled flight and of spinning and in the structural design of biplane wings.

  3. The combustion of different air distribution of foursquare tangential circle boiler by numerical simulation

    NASA Astrophysics Data System (ADS)

    Guo, Yue; Du, Lei; Jiang, Long; Li, Qing; Zhao, Zhenning

    2017-01-01

    In this paper, the combustion and NOx emission characteristics of a 300 MW tangentially fired boiler are simulated. We obtain the flue gas velocity field in the furnace, the temperature field, and the concentration distributions of the combustion products. The simulated velocity, temperature, oxygen concentration, and NOx emissions under the waist-shaped air distribution condition agree well with the test results, verifying the rationality of the model. The flow field in the furnace, the combustion, and the NOx emission characteristics are then simulated for different operating conditions, comparing waist-shaped air distribution in the primary and secondary zones, uniform air distribution, and a pagoda-type air distribution. The results show that waist-shaped air distribution is useful for reducing NOx emissions.

  4. An Experimental Test of the Concentration Index

    PubMed Central

    Bleichrodt, Han; Rohde, Kirsten I.M.; Van Ourti, Tom

    2016-01-01

    The concentration index is widely used to measure income-related inequality in health. No insight exists, however, into whether the concentration index connects with people's preferences about distributions of income and health and whether a reduction in the concentration index reflects an increase in social welfare. We explored this question by testing the central assumption underlying the concentration index and found that it was systematically violated. We also tested the validity of alternative health inequality measures that have been proposed in the literature. Our data showed that decreases in the spread of income and health were considered socially desirable, but decreases in the correlation between income and health were not necessarily so. Support for a condition implying that the inequality in the distribution of income and in the distribution of health can be considered separately was mixed. PMID:22307035
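
    For reference, the standard concentration index can be computed as twice the covariance between health and the fractional income rank, divided by mean health; the Python sketch below uses synthetic income and health data (the variables and the mild income gradient are assumptions) and is not the authors' experimental analysis.

        # Hedged sketch: the standard (rank-covariance) concentration index.
        import numpy as np

        def concentration_index(income, health):
            order = np.argsort(income)                   # rank individuals from poorest to richest
            h = np.asarray(health, dtype=float)[order]
            n = h.size
            rank = (np.arange(1, n + 1) - 0.5) / n       # fractional rank
            return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

        rng = np.random.default_rng(4)
        income = rng.lognormal(10, 0.5, 1000)
        health = 60 + 0.001 * income + rng.normal(0, 5, 1000)   # health mildly increasing with income
        print(concentration_index(income, health))              # positive: health concentrated among the rich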

  5. Pressure distribution data from tests of 2.29-meter (7.5-ft.) span EET high-lift research model in Langley 4- by 7-meter tunnel

    NASA Technical Reports Server (NTRS)

    Morgan, H. L., Jr.

    1982-01-01

    A 2.29 m (7.5 ft.) span high-lift research model equipped with full-span leading-edge slat and part-span double-slotted trailing-edge flap was tested in the Langley 4- by 7-Meter Tunnel to determine the low-speed performance characteristics of a representative high-aspect-ratio supercritical wing. These tests were performed in support of the Energy Efficient Transport (EET) program which is one element of the Aircraft Energy Efficiency (ACEE) project. Static longitudinal forces and moments and chordwise pressure distributions at three spanwise stations were measured for cruise, climb, two take-off flap, and two landing flap wing configurations. The tabulated and plotted pressure distribution data are presented without analysis or discussion.

  6. Pressure distribution data from tests of 2.29 M (7.5 feet) span EET high-lift transport aircraft model in the Ames 12-foot pressure tunnel

    NASA Technical Reports Server (NTRS)

    Kjelgaard, S. O.; Morgan, H. L., Jr.

    1983-01-01

    A high-lift transport aircraft model equipped with full-span leading-edge slat and part-span double-slotted trailing-edge flap was tested in the Ames 12-ft pressure tunnel to determine the low-speed performance characteristics of a representative high-aspect-ratio supercritical wing. These tests were performed in support of the Energy Efficient Transport (EET) program which is one element of the Aircraft Energy Efficiency (ACEE) project. Static longitudinal forces and moments and chordwise pressure distributions at three spanwise stations were measured for cruise, climb, two take-off flap, and two landing flap wing configurations. The tabulated and plotted pressure distribution data is presented without analysis or discussion.

  7. The distribution and frequency of blood lipid testing by sociodemographic status among adults in Auckland, New Zealand.

    PubMed

    Exeter, Daniel J; Moss, Lauren; Zhao, Jinfeng; Kyle, Cam; Riddell, Tania; Jackson, Rod; Wells, Susan

    2015-09-01

    National cardiovascular disease (CVD) guidelines recommend that adults have cholesterol levels monitored regularly. However, little is known about the extent and equity of cholesterol testing in New Zealand. Our objective was to investigate the distribution and frequency of blood lipid testing by sociodemographic status in Auckland, New Zealand. We anonymously linked five national health datasets (primary care enrolment, laboratory tests, pharmaceuticals, hospitalisations and mortality) to identify adults aged ≥25 years without CVD or diabetes who had their lipids tested in 2006-2010, by age, gender, ethnicity and area of residence and deprivation. Multivariate logistic regression was used to estimate the likelihood of testing associated with these factors. Of the 627 907 eligible adults, 66.3% had at least one test between 2006 and 2010. Annual testing increased from 24.7% in 2006 to 35.1% in 2010. Testing increased with age similarly for men and women. Indian people were 87% more likely than New Zealand European and Others (NZEO) to be tested, Pacific people 8% more likely, but rates for Maori were similar to NZEO. There was marked variation within the region, with residents of the most deprived areas less likely to be tested than residents in the least deprived areas. Understanding differences within and between population groups supports the development of targeted strategies for better service utilisation. While lipid testing has increased, sociodemographic variations persist by place of residence and deprivation. Of the high CVD risk populations, lipid testing for Maori and Pacific is not being conducted according to need.
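
    The kind of multivariate logistic regression mentioned above can be sketched as follows in Python with statsmodels; the data frame, variable names and categories are hypothetical stand-ins, not the linked national datasets, so the fitted odds ratios are meaningless except as a template.

        # Hedged sketch: logistic regression of testing status on sociodemographic factors.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 5000
        df = pd.DataFrame({
            "tested": rng.integers(0, 2, n),                                   # hypothetical outcome
            "age_group": rng.choice(["25-44", "45-64", "65+"], n),
            "gender": rng.choice(["F", "M"], n),
            "ethnicity": rng.choice(["NZEO", "Maori", "Pacific", "Indian"], n),
            "dep_quintile": rng.integers(1, 6, n),
        })
        model = smf.logit("tested ~ C(age_group) + C(gender) + C(ethnicity) + C(dep_quintile)",
                          data=df).fit()
        print(np.exp(model.params))   # odds ratios for being tested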

  8. Testing the shape of distributions of weather data

    NASA Astrophysics Data System (ADS)

    Baccon, Ana L. P.; Lunardi, José T.

    2016-08-01

    The characterization of the statistical distributions of observed weather data is of crucial importance both for the construction and for the validation of weather models, such as weather generators (WG's). An important class of WG's (e.g., the Richardson-type generators) reduces the time series of each variable to a time series of its residual elements, and the residuals are often assumed to be normally distributed. In this work we propose an approach to investigate whether the shape assumed for the distribution of residuals is consistent with the observed data of a given site. Specifically, this procedure tests whether the same distribution shape for the residual noise is maintained over time. The proposed approach is an adaptation to climate time series of a procedure first introduced to test the shapes of distributions of growth rates of business firms aggregated in large panels of short time series. We illustrate the procedure by applying it to the residual time series of maximum temperature at a given location, and investigate the empirical consistency of two assumptions, namely i) the most common assumption that the distribution of the residuals is Gaussian and ii) that the residual noise has a time-invariant shape which coincides with the empirical distribution of all the residual noise of the whole time series pooled together.
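
    One simple way to probe whether the residual-noise shape is time invariant, in the spirit of the approach described above (though not the authors' actual procedure), is to standardize the residuals within blocks of time and compare each block with the pooled noise from the remaining blocks, as in the following Python sketch with synthetic residuals.

        # Hedged sketch: block-wise two-sample KS comparison of standardized residual noise.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        residuals = rng.standard_t(df=5, size=20 * 365)       # 20 hypothetical years of daily residuals
        blocks = residuals.reshape(20, 365)
        standardized = (blocks - blocks.mean(axis=1, keepdims=True)) / blocks.std(axis=1, ddof=1, keepdims=True)

        for year, block in enumerate(standardized):
            pooled_others = np.delete(standardized, year, axis=0).ravel()
            p = stats.ks_2samp(block, pooled_others).pvalue
            print(f"year {year:2d}: KS p-value vs pooled residual noise = {p:.3f}")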

  9. Mobile Uninterruptible Power Supply

    NASA Technical Reports Server (NTRS)

    Mears, Robert L.

    1990-01-01

    Proposed mobile unit provides 20 kVA of uninterruptible power. Used with mobile secondary power-distribution centers to provide power to test equipment with minimal cabling, hazards, and obstacles. Wheeled close to test equipment and system being tested so only short cable connections needed. Quickly moved and set up in new location. Uninterruptible power supply intended for tests in which data would be lost or equipment damaged by even a transient power failure.

  10. Developing a Numerical Ability Test for Students of Education in Jordan: An Application of Item Response Theory

    ERIC Educational Resources Information Center

    Abed, Eman Rasmi; Al-Absi, Mohammad Mustafa; Abu shindi, Yousef Abdelqader

    2016-01-01

    The purpose of the present study is to develop a test to measure the numerical ability of students of education. The sample of the study consisted of 504 students from 8 universities in Jordan. The final draft of the test contains 45 items distributed among 5 dimensions. The results revealed acceptable psychometric properties of the test;…

  11. Experimental Testing of a Van De Graaff Generator as an Electromagnetic Pulse Generator

    DTIC Science & Technology

    2016-07-01

    Thesis: Experimental Testing of a Van de Graaff Generator as an Electromagnetic Pulse Generator (AFIT-ENP-MS-16-S-075). Approved for public release; distribution unlimited.

  12. 10 CFR Appendix A to Subpart K of... - Uniform Test Method for Measuring the Energy Consumption of Distribution Transformers

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... test more than one unit of a basic model to determine the efficiency of that basic model, the... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2013-01-01 2013-01-01 false Uniform Test Method for Measuring the Energy Consumption...

  13. 10 CFR Appendix A to Subpart K of... - Uniform Test Method for Measuring the Energy Consumption of Distribution Transformers

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... test more than one unit of a basic model to determine the efficiency of that basic model, the... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2012-01-01 2012-01-01 false Uniform Test Method for Measuring the Energy Consumption...

  14. 10 CFR Appendix A to Subpart K of... - Uniform Test Method for Measuring the Energy Consumption of Distribution Transformers

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... test more than one unit of a basic model to determine the efficiency of that basic model, the... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2010-01-01 2010-01-01 false Uniform Test Method for Measuring the Energy Consumption...

  15. 10 CFR Appendix A to Subpart K of... - Uniform Test Method for Measuring the Energy Consumption of Distribution Transformers

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... test more than one unit of a basic model to determine the efficiency of that basic model, the... one ampere and the test current is limited to 15 percent of the winding current. Connect the... 10 Energy 3 2011-01-01 2011-01-01 false Uniform Test Method for Measuring the Energy Consumption...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denslow, Kayte M.; Bontha, Jagannadha R.; Adkins, Harold E.

    This document presents the visual and ultrasonic PulseEcho critical velocity test results obtained from the System Performance test campaign that was completed in September 2012 with the Remote Sampler Demonstration (RSD)/Waste Feed Flow Loop cold-test platform located at the Monarch test facility in Pasco, Washington. This report is intended to complement and accompany the report that will be developed by WRPS on the design of the System Performance simulant matrix, the analysis of the slurry test sample concentration and particle size distribution (PSD) data, and the design and construction of the RSD/Waste Feed Flow Loop cold-test platform.

  17. Examining System-Wide Impacts of Solar PV Control Systems with a Power Hardware-in-the-Loop Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Fuller, Jason C.; Schneider, Kevin P.

    2014-06-08

    High penetration levels of distributed solar PV power generation can lead to adverse power quality impacts, such as excessive voltage rise, voltage flicker, and reactive power values that result in unacceptable voltage levels. Advanced inverter control schemes have been developed that have the potential to mitigate many power quality concerns. However, local closed-loop control may lead to unintended behavior in deployed systems as complex interactions can occur between numerous operating devices. To enable the study of the performance of advanced control schemes in a detailed distribution system environment, a test platform has been developed that integrates Power Hardware-in-the-Loop (PHIL) with concurrent time-series electric distribution system simulation. In the test platform, GridLAB-D, a distribution system simulation tool, runs a detailed simulation of a distribution feeder in real-time mode at the Pacific Northwest National Laboratory (PNNL) and supplies power system parameters at a point of common coupling. At the National Renewable Energy Laboratory (NREL), a hardware inverter interacts with grid and PV simulators emulating an operational distribution system. Power output from the inverters is measured and sent to PNNL to update the real-time distribution system simulation. The platform is described and initial test cases are presented. The platform is used to study the system-wide impacts and the interactions of inverter control modes (constant power factor and active Volt/VAr control) when integrated into a simulated IEEE 8500-node test feeder. We demonstrate that this platform is well-suited to the study of advanced inverter controls and their impacts on the power quality of a distribution feeder. Additionally, results are used to validate GridLAB-D simulations of advanced inverter controls.
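
    As a point of reference for the active Volt/VAr mode studied above, a typical droop characteristic can be written as a piecewise-linear curve; the Python sketch below uses assumed breakpoints and a generic reactive-power limit and is not the GridLAB-D or hardware-inverter implementation.

        # Hedged sketch: four-point Volt/VAr droop (inject VArs at low voltage, absorb at high voltage).
        import numpy as np

        def volt_var(v_pu, q_max=0.44, v1=0.95, v2=0.98, v3=1.02, v4=1.05):
            """Reactive power command in per unit of rated VA from a piecewise-linear droop."""
            return np.interp(v_pu, [v1, v2, v3, v4], [q_max, 0.0, 0.0, -q_max])

        for v in (0.94, 0.97, 1.00, 1.03, 1.06):
            print(f"V = {v:.2f} pu -> Q = {volt_var(v):+.3f} pu")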

  18. Performance test for a solar water heater

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Two reports describe procedures and results of performance tests on domestic solar powered hot water system. Performance tests determine amount of energy collected by system, amount of energy delivered to solar source, power required to operate system and maintain proper tank temperature, overall system efficiency, and temperature distribution in tank.

  19. 49 CFR 178.975 - Top lift test.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the test. For all Large Packagings design types designed to be lifted from the top, there may be no permanent deformation which renders the Large Packagings unsafe for transport and no loss of contents. ... load being evenly distributed. (c) Test method. (1) A Large Packaging must be lifted in the manner for...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hacke, P.; Terwilliger, K.; Koch, S.

    Three crystalline silicon module designs were distributed in five replicas each to five laboratories for testing according to the IEC 62804 (Committee Draft) system voltage durability qualification test for crystalline silicon photovoltaic (PV) modules. The stress tests were performed in environmental chambers at 60 degrees C, 85% relative humidity, 96 h, and with module nameplate system voltage applied.

  1. SIMULATIONS OF TWO-WELL TRACER TESTS IN STRATIFIED AQUIFERS AT THE CHALK RIVER AND THE MOBILE SITES

    EPA Science Inventory

    A simulation of two-well injection-withdrawal tracer tests in stratified granular aquifers is presented for two widely separated sites substantially different in terms of vertical distributions of hydraulic conductivity, well spacings, flow rates, test durations and tracer travel...

  2. Pre-testing Orientation for the Disadvantaged.

    ERIC Educational Resources Information Center

    Mihalka, Joseph A.

    A pre-testing orientation was incorporated into the Work Incentives Program, a pre-vocational program for disadvantaged youth. Test-taking skills were taught in seven and one half hours of instruction and a variety of methods were used to provide a sequential experience with distributed learning, positive reinforcement, and immediate feedback of…

  3. 76 FR 59751 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... with FINRA's practice of including ``pre-test'' questions on certain qualification examinations, which... scoring purposes, each examination includes 10 additional, unidentified pre-test questions that do not... of which are scored. The 10 pre-test questions are randomly distributed throughout the examination...

  4. 78 FR 42581 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-16

    ... practice of including ``pre-test'' questions on certain qualification examinations, which is designed to..., the examination includes 10 additional, unidentified pre-test questions that do not contribute towards... scored. The 10 pre-test questions are randomly distributed throughout the examination. Availability of...

  5. 76 FR 55443 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ..., each examination includes 10 additional, unidentified ``pre-test'' questions that do not contribute towards the candidate's score. The 10 pre-test questions are randomly distributed throughout the... customers, the integrity of the marketplace or the public. The examination will test applicants on general...

  6. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while accounting for their dependence structure using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling dependence between random variables: it is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method using data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
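
    To fix ideas about what combining two diagnostic test results buys in terms of AUC, the Python sketch below computes the empirical AUC (a rank statistic) for two synthetic test scores and for an equally weighted combination; it is a toy illustration, not the NPI-with-copula method, and the score distributions and weights are assumptions.

        # Hedged sketch: empirical AUC of two tests and of their simple linear combination.
        import numpy as np

        def empirical_auc(diseased, healthy):
            """P(diseased score > healthy score), counting ties as one half."""
            d = np.asarray(diseased)[:, None]
            h = np.asarray(healthy)[None, :]
            return (d > h).mean() + 0.5 * (d == h).mean()

        rng = np.random.default_rng(7)
        x1_d, x1_h = rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 300)   # test 1
        x2_d, x2_h = rng.normal(0.8, 1.0, 200), rng.normal(0.0, 1.0, 300)   # test 2
        combo_d, combo_h = 0.5 * (x1_d + x2_d), 0.5 * (x1_h + x2_h)

        print("AUC test 1:  ", empirical_auc(x1_d, x1_h))
        print("AUC test 2:  ", empirical_auc(x2_d, x2_h))
        print("AUC combined:", empirical_auc(combo_d, combo_h))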

  7. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  8. Comparing Latent Distributions.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    1980-01-01

    The problem of comparing the latent abilities of groups of individuals (as opposed to their observable test scores) is considered. Tests of equality of means, variances, and longitudinal applications are discussed. (JKS)

  9. [The Freiburg monosyllable word test in postoperative cochlear implant diagnostics].

    PubMed

    Hey, M; Brademann, G; Ambrosch, P

    2016-08-01

    The Freiburg monosyllable word test represents a central tool of postoperative cochlear implant (CI) diagnostics. The objective of this study is to test the equivalence of different word lists by analysing word comprehension. For patients whose CI has been implanted for more than 5 years, the distribution of suprathreshold speech intelligibility outcomes will also be analysed. In a retrospective data analysis, the word recognition scores of 626 CI users were evaluated using a total of 5211 lists of 20 words each. The analysis of word comprehension within each list shows differences in the means and in the shape of the distribution functions. Some lists show a significant difference between their mean word recognition and the overall mean. The Freiburg monosyllable word test is easy to administer at suprathreshold speech levels for CI recipients and typically reaches a saturation level above 80 %. It can be performed successfully by the majority of CI patients. The limited balance of the test lists leads to the conclusion that an adaptive test procedure based on the Freiburg monosyllable test does not make sense. The test could be restructured by re-sorting all words across lists, or by omitting individual words from a test list, to increase its reliability. The results show that speech intelligibility in quiet should also be investigated in CI recipients at levels below 70 dB.

  10. Estimation of lifetime distributions on 1550-nm DFB laser diodes using Monte-Carlo statistic computations

    NASA Astrophysics Data System (ADS)

    Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc

    2004-09-01

    High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, grounded in the physics of failure and in electrical or optical parameters, allowing both a strong reduction in test time and long-term reliability prediction. Unfortunately, in the case of a mature technology, it is increasingly difficult to calculate average lifetimes and failure rates (FITs) using ageing tests, in particular because of extremely low failure rates. For present laser diode technologies, times to failure tend to be on the order of 10⁶ hours under typical conditions (Popt = 10 mW and T = 80°C). These ageing tests must be performed on more than 100 components aged for 10,000 hours under a mix of temperature and drive current conditions leading to acceleration factors above 300-400. These conditions are costly and time consuming, and they cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate the lifetime distribution and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, distributed feedback single-mode laser diodes (DFB-LD) used in 1550 nm telecommunication networks operating at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters were measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted from experimental measurements.
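
    The general idea of propagating fitted degradation-law parameters through Monte Carlo draws can be sketched in a few lines of Python; the power-law drift form, the parameter distributions and the 20% failure criterion below are assumptions for illustration, not the fitted laws or results of this paper.

        # Hedged sketch: Monte Carlo lifetime distribution from a drift law dI/I = A * t**m.
        import numpy as np

        rng = np.random.default_rng(8)
        n = 100_000
        failure_criterion = 0.20                                    # assumed maximum acceptable shift

        A = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=n)     # drift amplitude (assumed distribution)
        m = rng.normal(loc=0.45, scale=0.05, size=n)                # drift exponent (assumed distribution)

        time_to_failure = (failure_criterion / A) ** (1.0 / m)      # hours, from A * t**m = criterion
        print("median lifetime (h):", np.median(time_to_failure))
        print("fraction failing before 1e6 h:", np.mean(time_to_failure < 1e6))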

  11. Single well thermal tracer test, a new experimental set up for characterizing thermal transport in fractured media

    NASA Astrophysics Data System (ADS)

    de La Bernardie, Jérôme; Bour, Olivier; Guihéneuf, Nicolas; Chatton, Eliot; Labasque, Thierry; Longuevergne, Laurent; Le Lay, Hugo; Koch, Floriant; Gerard, Marie-Françoise; Le Borgne, Tanguy

    2017-04-01

    Thermal transport in fractured media depends on the hydrological properties of the fractures and the thermal characteristics of the rock. Tracer tests using heat as a tracer can thus be a good alternative for characterizing fractured media for shallow geothermal needs. This study investigates the possibility of implementing a new thermal tracer test set-up, the single well thermal tracer test, to characterize the hydraulic and thermal transport properties of fractured crystalline rock. The experimental set-up is based on injecting hot water into a fracture isolated by a double straddle packer in the borehole while pumping and monitoring the temperature in a fracture crossing the same borehole at a higher elevation. One difficulty comes from the fact that injection and withdrawal are achieved in the same borehole, involving thermal losses along the injection tube that may disturb the heat recovery signal. To localize the heat inflows accurately, we implemented Fiber-Optic Distributed Temperature Sensing (FO-DTS), which allows temperature monitoring with high spatial and temporal resolution (29 centimeters and 30 seconds, respectively). Several tests, at different pumping and injection rates, were performed in a crystalline rock aquifer at the experimental site of Ploemeur (H+ observatory network). We show through signal processing how the thermal breakthrough may be extracted from the fiber-optic distributed temperature measurements. In particular, we demonstrate how detailed distributed temperature measurements were useful for identifying different inflows and for estimating how much heat was transported and stored within the fracture network. The thermal breakthrough curves of the single well thermal tracer tests were then interpreted with a simple analytical model to characterize the hydraulic and thermal properties of the fractured medium. We finally discuss the advantages of these tests compared to cross-borehole thermal tracer tests.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Lirong; Truex, Michael J.; Kananizadeh, Negin

    In situ anaerobic biological processes are widely applied for dechlorination of chlorinated solvents in groundwater. A wide range of organic substrates have been tested and applied to support the dechlorination processes. Vegetable oils are a promising substrate and have been shown to induce effective dechlorination, to have limited geochemical impacts, and to show good longevity. Distribution of vegetable oil in the subsurface, because it is a non-aqueous phase material, has typically been addressed by creating emulsified oil solutions. In this study, inexpensive waste vegetable oils were suspended in a xanthan gum solution, a shear-thinning fluid, as an alternative oil delivery mechanism. The stability, oil droplet size and distribution, and rheological behavior of the oil suspensions created in the xanthan solutions were studied in batch experiments. The injectability of the suspensions and the oil distribution in porous medium were evaluated in column tests. Numerical modeling of the oil droplet transport and distribution in porous media was conducted to help interpret the column-test data. Batch studies showed that simple mixing of vegetable oil and xanthan solution produced stable suspensions of the oil as micron-size droplets. The mixture rheology retains shear-thinning properties that facilitate improved uniformity of substrate distribution in heterogeneous aquifers. Column tests demonstrated successful injection of the vegetable oil suspension into porous medium. This study provided evidence that vegetable oil suspensions in xanthan are a potential substrate to support in situ anaerobic bioremediation with favorable injection properties.

  13. A New Polarimetric Classification Approach Evaluated for Agricultural Crops

    NASA Astrophysics Data System (ADS)

    Hoekman, D.

    2003-04-01

    Statistical properties of the polarimetric backscatter behaviour of a single homogeneous area are described by the Wishart distribution or its marginal distributions. These distributions do not necessarily describe well the statistics of a collection of homogeneous areas of the same class, because of variation in, for example, biophysical parameters. Using Kolmogorov-Smirnov (K-S) tests of fit, it is shown that, for example, the Beta distribution is a better descriptor for the coherence magnitude, and the log-normal distribution for the backscatter level. An evaluation is given for a number of agricultural crop classes, grasslands and fruit tree plantations at the Flevoland test site, using an AirSAR (C-, L- and P-band polarimetric) image of 3 July 1991. A new reversible transform of the covariance matrix into backscatter intensities is introduced in order to describe the full polarimetric target properties in a mathematically alternative way, allowing for the development of simple, versatile and robust classifiers. Moreover, it allows for polarimetric image segmentation using conventional approaches. The effect of azimuthally asymmetric backscatter behaviour on the classification results is discussed. Several models are proposed and results are compared with results from the literature for the same test site. It can be concluded that the introduced classifiers perform very well, with levels of accuracy for this test site of 90.4% for C-band, 88.7% for L-band and 96.3% for the combination of C- and L-band.
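
    The following Python sketch illustrates the kind of Kolmogorov-Smirnov tests of fit mentioned above, comparing a Beta against a normal fit for a bounded coherence-magnitude sample and checking a log-normal hypothesis for backscatter levels. The synthetic samples and parameter values are assumptions for illustration; note also that K-S p-values are only approximate when the fitted parameters come from the same data.

    # Hedged sketch: K-S tests of fit for candidate distributions of polarimetric
    # observables. All data below are synthetic placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Synthetic "coherence magnitude" values in [0, 1] for one crop class.
    coherence = rng.beta(a=8.0, b=3.0, size=400)

    # Candidate 1: Beta distribution (bounded on [0, 1]).
    a, b, loc, scale = stats.beta.fit(coherence, floc=0.0, fscale=1.0)
    ks_beta = stats.kstest(coherence, stats.beta(a, b, loc=loc, scale=scale).cdf)

    # Candidate 2: normal distribution, for comparison.
    mu, sigma = stats.norm.fit(coherence)
    ks_norm = stats.kstest(coherence, stats.norm(mu, sigma).cdf)

    print("Beta fit   : D = %.3f, p = %.3f" % (ks_beta.statistic, ks_beta.pvalue))
    print("Normal fit : D = %.3f, p = %.3f" % (ks_norm.statistic, ks_norm.pvalue))

    # Synthetic backscatter levels (linear power): test a log-normal hypothesis by
    # applying a normality test of fit to the log-transformed values.
    backscatter = rng.lognormal(mean=-2.0, sigma=0.4, size=400)
    log_bs = np.log(backscatter)
    ks_lognorm = stats.kstest((log_bs - log_bs.mean()) / log_bs.std(ddof=1), "norm")
    print("Log-normal fit for backscatter: D = %.3f, p = %.3f"
          % (ks_lognorm.statistic, ks_lognorm.pvalue))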

  14. Engine-Scale Combustor Rig Designed, Fabricated, and Tested for Combustion Instability Control Research

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Breisacher, Kevin J.

    2000-01-01

    Low-emission combustor designs are prone to combustor instabilities. Because active control of these instabilities may allow future combustors to meet both stringent emissions and performance requirements, an experimental combustor rig was developed for investigating methods of actively suppressing combustion instabilities. The experimental rig has features similar to a real engine combustor and exhibits instabilities representative of those in aircraft gas turbine engines. Experimental testing in the spring of 1999 demonstrated that the rig can be tuned to closely represent an instability observed in engine tests. Future plans are to develop and demonstrate combustion instability control using this experimental combustor rig. The NASA Glenn Research Center at Lewis Field is leading the Combustion Instability Control program to investigate methods for actively suppressing combustion instabilities. Under this program, a single-nozzle, liquid-fueled research combustor rig was designed, fabricated, and tested. The rig has many of the complexities of a real engine combustor, including an actual fuel nozzle and swirler, dilution cooling, and an effusion-cooled liner. Prior to designing the experimental rig, a survey of aircraft engine combustion instability experience identified an instability observed in a prototype engine as a suitable candidate for replication. The frequency of the instability was 525 Hz, with an amplitude of approximately 1.5-psi peak-to-peak at a burner pressure of 200 psia. The single-nozzle experimental combustor rig was designed to preserve subcomponent lengths, cross sectional area distribution, flow distribution, pressure-drop distribution, temperature distribution, and other factors previously found to be determinants of burner acoustic frequencies, mode shapes, gain, and damping. Analytical models were used to predict the acoustic resonances of both the engine combustor and proposed experiment. The analysis confirmed that the test rig configuration and engine configuration had similar longitudinal acoustic characteristics, increasing the likelihood that the engine instability would be replicated in the rig. Parametric analytical studies were performed to understand the influence of geometry and condition variations and to establish a combustion test plan. Cold-flow experiments verified that the design values of area and flow distributions were obtained. Combustion test results established the existence of a longitudinal combustion instability in the 500-Hz range with a measured amplitude approximating that observed in the engine. Modifications to the rig configuration during testing also showed the potential for injector independence. The research combustor rig was developed in partnership with Pratt & Whitney of West Palm Beach, Florida, and United Technologies Research Center of East Hartford, Connecticut. Experimental testing of the combustor rig took place at United Technologies Research Center.

  15. Effects of powder characteristics on injection molding and burnout cracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, G.; French, K.W.

    Silicon nitride particle size and size distributions were varied widely to determine their effects on burnout cracking of injection-molded test parts containing thick and thin sections. Elimination of internal cracking required significant burnout shrinkage, which was not achieved by changing particle size and size distribution. However, isopressing of the test parts after burnout provided the dimensional shrinkage necessary to produce crack-free components.

  16. Assessment of the Tensile Properties for Single Fibers

    DTIC Science & Technology

    2018-02-01

    Approved for public release; distribution is unlimited. A novel experimental test method is presented to assess the tensile properties... Fig. 5: The coordinate system defining the experimental setup with the x-direction along...

  17. Biology and Ecology of Sand Flies (Diptera: Psychodidae) in the Middle East, with Special Emphasis on Phlebotomus Papatasi and Phlebotomus Alexandri

    DTIC Science & Technology

    2009-03-06

    Figure excerpts: Predicted distribution of Phlebotomus papatasi in the Middle East; Jackknife test of training gain for P. papatasi; Predicted distribution of Phlebotomus alexandri in the Middle East; Jackknife test of... Text excerpt: ...epithelial cells by approximately 72 hours post ingestion (Sacks and Kamhawi 2001, Bates 2007). Approximately one week after ingestion, the parasites...

  18. Oil Pharmacy at the Thermal Protection System Facility

    NASA Image and Video Library

    2017-08-08

    An overall view of the Oil Pharmacy operated under the Test and Operations Support Contract, or TOSC. The facility consolidated the storage and distribution of petroleum products used in equipment maintained under the contract. This included standardized naming and testing processes, and provided a central location for the distribution of oils used in everything from simple machinery to the crawler-transporter and the cranes in the Vehicle Assembly Building.

  19. Blue Whale Behavioral Response Study and Field Testing of the New Bioacoustic Probe

    DTIC Science & Technology

    2011-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Blue Whale Behavioral Response Study & Field Testing of... email: jhildebrand@ucsd.edu. Award Number: N000140811221. LONG-TERM GOALS, Task 1 (Blue Whale Behavioral Response Study): The behavioral response of large whales to commercial shipping and other low-frequency anthropogenic sound is not well understood. The PCAD model (NRC 2005...

  20. Super Corr-A Solvent Replacement Study

    DTIC Science & Technology

    2011-05-12

    Conclusions: no tested lubricants met all first article testing requirements; DuPont Vertrel SDG and Kyzen... John Stropki, Battelle. Distribution Statement A: Approved for public release; distribution is unlimited.

  1. FTIR Analyses of Hypervelocity Impact Deposits: DebriSat Tests

    DTIC Science & Technology

    2015-03-27

    Aerospace Concept Design Center advised on selection of materials for various subsystems. Test chamber lined with "soft catch" foam panels to trap... Distribution Statement A: Approved for public release; distribution unlimited. The preshot target was a multi-shock shield supplied by NASA designed to catch the projectile. It consisted of seven bumper panels consisting of...

  2. Smoothing and Equating Methods Applied to Different Types of Test Score Distributions and Evaluated with Respect to Multiple Equating Criteria. Research Report. ETS RR-11-20

    ERIC Educational Resources Information Center

    Moses, Tim; Liu, Jinghua

    2011-01-01

    In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…

  3. REPTILES OF THE NEVADA TEST SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanner, W.W.; Jorgensen, C.D.

    1963-10-01

    Results are reported from an ecological study of reptiles of the Nevada Test Site. Twenty-nine species of reptiles were found, including one tortoise, thirteen lizards, and fifteen snakes. The effects of nuclear detonations were apparent in the distribution of reptiles near ground zero. The degree of disturbance decreased outward from ground zero. Food and suitable habitat were the main factors affecting the distribution of reptiles. (C.H.)

  4. Sensitivity of Ethiopian aquatic macroinvertebrates to the pesticides endosulfan and diazinon, compared to literature data.

    PubMed

    Teklu, Berhan M; Retta, Negussie; Van den Brink, Paul J

    2016-08-01

    The aims of the present study were to present a methodology for toxicity tests that can be used when analytical resources to verify the test concentrations are limited, and to evaluate whether the sensitivity of a limited number of Ethiopian species to pesticides differs from literature values for, mainly, temperate species. Acute toxicity tests were performed using three Ethiopian aquatic invertebrate species, one crustacean (Diaphanosoma brachyurum) and two insects (Anopheles pharoensis and Culex pipiens), and the pesticides endosulfan and diazinon. All species-pesticide combinations were tested in duplicate to estimate the consistency, i.e. the intra-laboratory variation, of the test results. Daphnia magna was tested as well to allow the test results to be compared directly with values from the literature. Results indicate that the differences between the EC50s obtained for D. magna in this study and those reported in the literature were less than a factor of 2. This indicates that the methodology used is able to provide credible toxicity values. The results of the duplicated tests showed intra-laboratory variation in EC50 values of up to a factor of 3, with one test showing a difference of a factor of 6 at 48 h. Comparison with available literature results for arthropod species using species sensitivity distributions indicated that the test results obtained in this study fit well within the log-normal distribution of the literature values. We conclude that the methodology of performing multiple tests to check the consistency of the results, and of testing D. magna to check their accuracy against literature values, can provide reliable effect threshold levels, and that the tested Ethiopian species did not differ in sensitivity from the arthropod species reported in the literature.
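
    A minimal sketch of the species-sensitivity-distribution comparison described above: fit a log-normal SSD to literature EC50 values, locate a newly tested species within it, and derive an HC5. All EC50 numbers below are invented placeholders, not the study's measurements.

    # Hedged sketch: log-normal species sensitivity distribution (SSD) with
    # placeholder EC50 values; not the study's data.
    import numpy as np
    from scipy import stats

    # Hypothetical literature EC50 values (ug/L) for arthropod species.
    literature_ec50 = np.array([0.8, 1.5, 2.2, 3.9, 5.1, 7.4, 12.0, 20.5, 33.0, 58.0])

    # Fit a log-normal SSD by working on log10-transformed EC50s.
    log_ec50 = np.log10(literature_ec50)
    mu, sigma = log_ec50.mean(), log_ec50.std(ddof=1)

    # Hazardous concentration for 5 % of species (HC5) from the fitted SSD.
    hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
    print(f"HC5 ~ {hc5:.2f} ug/L")

    # Where does a newly tested species fall within the SSD?
    new_ec50 = 4.2   # hypothetical EC50 for a newly tested species
    percentile = stats.norm.cdf(np.log10(new_ec50), loc=mu, scale=sigma)
    print(f"new species sits at the {percentile:.0%} percentile of the SSD")

    # Quick check that the literature values are consistent with log-normality.
    stat, p = stats.shapiro(log_ec50)
    print(f"Shapiro-Wilk on log10(EC50): W = {stat:.3f}, p = {p:.3f}")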

  5. Hawaiian Electric Advanced Inverter Grid Support Function Laboratory Validation and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Nagarajan, Adarsh; Prabakar, Kumar

    The objective for this test plan was to better understand how to utilize the performance capabilities of advanced inverter functions to allow the interconnection of distributed energy resource (DER) systems to support the new Customer Self-Supply, Customer Grid-Supply, and other future DER programs. The purpose of this project was: 1) to characterize how the tested grid supportive inverters performed the functions of interest, 2) to evaluate the grid supportive inverters in an environment that emulates the dynamics of O'ahu's electrical distribution system, and 3) to gain insight into the benefits of the grid support functions on selected O'ahu island distribution feeders. These goals were achieved through laboratory testing of photovoltaic inverters, including power hardware-in-the-loop testing.

  6. Comparison of thermal analytic model with experimental test results for 30-centimeter-diameter engineering model mercury ion thruster

    NASA Technical Reports Server (NTRS)

    Oglebay, J. C.

    1977-01-01

    A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using the experimental results of tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
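
    For readers unfamiliar with thermal analytic models of this kind, the sketch below solves a generic lumped-node steady-state thermal network with NumPy. The node layout, conductances and dissipated powers are assumptions for illustration and are not taken from the thruster model described above.

    # Hedged sketch: generic lumped-node steady-state thermal network.
    # Node names, conductances and loads are illustrative assumptions.
    import numpy as np

    # Nodes: 0 = discharge chamber, 1 = housing, 2 = mounting ring;
    # the environment is a fixed-temperature sink coupled through G_env.
    G = np.array([
        [0.0, 1.5, 0.2],    # conductances between internal nodes, W/K
        [1.5, 0.0, 0.8],
        [0.2, 0.8, 0.0],
    ])
    G_env = np.array([0.3, 0.6, 1.0])   # conductance of each node to the sink, W/K
    T_env = 250.0                        # sink temperature, K
    Q = np.array([40.0, 10.0, 0.0])      # dissipated power at each node, W

    # Steady state: for each node i,
    #   sum_j G[i, j] * (T_i - T_j) + G_env[i] * (T_i - T_env) = Q[i]
    A = np.diag(G.sum(axis=1) + G_env) - G
    b = Q + G_env * T_env
    T = np.linalg.solve(A, b)
    print("steady-state node temperatures [K]:", np.round(T, 1))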

  7. Analyses of layer-thickness effects in bilayered dental ceramics subjected to thermal stresses and ring-on-ring tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsueh, Chun-Hway; Thompson, G. A.; Jadaan, Osama M.

    Objectives. The purpose of this study was to analyze the stress distribution through the thickness of bilayered dental ceramics subjected to both thermal stresses and ring-on-ring tests and to systematically examine how the individual layer thickness influences this stress distribution and the failure origin. Methods. Ring-on-ring tests were performed on In-Ceram Alumina/Vitadur Alpha porcelain bilayered disks with porcelain on the tensile side, and In-Ceram Alumina to porcelain layer thickness ratios of 1:2, 1:1, and 2:1 were used to characterize the failure origins as either surface or interface. Based on the thermomechanical properties and thickness of each layer, the cooling temperature from the glass transition temperature, and the ring-on-ring loading configuration, the stress distribution through the thickness of the bilayer was calculated using closed-form solutions. Finite element analyses were also performed to verify the analytical results. Results. The calculated stress distributions showed that the location of maximum tension during testing shifted from the porcelain surface to the In-Ceram Alumina/porcelain interface when the relative layer thickness ratio changed from 1:2 to 1:1 and to 2:1. This trend is in agreement with the experimental observations of the failure origins. Significance. For bilayered dental ceramics subjected to ring-on-ring tests, the location of maximum tension can shift from the surface to the interface depending upon the layer thickness ratio. The closed-form solutions for bilayers subjected to both thermal stresses and ring-on-ring tests are explicitly formulated, which allows the biaxial strength of the bilayer to be evaluated.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dionne, B.; Tzanos, C. P.

    To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TC) embedded in the cladding as well as probes to measure the FAs' power on the basis of their coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. In order to support the HEU to LEU conversion safety analyses of the BR2 reactor, RELAP simulations of a number of loss-of-flow/loss-of-pressure tests have been undertaken. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. Therefore, it was concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and establish the credibility of the RELAP model and methodology. The new approach ('best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1. This methodology can be easily extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed a much better agreement when power distributions obtained with the new methodology are used.

  9. Flight demonstration of aircraft fuselage and bulkhead monitoring using optical fiber distributed sensing system

    NASA Astrophysics Data System (ADS)

    Wada, Daichi; Igawa, Hirotaka; Tamayama, Masato; Kasai, Tokio; Arizono, Hitoshi; Murayama, Hideaki; Shiotsubo, Katsuya

    2018-02-01

    We have developed an optical fiber distributed sensing system based on optical frequency domain reflectometry (OFDR) that uses long-length fiber Bragg gratings (FBGs). This technique obtains strain data not as point data from an FBG but as a distributed profile within the FBG. This system can measure the strain distribution profile with an adjustable high spatial resolution of the mm or sub-mm order in real time. In this study, we applied this OFDR-FBG technique to a flying test bed that is a mid-sized jet passenger aircraft. We conducted flight tests and monitored the structural responses of a fuselage stringer and the bulkhead of the flying test bed during flights. The strain distribution variations were successfully monitored for various events including taxiing, takeoff, landing and several other maneuvers. The monitoring was effective not only for measuring the strain amplitude applied to the individual structural parts but also for understanding the characteristics of the structural responses in accordance with the flight maneuvers. We studied the correlations between various maneuvers and strains to explore the relationship between the operation and condition of aircraft.

  10. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance the risks of type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premises and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and the references are available to help continued investigation of experimental designs and appropriate data analysis.
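
    The decision between parametric and non-parametric two-group comparisons described above can be sketched as follows; the normality check, the chosen tests and the synthetic data are illustrative assumptions, not a prescription from the chapter.

    # Hedged sketch: choose a parametric or rank-based test depending on a
    # normality check, with an optional log transformation for skewed data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    control   = rng.lognormal(mean=1.0, sigma=0.6, size=20)   # skewed data
    treatment = rng.lognormal(mean=1.4, sigma=0.6, size=20)

    def compare_groups(a, b, alpha=0.05):
        """Pick a two-group test based on a Shapiro-Wilk normality check."""
        normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
        if normal:
            return "t test", stats.ttest_ind(a, b).pvalue        # parametric
        return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue  # rank based

    print(compare_groups(control, treatment))

    # A log transformation may normalise skewed data and allow a parametric test.
    print(compare_groups(np.log(control), np.log(treatment)))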

  11. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
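
    A minimal sketch of scoring on the log scale, in the spirit of the approach above: z-scores are computed from log10-transformed results against a log-transformed assigned value. The reported values, assigned value and target standard deviation are placeholders, not scheme data.

    # Hedged sketch: proficiency-test z-scores on the log10 scale.
    # All numbers are invented placeholders.
    import numpy as np

    reported = np.array([0.62, 0.85, 0.91, 1.05, 1.20, 1.48, 2.10, 3.40])  # % GMO
    assigned_value = 1.0      # assigned value, in %
    target_sd_log = 0.10      # fitness-for-purpose SD on the log10 scale (assumed)

    # Classical z-scores on the raw scale would inherit the positive skew;
    # scoring on the log scale yields a near-symmetric, approximately normal spread.
    z_log = (np.log10(reported) - np.log10(assigned_value)) / target_sd_log

    for x, z in zip(reported, z_log):
        flag = "action" if abs(z) > 3 else ("warning" if abs(z) > 2 else "ok")
        print(f"result {x:5.2f} %  ->  z = {z:+5.2f}  ({flag})")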

  12. Distribution of model-based multipoint heterogeneity lod scores.

    PubMed

    Xing, Chao; Morris, Nathan; Xing, Guan

    2010-12-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, no study has investigated the distribution of the multipoint HLOD despite its wide application. Here we point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution, ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
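
    The limiting mixture quoted above can be used directly for p-value calculation, as in the following sketch. The conversion factor 2 ln(10) between lod units and the likelihood-ratio statistic is standard, but the example score is invented and the simulation merely illustrates the boundary mixture, not the authors' study design.

    # Hedged sketch: p-values under the 1/2*chi2(0) + 1/2*chi2(1) limiting mixture.
    import numpy as np
    from scipy import stats

    def mixture_pvalue(lrt_stat):
        """P(X >= lrt_stat) when X ~ 0.5 * point mass at 0 + 0.5 * chi-square(1 df)."""
        if lrt_stat <= 0:
            return 1.0
        return 0.5 * stats.chi2.sf(lrt_stat, df=1)

    # Example: an observed score difference of 0.8 lod units for the
    # homogeneity-given-linkage comparison (hypothetical value).
    lod_difference = 0.8
    lrt = 2 * np.log(10) * lod_difference       # convert lod units to an LRT statistic
    print(f"LRT = {lrt:.2f}, mixture p-value = {mixture_pvalue(lrt):.4f}")

    # Quick simulation check of the mixture shape: about half of the simulated
    # statistics are exactly zero, the rest follow chi-square with 1 df.
    rng = np.random.default_rng(3)
    z = rng.normal(size=100_000)
    sim = np.where(z > 0, z**2, 0.0)            # max(0, Z)^2 has the stated mixture law
    print("simulated P(stat = 0) ~", np.mean(sim == 0.0))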

  13. Long-gauge FBGs interrogated by DTR3 for dynamic distributed strain measurement of helicopter blade model

    NASA Astrophysics Data System (ADS)

    Nishiyama, M.; Igawa, H.; Kasai, T.; Watanabe, N.

    2014-05-01

    In this paper, we describe the characteristics of distributed strain sensing based on a Delayed Transmission/Reflection Ratiometric Reflectometry (DTR3) scheme with a long-gauge Fiber Bragg Grating (FBG), which is attractive for dynamic structural deformation monitoring of structures such as a helicopter blade or an airplane wing. The DTR3 interrogator using the long-gauge FBG is capable of detecting distributed strain with 50 cm spatial resolution at a 100 Hz sampling rate. We evaluated the distributed strain sensing characteristics of the long-gauge FBG attached to a 5.5 m helicopter blade model in static tests and free vibration dynamic tests.

  14. Development of Ada language control software for the NASA power management and distribution test bed

    NASA Technical Reports Server (NTRS)

    Wright, Ted; Mackin, Michael; Gantose, Dave

    1989-01-01

    The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on space station Freedom. It is designed to develop and test hardware and software for a 20-kHz power distribution system. The distributed, multiprocessor, testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.

  15. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents at blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two methods for estimating their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful for describing the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
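
    As a rough illustration of fitting a discrete heavy-tailed model to count data, the sketch below builds a discretised Lomax distribution from the continuous survival function, fits it by maximum likelihood, and compares its log-likelihood with a crude negative binomial fit. The discretisation, parameterisation and placeholder data are assumptions and do not reproduce the paper's estimators or goodness-of-fit procedure.

    # Hedged sketch: ML fit of a discretised Lomax distribution to placeholder
    # count data, with a rough negative binomial comparison.
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(4)
    counts = rng.negative_binomial(n=2.0, p=0.4, size=500)   # placeholder data

    def lomax_sf(x, alpha, lam):
        """Survival function of the continuous Lomax: (1 + x/lam)**(-alpha)."""
        return (1.0 + x / lam) ** (-alpha)

    def dlomax_logpmf(k, alpha, lam):
        """Discretised Lomax pmf via P(X = k) = S(k) - S(k + 1)."""
        return np.log(lomax_sf(k, alpha, lam) - lomax_sf(k + 1, alpha, lam))

    def negloglik(params):
        alpha, lam = params
        if alpha <= 0 or lam <= 0:
            return np.inf
        return -np.sum(dlomax_logpmf(counts, alpha, lam))

    fit = optimize.minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead")
    alpha_hat, lam_hat = fit.x
    print(f"discrete Lomax ML fit: alpha = {alpha_hat:.2f}, lambda = {lam_hat:.2f}")

    # Negative binomial comparison using a simple method-of-moments fit.
    mean, var = counts.mean(), counts.var(ddof=1)
    p0 = mean / var if var > mean else 0.9
    n0 = mean * p0 / (1 - p0)
    nb_loglik = np.sum(stats.nbinom.logpmf(counts, n0, p0))
    print(f"log-likelihood: discrete Lomax = {-fit.fun:.1f}, "
          f"negative binomial (moment fit) = {nb_loglik:.1f}")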

  16. Research and Development of Energetic Ionic Liquids. Next Generation Energetic Materials Striking a Balance between Performance, Insensitivity, and Environmental Sustainability

    DTIC Science & Technology

    2011-12-01

    Thermal and catalytic ignition flight qualified and flown (PRISMA). Distribution A: Public Release, Distribution unlimited. AF-M315E is US Air Force IL... for Space Propulsion, Noordwijk, The Netherlands, 20-22 June 2001. Toxicity Assessment of AF-M315E: Toxicity Testing Results. Much effort required in small-scale safety/hazard evaluations; propellants AF-M315E and LMP-103S, Unconfined Burn Test 1 and 3: No...

  17. Statistical Stationarity of Sediment Interbed Thicknesses in a Basalt Aquifer, Idaho National Laboratory, Eastern Snake River Plain, Idaho

    USGS Publications Warehouse

    Stroup, Caleb N.; Welhan, John A.; Davis, Linda C.

    2008-01-01

    The statistical stationarity of distributions of sedimentary interbed thicknesses within the southwestern part of the Idaho National Laboratory (INL) was evaluated within the stratigraphic framework of Quaternary sediments and basalts at the INL site, eastern Snake River Plain, Idaho. The thicknesses of 122 sedimentary interbeds observed in 11 coreholes were documented from lithologic logs and independently inferred from natural-gamma logs. Lithologic information was grouped into composite time-stratigraphic units based on correlations with existing composite-unit stratigraphy near these holes. The assignment of lithologic units to an existing chronostratigraphy on the basis of nearby composite stratigraphic units may introduce error where correlations with nearby holes are ambiguous or the distance between holes is great, but we consider this the best technique for grouping stratigraphic information in this geologic environment at this time. Nonparametric tests of similarity were used to evaluate temporal and spatial stationarity in the distributions of sediment thickness. The following statistical tests were applied to the data: (1) the Kolmogorov-Smirnov (K-S) two-sample test to compare distribution shape, (2) the Mann-Whitney (M-W) test for similarity of two medians, (3) the Kruskal-Wallis (K-W) test for similarity of multiple medians, and (4) Levene's (L) test for the similarity of two variances. Results of these analyses corroborate previous work that concluded the thickness distributions of Quaternary sedimentary interbeds are locally stationary in space and time. The data set used in this study was relatively small, so the results presented should be considered preliminary, pending incorporation of data from more coreholes. Statistical tests also demonstrated that natural-gamma logs consistently fail to detect interbeds less than about 2-3 ft thick, although these interbeds are observable in lithologic logs. This should be taken into consideration when modeling aquifer lithology or hydraulic properties based on lithology.
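
    The four similarity tests listed above are all available in SciPy; the sketch below applies them to two hypothetical groups of interbed thicknesses. The thickness values are invented placeholders, not corehole data.

    # Hedged sketch: the four nonparametric similarity tests on placeholder data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    unit_a = rng.lognormal(mean=1.2, sigma=0.7, size=40)   # interbed thicknesses, ft
    unit_b = rng.lognormal(mean=1.3, sigma=0.7, size=35)

    # (1) Kolmogorov-Smirnov two-sample test: compares distribution shape.
    print("K-S :", stats.ks_2samp(unit_a, unit_b))

    # (2) Mann-Whitney test: similarity of two medians (rank based).
    print("M-W :", stats.mannwhitneyu(unit_a, unit_b))

    # (3) Kruskal-Wallis test: similarity of multiple medians (two groups shown).
    print("K-W :", stats.kruskal(unit_a, unit_b))

    # (4) Levene's test: similarity of two variances.
    print("L   :", stats.levene(unit_a, unit_b))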

  18. Gravitational lenses and large scale structure

    NASA Technical Reports Server (NTRS)

    Turner, Edwin L.

    1987-01-01

    Four possible statistical tests of the large scale distribution of cosmic material are described. Each is based on gravitational lensing effects. The current observational status of these tests is also summarized.

  19. Stress distribution in composite flatwise tension test specimens

    NASA Technical Reports Server (NTRS)

    Scott, Curtis A.; Pereira, J. Michael

    1993-01-01

    A finite element analysis was conducted to determine the stress distribution in typical graphite/epoxy composite flatwise tension (FWT) specimens under normal loading conditions. The purpose of the analysis was to determine the relationship between the applied load and the stress in the sample in order to evaluate the validity of the test as a means of measuring the out-of-plane strength of a composite laminate. Three different test geometries and three different material lay-ups were modeled. In all cases, the out-of-plane component of stress in the test section was found to be uniform, with no stress concentrations, and very close to the nominal applied stress. The stress in the sample was found to be three-dimensional, and the magnitude of the in-plane normal and shear stresses varied with the anisotropy of the test specimen. However, in the cases considered here, these components of stress were much smaller than the out-of-plane normal stress. The geometry of the test specimen had little influence on the results. It was concluded that the flatwise tension test provides a good measure of the out-of-plane strength for the representative materials that were studied.

  20. Multipath interference test method using synthesized chirped signal from directly modulated DFB-LD with digital-signal-processing technique.

    PubMed

    Aida, Kazuo; Sugie, Toshihiko

    2011-12-12

    We propose a method of testing transmission fiber lines and distributed amplifiers. Multipath interference (MPI) is detected as a beat spectrum between a multipath signal and a direct signal, using a synthesized chirped test signal with lightwave frequencies f(1) and f(2) periodically emitted from a distributed feedback laser diode (DFB-LD). The chirped test pulse is generated using a directly modulated DFB-LD with a drive signal calculated using a digital signal processing (DSP) technique. A receiver consisting of a photodiode and an electrical spectrum analyzer (ESA) detects a baseband power spectrum peak appearing at the frequency of the test signal frequency deviation (f(1)-f(2)), i.e., the beat spectrum of self-heterodyne detection. The multipath interference is derived from the spectrum peak power. This method improved the minimum detectable MPI to as low as -78 dB. We discuss the detailed design and performance of the proposed test method, including a DFB-LD drive signal calculation algorithm with DSP for synthesis of the chirped test signal, and experiments on single-mode fibers with discrete reflections. © 2011 Optical Society of America
