Science.gov

Sample records for nonparametric covariate adjustment

  1. Nonparametric Covariate-Adjusted Association Tests Based on the Generalized Kendall’s Tau*

    PubMed Central

    Zhu, Wensheng; Jiang, Yuan; Zhang, Heping

    2012-01-01

    Identifying the risk factors for comorbidity is important in psychiatric research. Empirically, studies have shown that testing multiple, correlated traits simultaneously is more powerful than testing a single trait at a time in association analysis. Furthermore, for complex diseases, especially mental illnesses and behavioral disorders, the traits are often recorded in different scales such as dichotomous, ordinal and quantitative. In the absence of covariates, nonparametric association tests have been developed for multiple complex traits to study comorbidity. However, genetic studies generally contain measurements of some covariates that may affect the relationship between the risk factors of major interest (such as genes) and the outcomes. While it is relatively easy to adjust these covariates in a parametric model for quantitative traits, it is challenging for multiple complex traits with possibly different scales. In this article, we propose a nonparametric test for multiple complex traits that can adjust for covariate effects. The test aims to achieve an optimal scheme of adjustment by using a maximum statistic calculated from multiple adjusted test statistics. We derive the asymptotic null distribution of the maximum test statistic, and also propose a resampling approach, both of which can be used to assess the significance of our test. Simulations are conducted to compare the type I error and power of the nonparametric adjusted test to the unadjusted test and other existing adjusted tests. The empirical results suggest that our proposed test increases the power through adjustment for covariates when there exist environmental effects, and is more robust to model misspecifications than some existing parametric adjusted tests. We further demonstrate the advantage of our test by analyzing a data set on genetics of alcoholism. PMID:22745516
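
    A minimal illustrative sketch of the max-statistic idea (hypothetical data; a simple residual-correlation statistic stands in for the generalized Kendall's tau): compute several covariate-adjusted statistics, take their maximum, and assess significance by resampling:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical data: risk factor g (e.g., allele count), a covariate x,
      # and two traits recorded on different scales.
      n = 200
      g = rng.integers(0, 3, n).astype(float)
      x = rng.normal(size=n)
      y = np.column_stack([rng.normal(size=n), rng.integers(0, 2, n)]).astype(float)

      def adjusted_stats(g, y, x):
          """Toy covariate-adjusted statistics: correlate g with each trait
          after residualizing both on the covariate."""
          X = np.column_stack([np.ones_like(x), x])
          rg = g - X @ np.linalg.lstsq(X, g, rcond=None)[0]
          out = []
          for j in range(y.shape[1]):
              ry = y[:, j] - X @ np.linalg.lstsq(X, y[:, j], rcond=None)[0]
              out.append(abs(np.corrcoef(rg, ry)[0, 1]) * np.sqrt(len(g)))
          return np.array(out)

      t_max = adjusted_stats(g, y, x).max()   # maximum over the adjusted statistics

      # Resampling null: permuting g breaks the g-trait association while
      # preserving the trait/covariate structure.
      B = 2000
      null = np.array([adjusted_stats(rng.permutation(g), y, x).max()
                       for _ in range(B)])
      p_value = (1 + np.sum(null >= t_max)) / (B + 1)
      print(f"max statistic {t_max:.2f}, resampling p-value {p_value:.3f}")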

  2. Conditional Covariance-Based Nonparametric Multidimensionality Assessment.

    ERIC Educational Resources Information Center

    Stout, William; And Others

    1996-01-01

    Three nonparametric procedures that use estimates of covariances of item-pair responses, conditioned on examinee trait level, for assessing the dimensionality of a test are described. HCA/CCPROX, DIMTEST, and DETECT are applied to a dimensionality study of the Law School Admission Test. (SLD)

  3. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    PubMed

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates, but limited attention has been directed to the consequences of how the covariates enter. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology. PMID:23687472

  4. A Review of Nonparametric Alternatives to Analysis of Covariance.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1985-01-01

    Five distribution-free alternatives to parametric analysis of covariance are presented and demonstrated: Quade's distribution-free test, Puri and Sen's solution, McSweeney and Porter's rank transformation, Burnett and Barr's rank difference scores, and Shirley's general linear model solution. The results of simulation studies regarding Type I…

  5. A Review of Nonparametric Alternatives to Analysis of Covariance.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    Five distribution-free alternatives to parametric analysis of covariance (ANCOVA) are presented and demonstrated using a specific data example. The procedures considered are those suggested by Quade (1967); Puri and Sen (1969); McSweeney and Porter (1971); Burnett and Barr (1978); and Shirley (1981). The results of simulation studies investigating…

  6. An Investigation into the Dimensionality of TOEFL Using Conditional Covariance-Based Nonparametric Approach

    ERIC Educational Resources Information Center

    Jang, Eunice Eunhee; Roussos, Louis

    2007-01-01

    This article reports two studies to illustrate methodologies for conducting a conditional covariance-based nonparametric dimensionality assessment using data from two forms of the Test of English as a Foreign Language (TOEFL). Study 1 illustrates how to assess overall dimensionality of the TOEFL including all three subtests. Study 2 is aimed at…

  7. Modelling covariates for the SF-6D standard gamble health state preference data using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer; Brazier, John E; O'Hagan, Anthony

    2007-03-01

    It has long been recognised that respondent characteristics can impact on the values they give to health states. This paper reports the findings from applying a non-parametric approach to estimate the covariates in a model of SF-6D health state values using Bayesian methods. The data set is the UK SF-6D valuation study, in which a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a sample of the UK general population using standard gamble. Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics and that it allows covariate effects to vary by health state (whilst ensuring that full health passes through unity). The results suggest an important age effect, with sex, class, education, employment and physical functioning probably having some effect, but the remaining covariates having no discernible effect. Adjusting for covariates in the UK sample made little difference to mean health state values. The paper discusses the implications of these results for policy. PMID:17157971

  8. The covariate-adjusted frequency plot.

    PubMed

    Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K

    2016-04-01

    Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included in the model. Stratification is one way to proceed, but it has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter.
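
    A minimal sketch of the summation step (hypothetical counts and fitted means; in practice the means would come from, e.g., a Poisson regression on the covariates): for each count k, the covariate-adjusted frequency is the fitted probability of k summed over every observation's covariate combination:

      import numpy as np
      from scipy.stats import poisson

      # Hypothetical observed counts and per-observation fitted means mu_i.
      y = np.array([0, 1, 1, 2, 0, 3, 1, 0, 2, 5])
      mu = np.array([0.8, 1.1, 1.4, 2.0, 0.6, 2.8, 1.0, 0.7, 1.9, 3.5])

      k = np.arange(y.max() + 1)
      # Sum P(Y = k | mu_i) over the entire data set -- this works even when
      # no covariate combination occurs more than once.
      adjusted_freq = poisson.pmf(k[:, None], mu[None, :]).sum(axis=1)
      observed_freq = np.bincount(y, minlength=k.size)

      for kk, o, a in zip(k, observed_freq, adjusted_freq):
          print(f"count {kk}: observed {o}, covariate-adjusted fit {a:.2f}")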

  9. A semiparametric approach for the nonparametric transformation survival model with multiple covariates.

    PubMed

    Song, Xiao; Ma, Shuangge; Huang, Jian; Zhou, Xiao-Hua

    2007-04-01

    The nonparametric transformation model makes no parametric assumptions on the forms of the transformation function and the error distribution. This model is appealing in its flexibility for modeling censored survival data. Current approaches for estimation of the regression parameters involve maximizing discontinuous objective functions, which are numerically infeasible to implement with multiple covariates. Based on the partial rank (PR) estimator (Khan and Tamer, 2004), we propose a smoothed PR estimator which maximizes a smooth approximation of the PR objective function. The estimator is shown to be asymptotically equivalent to the PR estimator but is much easier to compute when there are multiple covariates. We further propose using the weighted bootstrap, which is more stable than the usual sandwich technique with smoothing parameters, for estimating the standard error. The estimator is evaluated via simulation studies and illustrated with the Veterans Administration lung cancer data set.

  10. Covariate-adjusted response-adaptive designs for binary response.

    PubMed

    Rosenberger, W F; Vidyashankar, A N; Agarwal, D K

    2001-11-01

    An adaptive allocation design for phase III clinical trials that incorporates covariates is described. The allocation scheme maps the covariate-adjusted odds ratio from a logistic regression model onto [0, 1]. Simulations assume that both staggered entry and time to response are random and follow a known probability distribution that can depend on the treatment assigned, the patient's response, a covariate, or a time trend. Confidence intervals on the covariate-adjusted odds ratio are slightly anticonservative for the adaptive design under the null hypothesis, but power is similar to equal allocation under various alternatives for n = 200. For similar power, the net savings in terms of the expected number of treatment failures is modest, but enough to make this design attractive for certain studies where known covariates are expected to be important, stratification is not desired, and treatment failures have a high ethical cost.

  11. On variance estimate for covariate adjustment by propensity score analysis.

    PubMed

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effect estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., in food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy, as well as a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates of the true variance and are robust to complex confounding structures. The proposed methods are illustrated with a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
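
    A minimal sketch of the bootstrap route (hypothetical data and a plain logistic/linear model pair; the authors' R function is not reproduced here): the propensity score is re-estimated inside every resample, so the resulting variance reflects the uncertainty of the PS stage as well:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)

      # Hypothetical observational data: confounders X, treatment z, outcome y.
      n = 500
      X = rng.normal(size=(n, 2))
      z = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
      y = 1.0 * z + X @ np.array([0.8, -0.4]) + rng.normal(size=n)

      def effect_by_ps_adjustment(X, z, y):
          """Covariate adjustment by PS: regress y on treatment and estimated PS."""
          ps = sm.Logit(z, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))
          fit = sm.OLS(y, sm.add_constant(np.column_stack([z, ps]))).fit()
          return fit.params[1]          # coefficient on the treatment indicator

      est = effect_by_ps_adjustment(X, z, y)

      # Empirical bootstrap: refit both stages on each resample.
      B = 500
      boot = []
      for _ in range(B):
          idx = rng.integers(0, n, n)
          boot.append(effect_by_ps_adjustment(X[idx], z[idx], y[idx]))
      print(f"treatment effect {est:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")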

  12. Comparison of Rank Analysis of Covariance and Nonparametric Randomized Blocks Analysis.

    ERIC Educational Resources Information Center

    Porter, Andrew C.; McSweeney, Maryellen

    The relative power of three possible experimental designs under the condition that data is to be analyzed by nonparametric techniques; the comparison of the power of each nonparametric technique to its parametric analogue; and the comparison of relative powers using nonparametric and parametric techniques are discussed. The three nonparametric…

  13. Role of Experiment Covariance in Cross Section Adjustments

    SciTech Connect

    Giuseppe Palmiotti; M. Salvatores

    2014-06-01

    This paper is dedicated to the memory of R. D. McKnight, who made a seminal contribution to establishing the methodology and rigorous approach used in evaluating the covariance of reactor physics integral experiments. His original assessment of the ZPPR experiment uncertainties and correlations has made nuclear data adjustments based on these experiments much more robust and reliable. In the present paper, numerical examples show the actual impact on an adjustment of accounting for or neglecting such correlations.

  14. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature on the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example. PMID:24038458
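
    A minimal sketch of the weighting step (hypothetical data; the authors' full two-stage procedure, which fixes the covariate model before any outcome is examined, is only mimicked here by fitting the treatment model from baseline data alone):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)

      # Hypothetical trial: baseline covariates X, randomized treatment z, outcome y.
      n = 400
      X = rng.normal(size=(n, 3))
      z = rng.binomial(1, 0.5, n)
      y = 0.5 * z + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

      # Stage 1 (uses only X and z, never y): model treatment given baseline
      # covariates; in a randomized trial this captures chance imbalance in X.
      ps = sm.Logit(z, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))

      # Stage 2: inverse-probability-weighted difference in means.
      mu1 = np.sum(z * y / ps) / np.sum(z / ps)
      mu0 = np.sum((1 - z) * y / (1 - ps)) / np.sum((1 - z) / (1 - ps))
      print(f"IPW-adjusted treatment effect: {mu1 - mu0:.3f}")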

  15. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  16. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  17. Development and Validation of a Brief Version of the Dyadic Adjustment Scale With a Nonparametric Item Analysis Model

    ERIC Educational Resources Information Center

    Sabourin, Stephane; Valois, Pierre; Lussier, Yvan

    2005-01-01

    The main purpose of the current research was to develop an abbreviated form of the Dyadic Adjustment Scale (DAS) with nonparametric item response theory. The authors conducted 5 studies, with a total participation of 8,256 married or cohabiting individuals. Results showed that the item characteristic curves behaved in a monotonically increasing…

  18. Covariate Adjustment Strategy Increases Power in the Randomized Controlled Trial With Discrete-Time Survival Endpoints

    ERIC Educational Resources Information Center

    Safarkhani, Maryam; Moerbeek, Mirjam

    2013-01-01

    In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power is studied for discrete-time…

  19. Validity of a Residualized Dependent Variable after Pretest Covariance Adjustments: Still the Same Variable?

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.

    2015-01-01

    The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…

  20. Variable selection for covariate-adjusted semiparametric inference in randomized clinical trials

    PubMed Central

    Yuan, Shuai; Zhang, Hao Helen; Davidian, Marie

    2013-01-01

    Extensive baseline covariate information is routinely collected on participants in randomized clinical trials, and it is well-recognized that a proper covariate-adjusted analysis can improve the efficiency of inference on the treatment effect. However, such covariate adjustment has engendered considerable controversy, as post hoc selection of covariates may involve subjectivity and lead to biased inference, while prior specification of the adjustment may exclude important variables from consideration. Accordingly, how to select covariates objectively to gain maximal efficiency is of broad interest. We propose and study the use of modern variable selection methods for this purpose in the context of a semiparametric framework, under which variable selection in modeling the relationship between outcome and covariates is separated from estimation of the treatment effect, circumventing the potential for selection bias associated with standard analysis of covariance methods. We demonstrate that such objective variable selection techniques combined with this framework can identify key variables and lead to unbiased and efficient inference on the treatment effect. A critical issue in finite samples is the validity of estimators of uncertainty, such as standard errors and confidence intervals for the treatment effect. We propose an approach to estimating the sampling variation of the estimated treatment effect and show its superior performance relative to existing methods. PMID:22733628

  1. Adjusting O'Brien's Test to Control Type I Error for the Generalized Nonparametric Behrens–Fisher Problem

    PubMed Central

    Huang, Peng; Tilley, Barbara C.; Woolson, Robert F.; Lipsitz, Stuart

    2010-01-01

    O'Brien (1984, Biometrics 40, 1079–1087) introduced a simple nonparametric test procedure for testing whether multiple outcomes in one treatment group have consistently larger values than outcomes in the other treatment group. We first explore the theoretical properties of O'Brien's test. We then extend it to the general nonparametric Behrens–Fisher hypothesis problem, where no assumption is made regarding the shape of the distributions. We provide conditions under which O'Brien's test controls its error probability asymptotically and under which it fails, and we provide adjusted tests for when the conditions do not hold. Throughout this article, we do not assume that all outcomes are continuous. Simulations are performed to compare the adjusted tests to O'Brien's test. The difference is also illustrated using data from a Parkinson's disease clinical trial. PMID:16011701

  2. Covariate adjustment increased power in randomized controlled trials: an example in traumatic brain injury

    PubMed Central

    Turner, Elizabeth L.; Perel, Pablo; Clayton, Tim; Edwards, Phil; Hernández, Adrian V.; Roberts, Ian; Shakur, Haleema; Steyerberg, Ewout W.

    2013-01-01

    Objective We aimed to determine to what extent covariate adjustment could affect power in a randomized controlled trial (RCT) of a heterogeneous population with traumatic brain injury (TBI). Study Design and Setting We analyzed 14-day mortality in 9497 participants in the Corticosteroid Randomisation After Significant Head Injury (CRASH) RCT of corticosteroid vs. placebo. Adjustment was made using logistic regression for baseline covariates of two validated risk models derived from external data (IMPACT) and from the CRASH data. The relative sample size (RESS) measure, defined as the ratio of the sample size required by an adjusted analysis to attain the same power as the unadjusted reference analysis, was used to assess the impact of adjustment. Results Corticosteroid was associated with higher mortality compared to placebo (OR=1.25, 95% CI: 1.13, 1.39). RESS values of 0.79 and 0.73 were obtained by adjustment using the IMPACT and CRASH models, respectively, implying, for example, an increase in power from 80% to 88% and 91%, respectively. Conclusion Moderate gains in power may be obtained using covariate adjustment from logistic regression in heterogeneous conditions such as TBI. Although analyses of RCTs might consider covariate adjustment to improve power, we caution against this approach in the planning of RCTs. PMID:22169080
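
    A back-of-envelope check of the quoted figures under a normal approximation (the approximation is assumed here, not taken from the paper): an adjusted analysis with RESS = r scales the standardized effect by sqrt(1/r), so 80% unadjusted power becomes roughly 88% and 91% for r = 0.79 and 0.73:

      from scipy.stats import norm

      alpha, base_power = 0.05, 0.80
      # Standardized effect implied by 80% power at the reference sample size.
      delta = norm.ppf(1 - alpha / 2) + norm.ppf(base_power)
      for r in (0.79, 0.73):                      # IMPACT and CRASH adjustments
          power = norm.cdf(delta / r**0.5 - norm.ppf(1 - alpha / 2))
          print(f"RESS {r}: adjusted power ~ {power:.0%}")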

  3. ORACLE: an adjusted cross-section and covariance library for fast-reactor analysis

    SciTech Connect

    Yeivin, Y.; Marable, J.H.; Weisbin, C.R.; Wagschal, J.J.

    1980-01-01

    Benchmark integral-experiment values from six fast critical-reactor assemblies and two standard neutron fields are combined with corresponding calculations using group cross sections based on ENDF/B-V in a least-squares data adjustment using evaluated covariances from ENDF/B-V and supporting covariance evaluations. The purpose is to produce an adjusted cross-section and covariance library which is based on well-documented data and methods and which is suitable for fast-reactor design. By use of such a library, data- and methods-related biases of calculated performance parameters should be reduced and the uncertainties of the calculated values minimized. Consistency of the extensive data base is analyzed using the chi-square test. The adjusted library, ORACLE, will be available shortly.

  4. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    SciTech Connect

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed: no regularization is required, and no modification is needed to the adjustment formulae that have been used in the past, which remain valid and yield a unique solution even when the covariance matrix of prior parameters is singular. (author)
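
    For concreteness, a standard statement of the adjustment formulae (notation assumed here, not copied from the paper): with prior parameters p_0 having covariance matrix M, integral measurements m with covariance V, calculated responses t(p_0), and sensitivity matrix S, the least-squares adjustment is

      p' = p_0 + M S^T (S M S^T + V)^{-1} [m - t(p_0)],
      M' = M - M S^T (S M S^T + V)^{-1} S M.

    Both expressions use M only directly and never M^{-1}, consistent with the paper's conclusion that a singular prior covariance matrix causes no difficulty.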

  5. Covariate adjustment of event histories estimated from Markov chains: the additive approach.

    PubMed

    Aalen, O O; Borgan, O; Fekjaer, H

    2001-12-01

    Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet. PMID:11764270

  6. On the Importance of Reliable Covariate Measurement in Selection Bias Adjustments Using Propensity Scores

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Cook, Thomas D.; Shadish, William R.

    2011-01-01

    The effect of unreliability of measurement on propensity score (PS) adjusted treatment effects has not been previously studied. The authors report on a study simulating different degrees of unreliability in the multiple covariates that were used to estimate the PS. The simulation uses the same data as two prior studies. Shadish, Clark, and Steiner…

  7. A NONPARAMETRIC MULTIPLE IMPUTATION APPROACH FOR DATA WITH MISSING COVARIATE VALUES WITH APPLICATION TO COLORECTAL ADENOMA DATA

    PubMed Central

    Hsu, Chiu-Hsieh; Long, Qi; Li, Yisheng; Jacobs, Elizabeth

    2015-01-01

    A nearest neighbor-based multiple imputation approach is proposed to recover missing covariate information using predictive covariates while estimating the association between the outcome and the covariates. To conduct the imputation, two working models are fitted to define an imputing set. This approach is expected to be robust to the underlying distribution of the data. We show in simulation, and demonstrate on a colorectal data set, that the proposed approach can improve efficiency and reduce bias in a missing-at-random situation compared to the complete case analysis and the modified inverse probability weighted method. PMID:24697618

  8. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    NASA Astrophysics Data System (ADS)

    Kermarrec, Gaël; Schön, Steffen

    2016-09-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account by means of a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the row elements of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning, or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning, as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the millimetre level for all simulated GPS cases and below the sub-millimetre level for relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation-dependent model…
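
    A small numerical check of the claimed equivalence for the mean estimator (a sketch; a random positive-definite covariance matrix is assumed): condensing the full weight matrix W = C^{-1} to a diagonal of its row sums leaves the weighted mean unchanged:

      import numpy as np

      rng = np.random.default_rng(3)

      n = 6
      A = rng.normal(size=(n, n))
      C = A @ A.T + n * np.eye(n)        # fully populated covariance matrix
      W = np.linalg.inv(C)               # weighting matrix
      y = rng.normal(size=n)
      ones = np.ones(n)

      d = W.sum(axis=1)                  # row sums -> equivalent diagonal matrix

      gls_mean = (ones @ W @ y) / (ones @ W @ ones)
      dwls_mean = (d @ y) / d.sum()
      print(gls_mean, dwls_mean)         # agree to floating-point precision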

  9. Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: a principled yet flexible approach.

    PubMed

    Tsiatis, Anastasios A; Davidian, Marie; Zhang, Min; Lu, Xiaomin

    2008-10-15

    There is considerable debate regarding whether and how covariate-adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase the precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected post hoc, and over the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a 'fishing expedition' for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to existing methods. PMID:17960577
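
    One member of this class of estimators, sketched under simplifying assumptions (known randomization probability and arm-specific linear regressions; not the authors' full development): augment the weighted difference in means with regression terms so covariate information is exploited without biasing the comparison:

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical trial data.
      n = 300
      X = rng.normal(size=(n, 2))
      z = rng.binomial(1, 0.5, n)
      y = 1.0 * z + X @ np.array([0.7, -0.3]) + rng.normal(size=n)
      pi = 0.5                                   # known randomization probability

      def arm_regression(arm):
          """OLS prediction of y from X, fitted within one treatment arm."""
          Xa = np.column_stack([np.ones(arm.sum()), X[arm]])
          beta = np.linalg.lstsq(Xa, y[arm], rcond=None)[0]
          return np.column_stack([np.ones(n), X]) @ beta

      m1, m0 = arm_regression(z == 1), arm_regression(z == 0)
      delta = (np.mean(z * y / pi - (z - pi) / pi * m1)
               - np.mean((1 - z) * y / (1 - pi) + (z - pi) / (1 - pi) * m0))
      print(f"augmented covariate-adjusted effect estimate: {delta:.3f}")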

  10. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets, we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  12. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  13. Estimating covariate-adjusted measures of diagnostic accuracy based on pooled biomarker assessments.

    PubMed

    McMahan, Christopher S; McLain, Alexander C; Gallagher, Colin M; Schisterman, Enrique F

    2016-07-01

    There is a need for epidemiological and medical researchers to identify new biomarkers (biological markers) that are useful in determining exposure levels and/or for the purposes of disease detection. Often this process is stunted by the high testing costs associated with evaluating new biomarkers. Traditionally, biomarker assessments are individually tested within a target population. Pooling has been proposed to help alleviate the testing costs, where pools are formed by combining several individual specimens. Methods for using pooled biomarker assessments to estimate discriminatory ability have been developed. However, all these procedures have failed to acknowledge confounding factors. In this paper, we propose a regression methodology based on pooled biomarker measurements that allows the assessment of the discriminatory ability of a biomarker of interest. In particular, we develop covariate-adjusted estimators of the receiver-operating characteristic curve, the area under the curve, and Youden's index. We establish the asymptotic properties of these estimators and develop inferential techniques that allow one to assess whether a biomarker is a good discriminator between cases and controls, while controlling for confounders. The finite sample performance of the proposed methodology is illustrated through simulation. We apply our methods to analyze myocardial infarction (MI) data, with the goal of determining whether the pro-inflammatory cytokine interleukin-6 is a good predictor of MI after controlling for the subjects' cholesterol levels. PMID:26927583

  14. Use and Impact of Covariance Data in the Japanese Latest Adjusted Library ADJ2010 Based on JENDL-4.0

    NASA Astrophysics Data System (ADS)

    Yokoyama, K.; Ishikawa, M.

    2015-01-01

    The current status of covariance applications to fast reactor analysis and design in Japan is summarized. In Japan, the covariance data are mainly used for three purposes: (1) to quantify the uncertainty of nuclear core parameters, (2) to identify important nuclides, reactions and energy ranges which dominate the uncertainty of core parameters, and (3) to improve the accuracy of core design values by adopting integral data such as the critical experiments and the power reactor operation data. For the last purpose, cross section adjustment based on the Bayesian theorem is used. After the release of JENDL-4.0, a development project of the new adjusted group-constant set ADJ2010 was started in 2010 and completed in 2013. In the present paper, the final results of ADJ2010 are briefly summarized. In addition, the adjustment results of ADJ2010 are discussed from the viewpoint of use and impact of nuclear data covariances, focusing on 239Pu capture cross section alterations. For this purpose three kinds of indices, called "degree of mobility," "adjustment motive force," and "adjustment potential," are proposed.

  16. A-Priori and A-Posteriori Covariance Data in Nuclear Cross Section Adjustments: Issues and Challenges

    SciTech Connect

    Palmiotti, Giuseppe; Salvatores, Massimo; Aliberti, G.

    2015-01-01

    In order to provide useful feedback to evaluators, a set of criteria is established for assessing the robustness and reliability of the cross section adjustments that make use of integral experiment information. Criteria are also provided for accepting the “a posteriori” cross sections, both as new “nominal” values and as “trends”. Some indications on the use of the “a posteriori” covariance matrix are given, even though more investigation is needed to settle this complex subject.

  17. Effects of Participation in a Post-Secondary Honors Program with Covariate Adjustment Using Propensity Score

    ERIC Educational Resources Information Center

    Furtwengler, Scott R.

    2015-01-01

    The present study sought to determine the extent to which participation in a post-secondary honors program affected academic achievement. Archival data were collected on three cohorts of high-achieving students at a large public university. Propensity scores were calculated on factors predicting participation in honors and used as the covariate.…

  18. A covariate adjusted two-stage allocation design for binary responses in randomized clinical trials.

    PubMed

    Bandyopadhyay, Uttam; Biswas, Atanu; Bhattacharya, Rahul

    2007-10-30

    In the present work, we develop a two-stage allocation rule for binary responses using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a data set, and we compare this rule with some existing rules by computing various performance measures.

  19. Adjusting for population shifts and covariates in space-time interaction tests.

    PubMed

    Schmertmann, Carl P

    2015-09-01

    Statistical tests for epidemic patterns use measures of space-time event clustering, and look for high levels of clustering that are unlikely to appear randomly if events are independent. Standard approaches, such as Knox's (1964, Applied Statistics 13, 25-29) test, are biased when the spatial distribution of population changes over time, or when there is space-time interaction in important background variables. In particular, the Knox test is too sensitive to coincidental event clusters in such circumstances, and too likely to raise false alarms. Kulldorff and Hjalmars (1999, Biometrics 55, 544-552) proposed a variant of Knox's test to control for bias caused by population shifts. In this article, I demonstrate that their test is also generally biased, in an unknown direction. I suggest an alternative approach that accounts for exposure shifts while also conditioning on the observed spatial and temporal margins of event counts, as in the original Knox test. The new approach uses Metropolis sampling of permutations, and is unbiased under more general conditions. I demonstrate how the new method can also include controls for the clustering effects of covariates.

  20. Nonparametric Combinatorial Sequence Models

    NASA Astrophysics Data System (ADS)

    Wauthier, Fabian L.; Jordan, Michael I.; Jojic, Nebojsa

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This paper presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three sequence datasets which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution induced by the prior. By integrating out the posterior our method compares favorably to leading binding predictors.

  1. GENERALIZED PARTIALLY LINEAR MIXED-EFFECTS MODELS INCORPORATING MISMEASURED COVARIATES

    PubMed Central

    Liang, Hua

    2009-01-01

    In this article we consider a semiparametric generalized mixed-effects model and propose combining local linear regression with penalized quasilikelihood and local quasilikelihood techniques to estimate both the population and individual parameters and the nonparametric curves. The proposed estimators take into account the local correlation structure of the longitudinal data. We establish asymptotic normality for the estimators of the parameters and an asymptotic expansion for the estimators of the nonparametric part. For practical implementation, we propose an appropriate algorithm. We also consider the measurement error problem in covariates in our model and suggest a strategy for adjusting for the effects of measurement errors. We apply the proposed models and methods to study the relation between virologic and immunologic responses in AIDS clinical trials, in which virologic response is classified into binary variables. A dataset from an AIDS clinical study is analyzed. PMID:20160899

  2. Nonparametric identification experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung

    1988-01-01

    The following constitutes a summary of this paper: on-orbit identification methodology starts with nonparametric techniques for a priori system identification; development of the nonparametric identification and model determination experiment software has been completed; the validation experiments to be performed on the JPL Control and Identification Technology Validation Laboratory have been designed.

  3. CERAMIC: Case-Control Association Testing in Samples with Related Individuals, Based on Retrospective Mixed Model Analysis with Adjustment for Covariates

    PubMed Central

    Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan. PMID:27695091

  4. A Bayesian nonparametric meta-analysis model.

    PubMed

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G

    2015-03-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall effect size, such models may be adequate, but for prediction, they surely are not if the effect-size distribution exhibits non-normal behavior. To address this issue, we propose a Bayesian nonparametric meta-analysis model, which can describe a wider range of effect-size distributions, including unimodal symmetric distributions, as well as skewed and more multimodal distributions. We demonstrate our model through the analysis of real meta-analytic data arising from behavioral-genetic research. We compare the predictive performance of the Bayesian nonparametric model against various conventional and more modern normal fixed-effects and random-effects models.

  5. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  6. Addiction Severity Index Recent and Lifetime Summary Indexes Based on Nonparametric Item Response Theory Methods

    ERIC Educational Resources Information Center

    Alterman, Arthur I.; Cacciola, John S.; Habing, Brian; Lynch, Kevin G.

    2007-01-01

    Baseline Addiction Severity Index (5th ed.; ASI-5) data of 2,142 substance abuse patients were analyzed with two nonparametric item response theory (NIRT) methods: Mokken scaling and conditional covariance techniques. Nine reliable and dimensionally homogeneous Recent Problem indexes emerged in the ASI-5's seven areas, including two each in the…

  7. Nonparametric Estimators for Incomplete Surveys

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2016-11-01

    Nonparametric estimators, such as the 1/Vmax estimator and the C⁻ estimator, have been applied extensively to estimate luminosity functions (LFs) of astronomical sources from complete, truncated survey data sets. Application of such estimators to incomplete data sets typically requires further truncation of data, separation into subsets of constant completeness, and/or correction for incompleteness-induced bias. In this paper, we derive generalizations of the above estimators designed for use with incomplete, truncated data sets. We compare these generalized nonparametric estimators, investigate some of their simple statistical properties, and validate them using Monte Carlo simulation methods. We apply a nonparametric estimator to data obtained from the extended Baryon Oscillation Spectroscopic Survey to estimate the QSO LF for redshifts 0.68 < z < 4.
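
    A minimal sketch of the classical complete-sample 1/Vmax estimator that the paper generalizes (toy luminosities and a toy Vmax(L) relation are assumed): each source contributes 1/Vmax to its luminosity bin, where Vmax is the volume within which the source would still exceed the survey flux limit:

      import numpy as np

      rng = np.random.default_rng(5)

      L = 10 ** rng.uniform(42, 46, 500)       # toy luminosities
      vmax = (L / 1e42) ** 1.5 * 1e6           # toy Vmax(L) in Mpc^3

      edges = np.logspace(42, 46, 9)
      phi = np.empty(edges.size - 1)
      for j in range(edges.size - 1):
          in_bin = (L >= edges[j]) & (L < edges[j + 1])
          # LF estimate per unit log-luminosity in this bin.
          phi[j] = np.sum(1.0 / vmax[in_bin]) / np.log10(edges[j + 1] / edges[j])
      print(phi)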

  8. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Varying Coefficient Models.

    PubMed

    Fan, Jianqing; Ma, Yunbei; Dai, Wei

    2014-01-01

    The varying-coefficient model is an important class of nonparametric statistical model that allows us to examine how the effects of covariates vary with exposure variables. When the number of covariates is large, the issue of variable selection arises. In this paper, we propose and investigate marginal nonparametric screening methods to screen variables in sparse ultra-high dimensional varying-coefficient models. The proposed nonparametric independence screening (NIS) selects variables by ranking a measure of the nonparametric marginal contributions of each covariate given the exposure variable. The sure independent screening property is established under some mild technical conditions when the dimensionality is of nonpolynomial order, and the dimensionality reduction of NIS is quantified. To enhance the practical utility and finite sample performance, two data-driven iterative NIS methods are proposed for selecting thresholding parameters and variables: conditional permutation and greedy methods, resulting in Conditional-INIS and Greedy-INIS. The effectiveness and flexibility of the proposed methods are further illustrated by simulation studies and real data applications.
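
    A minimal sketch of the marginal-ranking idea (simplified to ignore the exposure variable of the varying-coefficient setting; the kernel bandwidth and cutoff are arbitrary choices): rank covariates by the size of a nonparametric marginal fit of the response on each covariate and keep the top-ranked ones:

      import numpy as np

      rng = np.random.default_rng(6)

      n, p = 200, 50
      X = rng.normal(size=(n, p))
      y = 3 * np.sin(X[:, 0]) + 2 * X[:, 1] ** 2 + rng.normal(size=n)

      def marginal_fit_norm(xj, y, h=0.5):
          """Sample norm of a Nadaraya-Watson marginal regression of y on x_j."""
          K = np.exp(-0.5 * ((xj[:, None] - xj[None, :]) / h) ** 2)
          fhat = (K @ y) / K.sum(axis=1)
          return np.mean(fhat ** 2)

      scores = np.array([marginal_fit_norm(X[:, j], y) for j in range(p)])
      keep = np.argsort(scores)[::-1][:8]      # screen in the top-ranked covariates
      print("screened-in covariates:", sorted(keep.tolist()))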

  10. Bayesian non-parametrics and the probabilistic approach to modelling

    PubMed Central

    Ghahramani, Zoubin

    2013-01-01

    Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609

  11. Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables

    PubMed Central

    Song, Xiao; Wang, Ching-Yun

    2014-01-01

    In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications of the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimating the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error or the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed. PMID:25663724

  12. Multiatlas Segmentation as Nonparametric Regression

    PubMed Central

    Awate, Suyash P.; Whitaker, Ross T.

    2015-01-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528
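
    The regression view can be made concrete with a small sketch: in a Nadaraya-Watson style of label fusion (an illustrative special case, not the paper's full framework), each atlas patch votes for the target voxel's label with a weight given by a Gaussian kernel on patch distance.

        import numpy as np

        def fuse_labels(target_patch, atlas_patches, atlas_labels, h):
            # Nadaraya-Watson regression of labels on patches: Gaussian kernel
            # weights on squared patch distance, then a weighted vote.
            d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * h ** 2))
            return (w @ atlas_labels) / max(w.sum(), 1e-300)  # in [0, 1]

    The bandwidth h plays the role of the smoothing parameter whose interaction with database size drives the error trends analyzed in the paper.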

  13. A Survey of Non-Exchangeable Priors for Bayesian Nonparametric Models.

    PubMed

    Foti, Nicholas J; Williamson, Sinead A

    2015-02-01

    Dependent nonparametric processes extend distributions over measures, such as the Dirichlet process and the beta process, to give distributions over collections of measures, typically indexed by values in some covariate space. Such models are appropriate priors when exchangeability assumptions do not hold, and instead we want our model to vary fluidly with some set of covariates. Since the concept of dependent nonparametric processes was formalized by MacEachern, there have been a number of models proposed and used in the statistics and machine learning literatures. Many of these models exhibit underlying similarities, an understanding of which, we hope, will help in selecting an appropriate prior, developing new models, and leveraging inference techniques. PMID:26353247

  14. Bayesian inference for longitudinal data with non-parametric treatment effects

    PubMed Central

    Müller, Peter; Quintana, Fernando A.; Rosner, Gary L.; Maitland, Michael L.

    2014-01-01

    We consider inference for longitudinal data based on mixed-effects models with a non-parametric Bayesian prior on the treatment effect. The proposed non-parametric Bayesian prior is a random partition model with a regression on patient-specific covariates. The main feature and motivation for the proposed model is the use of covariates with a mix of different data formats and possibly high-order interactions in the regression. The regression is not explicitly parameterized. It is implied by the random clustering of subjects. The motivating application is a study of the effect of an anticancer drug on a patient's blood pressure. The study involves blood pressure measurements taken periodically over several 24-h periods for 54 patients. The 24-h periods for each patient include a pretreatment period and several occasions after the start of therapy. PMID:24285773

  15. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
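
    The L1-median (spatial median) minimizes the sum of Euclidean distances to the sample points and has no closed form; Weiszfeld's fixed-point iteration is a standard way to compute it. The following is a generic sketch, not the authors' implementation.

        import numpy as np

        def l1_median(X, n_iter=100, eps=1e-10):
            # Weiszfeld iteration for the spatial (L1) median of the rows of X
            m = X.mean(axis=0)
            for _ in range(n_iter):
                d = np.maximum(np.linalg.norm(X - m, axis=1), eps)  # avoid /0
                w = 1.0 / d
                m_new = (w[:, None] * X).sum(axis=0) / w.sum()
                if np.linalg.norm(m_new - m) < eps:
                    break
                m = m_new
            return m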

  16. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
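
    In software, the analogous computation for randomly (Poisson) sampled data is the slotted autocovariance: lagged products of fluctuations are accumulated into lag bins and normalized by the per-bin pair counts, mirroring the processor's interarrival-time histogram. A minimal sketch (illustrative only, not the hardware algorithm):

        import numpy as np

        def slotted_autocovariance(t, u, max_lag, n_slots):
            # t: sample times (sorted); u: sampled signal values
            t = np.asarray(t, dtype=float)
            du = np.asarray(u, dtype=float) - np.mean(u)
            edges = np.linspace(0.0, max_lag, n_slots + 1)
            acc = np.zeros(n_slots)             # accumulated lagged products
            cnt = np.zeros(n_slots, dtype=int)  # pair counts per lag slot
            for i in range(len(t)):
                lags = t[i:] - t[i]
                keep = lags < max_lag
                idx = np.digitize(lags[keep], edges) - 1
                np.add.at(acc, idx, du[i] * du[i:][keep])
                np.add.at(cnt, idx, 1)
            return edges[:-1], acc / np.maximum(cnt, 1), cnt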

  17. Non-Parametric Collision Probability for Low-Velocity Encounters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2007-01-01

    An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.

  18. Nonparametric methods for molecular biology.

    PubMed

    Wittkowski, Knut M; Song, Tingting

    2010-01-01

    In 2003, the completion of the Human Genome Project (1) together with advances in computational resources (2) were expected to launch an era where the genetic and genomic contributions to many common diseases would be found. In the years following, however, researchers became increasingly frustrated as most reported 'findings' could not be replicated in independent studies (3). To improve the signal/noise ratio, it was suggested to increase the number of cases to be included to tens of thousands (4), a requirement that would dramatically restrict the scope of personalized medicine. Similarly, there was little success in elucidating the gene-gene interactions involved in complex diseases or even in developing criteria for assessing their phenotypes. As a partial solution to these enigmata, we here introduce a class of statistical methods as the 'missing link' between advances in genetics and informatics. As a first step, we provide a unifying view of a plethora of nonparametric tests developed mainly in the 1940s, all of which can be expressed as u-statistics. Then, we will extend this approach to reflect categorical and ordinal relationships between variables, resulting in a flexible and powerful approach to deal with the impact of (1) multiallelic genetic loci, (2) poly-locus genetic regions, and (3) oligo-genetic and oligo-genomic collaborative interactions on complex phenotypes.
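
    As a concrete instance of this unifying view, the Mann-Whitney statistic is the two-sample u-statistic with kernel h(a, b) = 1{a < b} + 0.5 * 1{a = b}; the sketch below makes that explicit.

        import numpy as np

        def mann_whitney_u(x, y):
            # Mann-Whitney U written explicitly as a two-sample u-statistic
            a = np.asarray(x)[:, None]
            b = np.asarray(y)[None, :]
            kernel = (a < b) + 0.5 * (a == b)
            return kernel.sum()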

  19. Nonparametric Methods in Molecular Biology

    PubMed Central

    Wittkowski, Knut M.; Song, Tingting

    2010-01-01

    In 2003, the completion of the Human Genome Project[1] together with advances in computational resources[2] were expected to launch an era where the genetic and genomic contributions to many common diseases would be found. In the years following, however, researchers became increasingly frustrated as most reported ‘findings’ could not be replicated in independent studies[3]. To improve the signal/noise ratio, it was suggested to increase the number of cases to be included to tens of thousands[4], a requirement that would dramatically restrict the scope of personalized medicine. Similarly, there was little success in elucidating the gene–gene interactions involved in complex diseases or even in developing criteria for assessing their phenotypes. As a partial solution to these enigmata, we here introduce a class of statistical methods as the ‘missing link’ between advances in genetics and informatics. As a first step, we provide a unifying view of a plethora of non-parametric tests developed mainly in the 1940s, all of which can be expressed as u-statistics. Then, we will extend this approach to reflect categorical and ordinal relationships between variables, resulting in a flexible and powerful approach to deal with the impact of (1) multi-allelic genetic loci, (2) poly-locus genetic regions, and (3) oligo-genetic and oligo-genomic collaborative interactions on complex phenotypes. PMID:20652502

  20. Inclusion of CYP3A5 genotyping in a nonparametric population model improves dosing of tacrolimus early after transplantation

    PubMed Central

    Åsberg, Anders; Midtvedt, Karsten; van Guilder, Mike; Størset, Elisabet; Bremer, Sara; Bergan, Stein; Jelliffe, Roger; Hartmann, Anders; Neely, Michael N

    2013-01-01

    Following organ engraftment, initial dosing of tacrolimus is based on recipient weight and adjusted by measured C0 concentrations. The bioavailability and elimination of tacrolimus are affected by the patient's CYP3A5 genotype. Prospective data on the clinical advantage of knowing a patient's CYP3A5 genotype prior to transplantation are lacking. A nonparametric population model was developed for tacrolimus in renal transplant recipients. Data from 99 patients were used for model development and validation. A three-compartment model with first-order absorption and lag time from the dosing compartment described the data well. Clearances and volumes of distribution were allometrically scaled to body size. The final model included fat-free mass, body mass index, hematocrit, time after transplantation, and CYP3A5 genotype as covariates. The bias and imprecision were 0.35 and 1.38, respectively, in the external data set. Patients with functional CYP3A5 had 26% higher clearance and 37% lower bioavailability. Knowledge of CYP3A5 genotype provided an initial advantage, but only until 3-4 tacrolimus concentrations were known. After this, a model without CYP3A5 genotype predicted just as well. The present models seem applicable for clinical individual dose predictions but need a prospective evaluation. PMID:24118301
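
    As a toy illustration only (not the published population model), the covariate structure described above might be encoded as follows; the 0.75 allometric exponent and 70 kg reference size are standard pharmacokinetic conventions, and the 26% figure is taken from the abstract.

        def tacrolimus_clearance(cl_typical, fat_free_mass, cyp3a5_expressor):
            # Allometric scaling of clearance to fat-free mass, with higher
            # clearance for CYP3A5 expressors (26% per the abstract).
            cl = cl_typical * (fat_free_mass / 70.0) ** 0.75
            return cl * 1.26 if cyp3a5_expressor else cl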

  1. A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package

    ERIC Educational Resources Information Center

    Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.

    2013-01-01

    DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…

  2. An Empirical Investigation of Four Tests for Interaction in the Context of Factorial Analysis of Covariance.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Vineyard, George

    The Type I error and power properties of the parametric F test and three nonparametric competitors were compared in a 3 x 4 factorial analysis of covariance layout. The focus of the study was on the test for interaction either in the presence or absence of main effects. A variety of conditional distributions, sample sizes, levels of variate…

  3. Nonparametric survival analysis using Bayesian Additive Regression Trees (BART).

    PubMed

    Sparapani, Rodney A; Logan, Brent R; McCulloch, Robert E; Laud, Purushottam W

    2016-07-20

    Bayesian additive regression trees (BART) provide a framework for flexible nonparametric modeling of relationships of covariates to outcomes. Recently, BART models have been shown to provide excellent predictive performance for both continuous and binary outcomes, often exceeding that of its competitors. Software is also readily available for such outcomes. In this article, we introduce modeling that extends the usefulness of BART in medical applications by addressing needs arising in survival analysis. Simulation studies of one-sample and two-sample scenarios, in comparison with long-standing traditional methods, establish face validity of the new approach. We then demonstrate the model's ability to accommodate data from complex regression models with a simulation study of a nonproportional hazards scenario with crossing survival functions and survival function estimation in a scenario where hazards are multiplicatively modified by a highly nonlinear function of the covariates. Using data from a recently published study of patients undergoing hematopoietic stem cell transplantation, we illustrate the use and some advantages of the proposed method in medical investigations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26854022

  4. Covariant mutually unbiased bases

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro

    2016-06-01

    The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which are the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2^r covariance can still be achieved by restricting to proper subgroups of the symplectic group, that constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.

  5. Covariant Noncommutative Field Theory

    SciTech Connect

    Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.

    2008-07-02

    The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The Lagrangian and Hamiltonian formalism is consistently introduced.

  6. Why preferring parametric forecasting to nonparametric methods?

    PubMed

    Jabot, Franck

    2015-05-01

    A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise because of two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches until methods have been developed to assess the reliability of nonparametric forecasting.
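
    For reference, the stochastic theta-logistic model used in that argument can be simulated in a few lines; the lognormal process noise and the parameter names below are assumptions of this sketch.

        import numpy as np

        def theta_logistic(n0, r, K, theta, sigma, T, rng):
            # N_{t+1} = N_t * exp(r * (1 - (N_t / K)^theta) + sigma * eps_t)
            n = np.empty(T)
            n[0] = n0
            for t in range(T - 1):
                growth = r * (1.0 - (n[t] / K) ** theta)
                n[t + 1] = n[t] * np.exp(growth + sigma * rng.normal())
            return n

        series = theta_logistic(10.0, 0.5, 100.0, 1.2, 0.1, 200,
                                np.random.default_rng(0))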

  7. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis which assumes a linear effect of each covariate on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect is approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in the Cox model: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and the significance level was set at 0.05. Results: The penalized spline method, with AIC used to select the smoothing parameter, had consistently lower mean squared error than the other methods. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC for smoothing parameter selection, was more accurate than the other methods in evaluating the relation between a covariate and the log hazard function. PMID:27041809
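
    For illustration, a restricted (natural) cubic spline basis can be constructed directly and entered into the linear predictor of a Cox model. The sketch below uses Harrell's parameterization, one common choice that is not necessarily the exact basis used in the study.

        import numpy as np

        def rcs_basis(x, knots):
            # Restricted cubic spline basis: cubic between knots, linear
            # beyond the boundary knots (Harrell's parameterization).
            x = np.asarray(x, dtype=float)
            t = np.sort(np.asarray(knots, dtype=float))
            k = len(t)
            p3 = lambda u: np.maximum(u, 0.0) ** 3
            cols = [x]  # linear term
            for j in range(k - 2):
                cols.append(
                    p3(x - t[j])
                    - p3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + p3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
            return np.column_stack(cols)

        # e.g. a basis for a covariate 'age' with knots at sample percentiles
        age = np.random.default_rng(1).uniform(20, 80, 500)
        B = rcs_basis(age, np.percentile(age, [5, 35, 65, 95]))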

  8. Incorporating covariates in skewed functional data models.

    PubMed

    Li, Meng; Staicu, Ana-Maria; Bondell, Howard D

    2015-07-01

    We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and the functional dependence, which is assumed covariate invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.

  9. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
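
    In its simplest form a covariance map is the shot-to-shot covariance between every pair of spectrometer channels, and the partial covariance map subtracts the part of that covariance explained by a fluctuating parameter such as pulse energy. A compact sketch, assuming a single scalar correction parameter:

        import numpy as np

        def covariance_map(spectra):
            # spectra: (n_shots, n_channels); C[x, y] = <S_x S_y> - <S_x><S_y>
            S = np.asarray(spectra, dtype=float)
            m = S.mean(axis=0)
            return S.T @ S / S.shape[0] - np.outer(m, m)

        def partial_covariance_map(spectra, I):
            # Correct for a fluctuating parameter I (e.g. FEL pulse energy):
            # Cp(X, Y) = C(X, Y) - C(X, I) C(I, Y) / Var(I)
            S = np.asarray(spectra, dtype=float)
            I = np.asarray(I, dtype=float)
            cXI = ((S - S.mean(axis=0)) * (I - I.mean())[:, None]).mean(axis=0)
            return covariance_map(S) - np.outer(cXI, cXI) / I.var()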

  10. Covariant Bardeen perturbation formalism

    NASA Astrophysics Data System (ADS)

    Vitenti, S. D. P.; Falciano, F. T.; Pinto-Neto, N.

    2014-05-01

    In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describe cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. We then stress that in the referred covariant approach, one necessarily introduces an additional hypersurface choice to the problem. Using our mixed and pure tensors approach, we are able to construct a one-to-one map relating the usual gauge dependence of the Bardeen formalism with the hypersurface dependence inherent to the covariant approach. Finally, through the use of this map, we define full nonlinear tensors that at first order correspond to the three known gauge invariant variables Φ, Ψ and Ξ, which are simultaneously foliation and gauge invariant. We then stress that the use of the proposed mixed tensors allows one to construct simultaneously gauge and hypersurface invariant variables at any order.

  11. The covariant chiral ring

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Troost, Jan

    2016-03-01

    We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N = (4, 4) supersymmetry in two dimensions. For seed target spaces K3 and T^4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.

  12. Nonparametric Methods Instruction in Quantitative Geology.

    ERIC Educational Resources Information Center

    Kemmerly, Phillip Randall

    1990-01-01

    Presented is an approach to introducing upper-division undergraduate geology students to nonparametric statistics and their application to geologic data. Discussed are the use of the Mann-Whitney U and the Kolmogorov-Smirnov tests and a class assignment which illustrates their use. (CW)

  13. A Bayesian Nonparametric Approach to Test Equating

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  14. How Are Teachers Teaching? A Nonparametric Approach

    ERIC Educational Resources Information Center

    De Witte, Kristof; Van Klaveren, Chris

    2014-01-01

    This paper examines which configuration of teaching activities maximizes student performance. For this purpose a nonparametric efficiency model is formulated that accounts for (1) self-selection of students and teachers in better schools and (2) complementary teaching activities. The analysis distinguishes both individual teaching (i.e., a…

  15. Generalized Linear Covariance Analysis

    NASA Astrophysics Data System (ADS)

    Markley, F. Landis; Carpenter, J. Russell

    2009-01-01

    This paper presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  16. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  17. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  18. A Basic Computer Program for Calculating Simultaneous Pairwise Comparisons in Analysis of Covariance.

    ERIC Educational Resources Information Center

    Powers, Stephen; Jones, Patricia

    1986-01-01

    This paper describes a computer program which tests all pairwise comparisons of adjusted means in analysis of covariance by using the Tukey-Kramer test. The program contains: means of the covariate, adjusted means of the criterion measure, sample size, mean square error, and the desired percentile point on the Studentized range distribution. (JAZ)

  19. An Evaluation of Parametric and Nonparametric Models of Fish Population Response.

    SciTech Connect

    Haas, Timothy C.; Peterson, James T.; Lee, Danny C.

    1999-11-01

    Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land-use. The techniques included generalized logit modeling, classification trees, a nearest neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation and introduced a computer-intensive Monte Carlo hypothesis testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor techniques to be the most accurate, but they were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable, but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more methods may provide greater biological insight without sacrificing prediction accuracy.

  20. Unequal Covariate Group Means and the Analysis of Covariance.

    ERIC Educational Resources Information Center

    Hsu, Tse-Chi; Sebatane, E. Molapi

    1979-01-01

    A Monte Carlo technique was used to investigate the effect of the differences in covariate means among treatment groups on the significance level and the power of the F-test of the analysis of covariance. (Author/GDC)

  1. Using Analysis of Covariance (ANCOVA) with Fallible Covariates

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Aguinis, Herman

    2011-01-01

    Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…

  2. Modeling HUI 2 health state preference data using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer A; McCabe, Christopher

    2008-01-01

    This article reports the application of a recently described approach to modeling health state valuation data and the impact of respondent characteristics on health state valuations. The approach applies a nonparametric model to estimate a Bayesian Health Utilities Index Mark 2 (HUI 2) health state valuation algorithm. The data set is the UK HUI 2 valuation study, where a sample of 51 states defined by the HUI 2 was valued by a sample of the UK general population using standard gamble. The article reports the application of the nonparametric model and compares it to the original model estimated using a conventional parametric random effects model. Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics than observed in the survey sample and that it allows for the impact of respondent characteristics to vary by health state. The results suggest an important age effect, with sex having some effect, but the remaining covariates having no discernible effect. The article discusses the implications of these results for future applications of the HUI 2 and further work in this field. PMID:18971313

  3. Covariant deformed oscillator algebras

    NASA Technical Reports Server (NTRS)

    Quesne, Christiane

    1995-01-01

    The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU_q(n) x SU_q(m)-covariant q-bosonic algebra is discussed in some detail.

  4. Adaptive Confidence Bands for Nonparametric Regression Functions

    PubMed Central

    Cai, T. Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661

  5. A nonparametric software reliability growth model

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.; Sofer, Ariela

    1988-01-01

    Miller and Sofer have presented a nonparametric method for estimating the failure rate of a software program. The method is based on the complete monotonicity property of the failure rate function, and uses a regression approach to obtain estimates of the current software failure rate. This completely monotone software model is extended. It is shown how it can also provide long-range predictions of future reliability growth. Preliminary testing indicates that the method is competitive with parametric approaches, while being more robust.

  6. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
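
    A minimal version of the test-statistic step might look like the sketch below: squared Mahalanobis distances of propagated 3-D errors are tested against a chi-squared distribution with 3 degrees of freedom. A Kolmogorov-Smirnov test stands in here for the paper's ECDF goodness-of-fit assessment.

        import numpy as np
        from scipy import stats

        def covariance_realism_test(errors, covariances):
            # errors: sequence of 3-vectors; covariances: matching 3x3 matrices
            d2 = np.array([e @ np.linalg.solve(P, e)
                           for e, P in zip(errors, covariances)])
            # Compare the empirical distribution of d2 with chi-squared(3)
            return stats.kstest(d2, stats.chi2(df=3).cdf)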

  7. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
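
    A bare-bones version of the idea (not the thesis's One-in-N Algorithm itself) is to fit a kernel density estimate to the daily wind-adjusted temperatures and read off a low quantile; the mapping from N years to an exceedance probability and the 90-day winter season below are assumptions of this sketch.

        import numpy as np
        from scipy import stats

        def one_in_n_threshold(temps, n_years, days_per_winter=90):
            # KDE-based low quantile: the one-in-N event is taken to be
            # exceeded once per n_years * days_per_winter winter days.
            kde = stats.gaussian_kde(temps)
            grid = np.linspace(temps.min() - 10.0, temps.max(), 2000)
            cdf = np.cumsum(kde(grid))
            cdf /= cdf[-1]
            p = 1.0 / (n_years * days_per_winter)
            return grid[np.searchsorted(cdf, p)]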

  8. Covariant magnetic connection hypersurfaces

    NASA Astrophysics Data System (ADS)

    Pegoraro, F.

    2016-04-01

    In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved 'magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields.

  9. Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials

    DOE Data Explorer

    The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOE's Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity / uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release). [Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10^-5 eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for 7Li, 232Th, 233,235,238U and 239Pu as well as 223,224,225,226Ra, while natZn is replaced by 64,66,67,68,70Zn

  10. Nonparametric, nonnegative deconvolution of large time series

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.

    2006-12-01

    There is a long tradition of characterizing hydrologic systems by linear models, in which the response of the system to a time-varying stimulus is computed by convolution of a system-specific transfer function with the input signal. Despite its limitations, the transfer-function concept has been shown valuable for many situations such as the precipitation/run-off relationships of catchments and solute transport in agricultural soils and aquifers. A practical difficulty lies in the identification of the transfer function. A common approach is to fit a parametric function, enforcing a particular shape of the transfer function, which may be in contradiction to the real behavior (e.g., multimodal transfer functions, long tails, etc.). In our nonparametric deconvolution, the transfer function is assumed an auto-correlated random time function, which is conditioned on the data by a Bayesian approach. Nonnegativity, which is a vital constraint for solute-transport applications, is enforced by the method of Lagrange multipliers. This makes the inverse problem nonlinear. In nonparametric deconvolution, identifying the auto-correlation parameters is crucial. Enforcing too much smoothness prohibits the identification of important features, whereas insufficient smoothing leads to physically meaningless transfer functions, mapping noise components in the two data series onto each other. We identify optimal smoothness parameters by the expectation-maximization method, which requires the repeated generation of many conditional realizations. The overall approach, however, is still significantly faster than Markov-Chain Monte-Carlo methods presented recently. We apply our approach to electric-conductivity time series measured in a river and monitoring wells in the adjacent aquifer. The data cover 1.5 years with a temporal resolution of 1h. The identified transfer functions have lengths of up to 60 days, making up 1440 parameters. We believe that nonparametric deconvolution is an

  11. Lottery spending: a non-parametric analysis.

    PubMed

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales.

  12. Bayesian Nonparametric Inference – Why and How

    PubMed Central

    Müller, Peter; Mitra, Riten

    2013-01-01

    We review inference under models with nonparametric Bayesian (BNP) priors. The discussion follows a set of examples for some common inference problems. The examples are chosen to highlight problems that are challenging for standard parametric inference. We discuss inference for density estimation, clustering, regression and for mixed effects models with random effects distributions. While we focus on arguing for the need for the flexibility of BNP models, we also review some of the more commonly used BNP models, thus hopefully answering a bit of both questions, why and how to use BNP. PMID:24368932

  13. A nonparametric and diversified portfolio model

    NASA Astrophysics Data System (ADS)

    Shirazi, Yasaman Izadparast; Sabiruzzaman, Md.; Hamzah, Nor Aishah

    2014-07-01

    Traditional portfolio models, like the mean-variance (MV) model, suffer from estimation error and a lack of diversity. Alternatives, like mean-entropy (ME) or mean-variance-entropy (MVE) portfolio models, focus independently on the issue of either a proper risk measure or diversity. In this paper, we propose an asset allocation model that compromises between the risk of historical data and future uncertainty. In the new model, entropy is presented as a nonparametric risk measure as well as an index of diversity. Our empirical evaluation with a variety of performance measures shows that this model has better out-of-sample performance and lower portfolio turnover than its competitors.

  14. Lottery Spending: A Non-Parametric Analysis

    PubMed Central

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699

  15. Correcting eddy-covariance flux underestimates over a grassland.

    SciTech Connect

    Twine, T. E.; Kustas, W. P.; Norman, J. M.; Cook, D. R.; Houser, P. R.; Meyers, T. P.; Prueger, J. H.; Starks, P. J.; Wesely, M. L.; Environmental Research; Univ. of Wisconsin at Madison; DOE; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administration

    2000-06-08

    Independent measurements of the major energy balance flux components are not often consistent with the principle of conservation of energy. This is referred to as a lack of closure of the surface energy balance. Most results in the literature have shown the sum of sensible and latent heat fluxes measured by eddy covariance to be less than the difference between net radiation and soil heat fluxes. This under-measurement of sensible and latent heat fluxes by eddy-covariance instruments has occurred in numerous field experiments and among many different manufacturers of instruments. Four eddy-covariance systems consisting of the same models of instruments were set up side-by-side during the Southern Great Plains 1997 Hydrology Experiment and all systems under-measured fluxes by similar amounts. One of these eddy-covariance systems was collocated with three other types of eddy-covariance systems at different sites; all of these systems under-measured the sensible and latent-heat fluxes. The net radiometers and soil heat flux plates used in conjunction with the eddy-covariance systems were calibrated independently and measurements of net radiation and soil heat flux showed little scatter for various sites. The 10% absolute uncertainty in available energy measurements was considerably smaller than the systematic closure problem in the surface energy budget, which varied from 10 to 30%. When available-energy measurement errors are known and modest, eddy-covariance measurements of sensible and latent heat fluxes should be adjusted for closure. Although the preferred method of energy balance closure is to maintain the Bowen ratio, the method for obtaining closure appears to be less important than assuring that eddy-covariance measurements are consistent with conservation of energy. Based on numerous measurements over a sorghum canopy, carbon dioxide fluxes, which are measured by eddy covariance, are underestimated by the same factor as eddy covariance evaporation

  16. Covariant Lyapunov vectors

    NASA Astrophysics Data System (ADS)

    Ginelli, Francesco; Chaté, Hugues; Livi, Roberto; Politi, Antonio

    2013-06-01

    Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi-Pasta-Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’.

  17. Stardust Navigation Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Menon, Premkumar R.

    2000-01-01

    The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by the Jet Propulsion Laboratory (JPL). This paper will describe the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and its trajectory will be provided first, followed by a discussion of the covariance procedure and models. Predicted accuracies will be examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter Stardust will return to Earth when the Sample Return Capsule (SRC) will separate and land at the Utah Test Site (UTTR) on January 15, 2006. The spacecraft will however be deflected off into a heliocentric orbit. The mission is divided into three phases for the covariance analysis. They are 1) Launch to EGA, 2) EGA to Wild-2 encounter and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a-priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers

  18. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    PubMed

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. PMID:27046838
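
    For concreteness, one common form of the log-determinant divergence between SPD matrices is the Burg (Stein) form D(X, Y) = tr(XY^-1) - log det(XY^-1) - n; the paper may use a different variant, so this sketch is illustrative only.

        import numpy as np

        def logdet_divergence(X, Y):
            # Burg / Stein log-det divergence between n x n SPD matrices
            n = X.shape[0]
            M = X @ np.linalg.inv(Y)
            _, logdet = np.linalg.slogdet(M)
            return np.trace(M) - logdet - n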

  19. Useful and little-known applications of the Least Square Method and some consequences of covariances

    NASA Astrophysics Data System (ADS)

    Helene, Otaviano; Mariano, Leandro; Guimarães-Filho, Zwinglio

    2016-10-01

    Covariances are as important as variances when dealing with experimental data and they must be considered in fitting procedures and adjustments in order to preserve the statistical properties of the adjusted quantities. In this paper, we apply the Least Square Method in matrix form to several simple problems in order to evaluate the consequences of covariances in the fitting procedure. Among the examples, we demonstrate how a measurement of a physical quantity can change the adopted value of all other covariant quantities and how a new single point (x, y) improves the parameters of a previously adjusted straight line.
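
    The matrix form of the Least Square Method referred to here is generalized least squares with a full covariance matrix V; a small sketch with an assumed common systematic error shows how covariances enter both the fit and the covariance of the fitted parameters.

        import numpy as np

        def gls_fit(X, y, V):
            # Minimize (y - X b)^T V^{-1} (y - X b); return b and Cov(b)
            Vi = np.linalg.inv(V)
            cov_b = np.linalg.inv(X.T @ Vi @ X)
            return cov_b @ X.T @ Vi @ y, cov_b

        # Straight-line fit y = a + b x with correlated measurement errors:
        x = np.array([1.0, 2.0, 3.0, 4.0])
        y = np.array([2.1, 3.9, 6.2, 7.8])
        V = 0.04 * np.eye(4) + 0.01   # shared systematic term -> covariances
        beta, cov_beta = gls_fit(np.column_stack([np.ones_like(x), x]), y, V)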

  20. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  1. Enveloping Spectral Surfaces: Covariate Dependent Spectral Analysis of Categorical Time Series.

    PubMed

    Krafty, Robert T; Xiong, Shuangyan; Stoffer, David S; Buysse, Daniel J; Hall, Martica

    2012-09-01

    Motivated by problems in Sleep Medicine and Circadian Biology, we present a method for the analysis of cross-sectional categorical time series collected from multiple subjects where the effect of static continuous-valued covariates is of interest. Toward this goal, we extend the spectral envelope methodology for the frequency domain analysis of a single categorical process to cross-sectional categorical processes that are possibly covariate dependent. The analysis introduces an enveloping spectral surface for describing the association between the frequency domain properties of qualitative time series and covariates. The resulting surface offers an intuitively interpretable measure of association between covariates and a qualitative time series by finding the maximum possible conditional power at a given frequency from scalings of the qualitative time series conditional on the covariates. The optimal scalings that maximize the power provide scientific insight by identifying the aspects of the qualitative series which have the most pronounced periodic features at a given frequency conditional on the value of the covariates. To facilitate the assessment of the dependence of the enveloping spectral surface on the covariates, we include a theory for analyzing the partial derivatives of the surface. Our approach is entirely nonparametric, and we present estimation and asymptotics in the setting of local polynomial smoothing.

  2. Radiance Covariance and Climate Models

    NASA Technical Reports Server (NTRS)

    Haskins, R.; Goody, R.; Chen, L.

    1998-01-01

    Spectral Empirical Orthogonal Functions (EOFs) derived from the covariance of satellite radiance spectra may be interpreted in terms of the vertical distribution of the covariance of temperature, water vapor, and clouds. The purpose of the investigation is to demonstrate the important constraints that resolved spectral radiances can place upon climate models.
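
    For orientation, such EOFs are typically computed as eigenvectors of the channel-by-channel sample covariance; a minimal sketch where the radiance array is a random stand-in, not the investigation's data:

```python
import numpy as np

# `radiances` stands in for satellite spectra, shape (n_observations, n_channels).
rng = np.random.default_rng(0)
radiances = rng.normal(size=(500, 50))

anomalies = radiances - radiances.mean(axis=0)
C = np.cov(anomalies, rowvar=False)           # (n_channels, n_channels)
eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
eofs = eigvecs[:, ::-1]                       # leading EOFs first
explained = eigvals[::-1] / eigvals.sum()

print(explained[:5])  # variance fraction captured by the first 5 EOFs
```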

  3. Covariant harmonic oscillators: 1973 revisited

    NASA Technical Reports Server (NTRS)

    Noz, M. E.

    1993-01-01

    Using the relativistic harmonic oscillator, a physical basis is given to the phenomenological wave function of Yukawa which is covariant and normalizable. It is shown that this wave function can be interpreted in terms of the unitary irreducible representations of the Poincare group. The transformation properties of these covariant wave functions are also demonstrated.

  4. Equivalence of kernel machine regression and kernel distance covariance for multidimensional phenotype association studies.

    PubMed

    Hua, Wen-Yu; Ghosh, Debashis

    2015-09-01

    Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR), and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that include distance covariance (DC) and Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) the development of a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated with changes in brain region volumes. PMID:25939365
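
    The KDC family mentioned here includes HSIC; the following is a minimal sketch of the biased empirical HSIC with Gaussian kernels on synthetic data, without the covariate adjustment the paper develops (the kernel width and data are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf_gram(X, sigma=1.0):
    """Gaussian-kernel Gram matrix."""
    return np.exp(-cdist(X, X, 'sqeuclidean') / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2, H the centering matrix."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(X, sigma) @ H @ rbf_gram(Y, sigma) @ H) / n ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
print(hsic(X, X ** 2))                     # dependent: clearly nonzero
print(hsic(X, rng.normal(size=(100, 3))))  # independent: near zero
```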

  6. Nonparametric optimization of constrained total variation for tomography reconstruction.

    PubMed

    Liu, Li; Yin, Zhaofang; Ma, Xueyun

    2013-12-01

    Iterative algorithms based on constrained total-variation (TV) optimization are effective for image reconstruction from limited X-ray computed tomography (CT) data. Such algorithms can be executed by alternating projection-onto-convex-sets (POCS) operations on the constraints with gradient-descent steps that minimize the TV objective. To balance TV-gradient descent with POCS, the adaptive-steepest-descent (ASD) method utilizes a set of complicated parameters to adjust the TV-gradient-descent step-size. The optimal parameters are difficult for users to select, and moreover, users have to empirically choose different parameters when reconstructing different types of images. To deal with these drawbacks, this paper proposes a nonparametric method for constrained TV optimization. The method automatically updates the step-size of the TV iteration according to the changes in the consistency term defined by the constraints, without introducing artificial parameters. The proposed method avoids time-consuming parameter optimization and can be conveniently implemented in various applications. Experimental results on phantom data demonstrate the flexibility and effectiveness of the proposed method.
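
    The paper's exact step-size rule is not reproduced in the abstract; the sketch below is a generic POCS-plus-TV loop in which the TV step is scaled by the change made by the consistency step, which is the flavor of update described. All names and constants are illustrative assumptions, not the authors' code:

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    # gradient of TV = negative divergence of the normalized image gradient
    return -((gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0)))

def tv_pocs(A, b, shape, n_outer=50, n_tv=10, scale=0.2):
    """Alternate a data-consistency step with TV-descent steps whose size is
    tied to the change produced by the consistency step (no hand-tuned
    step-size schedule)."""
    x = np.zeros(A.shape[1])
    lip = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of A^T A
    for _ in range(n_outer):
        x_prev = x.copy()
        x = x + A.T @ (b - A @ x) / lip        # enforce data consistency
        x = np.clip(x, 0.0, None)              # nonnegativity constraint
        dp = np.linalg.norm(x - x_prev)        # distance moved by that step
        img = x.reshape(shape)
        for _ in range(n_tv):
            g = tv_grad(img)
            img = img - scale * dp * g / (np.linalg.norm(g) + 1e-12)
        x = img.ravel()
    return x.reshape(shape)

# Tiny demo with a random system standing in for a CT projector.
rng = np.random.default_rng(0)
shape = (16, 16)
A = rng.normal(size=(150, shape[0] * shape[1]))
truth = np.zeros(shape)
truth[4:12, 4:12] = 1.0                        # piecewise-constant phantom
b = A @ truth.ravel()
rec = tv_pocs(A, b, shape)
```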

  7. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage is that they are model-free while sharing the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to normal theory intervals. Power and efficiency studies agree with previously published theoretical results. PMID:19440838

  8. Nonparametric k-nearest-neighbor entropy estimator.

    PubMed

    Lombardi, Damiano; Pant, Sanjay

    2016-01-01

    A nonparametric k-nearest-neighbor-based entropy estimator is proposed. It improves on the classical Kozachenko-Leonenko estimator by considering nonuniform probability densities in the region of k-nearest neighbors around each sample point. It aims to improve the classical estimators in three situations: first, when the dimensionality of the random variable is large; second, when near-functional relationships leading to high correlation between components of the random variable are present; and third, when the marginal variances of random variable components vary significantly with respect to each other. Heuristics on the error of the proposed and classical estimators are presented. Finally, the proposed estimator is tested for a variety of distributions in successively increasing dimensions and in the presence of a near-functional relationship. Its performance is compared with a classical estimator, and a significant improvement is demonstrated. PMID:26871193
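
    For orientation, the classical Kozachenko-Leonenko estimator that this work improves upon can be written compactly; the sketch below (in nats) checks it against the known entropy of a bivariate standard normal:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k=3):
    """Classical Kozachenko-Leonenko k-NN entropy estimate, in nats:
    H ~= psi(n) - psi(k) + log(V_d) + (d/n) * sum_i log eps_i."""
    n, d = X.shape
    eps, _ = cKDTree(X).query(X, k=k + 1)     # first hit is the point itself
    eps = eps[:, -1]                          # distance to the k-th neighbor
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))

# Sanity check against the exact entropy of a bivariate standard normal.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
print(kl_entropy(X), np.log(2 * np.pi * np.e))  # estimate vs. exact value
```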

  10. Nonparametric dark energy reconstruction from supernova data.

    PubMed

    Holsclaw, Tracy; Alam, Ujjaini; Sansó, Bruno; Lee, Herbert; Heitmann, Katrin; Habib, Salman; Higdon, David

    2010-12-10

    Understanding the origin of the accelerated expansion of the Universe poses one of the greatest challenges in physics today. Lacking a compelling fundamental theory to test, observational efforts are targeted at a better characterization of the underlying cause. If a new form of mass-energy, dark energy, is driving the acceleration, the redshift evolution of the equation of state parameter w(z) will hold essential clues as to its origin. To best exploit data from observations it is necessary to develop a robust and accurate reconstruction approach, with controlled errors, for w(z). We introduce a new, nonparametric method for solving the associated statistical inverse problem based on Gaussian process modeling and Markov chain Monte Carlo sampling. Applying this method to recent supernova measurements, we reconstruct the continuous history of w out to redshift z=1.5.
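
    The full method combines GP modeling with Markov chain Monte Carlo over hyperparameters; the sketch below shows only the underlying GP regression step, with invented (z, w) values purely for illustration and the prior mean fixed at w = -1:

```python
import numpy as np

def sqexp(a, b, ell=0.5, amp=1.0):
    """Squared-exponential covariance kernel."""
    return amp ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Invented noisy estimates of w at a few redshifts, for illustration only.
z_obs = np.array([0.1, 0.3, 0.5, 0.8, 1.1, 1.4])
w_obs = np.array([-1.02, -0.97, -1.05, -0.93, -1.10, -0.90])
noise = 0.1

z_grid = np.linspace(0.0, 1.5, 100)
K = sqexp(z_obs, z_obs) + noise ** 2 * np.eye(z_obs.size)
Ks = sqexp(z_grid, z_obs)
alpha = np.linalg.solve(K, w_obs - (-1.0))    # GP prior mean fixed at w = -1
w_mean = -1.0 + Ks @ alpha                    # posterior mean of w(z)
w_var = sqexp(z_grid, z_grid).diagonal() \
        - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
```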

  11. A Nonparametric Bayesian Model for Nested Clustering.

    PubMed

    Lee, Juhee; Müller, Peter; Zhu, Yitan; Ji, Yuan

    2016-01-01

    We propose a nonparametric Bayesian model for clustering where clusters of experimental units are determined by a shared pattern of clustering another set of experimental units. The proposed model is motivated by the analysis of protein activation data, where we cluster proteins such that all proteins in one cluster give rise to the same clustering of patients. That is, we define clusters of proteins by the way that patients group with respect to the corresponding protein activations. This is in contrast to (almost) all currently available models that use shared parameters in the sampling model to define clusters. This includes in particular model based clustering, Dirichlet process mixtures, product partition models, and more. We show results for two typical biostatistical inference problems that give rise to clustering. PMID:26519174

  12. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.
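
    To illustrate the role the bandwidth plays (the quantity this paper estimates by Bayesian means), a minimal Gaussian kernel density estimate where h is swept by hand over under-, moderately, and over-smoothed values:

```python
import numpy as np

def gaussian_kde(grid, samples, h):
    """Gaussian kernel density estimate with bandwidth h."""
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (samples.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 300)])
grid = np.linspace(-6, 6, 200)

# Too small an h gives a spiky estimate; too large blurs the two modes.
for h in (0.05, 0.3, 1.5):
    density = gaussian_kde(grid, samples, h)
    print(h, density.max())
```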

  13. Nonparametric inference of network structure and dynamics

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion, that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among

  14. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    accuracy, we perform a sensitivity study to further tune covariance parameters. Finally, we introduce a shrinkage based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically in a more efficient and automated way.

  15. [Structural adjustment, cultural adjustment?].

    PubMed

    Dujardin, B; Dujardin, M; Hermans, I

    2003-12-01

    Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques which have been made. The cultural consequences of SAPs are introduced and are described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority, not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.

  16. The incredible shrinking covariance estimator

    NASA Astrophysics Data System (ADS)

    Theiler, James

    2012-05-01

    Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
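
    A minimal sketch of linear shrinkage toward a scaled identity, with the shrinkage parameter chosen by a naive held-out likelihood search; the paper's point is that the LOOC choice can be approximated far more cheaply than such direct validation (all data and the alpha grid are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def shrink_cov(X, alpha):
    """Linear shrinkage of the sample covariance toward a scaled identity."""
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))                 # few samples, many dimensions
train, test = X[:20], X[20:]

# Naive held-out search for the shrinkage parameter; LOOC approximations
# replace this loop with a closed-form pass over the training data.
best = max((multivariate_normal(train.mean(0), shrink_cov(train, a))
            .logpdf(test).sum(), a) for a in np.linspace(0.05, 0.95, 10))
print(best)   # (held-out log-likelihood, chosen alpha)
```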

  17. A nonparametric approach for statistical comparison of results from alternative forced choice experiments

    NASA Astrophysics Data System (ADS)

    Noo, Frédéric; Wunderlich, Adam; Heuscher, Dominic; Schmitt, Katharina; Yu, Zhicong

    2013-03-01

    Task-based image quality assessment is a valuable methodology for development, optimization and evaluation of new image formation processes in x-ray computed tomography (CT), as well as in other imaging modalities. A simple way to perform such an assessment is through the use of two (or more) alternative forced choice (AFC) experiments. In this paper, we are interested in drawing statistical inference from outcomes of multiple AFC experiments that are obtained using multiple readers as well as multiple cases. We present a non-parametric covariance estimator for this problem. Then, we illustrate its usefulness with a practical example involving x-ray CT simulations. The task for this example is classification between presence or absence of one lesion with unknown location within a given object. This task is used for comparison of three standard image reconstruction algorithms in x-ray CT using four human observers.

  18. [Clinical research XIX. From clinical judgment to analysis of covariance].

    PubMed

    Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Moreno, Jorge; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2014-01-01

    The analysis of covariance (ANCOVA) is based on general linear models. This technique involves a regression model, often multiple, in which the outcome is presented as a continuous variable, the independent variables are qualitative or are introduced into the model as dummy or dichotomous variables, and the factors for which adjustment is required (covariates) can be at any measurement level (i.e., nominal, ordinal or continuous). The maneuvers can be entered into the model as 1) fixed effects, or 2) random effects. The difference between fixed effects and random effects depends on the type of information we want from the analysis of the effects. ANCOVA separates the effect of the independent variables from that of the covariates, i.e., it corrects the dependent variable by removing the influence of the covariates, given that these variables change in conjunction with the maneuvers or treatments and affect the outcome variable. ANCOVA should be performed only if three assumptions are met: 1) the relationship between the covariate and the outcome is linear, 2) there is homogeneity of slopes, and 3) the covariate and the independent variable are independent of each other.
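
    A minimal ANCOVA sketch using the statsmodels formula interface on synthetic data: treatment enters as dummy variables, baseline as a continuous covariate, and the homogeneity-of-slopes assumption is checked via the interaction model (all variable names and values are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    'treatment': np.repeat(['A', 'B', 'C'], n // 3),
    'baseline': rng.normal(50, 10, n),
})
effect = df['treatment'].map({'A': 0.0, 'B': 3.0, 'C': 5.0})
df['outcome'] = 0.8 * df['baseline'] + effect + rng.normal(0, 4, n)

# ANCOVA: treatment as dummy variables, baseline as a continuous covariate.
fit = smf.ols('outcome ~ C(treatment) + baseline', data=df).fit()
print(fit.params)

# Homogeneity of slopes can be checked by testing the interaction terms.
fit_ix = smf.ols('outcome ~ C(treatment) * baseline', data=df).fit()
print(fit_ix.pvalues.filter(like=':'))
```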

  19. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    PubMed

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline. PMID:24400941

  1. Covariation neglect among novice investors.

    PubMed

    Hedesström, Ted Martin; Svedsäter, Henrik; Gärling, Tommy

    2006-09-01

    In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns of individual assets. In Experiment 3, nearly half of those who seemingly attempted to minimize risk diversified even when this increased risk. These results indicate that novice investors neglect covariation when diversifying across investment alternatives. Experiment 4 established that naive diversification follows from motivation to minimize risk and showed that covariation neglect was not significantly reduced by informing participants about how covariation affects portfolio risk but was reduced by making participants systematically calculate aggregate returns for diversified portfolios. In order to counteract naive diversification, novice investors need to be better informed about the rationale underlying recommendations to diversify.
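
    The arithmetic behind covariation neglect is easy to demonstrate: the portfolio standard deviation is sqrt(w' Sigma w), and with strongly positively covarying assets a naive 50/50 split can carry more risk than holding the safer asset alone (numbers are illustrative):

```python
import numpy as np

sd = np.array([0.10, 0.20])                   # asset return std. deviations
rho = 0.9                                     # correlation between returns
Sigma = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                  [rho * sd[0] * sd[1], sd[1]**2]])

for w in (np.array([1.0, 0.0]),               # hold only the safer asset
          np.array([0.5, 0.5])):              # naive diversification
    print(w, np.sqrt(w @ Sigma @ w))          # 0.100 vs. ~0.147
```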

  2. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.

  4. Covariant closed string coherent states.

    PubMed

    Hindmarsh, Mark; Skliros, Dimitri

    2011-02-25

    We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops. PMID:21405564

  5. NONPARAMETRIC BAYESIAN ESTIMATION OF PERIODIC LIGHT CURVES

    SciTech Connect

    Wang Yuyang; Khardon, Roni; Protopapas, Pavlos

    2012-09-01

    Many astronomical phenomena exhibit patterns that have periodic behavior. An important step when analyzing data from such processes is the problem of identifying the period: estimating the period of a periodic function based on noisy observations made at irregularly spaced time points. This problem is still a difficult challenge despite extensive study in different disciplines. This paper makes several contributions toward solving this problem. First, we present a nonparametric Bayesian model for period finding, based on Gaussian Processes (GPs), that does not make assumptions on the shape of the periodic function. As our experiments demonstrate, the new model leads to significantly better results in period estimation, especially when the light curve does not exhibit a sinusoidal shape. Second, we develop a new algorithm for parameter optimization for GPs, which is useful when the likelihood function is very sensitive to the parameters and has numerous local minima, as in the case of period estimation. The algorithm combines gradient optimization with grid search and incorporates several mechanisms to overcome the high computational complexity of GP. Third, we develop a novel approach for using domain knowledge, in the form of a probabilistic generative model, and incorporate it into the period estimation algorithm. Experimental results validate our approach, showing significant improvement over existing methods.
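
    The paper's model and optimizer are considerably more elaborate; the sketch below shows only the core idea of scoring candidate periods by the GP marginal likelihood under a standard periodic kernel, on a synthetic non-sinusoidal signal (all parameter values are illustrative):

```python
import numpy as np

def periodic_kernel(t1, t2, period, ell=0.5, amp=1.0):
    """Standard periodic (exp-sine-squared) covariance kernel."""
    d = np.abs(t1[:, None] - t2[None, :])
    return amp ** 2 * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell ** 2)

def log_marginal(t, y, period, noise=0.1):
    """GP log marginal likelihood of the data for a candidate period."""
    K = periodic_kernel(t, t, period) + noise ** 2 * np.eye(t.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * t.size * np.log(2 * np.pi))

# Irregularly sampled, noisy, non-sinusoidal signal with true period 1.7.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20, 120))
y = np.sign(np.sin(2 * np.pi * t / 1.7)) + 0.1 * rng.normal(size=t.size)

periods = np.linspace(0.5, 3.0, 400)
best = periods[np.argmax([log_marginal(t, y, p) for p in periods])]
print(best)  # should land near 1.7 (or one of its harmonics)
```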

  6. Nonparametric estimation of the rediscovery rate.

    PubMed

    Lee, Donghwan; Ganna, Andrea; Pawitan, Yudi; Lee, Woojoo

    2016-08-15

    Validation studies have been used to increase the reliability of the statistical conclusions for scientific discoveries; such studies improve the reproducibility of the findings and reduce the possibility of false positives. Here, one of the important roles of statistics is to quantify reproducibility rigorously. Two concepts were recently defined for this purpose: (i) rediscovery rate (RDR), which is the expected proportion of statistically significant findings in a study that can be replicated in the validation study and (ii) false discovery rate in the validation study (vFDR). In this paper, we aim to develop a nonparametric approach to estimate the RDR and vFDR and show an explicit link between the RDR and the FDR. Among other things, the link explains why reproducing statistically significant results even with low FDR level may be difficult. Two metabolomics datasets are considered to illustrate the application of the RDR and vFDR concepts in high-throughput data analysis. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Nonparametric methods in actigraphy: An update

    PubMed Central

    Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.

    2014-01-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), used to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by varying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) across the time intervals. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable that the IV60 variable did not identify. Rhythmic synchronization of activity and rest was significantly higher in the young than in adults with Parkinson's disease when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization. PMID:26483921
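
    The classical hourly definitions of IV and IS that this work generalizes can be written directly; a minimal sketch (the reshape assumes whole days of equally spaced samples, and the activity trace is synthetic):

```python
import numpy as np

def intradaily_variability(x):
    """IV: normalized mean square of successive differences (fragmentation)."""
    num = x.size * np.sum(np.diff(x) ** 2)
    den = (x.size - 1) * np.sum((x - x.mean()) ** 2)
    return num / den

def interdaily_stability(x, samples_per_day=24):
    """IS: variance of the mean 24-h profile over total variance (synchronization)."""
    days = x.reshape(-1, samples_per_day)     # assumes whole days of data
    profile = days.mean(axis=0)
    num = x.size * np.sum((profile - x.mean()) ** 2)
    den = samples_per_day * np.sum((x - x.mean()) ** 2)
    return num / den

rng = np.random.default_rng(0)
t = np.arange(7 * 24)                         # one week of hourly epochs
activity = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.1 * rng.random(t.size)
print(interdaily_stability(activity), intradaily_variability(activity))
```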

  8. Non-parametric estimation of morphological lopsidedness

    NASA Astrophysics Data System (ADS)

    Giese, Nadine; van der Hulst, Thijs; Serra, Paolo; Oosterloo, Tom

    2016-09-01

    Asymmetries in the neutral hydrogen gas distribution and kinematics of galaxies are thought to be indicators for both gas accretion and gas removal processes. These are of fundamental importance for galaxy formation and evolution. Upcoming large blind H I surveys will provide tens of thousands of galaxies for a study of these asymmetries in a proper statistical way. Due to the large number of expected sources and the limited resolution of the majority of objects, detailed modelling is not feasible for most detections. We need fast, automatic and sensitive methods to classify these objects in an objective way. Existing non-parametric methods suffer from effects like the dependence on signal to noise, resolution and inclination. Here we show how to correctly take these effects into account and show ways to estimate the precision of the methods. We will use existing and modelled data to give an outlook on the performance expected for galaxies observed in the various sky surveys planned for e.g. WSRT/APERTIF and ASKAP.

  9. Bayesian nonparametric adaptive control using Gaussian processes.

    PubMed

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  10. Nonparametric identification of petrogenic and pyrogenic hydrocarbons in aquatic ecosystems.

    PubMed

    Carls, Mark G

    2006-07-01

    Novel nonparametric models developed herein discriminated between oiled and nonoiled or pyrogenic and oiled sources better than traditionally used diagnostic ratios and can outperform previously published oil identification models. These methods were compared using experimental and environmental hydrocarbon data (sediment, mussels, water, and fish) associated with the Exxon Valdez oil spill. Several nonparametric models were investigated, one designed to detect petroleum in general, one specific to Alaska North Slope crude oil (ANS), and one designed to detect pyrogenic PAH. These ideas are intended as guidance; nonparametric models can easily be adapted to fit the specific needs of a variety of petrogenic and pyrogenic sources. Oil identification was clearly difficult where composition was modified by physical or biological processes; model results differed most in these cases, suggesting that a multiple model approach to source discrimination may be useful where data interpretation is contentious. However, a combined nonparametric model best described a broad range of hydrocarbon sources, thus providing a useful new analytical assessment tool.

  11. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.

  12. Levy Matrices and Financial Covariances

    NASA Astrophysics Data System (ADS)

    Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail

    2003-10-01

    In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures behavior of the SP500 covariances. It may be of importance for assets diversification.

  13. Parametric and nonparametric analysis of life courses: an application to family formation patterns.

    PubMed

    Bonetti, Marco; Piccarreta, Raffaella; Salford, Gaia

    2013-06-01

    We discuss a unified approach to the description and explanation of life course patterns represented as sequences of states observed in discrete time. In particular, we study life course data collected as part of the Dutch Fertility and Family Surveys (FFS) to learn about the family formation behavior of 1,897 women born between 1953 and 1962. Retrospective monthly data were available on each 18- to 30-year-old woman living either with or without children as single, married, or cohabiting. We first study via a nonparametric approach which factors explain the pairwise dissimilarities observed between life courses. Permutation distribution inference allows for the study of the statistical significance of the effect of a set of covariates of interest. We then develop a parametric model for the sequence-generating process that can be used to describe state transitions and durations conditional on covariates and conditional on having observed an initial segment of the trajectory. Fitting of the proposed model and the corresponding model selection process are based on the observed data likelihood. We discuss the application of the methods to the FFS.

  14. Unveiling acoustic physics of the CMB using nonparametric estimation of the temperature angular power spectrum for Planck

    SciTech Connect

    Aghamousa, Amir; Shafieloo, Arman; Arjunwadkar, Mihir; Souradeep, Tarun E-mail: shafieloo@kasi.re.kr E-mail: tarun@iucaa.ernet.in

    2015-02-01

    Estimation of the angular power spectrum is one of the important steps in Cosmic Microwave Background (CMB) data analysis. Here, we present a nonparametric estimate of the temperature angular power spectrum for the Planck 2013 CMB data. The method implemented in this work is model-independent, and allows the data, rather than the model, to dictate the fit. Since one of the main targets of our analysis is to test the consistency of the ΛCDM model with Planck 2013 data, we use the nuisance parameters associated with the best-fit ΛCDM angular power spectrum to remove foreground contributions from the data at multipoles ℓ ≥50. We thus obtain a combined angular power spectrum data set together with the full covariance matrix, appropriately weighted over frequency channels. Our subsequent nonparametric analysis resolves six peaks (and five dips) up to ℓ ∼1850 in the temperature angular power spectrum. We present uncertainties in the peak/dip locations and heights at the 95% confidence level. We further show how these reflect the harmonicity of acoustic peaks, and can be used for acoustic scale estimation. Based on this nonparametric formalism, we found the best-fit ΛCDM model to be at 36% confidence distance from the center of the nonparametric confidence set—this is considerably larger than the confidence distance (9%) derived earlier from a similar analysis of the WMAP 7-year data. Another interesting result of our analysis is that at low multipoles, the Planck data do not suggest any upturn, contrary to the expectation based on the integrated Sachs-Wolfe contribution in the best-fit ΛCDM cosmology.

  15. Network reconstruction using nonparametric additive ODE models.

    PubMed

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  17. Density estimation with non-parametric methods

    NASA Astrophysics Data System (ADS)

    Fadda, D.; Slezak, E.; Bijaoui, A.

    1998-01-01

    One key issue in several astrophysical problems is the evaluation of the density probability function underlying an observational discrete data set. We here review two non-parametric density estimators which recently appeared in the astrophysical literature, namely the adaptive kernel density estimator and the Maximum Penalized Likelihood technique, and describe another method based on the wavelet transform. The efficiency of these estimators is tested by using extensive numerical simulations in the one-dimensional case. The results are in good agreement with theoretical functions and the three methods appear to yield consistent estimates. However, the Maximum Penalized Likelihood suffers from a lack of resolution and high computational cost due to its dependency on a minimization algorithm. The small differences between kernel and wavelet estimates are mainly explained by the ability of the wavelet method to take into account local gaps in the data distribution. This new approach is very promising, since smaller structures superimposed onto a larger one are detected only by this technique, especially when small samples are investigated. Thus, wavelet solutions appear to be better suited for subclustering studies. Nevertheless, kernel estimates seem more robust and are reliable solutions although some small-scale details can be missed. In order to check these estimators with respect to previous studies, two galaxy redshift samples, related to the galaxy cluster A3526 and to the Corona Borealis region, have been analyzed. In both these cases claims for bimodality are confirmed at a high confidence level. The complete version of this paper with the whole set of figures can be accessed from the electronic version of the A&A Suppl. Ser. managed by Editions de Physique as well as from the SISSA database (astro-ph/9704096).

  18. Covariance evaluation work at LANL

    SciTech Connect

    Kawano, Toshihiko; Talou, Patrick; Young, Phillip; Hale, Gerald; Chadwick, M B; Little, R C

    2008-01-01

    Los Alamos evaluates covariances for the nuclear data library, mainly for actinides above the resonance regions and light elements in the entire energy range. We also develop techniques to evaluate the covariance data, such as Bayesian and least-squares fitting methods, which are important for exploring the uncertainty information on different types of physical quantities such as elastic scattering angular distributions or prompt fission neutron spectra. This paper summarizes our current covariance evaluation work at LANL, including the actinide and light element data, mainly for criticality safety studies and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and experimental data, is discussed.
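
    The Kalman-filter-style Bayesian update mentioned here combines prior parameter uncertainties with experimental data through a sensitivity matrix; a generic linear sketch with hypothetical numbers (not an evaluation code):

```python
import numpy as np

def bayesian_update(x, C, G, y, V):
    """One linear Bayesian (Kalman) update of parameters x with prior
    covariance C, given measurements y = G x + noise with covariance V."""
    S = G @ C @ G.T + V                   # innovation covariance
    K = C @ G.T @ np.linalg.inv(S)        # gain
    x_new = x + K @ (y - G @ x)
    C_new = C - K @ G @ C                 # updated (reduced) covariance
    return x_new, C_new

# Hypothetical prior on two model parameters and two measured quantities.
x = np.array([1.0, 0.5])
C = np.diag([0.2 ** 2, 0.1 ** 2])
G = np.array([[1.0, 0.0], [1.0, 2.0]])    # sensitivities of the observables
y = np.array([1.1, 2.3])
V = np.diag([0.05 ** 2, 0.05 ** 2])
print(bayesian_update(x, C, G, y, V))
```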

  19. Neutron Cross Section Covariances: Recent Workshop and Advanced Reactor Systems

    NASA Astrophysics Data System (ADS)

    Oblozinsky, Pavel

    2008-10-01

    The recent Workshop on Neutron Cross Section Covariances, organized by BNL and attended by more than 50 scientists, responded to the demands of many user groups, including advanced reactor systems, for uncertainty and correlation information. These demands can be explained by considerable progress in advanced neutronics simulations that probe covariances and their impact on design and operational margins of nuclear systems. The Workshop addressed evaluation methodology and recent evaluations as well as the user's perspective, marking an era of revival of covariance development that started some two years ago. We illustrate the urgent demand for covariances in the case of advanced reactor systems, including the fast actinide burner under GNEP, the new generation of power reactors, Gen-IV, and reactors under AFCI. A common feature of many of these systems is the presence of large amounts of minor actinides and fission products that require improved nuclear data. Advanced simulation codes rely on quality input, to be obtained by adjusting the data library, such as the new ENDF/B-VII.0, by considering integral experiments as currently pursued by GNEP. To this end the nuclear data community is developing covariances for a formidable set of 112 materials (isotopes).

  20. Posterior covariance versus analysis error covariance in variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shutyaev, Victor; Gejadze, Igor; Le Dimet, Francois-Xavier

    2013-04-01

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition function (analysis) [1]. The data contain errors (observation and background errors), hence there is an error in the analysis. For mildly nonlinear dynamics, the analysis error covariance can be approximated by the inverse Hessian of the cost functional in the auxiliary data assimilation problem [2], whereas for stronger nonlinearity - by the 'effective' inverse Hessian [3, 4]. However, it has been noticed that the analysis error covariance is not the posterior covariance from the Bayesian perspective. While these two are equivalent in the linear case, the difference may become significant in practical terms with the nonlinearity level rising. For the proper Bayesian posterior covariance a new approximation via the Hessian of the original cost functional is derived and its 'effective' counterpart is introduced. An approach for computing the mentioned estimates in the matrix-free environment using Lanczos method with preconditioning is suggested. Numerical examples which validate the developed theory are presented for the model governed by the Burgers equation with a nonlinear viscous term. The authors acknowledge the funding through the Natural Environment Research Council (NERC grant NE/J018201/1), the Russian Foundation for Basic Research (project 12-01-00322), the Ministry of Education and Science of Russia, the MOISE project (CNRS, INRIA, UJF, INPG) and Région Rhône-Alpes. References: 1. Le Dimet F.X., Talagrand O. Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus, 1986, v.38A, pp.97-110. 2. Gejadze I., Le Dimet F.-X., Shutyaev V. On analysis error covariances in variational data assimilation. SIAM J. Sci. Computing, 2008, v.30, no.4, pp.1847-1874. 3. Gejadze I.Yu., Copeland G.J.M., Le Dimet F.-X., Shutyaev V. Computation of the analysis error

  1. Merging multiple longitudinal studies with study-specific missing covariates: A joint estimating function approach.

    PubMed

    Wang, Fei; Song, Peter X-K; Wang, Lu

    2015-12-01

    Merging multiple datasets collected from studies with identical or similar scientific objectives is often undertaken in practice to increase statistical power. This article concerns the development of an effective statistical method that enables merging of multiple longitudinal datasets subject to various heterogeneous characteristics, such as different follow-up schedules and study-specific missing covariates (e.g., covariates observed in some studies but missing in other studies). The presence of study-specific missing covariates poses a great methodological challenge in data merging and analysis. We propose a joint estimating function approach to addressing this challenge, in which a novel nonparametric estimating function constructed via splines-based sieve approximation is utilized to bridge estimating equations from studies with missing covariates to those with fully observed covariates. Under mild regularity conditions, we show that the proposed estimator is consistent and asymptotically normal. We evaluate finite-sample performances of the proposed method through simulation studies. In comparison to the conventional multiple imputation approach, our method exhibits smaller estimation bias. We provide an illustrative data analysis using longitudinal cohorts collected in Mexico City to assess the effect of lead exposures on children's somatic growth.

  2. A simulation-based marginal method for longitudinal data with dropout and mismeasured covariates.

    PubMed

    Yi, Grace Y

    2008-07-01

    Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods to adjust for the bias induced by missing observations. There is relatively little work on investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. In this paper, we establish the asymptotic properties for the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method. PMID:18199691

  3. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
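
    The paper derives the optimal estimator by maximum likelihood; the sketch below uses a cruder fixed truncation point only to illustrate the mechanism of bounding the condition number by clipping sample eigenvalues:

```python
import numpy as np

def clip_condition(S, kappa_max):
    """Return an SPD estimate whose condition number is at most kappa_max,
    by truncating the sample eigenvalues from below. The paper chooses the
    truncation point by maximum likelihood; a fixed crude choice is used
    here purely for illustration."""
    vals, vecs = np.linalg.eigh(S)
    u = vals.max() / kappa_max                # crude lower truncation point
    clipped = np.clip(vals, u, None)
    return (vecs * clipped) @ vecs.T

rng = np.random.default_rng(0)
X = rng.normal(size=(15, 40))                 # "large p, small n"
S = np.cov(X, rowvar=False)                   # singular: cond(S) is infinite
S_hat = clip_condition(S, kappa_max=100.0)
print(np.linalg.cond(S_hat))                  # ~100, and S_hat is invertible
```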

  4. Are Eddy Covariance series stationary?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral analysis via a discrete Fourier transform is used often to examine eddy covariance series for cycles (eddies) of interest. Generally the analysis is performed on hourly or half-hourly data sets collected at 10 or 20 Hz. Each original series is often assumed to be stationary. Also automated ...

  5. Covariation Neglect among Novice Investors

    ERIC Educational Resources Information Center

    Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy

    2006-01-01

    In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…

  6. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. PMID:26224325
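
    For concreteness, the classical Good-Turing quantities that the Bayesian nonparametric estimators are shown to smooth can be computed in a few lines; this sketch covers only the frequentist side of the comparison.

      from collections import Counter

      def good_turing(sample):
          """Good-Turing estimates: the probability that the next draw is an
          unseen species (n1/n) and, for each r, the total probability mass of
          species observed exactly r times, (r + 1) * n_{r+1} / n."""
          n = len(sample)
          counts = Counter(sample)           # species -> count
          n_r = Counter(counts.values())     # r -> number of species seen r times
          p_new = n_r.get(1, 0) / n
          p_r = {r: (r + 1) * n_r.get(r + 1, 0) / n for r in sorted(n_r)}
          return p_new, p_r

      p_new, p_r = good_turing(list("abracadabra"))
      print(p_new)    # 2/11: two singletons ('c' and 'd') among 11 draws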

  7. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
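
    To make the framework concrete, a hedged sketch of the textbook consequence of g(0) = 1: the density estimate D = n*f(0)/(2L) for total line length L, with f(0) estimated here by a crude first histogram bin (a kernel estimate of f at zero could be substituted); all numbers are illustrative.

      import numpy as np

      def transect_density(y, L, bin_width=None):
          """Line-transect density estimate D = n * f(0) / (2 L), where f(0)
          is the perpendicular-distance density at zero, estimated from the
          first histogram bin; the bin width trades bias against variance."""
          y = np.asarray(y)
          if bin_width is None:
              bin_width = y.max() / 10.0
          f0 = np.mean(y < bin_width) / bin_width
          return y.size * f0 / (2.0 * L)

      rng = np.random.default_rng(2)
      y = rng.exponential(scale=5.0, size=200)   # detections fall off with distance
      print(transect_density(y, L=1000.0))       # objects per unit area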

  8. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235
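
    A hedged sketch of this kind of model comparison, assuming scikit-learn and substituting synthetic features for the study's Bloomberg variables; the power-law generator below merely mimics an I-star-like impact curve.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.svm import SVR
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(3)
      X = rng.lognormal(size=(2000, 3))          # stand-ins for the three inputs
      y = 0.5 * X[:, 0] ** 0.6 * X[:, 1] ** 0.4 + rng.normal(scale=0.05, size=2000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      for name, model in [("SVR", SVR()),
                          ("MLP", MLPRegressor(hidden_layer_sizes=(32,),
                                               max_iter=2000, random_state=0))]:
          model.fit(X_tr, y_tr)
          print(name, mean_absolute_error(y_te, model.predict(X_te)))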

  9. Graph embedded nonparametric mutual information for supervised dimensionality reduction.

    PubMed

    Bouzas, Dimitrios; Arvanitopoulos, Nikolaos; Tefas, Anastasios

    2015-05-01

    In this paper, we propose a novel algorithm for dimensionality reduction that uses as a criterion the mutual information (MI) between the transformed data and their corresponding class labels. The MI is a powerful criterion that can be used as a proxy to the Bayes error rate. Furthermore, recent quadratic nonparametric implementations of MI are computationally efficient and do not require any prior assumptions about the class densities. We show that the quadratic nonparametric MI can be formulated as a kernel objective in the graph embedding framework. Moreover, we propose its linear equivalent as a novel linear dimensionality reduction algorithm. The derived methods are compared against state-of-the-art dimensionality reduction algorithms with various classifiers and on various benchmark and real-life datasets. The experimental results show that nonparametric MI as an optimization objective for dimensionality reduction gives comparable and, in most cases, better results than other dimensionality reduction methods. PMID:25881367

  10. Neutron Cross Section Covariances for Structural Materials and Fission Products

    SciTech Connect

    Hoblit, S.; Cho, Y.-S.; Herman, M.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.T.; Sonzogni, A.A.

    2011-12-01

    We describe neutron cross section covariances for 78 structural materials and fission products produced for the new US evaluated nuclear reaction library ENDF/B-VII.1. Neutron incident energies cover the full range from 10⁻⁵ eV to 20 MeV, and covariances are primarily provided for capture, elastic and inelastic scattering as well as (n,2n). The list of materials follows priorities defined by the Advanced Fuel Cycle Initiative, the major application being data adjustment for advanced fast reactor systems. Thus, in addition to 28 structural materials and 49 fission products, the list also includes ²³Na, which is an important fast reactor coolant. Due to the extensive number of materials, we adopted a variety of methodologies depending on the priority of a specific material. In the resolved resonance region we primarily used resonance parameter uncertainties given in the Atlas of Neutron Resonances and either applied the kernel approximation to propagate these uncertainties into cross section uncertainties or resorted to simplified estimates based on integral quantities. For several priority materials we adopted MF32 covariances produced by SAMMY at ORNL, modified by us by adding MF33 covariances to account for systematic uncertainties. In the fast neutron region we resorted to three methods. The most sophisticated was the EMPIRE-KALMAN method, which combines experimental data from the EXFOR library with nuclear reaction modeling and least-squares fitting. The two other methods used simplified estimates, either based on the propagation of nuclear reaction model parameter uncertainties or on a dispersion analysis of central cross section values in recent evaluated data files. All covariances were subject to quality assurance procedures adopted recently by CSEWG. In addition, tools were developed to allow inspection of processed covariances and computed integral quantities, and for comparing these values to data from the Atlas and the astrophysics database KADoNiS.
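
    The kernel approximation used to propagate resonance-parameter uncertainties is, at first order, the usual sandwich rule Cov_sigma = S Cov_p S'; a generic numerical sketch with invented sensitivities (not evaluated data):

      import numpy as np

      cov_p = np.diag([0.02, 0.05]) ** 2       # resonance-parameter covariance
      S = np.array([[120.0, -3.0],             # S[i, j] = d(sigma_i)/d(p_j):
                    [ 40.0, 15.0],             # sensitivities of three energy
                    [  5.0, 60.0]])            # groups to two parameters
      cov_sigma = S @ cov_p @ S.T              # propagated group covariance
      std = np.sqrt(np.diag(cov_sigma))
      print(std)                               # group-wise uncertainties
      print(cov_sigma / np.outer(std, std))    # inter-group correlations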

  11. A Nonparametric Statistical Method That Improves Physician Cost of Care Analysis

    PubMed Central

    Metfessel, Brent A; Greene, Robert A

    2012-01-01

    Objective To develop a compositing method that demonstrates improved performance compared with commonly used tests for statistical analysis of physician cost of care data. Data Source Commercial preferred provider organization (PPO) claims data for internists from a large metropolitan area. Study Design We created a nonparametric composite performance metric that maintains risk adjustment using the Wilcoxon rank-sum (WRS) test. We compared the resulting algorithm to the parametric observed-to-expected ratio, with and without a statistical test, for stability of physician cost ratings among different outlier trimming methods and across two partially overlapping time periods. Principal Findings The WRS algorithm showed significantly greater within-physician stability among several typical outlier trimming and capping methods. The algorithm also showed significantly greater within-physician stability when the same physicians were analyzed across time periods. Conclusions The nonparametric algorithm described is a more robust and more stable methodology for evaluating physician cost of care than commonly used observed-to-expected ratio techniques. Use of such an algorithm can improve physician cost assessment for important current applications such as public reporting, pay for performance, and tiered benefit design. PMID:22524195
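
    A minimal sketch of the building block the composite metric rests on, assuming scipy: each physician's episode costs are compared with pooled peer costs by the Wilcoxon rank-sum test (the published algorithm composites such statistics within risk-adjustment cells, which this sketch omits).

      import numpy as np
      from scipy.stats import ranksums

      rng = np.random.default_rng(4)
      costs = {f"dr_{i}": rng.lognormal(mean=7.0 + 0.1 * (i == 0), sigma=0.5, size=60)
               for i in range(5)}               # dr_0 is systematically costlier

      for doc, own in costs.items():
          peers = np.concatenate([c for d, c in costs.items() if d != doc])
          stat, p = ranksums(own, peers)        # positive stat: costlier than peers
          print(f"{doc}: z = {stat:+.2f}, p = {p:.3f}")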

  12. Minimal unitary (covariant) scattering theory

    SciTech Connect

    Lindesay, J.V.; Markevich, A.

    1983-06-01

    In the minimal three particle equations developed by Lindesay the two body input amplitude was an on shell relativistic generalization of the non-relativistic scattering model characterized by a single mass parameter μ which in the two body (m + m) system looks like an s-channel bound state (μ < 2m) or virtual state (μ > 2m). Using this driving term in covariant Faddeev equations generates a rich covariant and unitary three particle dynamics. However, the simplest way of writing the relativistic generalization of the Faddeev equations can take the on shell Mandelstam parameter s = 4(q² + m²), in terms of which the two particle input is expressed, to negative values in the range of integration required by the dynamics. This problem was met in the original treatment by multiplying the two particle input amplitude by Θ(s). This paper provides what we hope to be a more direct way of meeting the problem.

  13. Understanding covariate shift in model performance

    PubMed Central

    McGaughey, Georgia; Walters, W. Patrick; Goldman, Brian

    2016-01-01

    Three (3) different methods (logistic regression, covariate shift and k-NN) were applied to five (5) internal datasets and one (1) external, publicly available dataset where covariate shift existed. In all cases, k-NN’s performance was inferior to either logistic regression or covariate shift. Surprisingly, there was no obvious advantage to using covariate shift to reweight the training data in the examined datasets. PMID:27803797
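
    A sketch of covariate-shift reweighting via the common discriminative density-ratio trick (train a classifier to separate training from test inputs and weight each training point by the implied odds); assumes scikit-learn and synthetic data, and is not the authors' exact implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      X_train = rng.normal(loc=0.0, size=(1000, 2))
      X_test = rng.normal(loc=0.7, size=(1000, 2))    # shifted covariates

      Z = np.vstack([X_train, X_test])                # label 0 = train, 1 = test
      s = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
      clf = LogisticRegression().fit(Z, s)
      p = clf.predict_proba(X_train)[:, 1]
      weights = p / (1.0 - p)                         # ~ p_test(x) / p_train(x)
      print(weights.min(), weights.mean(), weights.max())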

  14. Covariant jump conditions in electromagnetism

    NASA Astrophysics Data System (ADS)

    Itin, Yakov

    2012-02-01

    A generally covariant four-dimensional representation of Maxwell's electrodynamics in a generic material medium can be achieved straightforwardly in the metric-free formulation of electromagnetism. In this setup, the electromagnetic phenomena are described by two tensor fields, which satisfy Maxwell's equations. A generic tensorial constitutive relation between these fields is an independent ingredient of the theory. By use of different constitutive relations (local and non-local, linear and non-linear, etc.), a wide area of applications can be covered. In the current paper, we present the jump conditions for the fields and for the energy-momentum tensor on an arbitrarily moving surface between two media. From the differential and integral Maxwell equations, we derive the covariant boundary conditions, which are independent of any metric and connection. These conditions include the covariantly defined surface current and are applicable to an arbitrarily moving smooth curved boundary surface. As an application of the presented jump formulas, we derive a Lorentzian type metric as a condition for existence of the wave front in isotropic media. This result holds for ordinary materials as well as for metamaterials with negative material constants.

  15. Covariate analysis of survival data: a small-sample study of Cox's model

    SciTech Connect

    Johnson, M.E.; Tolley, H.D.; Bryson, M.C.; Goldman, A.S.

    1982-09-01

    Cox's proportional-hazards model is frequently used to adjust for covariate effects in survival-data analysis. The small-sample performances of the maximum partial likelihood estimators of the regression parameters in a two-covariate hazard function model are evaluated with respect to bias, variance, and power in hypothesis tests. Previous Monte Carlo work on the two-sample problem is reviewed.

  16. Lorentz-covariant dissipative Lagrangian systems

    NASA Technical Reports Server (NTRS)

    Kaufman, A. N.

    1985-01-01

    The concept of dissipative Hamiltonian system is converted to Lorentz-covariant form, with evolution generated jointly by two scalar functionals, the Lagrangian action and the global entropy. A bracket formulation yields the local covariant laws of energy-momentum conservation and of entropy production. The formalism is illustrated by a derivation of the covariant Landau kinetic equation.

  17. Covariance control of discrete stochastic bilinear systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Kherat, S. M.; Yaz, E.

    1991-01-01

    The covariances that certain bilinear stochastic discrete time systems may possess are characterized. An explicit parameterization of all controllers that assign such covariances is given. The state feedback assignability and robustness of the system are discussed from a deterministic point of view. This work extends the theory of covariance control for continuous time bilinear systems to a discrete time setting.
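
    For the linear special case, the steady-state state covariance produced by a fixed closed-loop matrix solves a discrete Lyapunov equation, which makes covariance assignment easy to check numerically; a hedged scipy sketch (the paper's bilinear setting adds state-multiplicative terms not modeled here):

      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      # x_{k+1} = A x_k + w_k, w_k ~ N(0, W): steady state X satisfies X = A X A' + W
      A = np.array([[0.8, 0.1],
                    [0.0, 0.5]])                  # stable closed-loop dynamics
      W = 0.1 * np.eye(2)                         # process noise covariance
      X = solve_discrete_lyapunov(A, W)
      print(X)
      print(np.allclose(A @ X @ A.T + W, X))      # True: X is the achieved covariance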

  1. Relative error covariance analysis techniques and application

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Williams, Bobby G.

    1988-01-01

    A technique for computing the error covariance of the difference between two estimators derived from different (possibly overlapping) data arcs is presented. The relative error covariance is useful for predicting the achievable consistency between Kalman-Bucy filtered estimates generated from two (not necessarily disjoint) data sets. The relative error covariance analysis technique is then applied to a Venus Orbiter simulation.

  2. Determination of Resonance Parameters and their Covariances from Neutron Induced Reaction Cross Section Data

    SciTech Connect

    Schillebeeckx, P.; Becker, B.; Danon, Y.; Guber, K.; Harada, H.; Heyse, J.; Junghans, A.R.; Kopecky, S.; Massimi, C.; Moxon, M.C.; Otuka, N.; Sirakov, I.; Volev, K.

    2012-12-15

    Cross section data in the resolved and unresolved resonance region are represented by nuclear reaction formalisms using parameters which are determined by fitting them to experimental data. Therefore, the quality of evaluated cross sections in the resonance region strongly depends on the experimental data used in the adjustment process and an assessment of the experimental covariance data is of primary importance in determining the accuracy of evaluated cross section data. In this contribution, uncertainty components of experimental observables resulting from total and reaction cross section experiments are quantified by identifying the metrological parameters involved in the measurement, data reduction and analysis process. In addition, different methods that can be applied to propagate the covariance of the experimental observables (i.e. transmission and reaction yields) to the covariance of the resonance parameters are discussed and compared. The methods being discussed are: conventional uncertainty propagation, Monte Carlo sampling and marginalization. It is demonstrated that the final covariance matrix of the resonance parameters not only strongly depends on the type of experimental observables used in the adjustment process, the experimental conditions and the characteristics of the resonance structure, but also on the method that is used to propagate the covariances. Finally, a special data reduction concept and format is presented, which offers the possibility to store the full covariance information of experimental data in the EXFOR library and provides the information required to perform a full covariance evaluation.
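
    Of the propagation methods compared, Monte Carlo sampling is the simplest to sketch: perturb the observable with draws from its experimental covariance, refit, and take the empirical covariance of the fitted parameters; a toy straight-line fit stands in for a resonance-shape fit.

      import numpy as np

      rng = np.random.default_rng(6)
      E = np.linspace(0.0, 1.0, 40)                      # "energy" grid
      cov_y = 0.02 ** 2 * np.exp(-np.abs(E[:, None] - E[None, :]) / 0.2)
      y = rng.multivariate_normal(1.0 + 2.0 * E, cov_y)  # one "measured" data set

      L = np.linalg.cholesky(cov_y)
      fits = []
      for _ in range(2000):
          y_k = y + L @ rng.normal(size=E.size)          # resample the observable
          fits.append(np.polyfit(E, y_k, 1))             # refit slope and intercept
      print(np.cov(np.array(fits).T))                    # empirical parameter covariance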

  3. Covariance matrices for use in criticality safety predictability studies

    SciTech Connect

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for ²³⁵U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the ²³⁵U resonance parameters.

  4. A Unifying Framework for Teaching Nonparametric Statistical Tests

    ERIC Educational Resources Information Center

    Bargagliotti, Anna E.; Orrison, Michael E.

    2014-01-01

    Increased importance is being placed on statistics at both the K-12 and undergraduate level. Research divulging effective methods to teach specific statistical concepts is still widely sought after. In this paper, we focus on best practices for teaching topics in nonparametric statistics at the undergraduate level. To motivate the work, we…

  5. Nonparametric Test of Discordance with Ordered Alternatives for "J" Groups.

    ERIC Educational Resources Information Center

    Beasley, T. Mark

    Robustness and power of parametric, semi-parametric, and nonparametric tests of between-group discordance were compared in this simulation study. The empirical Type I error rates and power of nine tests were compared. When data were sampled from the…

  6. Three Classes of Nonparametric Differential Step Functioning Effect Estimators

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2008-01-01

    The examination of measurement invariance in polytomous items is complicated by the possibility that the magnitude and sign of lack of invariance may vary across the steps underlying the set of polytomous response options, a concept referred to as differential step functioning (DSF). This article describes three classes of nonparametric DSF effect…

  7. Estimation of Spatial Dynamic Nonparametric Durbin Models with Fixed Effects

    ERIC Educational Resources Information Center

    Qian, Minghui; Hu, Ridong; Chen, Jianwei

    2016-01-01

    Spatial panel data models have been widely studied and applied in both scientific and social science disciplines, especially in the analysis of spatial influence. In this paper, we consider the spatial dynamic nonparametric Durbin model (SDNDM) with fixed effects, which takes the nonlinear factors into account based on the spatial dynamic panel…

  8. Coefficients of Association Analogous to Pearson's r for Nonparametric Statistics.

    ERIC Educational Resources Information Center

    Stavig, Gordon; Acock, Alan C.

    1980-01-01

    Two r coefficients of association are discussed. One of the coefficients can be applied to any nonparametric test statistic (NTS) in which a normal approximation equation is appropriate. The other coefficient is applicable to any NTS in which exact probabilities are known. (Author/RL)

  9. Nonparametric identification of petrogenic and pyrogenic hydrocarbons in aquatic ecosystems.

    PubMed

    Carls, Mark G

    2006-07-01

    Novel nonparametric models developed herein discriminated between oiled and nonoiled or pyrogenic and oiled sources better than traditionally used diagnostic ratios and can outperform previously published oil identification models. These methods were compared using experimental and environmental hydrocarbon data (sediment, mussels, water, and fish) associated with the Exxon Valdez oil spill. Several nonparametric models were investigated, one designed to detect petroleum in general, one specific to Alaska North Slope crude oil (ANS), and one designed to detect pyrogenic PAH. These ideas are intended as guidance; nonparametric models can easily be adapted to fit the specific needs of a variety of petrogenic and pyrogenic sources. Oil identification was clearly difficult where composition was modified by physical or biological processes; model results differed most in these cases, suggesting that a multiple model approach to source discrimination may be useful where data interpretation is contentious. However, a combined nonparametric model best described a broad range of hydrocarbon sources, thus providing a useful new analytical assessment tool. PMID:16856740

  10. A Simulation Comparison of Parametric and Nonparametric Dimensionality Detection Procedures

    ERIC Educational Resources Information Center

    Mroch, Andrew A.; Bolt, Daniel M.

    2006-01-01

    Recently, nonparametric methods have been proposed that provide a dimensionally based description of test structure for tests with dichotomous items. Because such methods are based on different notions of dimensionality than are assumed when using a psychometric model, it remains unclear whether these procedures might lead to a different…

  11. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  12. Nonparametric Person-Fit Analysis of Polytomous Item Scores

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.

    2008-01-01

    Person-fit methods are used to uncover atypical test performance as reflected in the pattern of scores on individual items in a test. Unlike parametric person-fit statistics, nonparametric person-fit statistics do not require fitting a parametric test theory model. This study investigates the effectiveness of generalizations of nonparametric…

  13. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  14. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
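
    For a linear model the Kalman filter concept referred to reduces to the standard update of a prior parameter covariance by an experiment with sensitivity matrix S and data covariance V; a generic numerical sketch (all values invented):

      import numpy as np

      def kalman_update(p, P, y, V, S):
          """One Kalman (generalized least-squares) step: update parameters p
          with prior covariance P using data y with covariance V, where the
          model predicts y ~ S @ p. Returns the posterior pair."""
          K = P @ S.T @ np.linalg.inv(S @ P @ S.T + V)     # gain
          return p + K @ (y - S @ p), (np.eye(len(p)) - K @ S) @ P

      p = np.array([1.0, 0.5])                  # prior parameters
      P = np.diag([0.2, 0.2]) ** 2
      S = np.array([[1.0, 0.4],                 # sensitivities of two measurements
                    [0.3, 1.0]])
      y = np.array([1.15, 0.82])                # measured values
      V = np.diag([0.05, 0.05]) ** 2
      p_post, P_post = kalman_update(p, P, y, V, S)
      print(p_post, np.sqrt(np.diag(P_post)))   # posterior uncertainties shrink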

  15. Electromagnetics: from Covariance to Cloaking

    NASA Astrophysics Data System (ADS)

    McCall, M. W.

    2008-10-01

    An overview of some topical themes in electromagnetism is presented. Recent interest in metamaterials research has enabled earlier theoretical speculations concerning electromagnetic media displaying a negative refractive index to be experimentally realized. Such media can act as perfect lenses. The mathematical criterion of what signals such unusual electromagnetic behavior is discussed, showing that a covariant (or coordinate free) perspective is essential. Coordinate transformations have also become significant in the theme of transformation optics, where the interplay between a coordinate transformation and metamaterial behavior has led to the concept of an electromagnetic cloak.

  16. Phase-covariant quantum benchmarks

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Aspachs, M.; Muñoz-Tapia, R.; Bagan, E.

    2009-05-01

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  17. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  18. Shaft adjuster

    DOEpatents

    Harry, H.H.

    1988-03-11

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which, when rotated, introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft, such as the center conductor in a pulse line machine, to be offset in any desired alignment position within the range of the apparatus. 3 figs.

  19. Shaft adjuster

    DOEpatents

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which, when rotated, introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft, such as the center conductor in a pulse line machine, to be offset in any desired alignment position within the range of the apparatus.

  20. Bayesian nonparametric estimation of targeted agent effects on biomarker change to predict clinical outcome.

    PubMed

    Graziani, Rebecca; Guindani, Michele; Thall, Peter F

    2015-03-01

    The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post-treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212

  1. Nonparametric estimation receiver operating characteristic analysis for performance evaluation on combined detection and estimation tasks.

    PubMed

    Wunderlich, Adam; Goossens, Bart

    2014-10-01

    In an effort to generalize task-based assessment beyond traditional signal detection, there is a growing interest in performance evaluation for combined detection and estimation tasks, in which signal parameters, such as size, orientation, and contrast are unknown and must be estimated. One motivation for studying such tasks is their rich complexity, which offers potential advantages for imaging system optimization. To evaluate observer performance on combined detection and estimation tasks, Clarkson introduced the estimation receiver operating characteristic (EROC) curve and the area under the EROC curve as a summary figure of merit. This work provides practical tools for EROC analysis of experimental data. In particular, we propose nonparametric estimators for the EROC curve, the area under the EROC curve, and for the variance/covariance matrix of a vector of correlated EROC area estimates. In addition, we show that reliable confidence intervals can be obtained for EROC area, and we validate these intervals with Monte Carlo simulation. Application of our methodology is illustrated with an example comparing magnetic resonance imaging k-space sampling trajectories. MATLAB® software implementing the EROC analysis estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:26158044
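
    A hedged sketch of a nonparametric EROC-area estimate of the general form discussed (a Wilcoxon-like average of estimation utility over correctly ordered rating pairs); the data and names are invented, and the paper's variance/covariance estimators are not reproduced.

      import numpy as np

      def eroc_area(lam0, lam1, util1):
          """Average, over all pairs of a signal-absent rating lam0[i] and a
          signal-present rating lam1[j], of the estimation utility util1[j]
          when the pair is ordered correctly (half credit for ties)."""
          lam0 = np.asarray(lam0)[:, None]              # shape (n0, 1)
          lam1 = np.asarray(lam1)[None, :]              # shape (1, n1)
          u = np.asarray(util1)[None, :]
          wins = (lam1 > lam0) + 0.5 * (lam1 == lam0)
          return float(np.mean(wins * u))

      rng = np.random.default_rng(7)
      lam0 = rng.normal(0.0, 1.0, 100)                  # ratings, signal absent
      lam1 = rng.normal(1.0, 1.0, 100)                  # ratings, signal present
      util1 = rng.uniform(0.6, 1.0, 100)                # utilities of the estimates
      print(eroc_area(lam0, lam1, util1))               # -> util1.mean() as detection improves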

  2. Bayesian Nonparametric Estimation of Targeted Agent Effects on Biomarker Change to Predict Clinical Outcome

    PubMed Central

    Graziani, Rebecca; Guindani, Michele; Thall, Peter F.

    2015-01-01

    Summary The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212
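
    A rough feel for the biomarker-distribution ingredient can be had from an off-the-shelf truncated Dirichlet process mixture; this sketch assumes scikit-learn and only caricatures the hierarchical patient-level model of the paper.

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(8)
      pre = rng.normal(0.0, 1.0, size=(300, 1))                 # pre-treatment biomarker
      post = np.vstack([rng.normal(-0.5, 0.7, size=(200, 1)),   # most patients shift down
                        rng.normal(0.8, 1.2, size=(100, 1))])   # a subgroup shifts up

      def dp_fit(data):
          return BayesianGaussianMixture(
              n_components=10, weight_concentration_prior_type="dirichlet_process",
              max_iter=500, random_state=0).fit(data)

      grid = np.linspace(-4.0, 4.0, 9)[:, None]
      change = (np.exp(dp_fit(post).score_samples(grid))
                - np.exp(dp_fit(pre).score_samples(grid)))
      print(change.round(3))      # crude pre-versus-post density-change profile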

  3. Testing for associations with missing high-dimensional categorical covariates.

    PubMed

    Schumi, Jennifer; DiRienzo, A Gregory; DeGruttola, Victor

    2008-01-01

    Understanding how long-term clinical outcomes relate to short-term response to therapy is an important topic of research with a variety of applications. In HIV, early measures of viral RNA levels are known to be a strong prognostic indicator of future viral load response. However, mutations observed in the high-dimensional viral genotype at an early time point may change this prognosis. Unfortunately, some subjects may not have a viral genetic sequence measured at the early time point, and the sequence may be missing for reasons related to the outcome. Complete-case analyses of missing data are generally biased when the assumption that data are missing completely at random is not met, and methods incorporating multiple imputation may not be well-suited for the analysis of high-dimensional data. We propose a semiparametric multiple testing approach to the problem of identifying associations between potentially missing high-dimensional covariates and response. Following the recent exposition by Tsiatis, unbiased nonparametric summary statistics are constructed by inversely weighting the complete cases according to the conditional probability of being observed, given data that is observed for each subject. Resulting summary statistics will be unbiased under the assumption of missing at random. We illustrate our approach through an application to data from a recent AIDS clinical trial, and demonstrate finite sample properties with simulations. PMID:20231909
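
    The inverse-weighting step can be sketched generically, assuming scikit-learn: model the probability that the early covariate is observed given fully observed data, then weight the complete cases by the inverse of that probability; a toy weighted mean stands in for the paper's test statistics.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(9)
      n = 1000
      baseline = rng.normal(size=n)                    # always-observed covariate
      outcome = 0.8 * baseline + rng.normal(size=n)
      p_obs = 1.0 / (1.0 + np.exp(-(0.5 + baseline)))  # missing at random, given baseline
      observed = rng.uniform(size=n) < p_obs

      clf = LogisticRegression().fit(baseline[:, None], observed)
      pi = clf.predict_proba(baseline[:, None])[:, 1]  # estimated P(observed | data)
      w = observed / pi                                # inverse-probability weights
      print(outcome.mean(),                            # full-data target
            outcome[observed].mean(),                  # biased complete-case mean
            np.sum(w * outcome) / np.sum(w))           # IPW mean recovers the target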

  4. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.

  5. A Bayesian Nonparametric Approach to Image Super-Resolution.

    PubMed

    Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid

    2015-02-01

    Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.

  6. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
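
    For intuition, under complete spatial randomness the squared distance from a random point to the nearest plant is exponential with rate pi*lambda, which yields the simple moment estimator below; the order-statistics estimator of the paper is designed to be robust beyond this idealized case.

      import numpy as np

      rng = np.random.default_rng(10)
      lam, side = 0.3, 60.0                            # true density, plot side
      plants = rng.uniform(0, side, size=(int(lam * side ** 2), 2))

      pts = rng.uniform(5, side - 5, size=(400, 2))    # sample points, away from edges
      d2 = ((pts[:, None, :] - plants[None, :, :]) ** 2).sum(-1).min(axis=1)

      # d2 ~ Exp(pi * lam)  =>  lam_hat = 1 / (pi * mean(d2))
      print(1.0 / (np.pi * d2.mean()))                 # close to 0.3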

  7. Supervised nonparametric sparse discriminant analysis for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Wu, Longfei; Sun, Hao; Ji, Kefeng

    2016-03-01

    Owing to the high spectral sampling, the spectral information in hyperspectral imagery (HSI) is often highly correlated and contains redundancy. Motivated by the recent success of sparsity-preserving dimensionality reduction (DR) techniques in both the computer vision and remote sensing image analysis communities, a novel supervised nonparametric sparse discriminant analysis (NSDA) algorithm is presented for HSI classification. The objective function of NSDA aims at preserving the within-class sparse reconstructive relationship for within-class compactness characterization and simultaneously maximizing the nonparametric between-class scatter to enhance the discriminative ability of the features in the projected space. Essentially, it seeks the optimal projection matrix to identify the underlying discriminative manifold structure of a multiclass dataset. Experimental results on one visualization dataset and three recorded HSI datasets demonstrate that NSDA outperforms several state-of-the-art feature extraction methods for HSI classification.

  8. A Nonparametric Approach for Mapping Quantitative Trait Loci

    PubMed Central

    Kruglyak, L.; Lander, E. S.

    1995-01-01

    Genetic mapping of quantitative trait loci (QTLs) is performed typically by using a parametric approach, based on the assumption that the phenotype follows a normal distribution. Many traits of interest, however, are not normally distributed. In this paper, we present a nonparametric approach to QTL mapping applicable to any phenotypic distribution. The method is based on a statistic Z(w), which generalizes the nonparametric Wilcoxon rank-sum test to the situation of whole-genome search by interval mapping. We determine the appropriate significance level for the statistic Z(w), by showing that its asymptotic null distribution follows an Ornstein-Uhlenbeck process. These results provide a robust, distribution-free method for mapping QTLs. PMID:7768449

  9. Nonparametric instrumental regression with non-convex constraints

    NASA Astrophysics Data System (ADS)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
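
    The estimation step can be sketched in discretized form, where Tikhonov regularization turns the ill-posed normal equations into a well-posed penalized system; the smoothing matrix T below is invented and merely stands in for the instrumental (conditional expectation) operator.

      import numpy as np

      rng = np.random.default_rng(11)
      t = np.linspace(0.0, 1.0, 100)
      T = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)   # smoothing operator
      T /= T.sum(axis=1, keepdims=True)
      f_true = np.sin(2 * np.pi * t)
      g = T @ f_true + rng.normal(scale=0.01, size=t.size)   # noisy indirect data

      def tikhonov(T, g, alpha):
          """Solve min ||T f - g||^2 + alpha ||f||^2 via the normal equations."""
          return np.linalg.solve(T.T @ T + alpha * np.eye(T.shape[1]), T.T @ g)

      for alpha in (1e-8, 1e-4, 1e-1):                       # under- to over-regularized
          print(alpha, np.linalg.norm(tikhonov(T, g, alpha) - f_true))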

  10. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  11. Nonparametric estimation of Fisher information from real data

    NASA Astrophysics Data System (ADS)

    Har-Shemesh, Omri; Quax, Rick; Miñano, Borja; Hoekstra, Alfons G.; Sloot, Peter M. A.

    2016-02-01

    The Fisher information matrix (FIM) is a widely used measure for applications including statistical inference, information geometry, experiment design, and the study of criticality in biological systems. The FIM is defined for a parametric family of probability distributions and its estimation from data follows one of two paths: either the distribution is assumed to be known and the parameters are estimated from the data or the parameters are known and the distribution is estimated from the data. We consider the latter case which is applicable, for example, to experiments where the parameters are controlled by the experimenter and a complicated relation exists between the input parameters and the resulting distribution of the data. Since we assume that the distribution is unknown, we use a nonparametric density estimation on the data and then compute the FIM directly from that estimate using a finite-difference approximation to estimate the derivatives in its definition. The accuracy of the estimate depends on both the method of nonparametric estimation and the difference Δθ between the densities used in the finite-difference formula. We develop an approach for choosing the optimal parameter difference Δθ based on large deviations theory and compare two nonparametric density estimation methods, the Gaussian kernel density estimator and a novel density estimation using field theory method. We also compare these two methods to a recently published approach that circumvents the need for density estimation by estimating a nonparametric f divergence and using it to approximate the FIM. We use the Fisher information of the normal distribution to validate our method and as a more involved example we compute the temperature component of the FIM in the two-dimensional Ising model and show that it obeys the expected relation to the heat capacity and therefore peaks at the phase transition at the correct critical temperature.
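
    A compact sketch of the recipe described, assuming scipy: draw samples at theta +/- dtheta, build Gaussian kernel density estimates, and approximate the score by a finite difference of log-densities; it is checked against the normal location family, where I(theta) = 1.

      import numpy as np
      from scipy.stats import gaussian_kde

      def fim_estimate(theta, dtheta, sampler, n=4000, seed=0):
          """Scalar FIM from samples only: score(x) is approximated by
          [log f(x; theta + dtheta) - log f(x; theta - dtheta)] / (2 dtheta)
          with both densities replaced by KDEs, and I = E[score**2]."""
          rng = np.random.default_rng(seed)
          x0 = sampler(theta, n, rng)                 # evaluation points
          kde_p = gaussian_kde(sampler(theta + dtheta, n, rng))
          kde_m = gaussian_kde(sampler(theta - dtheta, n, rng))
          score = (np.log(kde_p(x0)) - np.log(kde_m(x0))) / (2.0 * dtheta)
          return np.mean(score ** 2)

      normal = lambda th, n, rng: rng.normal(loc=th, size=n)
      print(fim_estimate(0.0, 0.3, normal))           # ~1.0 for the normal family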

  12. Parametric and nonparametric linkage analysis: A unified multipoint approach

    SciTech Connect

    Kruglyak, L.; Daly, M.J.; Reeve-Daly, M.P.; Lander, E.S.

    1996-06-01

    In complex disease studies, it is crucial to perform multipoint linkage analysis with many markers and to use robust nonparametric methods that take account of all pedigree information. Currently available methods fall short in both regards. In this paper, we describe how to extract complete multipoint inheritance information from general pedigrees of moderate size. This information is captured in the multipoint inheritance distribution, which provides a framework for a unified approach to both parametric and nonparametric methods of linkage analysis. Specifically, the approach includes the following: (1) Rapid exact computation of multipoint LOD scores involving dozens of highly polymorphic markers, even in the presence of loops and missing data. (2) Nonparametric linkage (NPL) analysis, a powerful new approach to pedigree analysis. We show that NPL is robust to uncertainty about mode of inheritance, is much more powerful than commonly used nonparametric methods, and loses little power relative to parametric linkage analysis. NPL thus appears to be the method of choice for pedigree studies of complex traits. (3) Information-content mapping, which measures the fraction of the total inheritance information extracted by the available marker data and points out the regions in which typing additional markers is most useful. (4) Maximum-likelihood reconstruction of many-marker haplotypes, even in pedigrees with missing data. We have implemented NPL analysis, LOD-score computation, information-content mapping, and haplotype reconstruction in a new computer package, GENEHUNTER. The package allows efficient multipoint analysis of pedigree data to be performed rapidly in a single user-friendly environment. 34 refs., 9 figs., 2 tabs.

  13. Nonparametric estimation of Fisher information from real data.

    PubMed

    Har-Shemesh, Omri; Quax, Rick; Miñano, Borja; Hoekstra, Alfons G; Sloot, Peter M A

    2016-02-01

    The Fisher information matrix (FIM) is a widely used measure for applications including statistical inference, information geometry, experiment design, and the study of criticality in biological systems. The FIM is defined for a parametric family of probability distributions and its estimation from data follows one of two paths: either the distribution is assumed to be known and the parameters are estimated from the data or the parameters are known and the distribution is estimated from the data. We consider the latter case which is applicable, for example, to experiments where the parameters are controlled by the experimenter and a complicated relation exists between the input parameters and the resulting distribution of the data. Since we assume that the distribution is unknown, we use a nonparametric density estimation on the data and then compute the FIM directly from that estimate using a finite-difference approximation to estimate the derivatives in its definition. The accuracy of the estimate depends on both the method of nonparametric estimation and the difference Δθ between the densities used in the finite-difference formula. We develop an approach for choosing the optimal parameter difference Δθ based on large deviations theory and compare two nonparametric density estimation methods, the Gaussian kernel density estimator and a novel density estimation using field theory method. We also compare these two methods to a recently published approach that circumvents the need for density estimation by estimating a nonparametric f divergence and using it to approximate the FIM. We use the Fisher information of the normal distribution to validate our method and as a more involved example we compute the temperature component of the FIM in the two-dimensional Ising model and show that it obeys the expected relation to the heat capacity and therefore peaks at the phase transition at the correct critical temperature. PMID:26986433

  14. Nonparametric inference and uniqueness for periodically observed progressive disease models.

    PubMed

    Griffin, Beth Ann; Lagakos, Stephen W

    2010-04-01

    In many studies examining the progression of HIV and other chronic diseases, subjects are periodically monitored to assess their progression through disease states. This gives rise to a specific type of panel data which have been termed "chain-of-events data"; e.g. data that result from periodic observation of a progressive disease process whose states occur in a prescribed order and where state transitions are not observable. Using a discrete time semi-Markov model, we develop an algorithm for nonparametric estimation of the distribution functions of sojourn times in a J state progressive disease model. Issues of uniqueness for chain-of-events data are not well-understood. Thus, a main goal of this paper is to determine the uniqueness of the nonparametric estimators of the distribution functions of sojourn times within states. We develop sufficient conditions for uniqueness of the nonparametric maximum likelihood estimator, including situations where some but not all of its components are unique. We illustrate the methods with three examples. PMID:19629683

  15. Nonparametric Analysis of Bivariate Gap Time with Competing Risks

    PubMed Central

    Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng

    2016-01-01

    Summary This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrent disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall’s tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring for the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. PMID:26990686

  16. Generalized covariance analysis for partially autonomous deep space missions

    NASA Technical Reports Server (NTRS)

    Boone, Jack N.

    1991-01-01

    A new covariance analysis method is presented that is suitable for the evaluation of multiple impulsive controllers acting on some stochastic process x. The method accommodates batch and sequential estimators with equal ease and accounts for time-delay effects in a natural manner. The formalism is developed in terms of a generalized state vector that is formed from the system state vector x, augmented by various fixed epoch estimates, and a data vector formed from discrete time observations of the system. Recursions are developed for time transition, measurement incorporation, and impulsive control updating of the generalized covariance matrix. Means of limiting the dimensional growth of the generalized state vector via the processes of estimator epoch adjustment and measurement vector deflation are described and the application of numerically stable matrix factorization methods to the generalized covariance recursions is outlined. The method is applied to the Magellan spacecraft to demonstrate the capability of ground-based optimal estimation and control of gyro/star scanner misalignment.

  17. A comparison of confounding adjustment methods with an application to early life determinants of childhood obesity

    PubMed Central

    Kleinman, Ken; Gillman, Matthew W.

    2014-01-01

    We implemented 6 confounding adjustment methods: 1) covariate-adjusted regression, 2) propensity score (PS) regression, 3) PS stratification, 4) PS matching with two calipers, 5) inverse probability weighting, and 6) doubly robust estimation to examine the associations between the BMI z-score at 3 years and two separate dichotomous exposure measures: exclusive breastfeeding versus formula only (N = 437) and cesarean section versus vaginal delivery (N = 1236). Data were drawn from a prospective pre-birth cohort study, Project Viva. The goal is to demonstrate the necessity and usefulness of applying multiple confounding adjustment methods when analyzing observational data. Unadjusted (univariate) and covariate-adjusted linear regression associations of breastfeeding with BMI z-score were −0.33 (95% CI −0.53, −0.13) and −0.24 (−0.46, −0.02), respectively. The other approaches resulted in smaller N (204 to 276) because of poor overlap of covariates, but CIs were of similar width except for inverse probability weighting (75% wider) and PS matching with a wider caliper (76% wider). Point estimates ranged widely, however, from −0.01 to −0.38. For cesarean section, because of better covariate overlap, the covariate-adjusted regression estimate (0.20) was remarkably robust to all adjustment methods, and the widths of the 95% CIs differed less than in the breastfeeding example. Choice of covariate adjustment method can matter. Lack of overlap in covariate structure between exposed and unexposed participants in observational studies can lead to erroneous covariate-adjusted estimates and confidence intervals. We recommend inspecting covariate overlap and using multiple confounding adjustment methods. Similar results bring reassurance. Contradictory results suggest issues with either the data or the analytic method. PMID:25171142
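
    As an illustration of one of the six methods, the sketch below computes a simple inverse-probability-weighted effect estimate on simulated data (not the Project Viva data); the propensity model, variable names, and effect sizes are all hypothetical.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: binary exposure a, confounders X, continuous outcome y.
    rng = np.random.default_rng(1)
    n = 1000
    X = rng.normal(size=(n, 3))
    a = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.5, -0.3, 0.2]))))
    y = 0.2 * a + X @ np.array([0.4, 0.1, -0.2]) + rng.normal(size=n)

    # Step 1: fit a propensity score model for exposure given confounders.
    ps = sm.Logit(a, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))

    # Step 2: inverse-probability weights and the weighted mean difference.
    w = a / ps + (1 - a) / (1 - ps)
    effect = (np.average(y[a == 1], weights=w[a == 1])
              - np.average(y[a == 0], weights=w[a == 0]))
    print(f"IPW estimate of the exposure effect: {effect:.3f}")
    ```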

  18. Group Theory of Covariant Harmonic Oscillators

    ERIC Educational Resources Information Center

    Kim, Y. S.; Noz, Marilyn E.

    1978-01-01

    A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…

  19. Quality Quantification of Evaluated Cross Section Covariances

    SciTech Connect

    Varet, S.; Dossantos-Uzarralde, P.

    2015-01-15

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can differ according to the method used and the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists of defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without knowledge of the true covariance matrix. The full approach is illustrated on the 85Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
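
    Assuming zero-mean Gaussians, the Kullback-Leibler distance between two covariance estimates has a closed form; a minimal sketch follows (the paper's bootstrap estimation of the criterion against the unknown true matrix is not reproduced, and the covariance values below are hypothetical).

    ```python
    import numpy as np

    def gaussian_kl(S1, S2):
        """KL distance between N(0, S1) and N(0, S2) for k x k covariances."""
        k = S1.shape[0]
        _, logdet1 = np.linalg.slogdet(S1)
        _, logdet2 = np.linalg.slogdet(S2)
        return 0.5 * (np.trace(np.linalg.inv(S2) @ S1) - k + logdet2 - logdet1)

    # Two hypothetical covariance estimates of the same cross sections.
    S_a = np.array([[1.0, 0.3], [0.3, 0.5]])
    S_b = np.array([[1.1, 0.2], [0.2, 0.6]])
    print(gaussian_kl(S_a, S_b))
    ```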

  20. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
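
    A minimal sketch of the low-rank idea, using spectral truncation of a synthetic covariance matrix with a rapidly decaying spectrum; the paper's operator-norm analysis of the forcing-to-error transformation is not reproduced.

    ```python
    import numpy as np

    def low_rank_covariance(P, r):
        """Best rank-r approximation of a symmetric covariance P (spectral truncation)."""
        eigvals, eigvecs = np.linalg.eigh(P)    # ascending eigenvalues
        idx = np.argsort(eigvals)[::-1][:r]     # keep the r leading modes
        V, L = eigvecs[:, idx], eigvals[idx]
        return (V * L) @ V.T

    # Hypothetical full covariance with a rapidly decaying spectrum.
    rng = np.random.default_rng(2)
    A = rng.normal(size=(50, 50)) * (0.9 ** np.arange(50))
    P = A @ A.T
    P5 = low_rank_covariance(P, 5)
    print(np.linalg.norm(P - P5, 2) / np.linalg.norm(P, 2))  # relative spectral error
    ```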

  1. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Barat, Éric; Dautremer, Thomas

    2007-11-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution—the normalized emission intensity of the spatial Poisson process—is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals like the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.
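
    A rough illustration of a Dirichlet process mixture of Gaussians follows, fit here by scikit-learn's truncated variational approximation rather than the paper's Gibbs sampler, and on directly observed points rather than tomographic projections; all settings are hypothetical.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Hypothetical 2-D emission locations. (The paper infers locations from
    # projection data; treating them as observed sidesteps the tomography.)
    rng = np.random.default_rng(3)
    pts = np.vstack([rng.normal([0.0, 0.0], 0.5, (300, 2)),
                     rng.normal([3.0, 1.0], 0.3, (200, 2))])

    # Truncated Dirichlet-process mixture of Gaussians, fit by variational
    # inference (the paper uses MCMC instead).
    dpm = BayesianGaussianMixture(
        n_components=20,  # truncation level; effective clusters are inferred
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
    ).fit(pts)
    print(np.sort(dpm.weights_)[::-1][:5])  # a few components dominate
    ```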

  2. Nonparametric joint shape learning for customized shape modeling

    PubMed Central

    Unal, Gozde

    2010-01-01

    We present a shape optimization approach to compute patient-specific models in customized prototyping applications. We design a coupled shape prior to model the transformation between a related pair of surfaces, using a nonparametric joint probability density estimation. The coupled shape prior forces, together with application-specific data forces and smoothness forces, drive a surface deformation towards a desired output surface. We demonstrate the usefulness of the method for generating customized shape models in applications of hearing aid design and pre-operative to intra-operative anatomic surface estimation. PMID:20044237

  3. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  4. To adjust or not to adjust for baseline when analyzing repeated binary responses? The case of complete data when treatment comparison at study end is of interest.

    PubMed

    Jiang, Honghua; Kulkarni, Pandurang M; Mallinckrodt, Craig H; Shurzinske, Linda; Molenberghs, Geert; Lipkovich, Ilya

    2015-01-01

    The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. Methods compared included the chi-square test, Fisher's exact test, covariate adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and covariate adjusted/unadjusted generalized linear mixed models (Adj.GLMM/Unadj.GLMM). All these methods preserved the type I error close to the nominal level. Covariate-adjusted methods improved power compared with the unadjusted methods because of the increased treatment effect estimates, especially when the correlation between the baseline and the outcome was strong, even though there was an apparent increase in standard errors. Results of the chi-square test were identical to those of the unadjusted logistic regression. Fisher's exact test was the most conservative test regarding the type I error rate and also had the lowest power. Without missing data, there was no gain in using a repeated measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate-adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. PMID:25866149
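
    A minimal sketch of the covariate-adjusted GEE analysis on simulated repeated binary data (not the diabetes trial data), using statsmodels with an exchangeable working correlation; all data-generating numbers are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical repeated binary responses: 200 subjects, 4 visits each.
    rng = np.random.default_rng(4)
    n, visits = 200, 4
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n), visits),
        "trt": np.repeat(rng.binomial(1, 0.5, n), visits),
        "baseline": np.repeat(rng.normal(size=n), visits),
    })
    logit = -0.2 + 0.6 * df["trt"] + 0.8 * df["baseline"]
    df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Covariate-adjusted GEE with an exchangeable working correlation.
    fit = smf.gee("y ~ trt + baseline", groups="subject", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(fit.summary())
    ```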

  5. [On the importance of nonparametric approaches for a comparison of income inequality in different countries].

    PubMed

    Schubert, R

    1989-01-01

    "A comparison of income inequality in different countries can be made on the basis of parametric or nonparametric approaches. Using nonparametric instead of parametric approaches may raise the validity of such comparisons in some respects. However, new sources of validity reduction may be linked with nonparametric approaches. An important source of validity reduction is the influence of the type of data grouping on inequality measures. Therefore, nonparametric approaches are not strictly preferable to parametric approaches when comparing income inequality in different countries." (SUMMARY IN ENG)

  6. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods.

    PubMed

    Johnson, Heather O; Gupta, Satish C; Vecchia, Aldo V; Zvomuya, Francis

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO(3)-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO(3)-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region.
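
    A minimal sketch of the Seasonal Kendall idea, summing the Mann-Kendall S statistic over seasons; unlike SEAKEN, it includes no correction for ties or serial correlation, and the data below are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    def seasonal_kendall(values_by_season):
        """Seasonal Kendall trend test (no ties or serial-correlation correction):
        sum the Mann-Kendall S statistic over seasons and form a normal z-score."""
        S, var_S = 0.0, 0.0
        for y in values_by_season:            # one time-ordered series per season
            y = np.asarray(y, dtype=float)
            n = len(y)
            S += sum(np.sign(y[j] - y[i])
                     for i in range(n - 1) for j in range(i + 1, n))
            var_S += n * (n - 1) * (2 * n + 5) / 18.0
        z = (S - np.sign(S)) / np.sqrt(var_S)  # continuity correction
        return S, z, 2 * norm.sf(abs(z))

    # Hypothetical 10 years of flow-adjusted concentrations for 4 seasons.
    rng = np.random.default_rng(5)
    data = [np.arange(10) * -0.1 + rng.normal(0, 0.3, 10) for _ in range(4)]
    print(seasonal_kendall(data))
    ```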

  7. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods

    USGS Publications Warehouse

    Johnson, H.O.; Gupta, S.C.; Vecchia, A.V.; Zvomuya, F.

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO3-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO3-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  8. Latent features in similarity judgments: a nonparametric bayesian approach.

    PubMed

    Navarro, Daniel J; Griffiths, Thomas L

    2008-11-01

    One of the central problems in cognitive science is determining the mental representations that underlie human inferences. Solutions to this problem often rely on the analysis of subjective similarity judgments, on the assumption that recognizing likenesses between people, objects, and events is crucial to everyday inference. One such solution is provided by the additive clustering model, which is widely used to infer the features of a set of stimuli from their similarities, on the assumption that similarity is a weighted linear function of common features. Existing approaches for implementing additive clustering often lack a complete framework for statistical inference, particularly with respect to choosing the number of features. To address these problems, this article develops a fully Bayesian formulation of the additive clustering model, using methods from nonparametric Bayesian statistics to allow the number of features to vary. We use this to explore several approaches to parameter estimation, showing that the nonparametric Bayesian approach provides a straightforward way to obtain estimates of both the number of features and their importance. PMID:18533818
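
    The additive clustering forward model writes similarity as a weighted count of shared features, s_ij = sum_k w_k f_ik f_jk; a minimal sketch with hypothetical features and weights follows (the article's nonparametric inference over the number of features is not reproduced).

    ```python
    import numpy as np

    # Additive clustering forward model: s_ij = sum_k w_k * f_ik * f_jk.
    F = np.array([[1, 0, 1],   # stimulus 1 has features 1 and 3
                  [1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 0]])  # hypothetical binary feature assignments
    w = np.array([0.6, 0.3, 0.9])   # hypothetical feature weights

    S = F @ np.diag(w) @ F.T        # modeled similarity matrix
    print(S)
    ```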

  9. Latent variable models with nonparametric interaction effects of latent variables.

    PubMed

    Song, Xinyuan; Lu, Zhaohua; Feng, Xiangnan

    2014-05-10

    Renal disease is one of the common complications of diabetes, especially for Asian populations. Moreover, cardiovascular and renal diseases share common risk factors. This paper proposes a latent variable model with nonparametric interaction effects of latent variables for a study based on the Hong Kong Diabetes Registry, which was established in 1995 as part of a continuous quality improvement program at the Prince of Wales Hospital in Hong Kong. Renal outcome (outcome latent variable) is regressed on cardiac function and diabetes (explanatory latent variables) through an additive structural equation formulated using a series of unspecified univariate and bivariate smooth functions. The Bayesian P-splines approach, along with a Markov chain Monte Carlo algorithm, is proposed to estimate smooth functions, unknown parameters, and latent variables in the model. The performance of the developed methodology is demonstrated via a simulation study. The effect of the nonparametric interaction of cardiac function and diabetes on renal outcome is investigated using the proposed methodology. PMID:24338916
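
    A minimal frequentist sketch of the P-spline ingredient: a B-spline basis fit with a second-order difference penalty. The paper's Bayesian P-splines with MCMC, and the latent-variable structure, are not reproduced; all settings below are hypothetical.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
        """Penalized B-spline (P-spline) smoother: ridge solve with a
        second-order difference penalty on the basis coefficients."""
        knots = np.linspace(x.min() - 1e-6, x.max() + 1e-6, n_knots)
        t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]  # clamped knots
        n_basis = len(t) - degree - 1
        B = BSpline.design_matrix(x, t, degree).toarray()
        D = np.diff(np.eye(n_basis), n=2, axis=0)   # second-difference penalty
        beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
        return B @ beta

    rng = np.random.default_rng(6)
    x = np.sort(rng.uniform(0, 1, 150))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 150)
    yhat = pspline_fit(x, y)
    ```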

  10. Nonparametric meta-analysis for diagnostic accuracy studies.

    PubMed

    Zapf, Antonia; Hoyer, Annika; Kramer, Katharina; Kuss, Oliver

    2015-12-20

    Summarizing the information of many studies using a meta-analysis becomes more and more important, also in the field of diagnostic studies. The special challenge in meta-analysis of diagnostic accuracy studies is that in general sensitivity and specificity are co-primary endpoints. Across the studies both endpoints are correlated, and this correlation has to be considered in the analysis. The standard approach for such a meta-analysis is the bivariate logistic random effects model. An alternative approach is to use marginal beta-binomial distributions for the true positives and the true negatives, linked by copula distributions. In this article, we propose a new, nonparametric approach of analysis, which has greater flexibility with respect to the correlation structure, and always converges. In a simulation study, it becomes apparent that the empirical coverage of all three approaches is in general below the nominal level. Regarding bias, empirical coverage, and mean squared error the nonparametric model is often superior to the standard model, and comparable with the copula model. The three approaches are also applied to two example meta-analyses. PMID:26174020

  11. Classification via Bayesian Nonparametric Learning of Affine Subspaces.

    PubMed

    Page, Garritt; Bhattacharya, Abhishek; Dunson, David

    2013-03-15

    It has become common for data sets to contain large numbers of variables in studies conducted in areas such as genetics, machine vision, image analysis and many others. When analyzing such data, parametric models are often too inflexible while nonparametric procedures tend to be non-robust because of insufficient data on these high dimensional spaces. This is particularly true when interest lies in building efficient classifiers in the presence of many predictor variables. When dealing with these types of data, it is often the case that most of the variability tends to lie along a few directions, or more generally along a much smaller dimensional submanifold of the data space. In this article, we propose a class of models that flexibly learn about this submanifold while simultaneously performing dimension reduction in classification. This methodology allows the cell probabilities to vary nonparametrically based on a few coordinates expressed as linear combinations of the predictors. Also, as opposed to many black-box methods for dimensionality reduction, the proposed model is appealing in having clearly interpretable and identifiable parameters which provide insight into which predictors are important in determining accurate classification boundaries. Gibbs sampling methods are developed for posterior computation, and the methods are illustrated using simulated and real data applications.

  12. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.

  13. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    PubMed

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  14. Nonparametric Analyses of Log-Periodic Precursors to Financial Crashes

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    We apply two nonparametric methods to further test the hypothesis that log-periodicity characterizes the detrended price trajectory of large financial indices prior to financial crashes or strong corrections. The term "parametric" refers here to the use of the log-periodic power law formula to fit the data; in contrast, "nonparametric" refers to the use of general tools such as Fourier transform, and in the present case the Hilbert transform and the so-called (H, q)-analysis. The analysis using the (H, q)-derivative is applied to seven time series ending with the October 1987 crash, the October 1997 correction and the April 2000 crash of the Dow Jones Industrial Average (DJIA), the Standard & Poor's 500 and Nasdaq indices. The Hilbert transform is applied to two detrended price time series in terms of the ln(tc-t) variable, where tc is the time of the crash. Taking all results together, we find strong evidence for a universal fundamental log-frequency f=1.02±0.05 corresponding to the scaling ratio λ=2.67±0.12. These values are in very good agreement with those obtained in earlier works with different parametric techniques. This note is extracted from a long unpublished report with 58 figures available at , which extensively describes the evidence we have accumulated on these seven time series, in particular by presenting all relevant details so that the reader can judge for himself or herself the validity and robustness of the results.

  15. Covariance Spectroscopy for Fissile Material Detection

    SciTech Connect

    Rusty Trainham, Jim Tinsley, Paul Hurley, Ray Keegan

    2009-06-02

    Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of the radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations could also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
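
    A minimal sketch of covariance mapping over repeated acquisitions, using simulated counts in which two channels share a common (correlated) source; the channel layout and rates are hypothetical.

    ```python
    import numpy as np

    # Hypothetical detector: channels 0 and 1 share a common source, as with
    # causally connected fission radiations; channel 2 is uncorrelated
    # background.
    rng = np.random.default_rng(7)
    common = rng.poisson(3.0, 10000)
    shots = np.column_stack([common + rng.poisson(1.0, 10000),
                             common + rng.poisson(1.0, 10000),
                             rng.poisson(4.0, 10000)])

    # Covariance map over repeated acquisitions: C = <xy> - <x><y> for each
    # channel pair; large off-diagonal entries flag correlated emissions.
    print(np.cov(shots, rowvar=False).round(2))
    ```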

  16. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    PubMed Central

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

  17. Phase-covariant quantum cloning of qudits

    SciTech Connect

    Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin

    2003-02-01

    We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between the input state and the single-qudit reduced density operator of the output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.

  18. Covariate analysis of bivariate survival data

    SciTech Connect

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed using these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  19. Covariant action for type IIB supergravity

    NASA Astrophysics Data System (ADS)

    Sen, Ashoke

    2016-07-01

    Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.

  20. Noncommutative Gauge Theory with Covariant Star Product

    SciTech Connect

    Zet, G.

    2010-08-04

    We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.

  1. Lorentz covariance of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Speziale, Simone

    2011-05-01

    The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the projected spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction on SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This clarifies how the SL(2,C) spinfoam formalism yields an SU(2) theory on the boundary. These structures define a tidy Lorentz-covariant formalism for loop gravity.

  2. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  3. Analyzing Psychopathology Items: A Case for Nonparametric Item Response Theory Modeling

    ERIC Educational Resources Information Center

    Meijer, Rob R.; Baneke, Joost J.

    2004-01-01

    The authors discuss the applicability of nonparametric item response theory (IRT) models to the construction and psychometric analysis of personality and psychopathology scales, and they contrast these models with parametric IRT models. They describe the fit of nonparametric IRT to the Depression content scale of the Minnesota Multiphasic…

  4. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…

  5. Non-parametric three-way mixed ANOVA with aligned rank tests.

    PubMed

    Oliver-Rodríguez, Juan C; Wang, X T

    2015-02-01

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods. PMID:24303958
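
    A minimal sketch of the alignment idea for a two-way interaction (the paper's three-way repeated-measures extension is more involved): strip the grand mean and main effects, rank the residuals, then run an ordinary ANOVA on the ranks. The data below are simulated and hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy.stats import rankdata

    # Hypothetical two-factor between-subjects data with an interaction only.
    rng = np.random.default_rng(8)
    df = pd.DataFrame({"a": np.repeat([0, 1], 40),
                       "b": np.tile(np.repeat([0, 1], 20), 2)})
    df["y"] = rng.exponential(1.0, 80) + 0.8 * (df["a"] * df["b"])

    # Align for the A x B interaction: remove grand mean and both main effects.
    grand = df["y"].mean()
    a_eff = df.groupby("a")["y"].transform("mean") - grand
    b_eff = df.groupby("b")["y"].transform("mean") - grand
    df["aligned_rank"] = rankdata(df["y"] - a_eff - b_eff - grand)

    # Ordinary ANOVA on the aligned ranks; only the interaction is interpreted.
    fit = smf.ols("aligned_rank ~ C(a) * C(b)", data=df).fit()
    print(sm.stats.anova_lm(fit, typ=2))
    ```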

  7. A New Test Rig for Accurate Nonparametric Measurement and Characterization of Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Montecucco, Andrea; Buckle, James; Siviter, Jonathan; Knox, Andrew R.

    2013-07-01

    Thermoelectric generators (TEGs) are increasingly employed in large-scale applications, therefore accurate performance data are necessary to permit precise designs and simulations. However, there is still no standardized method to test the electrical and thermal performance of TEGs. This paper presents an innovative test system to assess device performance in the "real world." The fixture allows the hot temperature to be increased up to 800°C with minimal thermal losses and thermal shock; the clamping load can be adjusted up to 5 kN, and the temperatures are sensed by thermocouples placed directly on the TEG's surfaces. A computer program controls all the instruments in order to minimize errors and to aid accurate measurement and test repeatability. The test rig can measure four TEGs simultaneously, each one individually controlled and heated by a maximum electrical power of 2 kW. This allows testing of the effects of series and parallel connection of TEGs under mismatched conditions, e.g., dimensions, clamping force, temperature, etc. The test rig can be employed both as a performance evaluator and as a quality control unit, due to the ability to provide nonparametric testing of four TEGs concurrently. It can also be used to rapidly characterize devices of different dimensions at the same time.

  8. Covariation in the human masticatory apparatus.

    PubMed

    Noback, Marlijn L; Harvati, Katerina

    2015-01-01

    Many studies have described shape variation of the modern human cranium in relation to subsistence; however, patterns of covariation within the masticatory apparatus (MA) remain largely unexplored. The patterns and intensity of shape covariation, and how this is related to diet, are essential for understanding the evolution of functional masticatory adaptations of the human cranium. Within a worldwide sample (n = 255) of 15 populations with different modes of subsistence, we use partial least squares analysis to study the relationships between three components of the MA: upper dental arch, masseter muscle, and temporalis muscle attachments. We show that the shape of the masseter muscle and the shape of the temporalis muscle clearly covary with one another, but that the shape of the dental arch seems to be rather independent of the masticatory muscles. On the contrary, when relative positioning, orientation, and size of the masticatory components is included in the analysis, the dental arch shows the highest covariation with the other cranial parts, indicating that these additional factors are more important than just shape with regard to covariation within the MA. Covariation patterns among these cranial regions differ mainly between hunting-fishing and gathering-agriculture groups, possibly relating to greater masticatory strains resulting from a large meat component in the diet. High-strain groups show stronger covariation between upper dental arch and masticatory muscle shape when compared with low-strain groups. These results help to provide a clearer understanding of constraints and interlinkage of shape variation within the human MA and allow for more realistic modeling and predictions in future biomechanical studies.
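
    A minimal sketch of two-block partial least squares as used for covariation analysis: singular vectors of the cross-covariance between two centered blocks give paired axes of covariation. The landmark blocks simulated below are hypothetical, not the study's cranial data.

    ```python
    import numpy as np

    def pls_modes(X, Y, n_modes=2):
        """Two-block PLS: SVD of the cross-covariance matrix between two
        centered blocks yields paired axes of covariation."""
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc.T @ Yc / (len(X) - 1), full_matrices=False)
        return U[:, :n_modes], Vt[:n_modes].T, s  # X-axes, Y-axes, singular values

    # Hypothetical landmark blocks: masseter block vs. dental-arch block.
    rng = np.random.default_rng(9)
    latent = rng.normal(size=(255, 1))
    X = latent @ rng.normal(size=(1, 6)) + 0.3 * rng.normal(size=(255, 6))
    Y = latent @ rng.normal(size=(1, 4)) + 0.3 * rng.normal(size=(255, 4))
    ux, vy, s = pls_modes(X, Y)
    print("covariation explained by mode 1:", (s[0] ** 2 / np.sum(s ** 2)).round(3))
    ```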

  9. Nonparametric solutions to the variational principle of ideal magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Betancourt, O. L.; Mcfadden, G.

    1985-01-01

    In an effort to gain a better understanding of MHD equilibria in three dimensions, the lower dimensional cases are studied. The solution of the three-dimensional problem is based on the classical variational principle of ideal magnetohydrodynamics. The crucial assumption for the numerical method is the existence of a nested set of toroidal flux surfaces, which is then used as a coordinate. This paper studies the nonparametric solutions to this variational problem in those cases when the direct solution is known to have islands. A form of the variational principle for the slab geometry is described; the one-dimensional problem is analyzed; and asymptotic expansions and numerical solutions to the two-dimensional problem are discussed. An example is presented which shows that the assumption of nested flux surfaces need not rule out the occurrence of islands.

  10. Bayesian Nonparametric Shrinkage Applied to Cepheid Star Oscillations.

    PubMed

    Berger, James; Jefferys, William; Müller, Peter

    2012-01-01

    Bayesian nonparametric regression with dependent wavelets has dual shrinkage properties: there is shrinkage through a dependent prior put on functional differences, and shrinkage through the setting of most of the wavelet coefficients to zero through Bayesian variable selection methods. The methodology can deal with unequally spaced data and is efficient because of the existence of fast moves in model space for the MCMC computation. The methodology is illustrated on the problem of modeling the oscillations of Cepheid variable stars; these are a class of pulsating variable stars with the useful property that their periods of variability are strongly correlated with their absolute luminosity. Once this relationship has been calibrated, knowledge of the period gives knowledge of the luminosity. This makes these stars useful as "standard candles" for estimating distances in the universe. PMID:24368873

  11. A non-parametric segmentation methodology for oral videocapillaroscopic images.

    PubMed

    Bellavia, Fabio; Cacioppo, Antonino; Lupaşcu, Carmen Alina; Messina, Pietro; Scardina, Giuseppe; Tegolo, Domenico; Valenti, Cesare

    2014-05-01

    We aim to describe a new non-parametric methodology to support the clinician during the diagnostic process of oral videocapillaroscopy to evaluate peripheral microcirculation. Our methodology, mainly based on wavelet analysis and mathematical morphology to preprocess the images, segments them by minimizing the within-class luminosity variance of both capillaries and background. Experiments were carried out on a set of real microphotographs to validate this approach versus handmade segmentations provided by physicians. By using a leave-one-patient-out approach, we pointed out that our methodology is robust, according to precision-recall criteria (average precision and recall are equal to 0.924 and 0.923, respectively) and it acts as a physician in terms of the Jaccard index (mean and standard deviation equal to 0.858 and 0.064, respectively). PMID:24657094
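
    Minimizing the within-class luminosity variance over a single threshold is the criterion behind Otsu's classical method; a minimal sketch of that criterion follows (the paper's wavelet and morphological preprocessing is not reproduced, and the intensities are simulated).

    ```python
    import numpy as np

    def otsu_threshold(img, n_bins=256):
        """Threshold minimizing within-class intensity variance (equivalently,
        maximizing between-class variance), as in Otsu's method."""
        hist, edges = np.histogram(img.ravel(), bins=n_bins)
        p = hist / hist.sum()
        w0 = np.cumsum(p)                      # class-0 probability
        mu = np.cumsum(p * np.arange(n_bins))  # cumulative mean (bin index scale)
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
        k = np.nanargmax(between)
        return edges[k + 1]

    rng = np.random.default_rng(10)
    img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(160, 15, 5000)])
    print(otsu_threshold(img))
    ```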

  12. Fast Nonparametric Clustering of Structured Time-Series.

    PubMed

    Hensman, James; Rattray, Magnus; Lawrence, Neil D

    2015-02-01

    In this publication, we combine two Bayesian nonparametric models: the Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP model is to introduce a variation on the GP prior which enables us to model structured time-series data, i.e., data containing groups where we wish to model inter- and intra-group variability. Our innovation in the DP model is an implementation of a new fast collapsed variational inference procedure which enables us to optimize our variational approximation significantly faster than standard VB approaches. In a biological time series application we show how our model better captures salient features of the data, leading to better consistency with existing biological classifications, while the associated inference algorithm provides a significant speed-up over EM-based variational inference. PMID:26353249

  14. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is said to be true when this probability is maximum. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF) which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable. PMID:18238122
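
    A minimal sketch of a Parzen-window (kernel) density estimate with a Gaussian window; the attribute data and bandwidth are hypothetical, and the paper's matching-probability machinery is not reproduced.

    ```python
    import numpy as np

    def parzen_pdf(x, samples, h):
        """Parzen-window estimate with a Gaussian kernel of bandwidth h."""
        z = (x[:, None] - samples[None, :]) / h
        norm = len(samples) * h * np.sqrt(2 * np.pi)
        return np.exp(-0.5 * z ** 2).sum(axis=1) / norm

    # Hypothetical 1-D attribute differences between candidate edge pairs.
    rng = np.random.default_rng(11)
    samples = np.concatenate([rng.normal(0, 0.5, 300), rng.normal(2, 0.3, 100)])
    grid = np.linspace(-2.0, 4.0, 200)
    pdf = parzen_pdf(grid, samples, h=0.2)
    print(pdf.sum() * (grid[1] - grid[0]))  # integrates to roughly 1
    ```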

  16. Hyperspectral image segmentation using a cooperative nonparametric approach

    NASA Astrophysics Data System (ADS)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach lies firstly in its local adaptation to the type of regions in an image (textured, non-textured), and secondly in the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications using, respectively, a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.

  17. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    SciTech Connect

    Williams, Mark L; Rearden, Bradley T

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
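
    First-order sensitivity/uncertainty propagation rests on the "sandwich rule": with relative sensitivities S of a response R to the nuclear data, and a relative covariance C of those data, the relative response variance is S C S^T. A minimal sketch with hypothetical numbers follows (TSUNAMI's actual sequences are far more elaborate).

    ```python
    import numpy as np

    # Sandwich rule with hypothetical numbers for three nuclear-data groups.
    S = np.array([0.8, -0.3, 0.1])          # relative sensitivities: dR/R per dx/x
    C = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])      # relative covariance of the data
    rel_var = S @ C @ S                     # relative variance of the response
    print(f"relative std. dev. of the response: {np.sqrt(rel_var):.3%}")
    ```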

  18. A Covariance Generation Methodology for Fission Product Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.

    2016-03-01

    Recent safety and economic concerns for modern nuclear reactor applications have generated considerable interest in basic nuclear data evaluation improvement and completion. It has been immediately clear that the accuracy of our predictive simulation models is strongly affected by our knowledge of the input data. Therefore strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since in modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) no correlations for fission yields are given, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered thanks to a convolution of different models, representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239 and Pu-241.

  19. Upper and lower covariance bounds for perturbed linear systems

    NASA Technical Reports Server (NTRS)

    Xu, J.-H.; Skelton, R. E.; Zhu, G.

    1990-01-01

    Both upper and lower bounds are established for state covariance matrices under parameter perturbations of the plant. The motivation for this study lies in the fact that many robustness properties of linear systems are given explicitly in terms of the state covariance matrix. Moreover, there exists a theory for control by covariance assignment. The results provide robustness properties of these covariance controllers.

  1. Isavuconazole Population Pharmacokinetic Analysis Using Nonparametric Estimation in Patients with Invasive Fungal Disease (Results from the VITAL Study).

    PubMed

    Kovanda, Laura L; Desai, Amit V; Lu, Qiaoyang; Townsend, Robert W; Akhtar, Shahzad; Bonate, Peter; Hope, William W

    2016-08-01

    Isavuconazonium sulfate (Cresemba; Astellas Pharma Inc.), a water-soluble prodrug of the triazole antifungal agent isavuconazole, is available for the treatment of invasive aspergillosis (IA) and invasive mucormycosis. A population pharmacokinetic (PPK) model was constructed using nonparametric estimation to compare the pharmacokinetic (PK) behaviors of isavuconazole in patients treated in the phase 3 VITAL open-label clinical trial, which evaluated the efficacy and safety of the drug for treatment of renally impaired IA patients and patients with invasive fungal disease (IFD) caused by emerging molds, yeasts, and dimorphic fungi. Covariates examined included body mass index (BMI), weight, and race; the impact of estimated glomerular filtration rate (eGFR) on clearance (CL) and of weight on volume of distribution was also assessed. PK parameters were compared based on IFD type and other patient characteristics. Simulations were performed to describe the MICs covered by the clinical dosing regimen. Concentrations (n = 458) from 136 patients were used to construct a 2-compartment model (first-order absorption compartment and central compartment). Weight-related covariates affected clearance, but eGFR did not. PK parameters and intersubject variability of CL were similar across different IFD groups and populations. Target attainment analyses demonstrated that the clinical dosing regimen would be sufficient for total drug area under the concentration-time curve (AUC)/MIC targets ranging from 50.5 for Aspergillus spp. (up to the CLSI MIC of 0.5 mg/liter) to 270 and 5,053 for Candida albicans (up to MICs of 0.125 and 0.004 mg/liter, respectively) and 312 for non-albicans Candida spp. (up to a MIC of 0.125 mg/liter). The estimations for Candida spp. were exploratory considering that no patients with Candida infections were included in the current analyses. (The VITAL trial is registered at ClinicalTrials.gov under number NCT00634049.) PMID:27185799

  2. FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.

    SciTech Connect

    HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.

    2006-06-05

    We describe the implementation of the KALMAN code in the EMPIRE system and present the first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on a combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from model-generated covariances, which tend to show strong positive long-range correlations, and from those determined solely from experimental data, which result in nearly diagonal matrices. We have studied the shapes of the correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).

  3. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1975-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
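
    For reference, a minimal sketch of the factorization itself (illustrative only, not the paper's propagation algorithm): U and D are computed from a symmetric positive-definite P by a right-to-left column sweep.

        import numpy as np

        def udu(P):
            """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
            with U unit upper triangular and d the diagonal of D."""
            P = np.asarray(P, dtype=float)
            n = P.shape[0]
            U = np.eye(n)
            d = np.zeros(n)
            for j in range(n - 1, -1, -1):      # sweep columns right to left
                d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
                for i in range(j):
                    U[i, j] = (P[i, j]
                               - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
            return U, d

        P = np.array([[4.0, 2.0, 0.6],
                      [2.0, 3.0, 0.4],
                      [0.6, 0.4, 1.0]])
        U, d = udu(P)
        assert np.allclose(U @ np.diag(d) @ U.T, P)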

  4. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1977-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.

  5. Sex differences in volume and structural covariance of the anterior and posterior hippocampus.

    PubMed

    Persson, Jonas; Spreng, R Nathan; Turner, Gary; Herlitz, Agneta; Morell, Arvid; Stening, Eva; Wahlund, Lars-Olof; Wikström, Johan; Söderlund, Hedvig

    2014-10-01

    Sex differences in episodic and spatial memory are frequently observed, suggesting that there may be sex-related structural differences in the hippocampus (HC). Earlier findings are inconsistent, possibly due to a known variability along the hippocampal longitudinal axis. Here, we assessed potential sex differences in hippocampal volume and structural covariance with the rest of the brain in young men and women (N=76), considering the anterior (aHC) and posterior (pHC) hippocampus separately. Women exhibited a larger pHC than men after adjustment for brain size. Using partial least squares, we identified two significant patterns of structural covariance of the aHC and pHC. The first included brain areas that covaried positively and negatively in volume with both the aHC and pHC in men, but showed greater covariance with the aHC than pHC in women. The second pattern revealed distinct structural covariance of the aHC and pHC that showed a clear difference between men and women: in men the pHC showed reliable structural covariance with the medial and lateral parietal lobes and the prefrontal cortex, whereas in women the aHC showed reliable structural covariance with the anterior temporal lobe bilaterally. This pattern converges with resting-state functional connectivity of the aHC and pHC and suggests that these hippocampal sections interact with different brain regions, consistent with a division of labor with regard to episodic and spatial memory. Our findings lend support to a division of the HC into an anterior and posterior part and identify sex as a potential moderating factor when investigating hippocampal structure and connectivity.

  6. Covariant theory with a confined quantum

    SciTech Connect

    Noyes, H.P.; Pastrana, G.

    1983-06-01

    It has been shown by Lindesay, Noyes and Lindesay, and by Lindesay and Markevich that by using a simple unitary two particle driving term in covariant Faddeev equations a rich covariant and unitary three particle dynamics can be generated, including single quantum exchange and production. The basic observation on which this paper rests is that if the two particle input amplitudes used as driving terms in a three particle Faddeev equation are assumed to be simply bound state poles with no elastic scattering cut, they generate rearrangement collisions, but breakup is impossible.

  7. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.

  8. Covariance analysis of gamma ray spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-15

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.
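
    The core computation is a second-moment estimate across repeated acquisitions. A minimal sketch (hypothetical data, not the authors' analysis chain): rows are short acquisitions, columns are energy channels, and off-diagonal covariance flags channels whose counts fluctuate together, as for gammas emitted in cascade:

        import numpy as np

        rng = np.random.default_rng(0)
        n_acq, n_chan = 5000, 128        # repeated short acquisitions x channels

        # Hypothetical source: a common Poisson "burst" feeds channels 40 and 90,
        # so their counts fluctuate together even though the mean spectrum is flat.
        bursts = rng.poisson(2.0, size=n_acq)
        spectra = rng.poisson(5.0, size=(n_acq, n_chan)).astype(float)
        spectra[:, 40] += bursts
        spectra[:, 90] += bursts

        C = np.cov(spectra, rowvar=False)    # channel-by-channel covariance
        print(C[40, 90])                     # ~2 (burst variance); ~0 elsewhere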

  10. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity.
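
    A minimal sketch of this pipeline (standard library calls on random stand-in data; the study's actual preprocessing is more involved): shrink the covariance with Ledoit-Wolf, invert it, and rescale the precision matrix into partial correlations:

        import numpy as np
        from sklearn.covariance import LedoitWolf

        # X: time points x regions (hypothetical BOLD time series).
        rng = np.random.default_rng(1)
        X = rng.standard_normal((300, 50))

        lw = LedoitWolf().fit(X)      # shrunk, well-conditioned covariance
        P = lw.precision_             # inverse covariance (precision matrix)

        # Partial correlation between regions i and j, given all others:
        d = np.sqrt(np.diag(P))
        partial_corr = -P / np.outer(d, d)
        np.fill_diagonal(partial_corr, 1.0)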

  11. Economical phase-covariant cloning of qudits

    SciTech Connect

    Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara

    2005-04-01

    We derive the optimal N→M phase-covariant quantum cloning for equatorial states in dimension d with M = kd + N, k integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an 'economical' cloning machine, which works without ancilla.

  12. Hawking fluxes, back reaction and covariant anomalies

    NASA Astrophysics Data System (ADS)

    Kulkarni, Shailesh

    2008-11-01

    Starting from the chiral covariant effective action approach of Banerjee and Kulkarni (2008 Phys. Lett. B 659 827), we provide a derivation of the Hawking radiation from a charged black hole in the presence of gravitational back reaction. The modified expressions for charge and energy flux, due to the effect of one-loop back reaction are obtained.

  13. Rasch's Multiplicative Poisson Model with Covariates.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1996-01-01

    Rasch's multiplicative Poisson model is extended so that parameters for individuals in the prior gamma distribution have continuous covariates. Parameters for individuals are integrated out, and hyperparameters in the prior distribution are estimated by a numerical method separately from difficulty parameters that are treated as fixed parameters…

  14. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  16. A covariance NMR toolbox for MATLAB and OCTAVE.

    PubMed

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. PMID:21215669
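
    As a pointer to what direct covariance processing computes, a minimal sketch (random stand-in data, not toolbox code): after Fourier transforming the direct dimension, the indirect dimension is regenerated from the matrix square root of the data covariance:

        import numpy as np
        from scipy.linalg import sqrtm

        # S: hypothetical 2D data set; rows = indirect (t1) increments,
        # columns = points along the already-transformed direct axis.
        rng = np.random.default_rng(2)
        S = rng.standard_normal((64, 512))

        C = S.T @ S / S.shape[0]     # covariance over the indirect dimension
        spectrum = sqrtm(C).real     # direct covariance spectrum, 512 x 512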

  17. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    The collocation method is widely applied in geodesy for estimating/interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions is dealt with using an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized) where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent possible solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
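
    A toy version of such a fit (our own illustrative formulation with made-up empirical values, not the authors' software) can be posed as a linear program: choose nonnegative degree variances c_l so that the model C(ψ) = Σ_l c_l P_l(cos ψ) matches the empirical covariances with minimal worst-case misfit:

        import numpy as np
        from numpy.polynomial.legendre import legval
        from scipy.optimize import linprog

        # Empirical covariance values at spherical distances psi_k (hypothetical).
        psi = np.linspace(0.0, 0.5, 30)             # radians
        emp = np.exp(-psi / 0.1)                    # stand-in empirical values

        L = 20                                      # maximum Legendre degree
        # A[k, l] = P_l(cos psi_k)
        A = np.stack([legval(np.cos(psi), np.eye(L + 1)[l])
                      for l in range(L + 1)], axis=1)

        # Variables: degree variances c_0..c_L >= 0 and misfit bound t.
        # Minimize t subject to |A c - emp| <= t elementwise.
        cost = np.r_[np.zeros(L + 1), 1.0]
        A_ub = np.block([[ A, -np.ones((len(psi), 1))],
                         [-A, -np.ones((len(psi), 1))]])
        b_ub = np.r_[emp, -emp]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (L + 2))
        c_fit = res.x[:-1]                          # fitted degree variances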

  18. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate for the camera motion between consecutive frames. Unlike previous methods, our algorithm does not need any preprocessing step to compute dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporally coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy while being much faster than state-of-the-art competing methods.

  19. Nonparametric directionality measures for time series and point process data.

    PubMed

    Halliday, David M

    2015-06-01

    The need to determine the directionality of interactions between neural signals is a key requirement for the analysis of multichannel recordings. The approaches most commonly used are parametric, typically relying on autoregressive models. A number of concerns have been expressed regarding parametric approaches, so there is a need to consider alternatives. We present an alternative nonparametric approach for constructing directionality measures for bivariate random processes. The method combines time- and frequency-domain representations of bivariate data to decompose the correlation by direction. Our framework generates two sets of complementary measures: a set of scalar measures, which decompose the total product-moment correlation coefficient summatively into three terms by direction, and a set of functions, which decompose the coherence summatively at each frequency into three terms by direction: forward direction, reverse direction and instantaneous interaction. The method can be undertaken as an addition to a standard bivariate spectral and coherence analysis, and applied to either time series or point-process (spike train) data or mixtures of the two (hybrid data). In this paper, we demonstrate application to spike train data using simulated cortical neurone networks and application to experimental data from isolated muscle spindle sensory endings subject to random efferent stimulation. PMID:25958923

  20. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

    Langley plots are used to calibrate sun radiometers, primarily for measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of the log voltage ln(V) against air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
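
    In its simplest least-squares form (a minimal sketch with synthetic data; the paper's eleven variants add screening and robust alternatives), the calibration is a straight-line fit of ln V against air mass:

        import numpy as np

        # Hypothetical morning's data: air masses and radiometer voltages.
        m = np.linspace(2.0, 6.0, 40)
        tau_true, V0_true = 0.12, 1.37
        rng = np.random.default_rng(3)
        V = V0_true * np.exp(-tau_true * m) * np.exp(rng.normal(0, 0.005, m.size))

        # ln V = ln V0 - tau * m: the slope gives -tau, the intercept ln V0.
        slope, intercept = np.polyfit(m, np.log(V), 1)
        tau_hat, V0_hat = -slope, np.exp(intercept)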

  1. Non-parametric reconstruction of cosmological matter perturbations

    NASA Astrophysics Data System (ADS)

    González, J. E.; Alcaniz, J. S.; Carvalho, J. C.

    2016-04-01

    Perturbative quantities, such as the growth rate (f) and index (γ), are powerful tools to distinguish different dark energy models or modified gravity theories even if they produce the same cosmic expansion history. In this work, without any assumption about the dynamics of the Universe, we apply a non-parametric method to current measurements of the expansion rate H(z) from cosmic chronometers and high-z quasar data and reconstruct the growth factor and rate of linearised density perturbations in the non-relativistic matter component. Assuming realistic values for the matter density parameter Ωm0, as provided by current CMB experiments, we also reconstruct the evolution of the growth index γ with redshift. We show that the reconstruction of current H(z) data constrains the growth index to γ=0.56 ± 0.12 (2σ) at z = 0.09, which is in full agreement with the prediction of the ΛCDM model and some of its extensions.

  2. DOPING: A New Non-Parametric Deprojection Scheme

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Dalia; Ferrarese, Laura

    We present a new non-parametric deprojection algorithm, DOPING (Deprojection of Observed Photometry using an INverse Gambit), which is designed to extract the three-dimensional luminosity density distribution ρ from the observed surface brightness profile of an astrophysical system, such as a galaxy or a galaxy cluster, in a generalised geometry, while taking into account changes in the intrinsic shape of the system. The observable is the 2D surface brightness distribution of the system. While the deprojection schemes presented hitherto have always worked within the limits of an assumed intrinsic geometry, in DOPING, geometry and inclination can be provided as inputs. The ρ that is most likely to project to the observed brightness data is sought; the maximisation of the likelihood is performed with the Metropolis algorithm. Until the likelihood function is maximised, ρ is tweaked in shape and amplitude, while maintaining positivity, but otherwise the luminosity distribution is allowed to be completely free-form. Tests and applications of the algorithm are discussed.

  3. DOPING: A New Non-Parametric Deprojection Scheme

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Dalia; Ferrarese, Laura

    2007-12-01

    We present a new non-parametric deprojection algorithm, DOPING (Deprojection of Observed Photometry using an INverse Gambit), that is designed to extract the three-dimensional luminosity density distribution ρ from the observed surface brightness profile of an astrophysical system, such as a galaxy or a galaxy cluster, in a generalised geometry, while taking into account changes in the intrinsic shape of the system. The observable is the 2-D surface brightness distribution of the system. While the deprojection schemes presented hitherto have always worked within the limits of an assumed intrinsic geometry, in DOPING, geometry and inclination can be provided as inputs. The ρ that is most likely to project to the observed brightness data is sought; the maximisation of the likelihood is performed with the Metropolis algorithm. Until the likelihood function is maximised, ρ is tweaked in shape and amplitude, while maintaining monotonicity and positivity, but otherwise the luminosity distribution is allowed to be completely free-form. Tests and applications of the algorithm are discussed.
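
    For readers unfamiliar with the sampler, a generic random-walk Metropolis accept/reject step is sketched below (schematic only; DOPING's free-form tweaks of ρ under shape constraints are more elaborate):

        import numpy as np

        def metropolis(log_like, x0, n_steps, step=0.1, rng=None):
            """Generic random-walk Metropolis sketch: propose a tweak,
            accept uphill moves always and downhill moves stochastically."""
            rng = rng or np.random.default_rng()
            x = np.asarray(x0, dtype=float)
            ll = log_like(x)
            for _ in range(n_steps):
                prop = x + rng.normal(0.0, step, size=x.shape)
                ll_prop = log_like(prop)
                if np.log(rng.uniform()) < ll_prop - ll:
                    x, ll = prop, ll_prop
            return x, ll

        # Toy usage: climb a simple quadratic log-likelihood.
        x_best, _ = metropolis(lambda x: -np.sum((x - 3.0) ** 2),
                               np.zeros(2), n_steps=5000)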

  4. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians. PMID:19041946

  5. Nonparametric Spatial Models for Extremes: Application to Extreme Temperature Data.

    PubMed

    Fuentes, Montserrat; Henry, John; Reich, Brian

    2013-03-01

    Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well-developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States. PMID:24058280

  6. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) voltage vs. m air mass yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  7. Nonparametric Bayes Classification and Hypothesis Testing on Manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David

    2012-01-01

    Our first focus is prediction of a categorical response variable using features that lie on a general manifold. For example, the manifold may correspond to the surface of a hypersphere. We propose a general kernel mixture model for the joint distribution of the response and predictors, with the kernel expressed in product form and dependence induced through the unknown mixing measure. We provide simple sufficient conditions for large support and weak and strong posterior consistency in estimating both the joint distribution of the response and predictors and the conditional distribution of the response. Focusing on a Dirichlet process prior for the mixing measure, these conditions hold using von Mises-Fisher kernels when the manifold is the unit hypersphere. In this case, Bayesian methods are developed for efficient posterior computation using slice sampling. Next we develop Bayesian nonparametric methods for testing whether there is a difference in distributions between groups of observations on the manifold having unknown densities. We prove consistency of the Bayes factor and develop efficient computational methods for its calculation. The proposed classification and testing methods are evaluated using simulation examples and applied to spherical data applications. PMID:22754028

  8. Nonparametric estimation of quantum states, processes and measurements

    NASA Astrophysics Data System (ADS)

    Lougovski, Pavel; Bennink, Ryan

    Quantum state, process, and measurement estimation methods traditionally use parametric models, in which the number and role of relevant parameters is assumed to be known. When such an assumption cannot be justified, a common approach in many disciplines is to fit the experimental data to multiple models with different sets of parameters and utilize an information criterion to select the best fitting model. However, it is not always possible to assume a model with a finite (countable) number of parameters. This typically happens when there are unobserved variables that stem from hidden correlations that can only be unveiled after collecting experimental data. How does one perform quantum characterization in this situation? We present a novel nonparametric method of experimental quantum system characterization based on the Dirichlet Process (DP) that addresses this problem. Using DP as a prior in conjunction with Bayesian estimation methods allows us to increase model complexity (number of parameters) adaptively as the number of experimental observations grows. We illustrate our approach for the one-qubit case and show how a probability density function for an unknown quantum process can be estimated.
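
    The mechanism that lets model complexity grow with the data can be seen in the stick-breaking construction of the DP (a generic sketch, not the authors' estimator): weights are obtained by repeatedly breaking fractions v_k ~ Beta(1, α) off the remaining stick.

        import numpy as np

        def stick_breaking(alpha, n_atoms, rng=None):
            """Draw truncated Dirichlet-process mixture weights."""
            rng = rng or np.random.default_rng()
            v = rng.beta(1.0, alpha, size=n_atoms)            # broken fractions
            remaining = np.cumprod(np.r_[1.0, 1.0 - v[:-1]])  # stick left over
            return v * remaining                              # mixture weights

        w = stick_breaking(alpha=2.0, n_atoms=50)
        # As observations accumulate, more of these weights become effectively
        # non-negligible, which is how the number of components adapts.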

  9. Bayesian Nonparametric Inference of Population Size Changes from Sequential Genealogies.

    PubMed

    Palacios, Julia A; Wakeley, John; Ramachandran, Sohini

    2015-09-01

    Sophisticated inferential tools coupled with the coalescent model have recently emerged for estimating past population sizes from genomic data. Recent methods that model recombination require small sample sizes, make constraining assumptions about population size changes, and do not report measures of uncertainty for estimates. Here, we develop a Gaussian process-based Bayesian nonparametric method coupled with a sequentially Markov coalescent model that allows accurate inference of population sizes over time from a set of genealogies. In contrast to current methods, our approach considers a broad class of recombination events, including those that do not change local genealogies. We show that our method outperforms recent likelihood-based methods that rely on discretization of the parameter space. We illustrate the application of our method to multiple demographic histories, including population bottlenecks and exponential growth. In simulation, our Bayesian approach produces point estimates four times more accurate than maximum-likelihood estimation (based on the sum of absolute differences between the truth and the estimated values). Further, our method's credible intervals for population size as a function of time cover 90% of true values across multiple demographic scenarios, enabling formal hypothesis testing about population size differences over time. Using genealogies estimated with ARGweaver, we apply our method to European and Yoruban samples from the 1000 Genomes Project and confirm key known aspects of population size history over the past 150,000 years. PMID:26224734

  10. Transition redshift: new constraints from parametric and nonparametric methods

    NASA Astrophysics Data System (ADS)

    Rani, Nisha; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Pires, Nilza

    2015-12-01

    In this paper, we use the cosmokinematics approach to study the accelerated expansion of the Universe. This is a model-independent approach and depends only on the assumption that the Universe is homogeneous and isotropic and is described by the FRW metric. We parametrize the deceleration parameter, q(z), to constrain the transition redshift (z_t) at which the expansion of the Universe goes from a decelerating to an accelerating phase. We use three different parametrizations of q(z), namely q_I(z) = q_1 + q_2 z, q_II(z) = q_3 + q_4 ln(1+z) and q_III(z) = 1/2 + q_5/(1+z)^2. A joint analysis of the age of galaxies, strong lensing and supernovae Ia data indicates that the transition redshift is less than unity, i.e. z_t < 1. We also use a nonparametric approach (LOESS+SIMEX) to constrain z_t. This too gives z_t < 1, which is consistent with the value obtained by the parametric approach.
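
    As a quick worked example of how z_t follows from such parametrizations (simple algebra, not taken from the paper): setting the third form to zero at the transition gives

        $$q_{III}(z_t) = \frac{1}{2} + \frac{q_5}{(1+z_t)^2} = 0 \quad\Rightarrow\quad z_t = \sqrt{-2\,q_5} - 1,$$

    which is real only for q_5 < -1/2; the linear form likewise gives z_t = -q_1/q_2.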

  11. Transition redshift: new constraints from parametric and nonparametric methods

    SciTech Connect

    Rani, Nisha; Mahajan, Shobhit; Mukherjee, Amitabha; Jain, Deepak; Pires, Nilza

    2015-12-01

    In this paper, we use the cosmokinematics approach to study the accelerated expansion of the Universe. This is a model-independent approach and depends only on the assumption that the Universe is homogeneous and isotropic and is described by the FRW metric. We parametrize the deceleration parameter, q(z), to constrain the transition redshift (z_t) at which the expansion of the Universe goes from a decelerating to an accelerating phase. We use three different parametrizations of q(z), namely q_I(z) = q_1 + q_2 z, q_II(z) = q_3 + q_4 ln(1+z) and q_III(z) = 1/2 + q_5/(1+z)^2. A joint analysis of the age of galaxies, strong lensing and supernovae Ia data indicates that the transition redshift is less than unity, i.e. z_t < 1. We also use a nonparametric approach (LOESS+SIMEX) to constrain z_t. This too gives z_t < 1, which is consistent with the value obtained by the parametric approach.

  12. Non-parametric extraction of implied asset price distributions

    NASA Astrophysics Data System (ADS)

    Healy, Jerome V.; Dixon, Maurice; Read, Brian J.; Cai, Fang Fang

    2007-08-01

    We present a fully non-parametric method for extracting risk neutral densities (RNDs) from observed option prices. The aim is to obtain a continuous, smooth, monotonic, and convex pricing function that is twice differentiable. Thus, irregularities such as negative probabilities that afflict many existing RND estimation techniques are reduced. Our method employs neural networks to obtain a smoothed pricing function, and a central finite difference approximation to the second derivative to extract the required gradients. This novel technique was successfully applied to a large set of FTSE 100 daily European exercise (ESX) put options data and as an Ansatz to the corresponding set of American exercise (SEI) put options. The results of paired t-tests showed significant differences between RNDs extracted from ESX and SEI option data, reflecting the distorting impact of early exercise possibility for the latter. In particular, the results for skewness and kurtosis suggested different shapes for the RNDs implied by the two types of put options. However, both ESX and SEI data gave an unbiased estimate of the realised FTSE 100 closing prices on the options’ expiration date. We confirmed that estimates of volatility from the RNDs of both types of option were biased estimates of the realised volatility at expiration, but less so than the LIFFE tabulated at-the-money implied volatility.
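
    The gradient-extraction step is an instance of the Breeden-Litzenberger relation, q(K) = e^{rT} ∂²C/∂K². A minimal sketch follows (synthetic Black-Scholes prices stand in for the network's smoothed pricing function; strikes, rates and volatility are made up):

        import numpy as np
        from scipy.stats import norm

        def rnd_from_prices(strikes, call_prices, r, T):
            """Risk-neutral density via a central second difference
            of call prices on a uniform strike grid."""
            h = strikes[1] - strikes[0]
            d2C = (call_prices[2:] - 2 * call_prices[1:-1]
                   + call_prices[:-2]) / h ** 2
            return strikes[1:-1], np.exp(r * T) * d2C

        # Stand-in smoothed pricing function (Black-Scholes calls).
        K = np.linspace(4000, 8000, 201)
        S0, r, T, sigma = 6000.0, 0.02, 0.25, 0.2
        d1 = (np.log(S0 / K) + (r + sigma ** 2 / 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        C = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        K_mid, q = rnd_from_prices(K, C, r, T)   # q integrates to ~1 over K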

  13. Nonparametric statistical tests for the continuous data: the basic concept and the practical use

    PubMed Central

    2016-01-01

    Conventional statistical tests are usually called parametric tests. Parametric tests are used more frequently than nonparametric tests in many medical articles because most medical researchers are familiar with them and statistical software packages strongly support them. Parametric tests require an important assumption: the assumption of normality, which means that the distribution of sample means is normally distributed. However, parametric tests can be misleading when this assumption is not satisfied. In this circumstance, nonparametric tests are the available alternatives, because they do not require the normality assumption. Nonparametric tests are statistical methods based on signs and ranks. In this article, we discuss the basic concepts and practical use of nonparametric tests as a guide to their proper use. PMID:26885295
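
    A minimal illustration of the parametric/nonparametric pairing (standard SciPy calls on made-up, deliberately skewed data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        a = rng.exponential(1.0, 20)    # skewed samples: normality is doubtful
        b = rng.exponential(1.5, 20)

        t, p_t = stats.ttest_ind(a, b)       # parametric: assumes normality
        u, p_u = stats.mannwhitneyu(a, b)    # nonparametric, rank-based
        w, p_w = stats.wilcoxon(a - b)       # nonparametric, paired (signs/ranks)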

  14. Genetic and Environmental Components of Adolescent Adjustment and Parental Behavior: A Multivariate Analysis

    ERIC Educational Resources Information Center

    Loehlin, John C.; Neiderhiser, Jenae M.; Reiss, David

    2005-01-01

    Adolescent adjustment measures may be related to each other and to the social environment in various ways. Are these relationships similar in genetic and environmental sources of covariation, or different? A multivariate behavior-genetic analysis was made of 6 adjustment and 3 treatment composites from the study Nonshared Environment in Adolescent…

  15. Nonparametric analysis of Minnesota spruce and aspen tree data and LANDSAT data

    NASA Technical Reports Server (NTRS)

    Scott, D. W.; Jee, R.

    1984-01-01

    The application of nonparametric methods in data-intensive problems faced by NASA is described. The theoretical development of efficient multivariate density estimators and the novel use of color graphics workstations are reviewed. The use of nonparametric density estimates for data representation and for Bayesian classification is described and illustrated. Progress in building a data analysis system in a workstation environment is reviewed and preliminary runs are presented.

  16. Construction of Covariance Functions with Variable Length Fields

    NASA Technical Reports Server (NTRS)

    Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven

    2005-01-01

    This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere-to-stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single- and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than the baseline GEOS-4 system.

  17. An efficient Bayesian inference framework for coalescent-based nonparametric phylodynamics

    PubMed Central

    Lan, Shiwei; Palacios, Julia A.; Karcher, Michael; Minin, Vladimir N.; Shahbaba, Babak

    2015-01-01

    Motivation: The field of phylodynamics focuses on the problem of reconstructing population size dynamics over time using current genetic samples taken from the population of interest. This technique has been extensively used in many areas of biology but is particularly useful for studying the spread of quickly evolving infectious disease agents, e.g. influenza virus. Phylodynamic inference uses a coalescent model that defines a probability density for the genealogy of randomly sampled individuals from the population. When we assume that such a genealogy is known, the coalescent model, equipped with a Gaussian process prior on population size trajectory, allows for nonparametric Bayesian estimation of population size dynamics. Although this approach is quite powerful, large datasets collected during infectious disease surveillance challenge the state-of-the-art of Bayesian phylodynamics and demand inferential methods with relatively low computational cost. Results: To satisfy this demand, we provide a computationally efficient Bayesian inference framework based on Hamiltonian Monte Carlo for coalescent process models. Moreover, we show that by splitting the Hamiltonian function, we can further improve the efficiency of this approach. Using several simulated and real datasets, we show that our method provides accurate estimates of population size dynamics and is substantially faster than alternative methods based on elliptical slice sampler and Metropolis-adjusted Langevin algorithm. Availability and implementation: The R code for all simulation studies and real data analysis conducted in this article is publicly available at http://www.ics.uci.edu/∼slan/lanzi/CODES.html and in the R package phylodyn available at https://github.com/mdkarcher/phylodyn. Contact: S.Lan@warwick.ac.uk or babaks@uci.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26093147
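
    The computational core of such samplers is the leapfrog integrator inside each HMC proposal. A generic sketch for a differentiable log-density follows (our own illustration; the package's split-Hamiltonian scheme refines this):

        import numpy as np

        def hmc_step(log_post, grad, x, eps=0.1, n_leap=20, rng=None):
            """One Hamiltonian Monte Carlo proposal with leapfrog integration.
            grad must return the gradient of log_post."""
            rng = rng or np.random.default_rng()
            p = rng.standard_normal(x.shape)            # resample momentum
            x_new = x.copy()
            p_new = p + 0.5 * eps * grad(x_new)         # initial half kick
            for _ in range(n_leap):
                x_new = x_new + eps * p_new             # position drift
                p_new = p_new + eps * grad(x_new)       # full momentum kick
            p_new = p_new - 0.5 * eps * grad(x_new)     # undo half of last kick
            # Metropolis correction for the discretisation error:
            h_old = -log_post(x) + 0.5 * p @ p
            h_new = -log_post(x_new) + 0.5 * p_new @ p_new
            return x_new if np.log(rng.uniform()) < h_old - h_new else x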

  18. On covariance structure in noisy, big data

    NASA Astrophysics Data System (ADS)

    Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.

    2013-09-01

    Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.

  19. Covariance Spectroscopy Applied to Nuclear Radiation Detection

    SciTech Connect

    Trainham, R., Tinsley, J., Keegan, R., Quam, W.

    2011-09-01

    Covariance spectroscopy is a method of processing second order moments of data to obtain information that is usually absent from average spectra. In nuclear radiation detection it represents a generalization of nuclear coincidence techniques. Correlations and fluctuations in data encode valuable information about radiation sources, transport media, and detection systems. Gaining access to the extra information can help to untangle complicated spectra, uncover overlapping peaks, accelerate source identification, and even sense directionality. Correlations existing at the source level are particularly valuable since many radioactive isotopes emit correlated gammas and neutrons. Correlations also arise from interactions within detector systems, and from scattering in the environment. In particular, correlations from Compton scattering and pair production within a detector array can be usefully exploited in scenarios where direct measurement of source correlations would be unfeasible. We present a covariance analysis of a few experimental data sets to illustrate the utility of the concept.

  20. Covariance and the hierarchy of frame bundles

    NASA Technical Reports Server (NTRS)

    Estabrook, Frank B.

    1987-01-01

    This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.

  1. Covariant quantum mechanics applied to noncommutative geometry

    NASA Astrophysics Data System (ADS)

    Astuti, Valerio

    2015-08-01

    We report a result obtained in collaboration with Giovanni Amelino-Camelia, first presented in [1]. Applying the manifestly covariant formalism of quantum mechanics to the much-studied Snyder spacetime [2], we show that it is trivial in all physical observables, meaning that every measurement in this spacetime gives the same result that would be obtained in flat Minkowski spacetime.

  2. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
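
    For the simple-eigenvalue case the first-order result is compact. Writing u and w for the right and left eigenvectors of A associated with λ, standard perturbation theory (not necessarily the thesis's exact expressions) gives

        $$\frac{\partial \lambda}{\partial A_{ij}} = \frac{w_i\, u_j}{w^{T} u},$$

    so the variance of λ follows from the covariance P_A of the elements of A by the usual first-order propagation σ_λ² = J P_A J^T, where J is the row vector of these partials stacked over the elements of A.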

  3. Generalized Covariance Analysis For Remote Estimators

    NASA Technical Reports Server (NTRS)

    Boone, Jack N.

    1991-01-01

    Technique developed to predict true covariance of stochastic process at remote location when control applied to process both by autonomous (local-estimator) control subsystem and remote (non-local-estimator) control subsystem. Intended originally for design and evaluation of ground-based schemes for estimation of gyro parameters of Magellan spacecraft. Applications include a variety of remote-control systems with and without delays. Potential terrestrial applications include navigation and control of industrial processes.

  4. Torsion and geometrostasis in covariant superstrings

    SciTech Connect

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.

  5. A Nonparametric Approach to Automated S-Wave Picking

    NASA Astrophysics Data System (ADS)

    Rawles, C.; Thurber, C. H.

    2014-12-01

    Although a number of very effective P-wave automatic pickers have been developed over the years, automatic picking of S waves has remained more challenging. Most automatic pickers take a parametric approach, whereby some characteristic function (CF), e.g. polarization or kurtosis, is determined from the data and the pick is estimated from the CF. We have adopted a nonparametric approach, estimating the pick directly from the waveforms. For a particular waveform to be auto-picked, the method uses a combination of similarity to a set of seismograms with known S-wave arrivals and dissimilarity to a set of seismograms that do not contain S-wave arrivals. Significant effort has been made towards dealing with the problem of S-to-P conversions. We have evaluated the effectiveness of our method by testing it on multiple sets of microearthquake seismograms with well-determined S-wave arrivals for several areas around the world, including fault zones and volcanic regions. In general, we find that the results from our auto-picker are consistent with reviewed analyst picks 90% of the time at the 0.2 s level and 80% of the time at the 0.1 s level, or better. For most of the large datasets we have analyzed, our auto-picker also makes far more S-wave picks than were made previously by analysts. We are using these enlarged sets of high-quality S-wave picks to refine tomographic inversions for these areas, resulting in substantial improvement in the quality of the S-wave images. We will show examples from New Zealand, Hawaii, and California.

  6. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
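
    A minimal sketch of the kind of local regression involved (statsmodels' lowess on made-up site data; the paper's prediction models are multivariate and site-specific):

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(5)
        x = rng.uniform(0, 10, 200)          # e.g. a location coordinate
        y = np.sin(x) + 2.0 + rng.normal(0, 0.3, x.size)  # volume proxy

        # Locally weighted regression: frac controls the smoothing window.
        fitted = lowess(y, x, frac=0.3)      # columns: sorted x, smoothed y
        x_s, y_s = fitted[:, 0], fitted[:, 1]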

  7. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.

  9. A robust nonparametric method for quantifying undetected extinctions.

    PubMed

    Chisholm, Ryan A; Giam, Xingli; Sadanandan, Keren R; Fung, Tak; Rheindt, Frank E

    2016-06-01

    How many species have gone extinct in modern times before being described by science? To answer this question, and thereby get a full assessment of humanity's impact on biodiversity, statistical methods that quantify undetected extinctions are required. Such methods have been developed recently, but they are limited by their reliance on parametric assumptions; specifically, they assume the pools of extant and undetected species decay exponentially, whereas real detection rates vary temporally with survey effort and real extinction rates vary with the waxing and waning of threatening processes. We devised a new, nonparametric method for estimating undetected extinctions. As inputs, the method requires only the first and last date at which each species in an ensemble was recorded. As outputs, the method provides estimates of the proportion of species that have gone extinct, detected, or undetected and, in the special case where the number of undetected extant species in the present day is assumed close to zero, of the absolute number of undetected extinct species. The main assumption of the method is that the per-species extinction rate is independent of whether a species has been detected or not. We applied the method to the resident native bird fauna of Singapore. Of 195 recorded species, 58 (29.7%) have gone extinct in the last 200 years. Our method projected that an additional 9.6 species (95% CI 3.4, 19.8) have gone extinct without first being recorded, implying a true extinction rate of 33.0% (95% CI 31.0%, 36.2%). We provide R code for implementing our method. Because our method does not depend on strong assumptions, we expect it to be broadly useful for quantifying undetected extinctions. PMID:27153528
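
    The authors distribute R code for the full estimator, so the sketch below (in Python) only illustrates the minimal inputs the method requires, first and last record years per species, and the tally of detected extinctions; the nonparametric correction for undetected extinctions itself is not reproduced here. The cutoff year and all records are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each species contributes only its first and last record year, the sole
# inputs the method requires. Toy ensemble sized like Singapore's
# 195 resident native bird species; the values are simulated.
n_species = 195
first = rng.integers(1820, 1990, size=n_species)
duration = rng.integers(0, 120, size=n_species)
last = np.minimum(first + duration, 2015)

# Naive bookkeeping only: treat species unrecorded since a cutoff year as
# detected extinctions. Estimating the *undetected* extinctions on top of
# this requires the paper's nonparametric estimator.
cutoff = 1990
detected_extinct = np.sum(last < cutoff)
print(f"detected extinctions: {detected_extinct} / {n_species} "
      f"({100 * detected_extinct / n_species:.1f}%)")
```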

  10. Non-parametric combination and related permutation tests for neuroimaging.

    PubMed

    Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E

    2016-04-01

    In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions and surface and/or volume-based representations of the brain, as well as non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. PMID:26848101
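
    A minimal sketch of the single-phase combination idea, assuming a two-group design, a mean-difference statistic, and Tippett's (min-p) combining function; this illustrates the general mechanics of synchronized permutations and NPC, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def group_diff(data, labels):
    """Absolute difference in group means, one statistic per modality."""
    return np.abs(data[labels == 1].mean(axis=0) - data[labels == 0].mean(axis=0))

# Toy data: two modalities on the same 40 subjects; a group effect exists
# only in the first modality.
labels = np.repeat([0, 1], 20)
data = rng.normal(size=(40, 2))
data[labels == 1, 0] += 0.9

n_perm = 999
stats = np.empty((n_perm + 1, data.shape[1]))
stats[0] = group_diff(data, labels)          # observed labeling first
for i in range(1, n_perm + 1):
    # synchronized permutation: one shuffled labeling applied to all modalities
    stats[i] = group_diff(data, rng.permutation(labels))

# Convert every row (observed and permuted) to partial p-values against
# the same permutation distribution.
pvals = np.sum(stats[None, :, :] >= stats[:, None, :], axis=1) / (n_perm + 1)

# Tippett (min-p) combination: the joint statistic is the smallest partial
# p-value, and its null distribution comes from the same permutations.
tippett = pvals.min(axis=1)
p_joint = np.mean(tippett <= tippett[0])
print(f"partial p-values: {pvals[0]}, NPC (Tippett) joint p: {p_joint:.3f}")
```

    Because every modality sees the same shuffled labels, the joint null distribution preserves the dependence between the partial tests, which is what lets the min-p correction control the error rate without assuming independence.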

  11. All covariance controllers for linear discrete-time systems

    NASA Technical Reports Server (NTRS)

    Hsieh, Chen; Skelton, Robert E.

    1990-01-01

    The set of covariances that a linear discrete-time plant with a specified-order controller can have is characterized. The controllers that assign such covariances to any linear discrete-time system are given explicitly in closed form. The freedom in these covariance controllers is explicit and is parameterized by two orthogonal matrices. By appropriately choosing these free parameters, additional system objectives can be achieved without altering the state covariance, and the stability of the closed-loop system is guaranteed.

  12. Shrinkage covariance matrix approach for microarray data

    NASA Astrophysics Data System (ADS)

    Karjanto, Suryaefiza; Aripin, Rasimah

    2013-04-01

    Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints, including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate in this setting. One technique that depends on it is Hotelling's T2 statistic, a multivariate test statistic for comparing means between two groups. It requires that the number of observations (n) exceed the number of genes (p) in the set, but in microarray studies it is common that n < p. This leads to a biased estimate of the covariance matrix. In this study, the Hotelling's T2 statistic with the shrinkage approach is proposed to estimate the covariance matrix for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests using a widely analysed diabetes data set as an illustration. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
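
    As a sketch of the idea, the snippet below forms a pooled shrinkage covariance with the Ledoit-Wolf estimator (one common shrinkage choice; the paper's particular shrinkage target is not specified here) and plugs it into the two-sample Hotelling's T2 statistic. With n1 + n2 - 2 < p the ordinary pooled covariance would be singular, while the shrunken estimate remains invertible; in practice the statistic would be calibrated by permutation rather than the classical F transform.

```python
import numpy as np
from scipy import linalg
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(3)

# Microarray-like setting: p genes, far more than n samples per group.
p, n1, n2 = 500, 15, 15
X1 = rng.normal(0.0, 1.0, size=(n1, p))
X2 = rng.normal(0.2, 1.0, size=(n2, p))   # small shift in every gene

diff = X1.mean(axis=0) - X2.mean(axis=0)

# Pooled *shrunken* covariance: Ledoit-Wolf shrinkage keeps the estimate
# well conditioned even though n1 + n2 - 2 < p, where the usual pooled
# sample covariance would be singular.
pooled = np.vstack([X1 - X1.mean(axis=0), X2 - X2.mean(axis=0)])
sigma = LedoitWolf().fit(pooled).covariance_

t2 = (n1 * n2 / (n1 + n2)) * diff @ linalg.solve(sigma, diff, assume_a="pos")
print(f"shrinkage Hotelling's T2 = {t2:.1f}")
```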

  13. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
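
    The core of a linear covariance analysis of this kind is the discrete Lyapunov recursion P(k+1) = A P(k) A' + Q, which propagates the full error covariance in a single deterministic pass instead of thousands of Monte Carlo runs. A toy two-state sketch with illustrative values (not SIM parameters):

```python
import numpy as np

# Discrete Lyapunov propagation of a pointing-error covariance:
#   P[k+1] = A @ P[k] @ A.T + Q
# Toy 2-state model (attitude error and a gyro-like drift state).
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
Q = np.diag([1e-6, 1e-8])          # process noise added each step
P = np.diag([1e-4, 1e-6])          # initial error covariance

for _ in range(1000):              # 100 s of propagation
    P = A @ P @ A.T + Q

print("1-sigma pointing error after 100 s:", np.sqrt(P[0, 0]))
```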

  14. Covariance tracking: architecture optimizations for embedded systems

    NASA Astrophysics Data System (ADS)

    Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan

    2014-12-01

    Covariance matching techniques have recently grown in interest due to their good performances for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, it can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review on different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on two dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15 and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
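
    The descriptor at the heart of covariance matching is compact to state: map each pixel of a region to a small feature vector and summarize the region by the covariance of those vectors, a fixed-size matrix whatever the region size. The sketch below uses a simplified five-feature set; production trackers use richer features and compare descriptors with a Riemannian metric on symmetric positive-definite matrices rather than element-wise.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of an image patch.

    Each pixel is mapped to a feature vector (x, y, intensity, |Ix|, |Iy|),
    and the patch is summarized by the covariance of those features, the
    compact representation matched between frames in covariance tracking.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = np.gradient(patch.astype(float))   # gradients along y then x
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()], axis=0)
    return np.cov(feats)           # 5x5 matrix regardless of patch size

rng = np.random.default_rng(4)
patch = rng.integers(0, 256, size=(32, 32)).astype(float)
C = region_covariance(patch)
print(C.shape)                     # (5, 5)
```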

  15. Development of Covariance Capabilities in EMPIRE Code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-12-15

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  17. ANALYSIS OF COVARIANCE WITH SPATIALLY CORRELATED SECONDARY VARIABLES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data sets which contain measurements on a spatially referenced response and covariate are analyzed using either co-kriging or spatial analysis of covariance. While co-kriging accounts for the correlation structure of the covariate, it is purely a predictive tool. Alternatively, spatial analysis of c...

  18. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  19. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
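
    The abstract does not spell out the test statistics, but a standard choice in covariance realism work, assumed here for illustration, is the squared Mahalanobis distance of the propagated-minus-definitive state error under the propagated covariance, which should follow a chi-square law with the state dimension as its degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# If the propagated covariance P is realistic, the squared Mahalanobis
# distance of the state error is chi-square with n degrees of freedom.
n = 6                                    # position + velocity state
P = np.diag([1e2, 1e2, 1e2, 1e-2, 1e-2, 1e-2])

# Simulate errors whose true covariance is twice P (an overconfident P).
errors = rng.multivariate_normal(np.zeros(n), 2.0 * P, size=400)
m2 = np.einsum("ij,jk,ik->i", errors, np.linalg.inv(P), errors)

# One-sample test of the realism statistic against its chi-square law.
stat, pval = stats.kstest(m2, cdf=stats.chi2(df=n).cdf)
print(f"mean normalized error {m2.mean():.1f} (expect {n} if realistic); "
      f"KS p = {pval:.3g}")
```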

  20. ADJUSTABLE DOUBLE PULSE GENERATOR

    DOEpatents

    Gratian, J.W.; Gratian, A.C.

    1961-08-01

    A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross-coupled multivibrator having adjustable time-constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)

  1. Estimation of Covariances on Prompt Fission Neutron Spectra and Impact of the PFNS Model on the Vessel Fluence

    NASA Astrophysics Data System (ADS)

    Berge, Léonie; Litaize, Olivier; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Pénéliau, Yannick; Regnier, David

    2016-02-01

    As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although it is present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, within the framework of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They have a great influence on the final covariance matrix, and therefore on the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the impact of the choice of fission spectrum model on a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte Carlo code dedicated to the simulation of prompt particle emission during fission.

  2. Relationships between QT interval and heart rate variability at rest and the covariates in healthy young adults.

    PubMed

    Arai, Kaori; Nakagawa, Yui; Iwata, Toyoto; Horiguchi, Hyogo; Murata, Katsuyuki

    2013-01-01

    To clarify the links between ECG QT-related parameters and heart rate variability (HRV) and the covariates possibly distorting them, the averaged RR and QT intervals in a single-lead ECG were measured for 64 male and 86 female subjects aged 18-26. The QT index, defined by Rautaharju et al., was not significantly related in these young adults to any HRV parameter or to heart rate, but the Bazett's corrected QT (QTc) interval was associated negatively with parasympathetic activity and positively with heart rate. No significant differences in the QTc interval, QT index or heart rate were seen between the men and women, but these measures differed significantly between the sexes after adjustment for possible covariates such as age and body mass index (BMI). Significant sex differences in parasympathetic parameters of the HRV were unchanged before and after the adjustment, but significant differences observed in the unadjusted sympathetic parameters disappeared after adjusting for covariates. Age, BMI and body fat percentage also were significant covariates affecting these ECG parameters. Consequently, the QT index, unaffected by heart rate and HRV parameters, appears to be a more useful indicator than the QTc interval. The QT index and HRV parameters are recommended to be measured simultaneously in epidemiological research because they are probably complementary in assessing autonomic nervous function. Also, these parameters should be analyzed in men and women separately.

  3. Classroom risks and resources: Teacher burnout, classroom quality and children's adjustment in high needs elementary schools.

    PubMed

    Hoglund, Wendy L G; Klingle, Kirsten E; Hosan, Naheed E

    2015-10-01

    The current paper presents two related sets of findings on the classroom context in high needs elementary schools. First, we investigated change over one school term in teacher burnout (emotional exhaustion, depersonalization, personal accomplishment) and classroom quality (emotional and instructional support, organization) and assessed the degree to which burnout and classroom quality co-varied over the term with each other and with aggregate externalizing behaviors (average child externalizing behaviors in the classroom). These analyses describe the classroom context in which the children are nested. Second, we examined change over one school term in children's social adjustment (relationship quality with teachers and friends) and academic adjustment (school engagement, literacy skills) and assessed how adjustment co-varied over time with child externalizing behaviors and was predicted by teacher burnout, classroom quality and aggregate externalizing behaviors. These models were tested with a sample of low-income, ethnically diverse children in kindergarten to grade 3 and their teachers. The children and teachers were assessed three times over one school term. Personal accomplishment co-varied positively with overall classroom quality. Reciprocally, classroom organization co-varied positively with overall teacher burnout. Aggregate externalizing behaviors co-varied positively with depersonalization and negatively with personal accomplishment and overall classroom quality, including emotional support and organization. In turn, teacher burnout interacted with aggregate externalizing behaviors to predict change in child social and academic adjustment. Alternatively, classroom quality interacted with aggregate and child externalizing behaviors to predict change in child social and academic adjustment.

  4. Classroom risks and resources: Teacher burnout, classroom quality and children's adjustment in high needs elementary schools.

    PubMed

    Hoglund, Wendy L G; Klingle, Kirsten E; Hosan, Naheed E

    2015-10-01

    The current paper presents two related sets of findings on the classroom context in high needs elementary schools. First, we investigated change over one school term in teacher burnout (emotional exhaustion, depersonalization, personal accomplishment) and classroom quality (emotional and instructional support, organization) and assessed the degree to which burnout and classroom quality co-varied over the term with each other and with aggregate externalizing behaviors (average child externalizing behaviors in the classroom). These analyses describe the classroom context in which the children are nested. Second, we examined change over one school term in children's social adjustment (relationship quality with teachers and friends) and academic adjustment (school engagement, literacy skills) and assessed how adjustment co-varied over time with child externalizing behaviors and was predicted by teacher burnout, classroom quality and aggregate externalizing behaviors. These models were tested with a sample of low-income, ethnically diverse children in kindergarten to grade 3 and their teachers. The children and teachers were assessed three times over one school term. Personal accomplishment co-varied positively with overall classroom quality. Reciprocally, classroom organization co-varied positively with overall teacher burnout. Aggregate externalizing behaviors co-varied positively with depersonalization and negatively with personal accomplishment and overall classroom quality, including emotional support and organization. In turn, teacher burnout interacted with aggregate externalizing behaviors to predict change in child social and academic adjustment. Alternatively, classroom quality interacted with aggregate and child externalizing behaviors to predict change in child social and academic adjustment. PMID:26407833

  5. Jointly Modeling Event Time and Skewed-Longitudinal Data with Missing Response and Mismeasured Covariate for AIDS Studies.

    PubMed

    Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian

    2015-01-01

    In longitudinal studies it is often of interest to investigate how a repeatedly measured marker in time is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of model errors in longitudinal models is a routine assumption, but it may unrealistically obscure important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates, such as CD4 cell count, may often be measured with substantial errors. Moreover, the responses may be subject to nonignorable missingness. Statistical analysis may become dramatically complicated in longitudinal-survival joint models where longitudinal data with skewness, missing values, and measurement errors are observed. In this article, we relax the distributional assumptions for the longitudinal models using a skewed (parametric) distribution and an unspecified (nonparametric) distribution given by a Dirichlet process prior, and address the simultaneous influence of skewness, missingness, covariate measurement error, and the time-to-event process by jointly modeling three components (the response process with missing values, the covariate process with measurement errors, and the time-to-event process) linked through the random effects that characterize the underlying individual-specific longitudinal processes in a Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound in comparison with potential models under various scenarios and different distributional specifications. PMID:24905593

  6. Quantum energy inequalities and local covariance II: categorical formulation

    NASA Astrophysics Data System (ADS)

    Fewster, Christopher J.

    2007-11-01

    We formulate quantum energy inequalities (QEIs) in the framework of locally covariant quantum field theory developed by Brunetti, Fredenhagen and Verch, which is based on notions taken from category theory. This leads to a new viewpoint on the QEIs, and also to the identification of a new structural property of locally covariant quantum field theory, which we call local physical equivalence. Covariant formulations of the numerical range and spectrum of locally covariant fields are given and investigated, and a new algebra of fields is identified, in which fields are treated independently of their realisation on particular spacetimes and manifestly covariant versions of the functional calculus may be formulated.

  7. USE OF HISTORICAL CONTROLS TO ADJUST FOR COVARIATES IN TREND TESTS FOR BINARY DATA. (R824757)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  8. Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2011-01-01

    This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…

  9. An adaptive distance measure for use with nonparametric models

    SciTech Connect

    Garvey, D. R.; Hines, J. W.

    2006-07-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' that constructs a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
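
    A minimal sketch of the adaptive idea described above: channels of the query that fall outside the training range are dropped from the distance calculation, but the kernel-weighted reconstruction still returns corrected values for every channel, including the faulty one. The kernel, bandwidth, and data are illustrative, not the paper's exact formulation.

```python
import numpy as np

def aakr_predict(query, exemplars, bandwidth=1.0):
    """Auto-associative kernel regression with an 'adaptive' Euclidean
    distance: query channels outside the training range (a signature of
    sensor drift or failure) are excluded from the distance calculation,
    following the idea in the abstract."""
    lo, hi = exemplars.min(axis=0), exemplars.max(axis=0)
    ok = (query >= lo) & (query <= hi)          # trustworthy channels only
    d2 = np.sum((exemplars[:, ok] - query[ok]) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))
    # corrected estimate of *all* channels, including the faulty one
    return w @ exemplars / w.sum()

rng = np.random.default_rng(6)
t = rng.uniform(0, 2 * np.pi, size=500)
exemplars = np.column_stack([np.sin(t), np.cos(t), np.sin(t) + np.cos(t)])
exemplars += 0.01 * rng.normal(size=exemplars.shape)

truth = np.array([np.sin(1.0), np.cos(1.0), np.sin(1.0) + np.cos(1.0)])
faulty = truth.copy()
faulty[2] += 5.0           # drifted sensor, far outside the training range

print("reconstruction:", aakr_predict(faulty, exemplars))
print("truth:         ", truth)
```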

  10. Coupled nucleotide covariations reveal dynamic RNA interaction patterns.

    PubMed Central

    Gultyaev, A P; Franch, T; Gerdes, K

    2000-01-01

    Evolutionarily conserved structures in related RNA molecules contain coordinated variations (covariations) of paired nucleotides. Analysis of covariations is a very powerful approach to deduce phylogenetically conserved (i.e., functional) conformations, including tertiary interactions. Here we discuss conserved RNA folding pathways that are revealed by covariation patterns. In such pathways, structural requirements for alternative pairings cause some nucleotides to covary with two different partners. Such "coupled" covariations between three or more nucleotides were found in various types of RNAs. The analysis of coupled covariations can unravel important features of RNA folding dynamics and improve phylogeny reconstruction in some cases. Importantly, it is necessary to distinguish between multiple covariations determined by mutually exclusive structures and those determined by tertiary contacts. PMID:11105748

  11. Computations of adjusted rates and lifetime risks from occupational cohort data: a program package using FORTRAN and GLIM.

    PubMed

    Zhou, S Y; Mazumdar, S; Redmond, C K; Dong, M H; Costantino, J P

    1991-02-01

    A program package using FORTRAN and GLIM is presented to compute lifetime risks of dying from a particular cause of death for a worker subjected to specific risk exposures using death rates adjusted for selected covariates (risk factors). Calculations of the exposure index and adjusted rates depend on several commonly used procedures. Tests of homogeneity and trend for adjusted rates are provided. Lifetime risks are calculated in two different ways: adjusting or ignoring competing causes of death.

  12. Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data.

    PubMed

    Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M

    2006-11-01

    The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
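
    The proposed model is cheap to build: a diagonal spatial factor holding per-sensor variances and a Toeplitz temporal factor whose entries depend only on lag, combined as a Kronecker product. A sketch with illustrative numbers (an AR(1)-like lag decay stands in for an autocovariance estimated from the prestimulus baseline):

```python
import numpy as np
from scipy.linalg import toeplitz

# Spatiotemporal noise covariance = D (diagonal spatial) Kronecker
# T (Toeplitz temporal). Both factors are estimable from a short averaged
# prestimulus baseline; values here are illustrative.
n_sensors, n_times = 4, 6

sensor_var = np.array([1.0, 1.5, 0.8, 1.2])    # per-sensor variances
D = np.diag(sensor_var)

# Toeplitz temporal correlation: depends only on lag. An AR(1)-like decay
# stands in for an estimated baseline autocovariance.
lags = 0.7 ** np.arange(n_times)
T = toeplitz(lags)

C = np.kron(D, T)      # full (n_sensors*n_times) spatiotemporal covariance
print(C.shape)         # (24, 24)
```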

  13. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  15. Covariant constraints in ghost free massive gravity

    SciTech Connect

    Deffayet, C.; Mourad, J.; Zahariade, G. E-mail: mourad@apc.univ-paris7.fr

    2013-01-01

    We show that the reformulation of the de Rham-Gabadadze-Tolley massive gravity theory using vielbeins leads to a very simple and covariant way to count constraints, and hence degrees of freedom. Our method singles out a subset of theories, in the de Rham-Gabadadze-Tolley family, where an extra constraint, needed to eliminate the Boulware-Deser ghost, is easily seen to appear. As a side result, we also introduce a new method, different from the Stuckelberg trick, to extract kinetic terms for the polarizations propagating in addition to those of the massless graviton.

  16. Covariant harmonic oscillators and coupled harmonic oscillators

    NASA Technical Reports Server (NTRS)

    Han, Daesoo; Kim, Young S.; Noz, Marilyn E.

    1995-01-01

    It is shown that the system of two coupled harmonic oscillators shares the basic symmetry properties with the covariant harmonic oscillator formalism which provides a concise description of the basic features of relativistic hadronic features observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.

  17. Boost covariant gluon distributions in large nuclei

    NASA Astrophysics Data System (ADS)

    McLerran, Larry; Venugopalan, Raju

    1998-04-01

    It has been shown recently that there exist analytical solutions of the Yang-Mills equations for non-Abelian Weizsäcker-Williams fields which describe the distribution of gluons in large nuclei at small x. These solutions however depend on the color charge distribution at large rapidities. We here construct a model of the color charge distribution of partons in the fragmentation region and use it to compute the boost covariant momentum distributions of wee gluons. The phenomenological applications of our results are discussed.

  18. Cosmology of a covariant Galilean field.

    PubMed

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.

  19. EVALUATION OF TUNGSTEN ISOTOPES IN THE FAST NEUTRON RANGE INCLUDING CROSS-SECTION COVARIANCE ESTIMATION.

    SciTech Connect

    CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.

    2007-04-22

    New evaluations for the tungsten isotopes {sup 180,182,183,184,186}W in the neutron energy range up to 60 MeV were produced. In the resonance range only minor adjustments to the resonance parameters were made due to a lack of adequate experimental data. Evaluations in the fast energy region were based on nuclear model calculations using the EMPIRE-2.19 code. Recently derived dispersive coupled-channel optical model potentials for W and Ta isotopes were instrumental in achieving a very good description of the available microscopic cross-section database. Model covariance data were generated with the Monte Carlo technique to produce a prior estimate of the covariance matrix. Experimental data were introduced through the GANDR system. The evaluated files were tested on selected fusion neutronics benchmarks and showed marked improvement compared to other existing evaluations.

  20. Lasso adjustments of treatment effect estimates in randomized experiments

    PubMed Central

    Bloniarz, Adam; Liu, Hanzhong; Zhang, Cun-Hui; Sekhon, Jasjeet S.; Yu, Bin

    2016-01-01

    We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman–Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS. PMID:27382153
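
    A sketch of the Lasso-for-selection, OLS-for-estimation variant highlighted in the abstract, on simulated data with the number of covariates close to the number of observations; cross-validated Lasso stands in here for the paper's smoothing-parameter rule, which combines Lasso and OLS performance.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(7)

# Randomized experiment with many covariates relative to n.
n, p, tau = 200, 150, 1.0
X = rng.normal(size=(n, p))
treat = rng.binomial(1, 0.5, size=n)
y = tau * treat + X[:, :5] @ np.array([2, -1, 1, 0.5, -0.5]) + rng.normal(size=n)

# Unadjusted difference-of-means estimate.
dm = y[treat == 1].mean() - y[treat == 0].mean()

# Lasso for *selection* of prognostic covariates, then OLS with the
# treatment indicator for *estimation* (a sketch of the variant the
# abstract reports as performing particularly well, not the authors' code).
sel = np.flatnonzero(LassoCV(cv=5, random_state=0).fit(X, y).coef_)
design = np.column_stack([treat, X[:, sel]])
tau_hat = LinearRegression().fit(design, y).coef_[0]

print(f"difference of means: {dm:.3f}, Lasso+OLS adjusted: {tau_hat:.3f}")
```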

  2. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including

  3. Marital, reproductive, and educational behaviors covary with life expectancy.

    PubMed

    Krupp, Daniel Brian

    2012-12-01

    Theories of "life history evolution" suggest that individuals might adjust the timing of marriage and reproduction, as well as their propensity to terminate a marriage or pregnancy and invest in skill development, in response to indicators of the locally prevailing level of life expectancy. In particular, such theories generate the hypothesis that foreshortened time horizons lead to hastened reproduction and marriage whereas lengthier time horizons increase the likelihood of reproductive and marital termination and lead to greater investment in education. Here, I show that the scheduling and occurrence of marital and reproductive behavior (including both initiation and termination), as well as levels of educational attainment and investment, covary with life expectancy, even after controlling for the effects of affluence. In analyses of variation in marital, reproductive, and educational behaviors at two jurisdictional levels in Canada, life expectancy was positively correlated with patterns of age-specific fertility, age at first marriage, divorce, abortion, conferral of high school and higher education degrees (with the exception of the trades) and mean number of years of schooling. The large and highly consistent relationships observed between life expectancy and the behaviors under investigation suggest that these associations may be mediated by individual "perceptions" of life expectancy, though more research is needed before conclusions can be firmly reached. PMID:22484517

  5. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings, empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. It has been reported, for example, that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having such a structure lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
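
    The classical-problem setting is easy to simulate: build a factor-structured 'true' covariance, estimate it from a short return history, and compare minimum-variance portfolios computed from the two matrices under the true risk. Everything below is illustrative; the 5-15% order of magnitude quoted above depends on portfolio size and sample length.

```python
import numpy as np

rng = np.random.default_rng(8)

def min_var_weights(cov):
    """Minimum-variance portfolio under the budget constraint w'1 = 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# 'True' covariance: one strong market factor plus idiosyncratic noise.
n_assets, t_obs = 50, 100
beta = rng.normal(1.0, 0.3, size=n_assets)
true_cov = np.outer(beta, beta) * 0.04 + np.diag(rng.uniform(0.01, 0.05, n_assets))

# Empirical covariance from a short return history (T only 2x N), the
# noisy-estimate regime the abstract analyzes.
returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=t_obs)
emp_cov = np.cov(returns, rowvar=False)

w_true = min_var_weights(true_cov)
w_emp = min_var_weights(emp_cov)

# Risk penalty of optimizing on the noisy matrix, measured under the truth.
var_true = w_true @ true_cov @ w_true
var_emp = w_emp @ true_cov @ w_emp
print(f"variance inflation from noise: {100 * (var_emp / var_true - 1):.0f}%")
```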

  6. Covariant constitutive relations and relativistic inhomogeneous plasmas

    SciTech Connect

    Gratus, J.; Tucker, R. W.

    2011-04-15

    The notion of a 2-point susceptibility kernel used to describe linear electromagnetic responses of dispersive continuous media in nonrelativistic phenomena is generalized to accommodate the constraints required of a causal formulation in spacetimes with background gravitational fields. In particular the concepts of spatial material inhomogeneity and temporal nonstationarity are formulated within a fully covariant spacetime framework. This framework is illustrated by recasting the Maxwell-Vlasov equations for a collisionless plasma in a form that exposes a 2-point electromagnetic susceptibility kernel in spacetime. This permits the establishment of a perturbative scheme for nonstationary inhomogeneous plasma configurations. Explicit formulae for the perturbed kernel are derived in both the presence and absence of gravitation using the general solution to the relativistic equations of motion of the plasma constituents. In the absence of gravitation this permits an analysis of collisionless damping in terms of a system of integral equations that reduce to standard Landau damping of Langmuir modes when the perturbation refers to a homogeneous stationary plasma configuration. It is concluded that constitutive modeling in terms of a 2-point susceptibility kernel in a covariant spacetime framework offers a natural extension of standard nonrelativistic descriptions of simple media and that its use for describing linear responses of more general dispersive media has wide applicability in relativistic plasma modeling.

  7. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right-angle or sighting distances. The probability of observing a point given its right-angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right-angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right-angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
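
    A hedged sketch of the right-angle-distance route: with g(0) = 1, animal density can be estimated as D = n f(0) / (2L), where f is the density of the observed perpendicular distances and L the transect length. Below, f(0) is estimated nonparametrically with a reflected kernel density estimate (one possible nonparametric choice, not the paper's specific estimator):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)

# Simulated right-angle (perpendicular) distances under half-normal
# detection, which satisfies g(0) = 1 as the theory requires.
sigma, L, n = 20.0, 100.0, 150     # detection scale, line length, sightings
y = np.abs(rng.normal(0, sigma, size=n))

# Nonparametric density estimate of y, reflected about 0 so the kernel
# estimator is not biased downward at the boundary.
kde = gaussian_kde(np.concatenate([y, -y]))
f0 = 2.0 * kde(0.0)[0]             # fold the reflected mass back

# Classical line-transect estimator: D = n * f(0) / (2 L).
D_hat = n * f0 / (2.0 * L)
print(f"estimated density: {D_hat:.4f} animals per unit area")
```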

  8. Non-parametric seismic hazard analysis in the presence of incomplete data

    NASA Astrophysics Data System (ADS)

    Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush

    2016-07-01

    The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of the earthquake magnitude distribution, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study aims to introduce an imputation procedure for completing earthquake catalog data that will allow the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate the earthquake magnitude distribution in Tehran, the capital city of Iran.

  9. Impact of the 235U Covariance Data in Benchmark Calculations

    SciTech Connect

    Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.

  10. Methods and Issues for the Combined Use of Integral Experiments and Covariance Data: Results of a NEA International Collaborative Study

    SciTech Connect

    Palmiotti, Giuseppe; Salvatores, Massimo

    2014-04-01

    The Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD) established a Subgroup (called "Subgroup 33") in 2009 on "Methods and issues for the combined use of integral experiments and covariance data." The first stage was devoted to producing the description of different adjustment methodologies and assessing their merits. A detailed document related to this first stage has been issued. Nine leading organizations (often with a long and recognized expertise in the field) have contributed: ANL, CEA, INL, IPPE, JAEA, JSI, NRG, IRSN and ORNL. In the second stage a practical benchmark exercise was defined in order to test the reliability of the nuclear data adjustment methodology. A comparison of the results obtained by the participants and major lessons learned in the exercise are discussed in the present paper, which summarizes individual contributions that often include several original developments not reported separately. The paper provides the analysis of the most important results of the adjustment of the main nuclear data of 11 major isotopes in a 33-group energy structure. This benchmark exercise was based on a set of 20 well-defined integral parameters from 7 fast assembly experiments. The exercise showed that, using a common shared set of integral experiments but different starting evaluated libraries and/or different covariance matrices, there is a good convergence of trends for adjustments. Moreover, a significant reduction of the original uncertainties is often observed. Using the a posteriori covariance data, there is a strong reduction of the uncertainties of integral parameters for reference reactor designs, mainly due to the new correlations in the a posteriori covariance matrix. Furthermore, criteria have been proposed and applied to verify the consistency of differential and integral data used in the adjustment. Finally, recommendations are given.

  11. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…
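
    Since the record is cut off, only the generic idea can be illustrated: the nonparametric bootstrap resamples observations with replacement and recomputes a covariance-based statistic, sidestepping distributional assumptions. A hypothetical sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                      # stand-in for real multivariate data

def stat(sample):
    # Example covariance-based statistic: largest eigenvalue of the sample covariance.
    return np.linalg.eigvalsh(np.cov(sample, rowvar=False))[-1]

boot = np.array([
    stat(X[rng.integers(0, len(X), size=len(X))])  # resample rows with replacement
    for _ in range(2000)
])
print("bootstrap SE of the largest eigenvalue:", boot.std(ddof=1))
```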

  12. The impact of covariate measurement error on risk prediction.

    PubMed

    Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna

    2015-07-10

    In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses' Health Study. PMID:25865315
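
    Finding (ii) is easy to reproduce in a toy simulation: when the outcome is generated from the true covariate X but risk is scored with a noisy surrogate W = X + U, the AUC drops. This sketch assumes a simple logistic risk model (not the paper's setting) with made-up parameters:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 50_000
X = rng.normal(size=n)
W = X + rng.normal(scale=1.0, size=n)      # homoscedastic measurement error
p = 1 / (1 + np.exp(-(-2 + 1.5 * X)))      # true risk depends on X, not W
y = rng.binomial(1, p)

print("AUC with true covariate:   ", round(roc_auc_score(y, X), 3))
print("AUC with error-prone proxy:", round(roc_auc_score(y, W), 3))
```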

  13. Variable selection in covariate dependent random partition models: an application to urinary tract infection.

    PubMed

    Barcella, William; Iorio, Maria De; Baio, Gianluca; Malone-Lee, James

    2016-04-15

    Lower urinary tract symptoms can indicate the presence of urinary tract infection (UTI), a condition that, if it becomes chronic, requires expensive and time-consuming care and leads to reduced quality of life. Detecting the presence and severity of an infection from the earliest symptoms is therefore highly valuable. Typically, white blood cell (WBC) count measured in a urine sample is used to assess UTI. We consider clinical data from 1341 patients at their first visit in which UTI (i.e. WBC ≥ 1) is diagnosed. In addition, for each patient, a clinical profile of 34 symptoms was recorded. In this paper, we propose a Bayesian nonparametric regression model based on the Dirichlet process prior aimed at providing the clinicians with a meaningful clustering of the patients based on both the WBC (response variable) and possible patterns within the symptom profiles (covariates). This is achieved by assuming a probability model for the symptoms as well as for the response variable. To identify the symptoms most associated with UTI, we specify a spike and slab base measure for the regression coefficients; this induces dependence of symptom selection on cluster assignment. Posterior inference is performed through Markov chain Monte Carlo methods. PMID:26536840

  14. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is not known a priori to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  15. Nonparametric Bayesian Estimation of Censored Counter Intensity from the Indicator Data

    NASA Astrophysics Data System (ADS)

    Barat, Éric; Dautremer, Thomas; Trigano, Thomas

    2006-11-01

    The nonparametric Bayesian estimation of a nonhomogeneous Poisson process intensity in the presence of Type-I or Type-II dead times is addressed in the framework of multiplicative intensity counting processes. In addition to the counting process, the idle/dead time (on/off) process is observed. Inference is based on the partial likelihood, either for non-informative (Type-I) or for informative (Type-II) censoring. A Pólya tree process with a suitable partition construction is proposed as the nonparametric prior for the normalized multiplicative intensity. Performance is illustrated on both types of censored counters.

  16. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  17. Bayesian nonparametric regression and density estimation using integrated nested Laplace approximations.

    PubMed

    Wang, Xiao-Feng

    2013-06-25

    Integrated nested Laplace approximation (INLA) is a recently proposed approximate Bayesian approach to fitting structured additive regression models with latent Gaussian fields. As an alternative to Markov chain Monte Carlo techniques, INLA provides accurate approximations of posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be handled using INLA. Simulated examples and R functions are presented to illustrate the use of the methods. Some potential applications of INLA are also discussed in the paper.

  18. Robust nonparametric estimation of monotone regression functions with interval-censored observations.

    PubMed

    Zhang, Ying; Cheng, Gang; Tu, Wanzhu

    2016-09-01

    Nonparametric estimation of monotone regression functions is a classical problem of practical importance. Robust estimation of monotone regression functions in situations involving interval-censored data is a challenging yet unresolved problem. Herein, we propose a nonparametric estimation method based on the principle of isotonic regression. Using empirical process theory, we show that the proposed estimator is asymptotically consistent under a specific metric. We further conduct a simulation study to evaluate the performance of the estimator in finite sample situations. As an illustration, we use the proposed method to estimate the mean body weight functions in a group of adolescents after they reach pubertal growth spurt. PMID:26757488
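
    For complete (uncensored) observations, the isotonic-regression building block the authors start from is the pool-adjacent-violators algorithm, available in scikit-learn. A minimal sketch with simulated age-weight data, which does not address the paper's interval-censoring extension:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
age = np.sort(rng.uniform(12, 18, size=300))
weight = 35 + 4 * (age - 12) + rng.normal(scale=5, size=300)  # increasing trend + noise

iso = IsotonicRegression(increasing=True)
fitted = iso.fit_transform(age, weight)   # pool-adjacent-violators solution: monotone fit
print(fitted[:5])
```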

  19. Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes

    PubMed Central

    Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D.

    2016-01-01

    This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. PMID:26993062

  20. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    PubMed

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. PMID:24354490

  1. Covariates of intravenous paracetamol pharmacokinetics in adults

    PubMed Central

    2014-01-01

    Background Pharmacokinetic estimates for intravenous paracetamol differ to some extent across individual adult cohorts, and understanding the covariates of these differences may guide dose individualization. In order to assess covariate effects on intravenous paracetamol disposition in adults, pharmacokinetic data from discrete studies were pooled. Methods This pooled analysis was based on 7 studies, resulting in 2755 time-concentration observations in 189 adults (mean age 46 SD 23 years; weight 73 SD 13 kg) given intravenous paracetamol. The effects of size, age, pregnancy and other clinical settings (intensive care, high dependency, orthopaedic or abdominal surgery) on clearance and volume of distribution were explored using non-linear mixed effects models. Results Paracetamol disposition was best described using normal fat mass (NFM) with allometric scaling as a size descriptor. A three-compartment linear disposition model revealed that the population parameter estimates (between-subject variability, %) were central volume (V1) 24.6 (55.5%) L/70 kg with peripheral volumes of distribution V2 23.1 (49.6%) L/70 kg and V3 30.6 (78.9%) L/70 kg. Clearance (CL) was 16.7 (24.6%) L/h/70 kg and inter-compartment clearances were Q2 67.3 (25.7%) L/h/70 kg and Q3 2.04 (71.3%) L/h/70 kg. Clearance and V2 decreased only slightly with age. Sex differences in clearance were minor and of no significance. Clearance, relative to median values, was increased during pregnancy (FPREG = 1.14) and decreased during abdominal surgery (FABDCL = 0.715). Patients undergoing orthopaedic surgery had a reduced V2 (FORTHOV = 0.649), while those in intensive care had an increased V2 (FICV = 1.51). Conclusions Size and age are important covariates for paracetamol pharmacokinetics, explaining approximately 40% of clearance and V2 variability. Dose individualization in adult subpopulations would achieve little benefit in the scenarios explored. PMID:25342929
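
    As a back-of-the-envelope illustration of how such population estimates are used, the sketch below scales the reported standard clearance allometrically with total body weight and applies the reported pregnancy multiplier. The 0.75 exponent is the conventional allometric choice for clearance, and total weight stands in for the paper's normal-fat-mass descriptor, so treat the numbers as indicative only:

```python
# Population values taken from the abstract; scaling choices are assumptions.
CL_STD = 16.7        # clearance, L/h per 70 kg
F_PREG = 1.14        # reported pregnancy multiplier on clearance

def clearance(weight_kg, pregnant=False):
    cl = CL_STD * (weight_kg / 70.0) ** 0.75   # conventional allometric exponent for CL
    return cl * (F_PREG if pregnant else 1.0)

print(f"CL for a 60 kg pregnant patient: {clearance(60, pregnant=True):.1f} L/h")
```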

  2. Identifying sources of uncertainty using covariance analysis

    NASA Astrophysics Data System (ADS)

    Hyslop, N. P.; White, W. H.

    2010-12-01

    Atmospheric aerosol monitoring often includes performing multiple analyses on a collected sample. Some common analyses resolve suites of elements or compounds (e.g., spectrometry, chromatography). Concentrations are determined through multi-step processes involving sample collection, physical or chemical analysis, and data reduction. Uncertainties in the individual steps propagate into uncertainty in the calculated concentration. The assumption in most treatments of measurement uncertainty is that errors in the various species concentrations measured in a sample are random and therefore independent of each other. This assumption is often not valid in speciated aerosol data because some errors can be common to multiple species. For example, an error in the sample volume will introduce a common error into all species concentrations determined in the sample, and these errors will correlate with each other. Measurement programs often use paired (collocated) measurements to characterize the random uncertainty in their measurements. Suites of paired measurements provide an opportunity to go beyond the characterization of measurement uncertainties in individual species to examine correlations amongst the measurement uncertainties in multiple species. This additional information can be exploited to distinguish sources of uncertainty that affect all species from those that only affect certain subsets or individual species. Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) program are used to illustrate these ideas. Nine analytes commonly detected in the IMPROVE network were selected for this analysis. The errors in these analytes can be reasonably modeled as multiplicative, and the natural log of the ratio of concentrations measured on the two samplers provides an approximation of the error. Figure 1 shows the covariation of these log ratios among the different analytes for one site. Covariance is strongest amongst the dust elements (Fe, Ca, and
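
    The core idea is easy to emulate: for collocated samplers A and B, log(A/B) approximates the multiplicative measurement error, and its covariance across analytes separates shared error sources from analyte-specific ones. In this hypothetical sketch a common "volume" error is injected into all analytes and reappears in the off-diagonal covariance:

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, analytes = 500, ["Fe", "Ca", "S", "K"]

true = rng.lognormal(mean=1.0, sigma=0.5, size=(n_days, len(analytes)))
# Sampler A: shared volume error (same for all analytes) plus analyte-specific error.
A = true * np.exp(rng.normal(scale=0.05, size=(n_days, 1))
                  + rng.normal(scale=0.03, size=(n_days, len(analytes))))
# Sampler B: independent draws of the same two error types.
B = true * np.exp(rng.normal(scale=0.05, size=(n_days, 1))
                  + rng.normal(scale=0.03, size=(n_days, len(analytes))))

log_ratio = np.log(A / B)                          # approximates the measurement error
print(np.round(np.cov(log_ratio, rowvar=False), 4))  # off-diagonals ≈ shared error variance
```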

  3. Using a DEA Management Tool through a Nonparametric Approach: An Examination of Urban-Rural Effects on Thai School Efficiency

    ERIC Educational Resources Information Center

    Kantabutra, Sangchan

    2009-01-01

    This paper examines urban-rural effects on public upper-secondary school efficiency in northern Thailand. In the study, efficiency was measured by a nonparametric technique, data envelopment analysis (DEA). Urban-rural effects were examined through a Mann-Whitney nonparametric statistical test. Results indicate that urban schools appear to have…
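
    The study's two-step analysis can be sketched as an input-oriented CCR DEA model solved as a linear program, followed by a Mann-Whitney comparison of urban and rural efficiency scores. The formulation below is a standard textbook DEA, not the paper's code, and the data are random placeholders:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import mannwhitneyu

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of school o. X: inputs (n x m), Y: outputs (n x s)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lambda_1..lambda_n]
    A_ub = np.vstack([
        np.c_[-X[o], X.T],                            # sum_j lambda_j x_j <= theta * x_o
        np.c_[np.zeros(Y.shape[1]), -Y.T],            # sum_j lambda_j y_j >= y_o
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]                                   # theta in (0, 1]; 1 means efficient

rng = np.random.default_rng(5)
X = rng.uniform(1, 10, size=(30, 2))                  # e.g., teachers, budget
Y = rng.uniform(1, 10, size=(30, 2))                  # e.g., graduates, test scores
scores = np.array([dea_efficiency(X, Y, o) for o in range(len(X))])
urban, rural = scores[:15], scores[15:]               # hypothetical split
print(mannwhitneyu(urban, rural))
```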

  4. Generation of phase-covariant quantum cloning

    SciTech Connect

    Karimipour, V.; Rezakhani, A.T.

    2002-11-01

    It is known that in phase-covariant quantum cloning, the equatorial states on the Bloch sphere can be cloned with a fidelity higher than the optimal bound established for universal quantum cloning. We generalize this concept to include other states on the Bloch sphere with a definite z component of spin. It is shown that once we know the z component, we can always clone a state with a fidelity higher than the universal value and that of equatorial states. We also make a detailed study of the entanglement properties of the output copies and show that the equatorial states are the only states that give rise to a separable density matrix for the outputs.

  5. Baryon Spectrum Analysis using Covariant Constraint Dynamics

    NASA Astrophysics Data System (ADS)

    Whitney, Joshua; Crater, Horace

    2012-03-01

    The energy spectrum of the baryons is determined by treating each of them as a three-body system with the interacting forces coming from a set of two-body potentials that depend on both the distance between the quarks and the spin and orbital angular momentum coupling terms. The Two-Body Dirac equations of constraint dynamics derived by Crater and Van Alstine, matched with the quasipotential formalism of Todorov as the underlying two-body formalism, are used together with the three-body constraint formalism of Sazdjian to integrate the three two-body equations into a single relativistically covariant three-body equation for the bound-state energies. The results are analyzed and compared to experiment using a best-fit method and several different algorithms, including a gradient approach and a Monte Carlo method. Results for all well-known baryons are presented and compared to experiment, with good accuracy.

  6. Covariant Lyapunov analysis of chaotic Kolmogorov flows.

    PubMed

    Inubushi, Masanobu; Kobayashi, Miki U; Takehiro, Shin-ichi; Yamada, Michio

    2012-01-01

    Hyperbolicity is an important concept in dynamical system theory; however, we know little about the hyperbolicity of concrete physical systems, including fluid motions governed by the Navier-Stokes equations. Here, we study numerically the hyperbolicity of the Navier-Stokes equation on a two-dimensional torus (Kolmogorov flows) using the method of covariant Lyapunov vectors developed by Ginelli et al. [Phys. Rev. Lett. 99, 130601 (2007)]. We calculate the angle between the local stable and unstable manifolds along an orbit of the chaotic solution to evaluate the hyperbolicity. We find that the attractor of chaotic Kolmogorov flows is hyperbolic at small Reynolds numbers, but that smaller angles between the local stable and unstable manifolds are observed at larger Reynolds numbers, and the attractor appears to be nonhyperbolic at certain Reynolds numbers. We also observe some relations between these hyperbolic properties and physical properties such as the time correlation of the vorticity and the energy dissipation rate.

  7. EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.

    SciTech Connect

    HERMAN,M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H. CHO, Y.-S.

    2007-04-22

    The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing the proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large-scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.

  8. Covariant chronogeometry and extreme distances: Elementary particles

    PubMed Central

    Segal, I. E.; Jakobsen, H. P.; Ørsted, B.; Paneitz, S. M.; Speh, B.

    1981-01-01

    We study a variant of elementary particle theory in which Minkowski space, M0, is replaced by a natural alternative, the unique four-dimensional manifold ¯M with comparable properties of causality and symmetry. Free particles are considered to be associated (i) with positive-energy representations in bundles of prescribed spin over ¯M of the group of causality-preserving transformations on ¯M (or its mass-conserving subgroup) and (ii) with corresponding wave equations. In this study these bundles, representations, and equations are detailed, and some of their basic features are developed in the cases of spins 0 and ½. Preliminaries to a general study are included; issues of covariance, unitarity, and positivity of the energy are treated; appropriate quantum numbers are indicated; and possible physical applications are discussed. PMID:16593075

  9. Covariant entropy bound and loop quantum cosmology

    SciTech Connect

    Ashtekar, Abhay; Wilson-Ewing, Edward

    2008-09-15

    We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.

  10. Covariance of Lucky Images: Performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2016-09-01

    The covariance of ground-based Lucky Images (COELI) is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper we analyze how the number of processed frames, the frame quality, the atmospheric conditions and the detection noise affect companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than the host star at angular distances corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.

  11. A covariant treatment of cosmic parallax

    SciTech Connect

    Räsänen, Syksy

    2014-03-01

    The Gaia satellite will soon probe parallax on cosmological distances. Using the covariant formalism and considering the angle between a pair of sources, we find parallax for both spacelike and timelike separation between observation points. Our analysis includes both intrinsic parallax and parallax due to observer motion. We propose a consistency condition that tests the FRW metric using the parallax distance and the angular diameter distance. This test is purely kinematic and relies only on geometrical optics; it is independent of the matter content and its relation to the spacetime geometry. We study perturbations around the FRW model and find that they should be taken into account when analysing observations to determine the parallax distance.

  12. Conformal Killing tensors and covariant Hamiltonian dynamics

    SciTech Connect

    Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.

  13. Covariant density functional theory for magnetic rotation

    NASA Astrophysics Data System (ADS)

    Peng, J.; Meng, J.; Ring, P.; Zhang, S. Q.

    2008-08-01

    The tilted axis cranking formalism is implemented in relativistic mean field (RMF) theory. It is used for a microscopic description of magnetic rotation in the framework of covariant density functional theory. We assume that the rotational axis is in the xz plane and consider systems with the two symmetries P (space reflection) and PyT (a combination of a reflection in the y direction and time reversal). A computer code based on these symmetries is developed, and first applications are discussed for the nucleus Gd142: the rotational band based on the configuration πh11/22⊗νh11/2-2 is investigated in a fully microscopic and self-consistent way. The results are compared with available data, such as spectra and electromagnetic transition ratios B(M1)/B(E2). The relation between rotational velocity and angular momentum is discussed in detail together with the shears mechanism characteristic of magnetic rotation.

  14. Covariant generalization of cosmological perturbation theory

    SciTech Connect

    Enqvist, Kari; Hoegdahl, Janne; Nurmi, Sami; Vernizzi, Filippo

    2007-01-15

    We present an approach to cosmological perturbations based on a covariant perturbative expansion between two worldlines in the real inhomogeneous universe. As an application, at an arbitrary order we define an exact scalar quantity which describes the inhomogeneities in the number of e-folds on uniform density hypersurfaces and which is conserved on all scales for a barotropic ideal fluid. We derive a compact form for its conservation equation at all orders and assign it a simple physical interpretation. To make a comparison with the standard perturbation theory, we develop a method to construct gauge-invariant quantities in a coordinate system at arbitrary order, which we apply to derive the form of the nth order perturbation in the number of e-folds on uniform density hypersurfaces and its exact evolution equation. On large scales, this provides the gauge-invariant expression for the curvature perturbation on uniform density hypersurfaces and its evolution equation at any order.

  15. A covariance analysis algorithm for interconnected systems

    NASA Technical Reports Server (NTRS)

    Cheng, Victor H. L.; Curley, Robert D.; Lin, Ching-An

    1987-01-01

    A covariance analysis algorithm for the propagation of signal statistics in arbitrarily interconnected nonlinear systems is presented and applied to six-degree-of-freedom systems. The algorithm uses statistical linearization theory to linearize the nonlinear subsystems, and the resulting linearized subsystems are considered in the original interconnection framework for propagation of the signal statistics. Some nonlinearities commonly encountered in six-degree-of-freedom space-vehicle models are discussed to illustrate the limitations of this method, along with problems not encountered in standard deterministic simulation analysis. The performance of the algorithm is demonstrated numerically by comparing its results with Monte Carlo analysis results, both applied to a simple two-dimensional space-intercept problem.
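
    Once each subsystem has been statistically linearized, propagating the signal statistics reduces to the familiar linear covariance recursion P' = A P A^T + Q. A minimal discrete-time sketch with two illustrative linearized subsystems in cascade (all matrices invented for the demo):

```python
import numpy as np

A1 = np.array([[1.0, 0.1], [0.0, 1.0]])   # linearized subsystem 1
A2 = np.array([[0.9, 0.0], [0.1, 0.8]])   # linearized subsystem 2
A = A2 @ A1                                # interconnection: subsystem 2 follows subsystem 1
Q = 0.01 * np.eye(2)                       # process-noise covariance

P = np.eye(2)                              # initial state covariance
for _ in range(100):                       # propagate the signal statistics over 100 steps
    P = A @ P @ A.T + Q                    # linear covariance propagation rule
print(np.round(P, 3))                      # converges because A is stable here
```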

  16. Multi-atom entanglement engineering and phase-covariant cloning via adiabatic passage

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao

    2005-05-01

    A scheme is proposed for the generation of entangled states for multiple atoms trapped in an optical cavity via adiabatic passage. In the scheme, both the atomic system and the cavity have no probability of being excited. Thus both the atomic spontaneous emission and the cavity decay are suppressed, which makes our scheme very robust against decoherence. Taking advantage of the adiabatic process, the scheme does not require accurate adjustment of the interaction time. Furthermore, the scheme is scalable in principle. The idea can also be used for realizing phase-covariant cloning.

  17. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subjected to many simulations and flight tests. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.

  18. A comparison of United States and United Kingdom EQ-5D health states valuations using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer A; O'Hagan, Anthony; Brazier, John E

    2010-07-10

    Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data set is the US and UK EQ-5D valuation studies, where a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population from each country using the time trade-off technique. We estimate a utility function across both countries which explicitly accounts for the differences between them, and is estimated using the data from both countries. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field. PMID:20209481

  19. A Product Partition Model With Regression on Covariates

    PubMed Central

    Müller, Peter; Quintana, Fernando; Rosner, Gary L.

    2011-01-01

    We propose a probability model for random partitions in the presence of covariates. In other words, we develop a model-based clustering algorithm that exploits available covariates. The motivating application is predicting time to progression for patients in a breast cancer trial. We proceed by reporting a weighted average of the responses of clusters of earlier patients. The weights should be determined by the similarity of the new patient’s covariate with the covariates of patients in each cluster. We achieve the desired inference by defining a random partition model that includes a regression on covariates. Patients with similar covariates are a priori more likely to be clustered together. Posterior predictive inference in this model formalizes the desired prediction. We build on product partition models (PPM). We define an extension of the PPM to include a regression on covariates by including in the cohesion function a new factor that increases the probability of experimental units with similar covariates to be included in the same cluster. We discuss implementations suitable for any combination of continuous, categorical, count, and ordinal covariates. An implementation of the proposed model as R-package is available for download. PMID:21566678

  1. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGES

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
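
    The delete-one jackknife that anchors the paper is simple to state: recompute the statistic with each sub-sample left out, then rescale the scatter of the leave-one-out estimates. A generic sketch for a vector-valued statistic (simulated data, not a shear catalogue):

```python
import numpy as np

rng = np.random.default_rng(6)
n_sub, dim = 50, 3
x = rng.normal(size=(n_sub, dim))             # one statistic vector per sub-sample

loo = np.array([np.delete(x, i, axis=0).mean(axis=0) for i in range(n_sub)])
dev = loo - loo.mean(axis=0)                  # scatter of the leave-one-out estimates
C_jack = (n_sub - 1) / n_sub * (dev.T @ dev)  # delete-one jackknife covariance
print(np.round(C_jack, 4))
```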

  2. Testing the Hypothesis of a Homoscedastic Error Term in Simple, Nonparametric Regression

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    2006-01-01

    Consider the nonparametric regression model Y = m(X) + τ(X)ε, where X and ε are independent random variables, ε has a median of zero and variance σ², τ is some unknown function used to model heteroscedasticity, and m(X) is an unknown function reflecting some conditional measure of location associated…

  3. Do Former College Athletes Earn More at Work? A Nonparametric Assessment

    ERIC Educational Resources Information Center

    Henderson, Daniel J.; Olbrecht, Alexandre; Polachek, Solomon W.

    2006-01-01

    This paper investigates how students' collegiate athletic participation affects their subsequent labor market success. By using newly developed techniques in nonparametric regression, it shows that on average former college athletes earn a wage premium. However, the premium is not uniform, but skewed so that more than half the athletes actually…

  4. A Comparative Study of Test Data Dimensionality Assessment Procedures Under Nonparametric IRT Models

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; van der Ark, L. Andries; Sijtsma, Klaas

    2004-01-01

    In this article, an overview of nonparametric item response theory methods for determining the dimensionality of item response data is provided. Four methods were considered: MSP, DETECT, HCA/CCPROX, and DIMTEST. First, the methods were compared theoretically. Second, a simulation study was done to compare the effectiveness of MSP, DETECT, and…

  5. A Comparison of Parametric and Nonparametric Approaches to Item Analysis for Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Dunbar, Stephen B.; Kolen, Michael J.

    2004-01-01

    This study compares the parametric multiple-choice model and the nonparametric kernel smoothing approach to estimating option characteristic functions (OCCs) using an empirical criterion, the stability of curve estimates over occasions that represents random error. The potential utility of graphical OCCs in item analysis was illustrated with…

  6. The Impact of Ignoring the Level of Nesting Structure in Nonparametric Multilevel Latent Class Models

    ERIC Educational Resources Information Center

    Park, Jungkyu; Yu, Hsiu-Ting

    2016-01-01

    The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) that is used to analyze nested data structures. The nonparametric version of an MLCM assumes a discrete latent variable at a higher-level nesting structure to account for the dependency among observations nested within a higher-level unit. In…

  7. Measuring Youth Development: A Nonparametric Cross-Country "Youth Welfare Index"

    ERIC Educational Resources Information Center

    Chaaban, Jad M.

    2009-01-01

    This paper develops an empirical methodology for the construction of a synthetic multi-dimensional cross-country comparison of the performance of governments around the world in improving the livelihood of their younger population. The devised "Youth Welfare Index" is based on the nonparametric Data Envelopment Analysis (DEA) methodology and…

  8. Nonparametric Hammerstein model based model predictive control for heart rate regulation.

    PubMed

    Su, Steven W; Huang, Shoudong; Wang, Lu; Celler, Branko G; Savkin, Andrey V; Guo, Ying; Cheng, Teddy

    2007-01-01

    This paper proposes a novel nonparametric-model-based model predictive control approach for the regulation of heart rate during treadmill exercise. As the model structure of the human cardiovascular system is often hard to determine, nonparametric modelling is a more realistic way to describe its complex behaviour. This paper presents a new nonparametric Hammerstein model identification approach for heart rate response modelling. Based on pseudo-random binary sequence experiment data, we decouple the identification of the linear dynamic part and the input nonlinearity of the Hammerstein system. Correlation analysis is applied to acquire the step response of the linear dynamic component. Support vector regression is adopted to obtain a nonparametric description of the inverse of the input static nonlinearity, which is used to form an approximate linear model of the Hammerstein system. Based on the established model, a model predictive controller under predefined speed and acceleration constraints is designed to achieve safer treadmill exercise. Simulation results show that the proposed control algorithm can achieve optimal heart rate tracking performance under the predefined constraints.
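
    The step that inverts the input static nonlinearity can be sketched with scikit-learn's SVR: fit the map from the nonlinearity's output back to its input, then use it as a pre-compensator so the remaining cascade is approximately linear. The tanh nonlinearity below is an assumed stand-in for the true cardiovascular one:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
u = rng.uniform(-2, 2, size=400)                        # excitation input
v = np.tanh(u) + rng.normal(scale=0.02, size=u.size)    # static nonlinearity output (assumed tanh)

# Learn the inverse map v -> u nonparametrically.
inv = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(v.reshape(-1, 1), u)

# Pre-compensating with inv(.) approximately linearizes the Hammerstein cascade.
print(inv.predict(np.array([[0.5]])))                   # ≈ arctanh(0.5) ≈ 0.549
```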

  9. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…

  10. Power Analysis of Selected Parametric and Nonparametric Tests for Heterogeneous Variances in Non-Normal Distributions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…

  11. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.

    PubMed

    Fan, Jianqing; Feng, Yang; Song, Rui

    2011-06-01

    A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under general nonparametric models and some mild technical conditions, the proposed independence screening methods are shown to enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, data-driven thresholding and iterative nonparametric independence screening (INIS) are also proposed to enhance the finite-sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.

  12. Combining parametric, semi-parametric, and non-parametric survival models with stacked survival models.

    PubMed

    Wey, Andrew; Connett, John; Rudser, Kyle

    2015-07-01

    For estimating conditional survival functions, non-parametric estimators can be preferred to parametric and semi-parametric estimators due to relaxed assumptions that enable robust estimation. Yet, even when misspecified, parametric and semi-parametric estimators can possess better operating characteristics in small sample sizes due to smaller variance than non-parametric estimators. Fundamentally, this is a bias-variance trade-off situation in that the sample size is not large enough to take advantage of the low bias of non-parametric estimation. Stacked survival models estimate an optimally weighted combination of models that can span parametric, semi-parametric, and non-parametric models by minimizing prediction error. An extensive simulation study demonstrates that stacked survival models consistently perform well across a wide range of scenarios by adaptively balancing the strengths and weaknesses of individual candidate survival models. In addition, stacked survival models perform as well as or better than the model selected through cross-validation. Finally, stacked survival models are applied to a well-known German breast cancer study.

  13. A Unified Nonparametric IRT Model for "d"-Dimensional Psychological Test Data ("d"-Isop)

    ERIC Educational Resources Information Center

    Scheiblechner, Hartmann

    2007-01-01

    The (univariate) isotonic psychometric (ISOP) model (Scheiblechner, 1995) is a nonparametric IRT model for dichotomous and polytomous (rating scale) psychological test data. A weak subject independence axiom W1 postulates that the subjects are ordered in the same way except for ties (i.e., similarly or isotonically) by all items of a psychological…

  14. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  15. Comparison of nonparametric trend analysis according to the types of time series data

    NASA Astrophysics Data System (ADS)

    Heo, J.; Shin, H.; Kim, T.; Jang, H.; Kim, H.

    2013-12-01

    In the analysis of hydrological data, determining whether an overall trend due to climate change exists has been a major concern and an important part of the design and management of water resources for the future. The existence of a trend can be identified by plotting the hydrologic time series, but statistical methods are more accurate and objective tools for trend analysis. Statistical methods are divided into parametric and nonparametric methods. A parametric method requires the population to be assumed normally distributed; however, most hydrological data tend to follow non-normal distributions, so nonparametric methods are considered more suitable than parametric ones. In this study, simulations were performed with different types of time series data, and four nonparametric methods generally used in trend analysis (the Mann-Kendall test, Spearman's rho test, SEN test, and Hotelling-Pabst test) were applied to assess the power of each trend analysis. The time series data were classified into three types: Trend+Random, Trend+Cycle+Random, and Trend+Non-random. In order to add a change to the data, 11 different slopes were overlaid in each simulation. As a result, the nonparametric methods have almost similar power for the Trend+Random and Trend+Non-random series. On the other hand, the Mann-Kendall and SEN tests have slightly higher power than the Spearman's rho and Hotelling-Pabst tests for the Trend+Cycle+Random series.
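
    Of the four tests compared, the Mann-Kendall test is the easiest to write down: count concordant minus discordant pairs over all pairs of time points and apply a normal approximation. A plain implementation without tie or autocorrelation corrections, exercised on a simulated Trend+Random series:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s)      # continuity correction
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

t = np.arange(100)
series = 0.02 * t + np.random.default_rng(8).normal(size=100)  # Trend+Random
print(mann_kendall(series))
```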

  16. Nonparametric estimation of population density for line transect sampling using Fourier series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the Fourier series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
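
    A cosine-series version of the estimator (the usual form in line-transect work, where the density estimate at zero distance drives the abundance estimate) can be sketched as follows; the truncation distance, number of terms and simulated sighting distances are all illustrative:

```python
import numpy as np

def fourier_density(x, w, m):
    """Cosine-series density estimate on [0, w] with m terms."""
    x = np.asarray(x, dtype=float)
    a = np.array([2.0 / (len(x) * w) * np.cos(k * np.pi * x / w).sum()
                  for k in range(1, m + 1)])
    def f(t):
        t = np.asarray(t, dtype=float)
        return 1.0 / w + sum(a[k - 1] * np.cos(k * np.pi * t / w)
                             for k in range(1, m + 1))
    return f

rng = np.random.default_rng(9)
dists = np.abs(rng.normal(scale=20, size=300))   # simulated right-angle sighting distances
dists = dists[dists <= 60]                       # truncate at w = 60
f_hat = fourier_density(dists, w=60.0, m=4)
print(f_hat(0.0))                                # f(0) is the key quantity in line-transect estimation
```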

  17. Does Private Tutoring Work? The Effectiveness of Private Tutoring: A Nonparametric Bounds Analysis

    ERIC Educational Resources Information Center

    Hof, Stefanie

    2014-01-01

    Private tutoring has become popular throughout the world. However, evidence for the effect of private tutoring on students' academic outcome is inconclusive; therefore, this paper presents an alternative framework: a nonparametric bounds method. The present examination uses, for the first time, a large representative data-set in a European…

  18. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  19. Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory

    ERIC Educational Resources Information Center

    Wells, Craig S.; Bolt, Daniel M.

    2008-01-01

    Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…

  20. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  1. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…

  2. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. PMID:21144549
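
    The classical parametric baseline the authors compare against is a GEV fit with return levels read off the fitted quantile function. A minimal sketch with scipy and simulated annual maxima standing in for station ozone data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(10)
# Placeholder "annual maxima"; in the paper these would be station ozone extremes.
annual_max = genextreme.rvs(c=-0.1, loc=80, scale=10, size=40, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)                      # maximum likelihood fit
return_10yr = genextreme.ppf(1 - 1 / 10, c, loc=loc, scale=scale)  # 10-year return level
print(f"10-year return level: {return_10yr:.1f}")
```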

  3. A FORTRAN program to perform the nonparametric Terpstra-Jonckheere test.

    PubMed

    Thakur, A K

    1984-01-01

    The present FORTRAN program performs the nonparametric Terpstra-Jonckheere test of significance on ordered alternatives. The program is short, simple and easy to use and can be implemented on any machine including a desk-top microcomputer. When groups are ordered, the test provides more power than the Kruskal-Wallis H-test and similar multiple comparison tests.
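
    A Python translation of the same test is compact: the Terpstra-Jonckheere statistic sums, over every ordered pair of groups, the number of observation pairs consistent with the hypothesized ordering, and a normal approximation gives the p-value. This sketch ignores ties:

```python
import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(*groups):
    gs = [np.asarray(g, dtype=float) for g in groups]
    # Count pairs (earlier group < later group) over all ordered group pairs.
    jt = sum((gi[:, None] < gj[None, :]).sum()
             for a, gi in enumerate(gs) for gj in gs[a + 1:])
    n = np.array([len(g) for g in gs])
    N = n.sum()
    mean = (N**2 - (n**2).sum()) / 4.0
    var = (N**2 * (2 * N + 3) - (n**2 * (2 * n + 3)).sum()) / 72.0
    z = (jt - mean) / np.sqrt(var)
    return jt, z, norm.sf(z)             # one-sided p-value for an increasing ordering

g1, g2, g3 = [1, 3, 2], [4, 3, 5], [6, 5, 7]
print(jonckheere_terpstra(g1, g2, g3))
```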

  4. The Probability of Exceedance as a Nonparametric Person-Fit Statistic for Tests of Moderate Length

    ERIC Educational Resources Information Center

    Tendeiro, Jorge N.; Meijer, Rob R.

    2013-01-01

    To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…

  5. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.

  6. A bound for the smoothing parameter in certain well-known nonparametric density estimators

    NASA Technical Reports Server (NTRS)

    Terrell, G. R.

    1980-01-01

    Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.

  7. Separating the Air Quality Impact of a Major Highway and Nearby Sources by Nonparametric Trajectory Analysis

    EPA Science Inventory

    Nonparametric Trajectory Analysis (NTA), a receptor-oriented model, was used to assess the impact of local sources of air pollution at monitoring sites located adjacent to highway I-15 in Las Vegas, NV. Measurements of black carbon, carbon monoxide, nitrogen oxides, and sulfur di...

  8. SLIT ADJUSTMENT CLAMP

    DOEpatents

    McKenzie, K.R.

    1959-07-01

    An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.

  9. High-precision covariant one-boson-exchange potentials for np scattering below 350 MeV

    SciTech Connect

    Franz Gross; Alfred Stadler

    2007-09-10

    Using the Covariant Spectator Theory (CST), we have found One-Boson-Exchange (OBE) potentials that fit the 2006 world np data below 350 MeV with a χ²/Ndata very close to 1, for a total of 3788 data. Our potentials have significantly fewer adjustable parameters than previous high-precision potentials, and they also reproduce the experimental triton binding energy without introducing additional irreducible three-nucleon forces.

  10. Methods for estimation of covariance matrices and covariance components for the Hanford Waste Vitrification Plant Process

    SciTech Connect

    Bryan, M.F.; Piepel, G.F.; Simpson, D.B.

    1996-03-01

    The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to vitrify transuranic and high-level radioactive waste in borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating the uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimating variance components; those based on the statistical method of analysis of variance are discussed, as are the strengths and weaknesses of this approach.
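
    For a balanced one-way design, the data-based (analysis-of-variance) route mentioned above reduces to method-of-moments formulas: the within-batch component is the within mean square, and the batch-to-batch component is (MSB - MSW)/r for r replicates per batch. A sketch with simulated measurements of a single composition variable:

```python
import numpy as np

rng = np.random.default_rng(11)
n_batches, n_reps = 20, 4
batch_eff = rng.normal(scale=0.8, size=n_batches)        # batch-to-batch variability
y = 10 + batch_eff[:, None] + rng.normal(scale=0.3, size=(n_batches, n_reps))

batch_means = y.mean(axis=1)
msb = n_reps * batch_means.var(ddof=1)                   # mean square between batches
msw = ((y - batch_means[:, None]) ** 2).sum() / (n_batches * (n_reps - 1))

sigma2_within = msw
sigma2_between = max((msb - msw) / n_reps, 0.0)          # truncate negative estimates at zero
print(f"within-batch: {sigma2_within:.3f}, batch-to-batch: {sigma2_between:.3f}")
```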

  11. Covariant Spectator Theory: Foundations and Applications A Mini-Review of the Covariant Spectator Theory

    SciTech Connect

    Alfred Stadler, Franz Gross

    2010-10-01

    We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a φ⁴-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.

  12. Conditional Covariance-Based Subtest Selection for DIMTEST

    ERIC Educational Resources Information Center

    Froelich, Amy G.; Habing, Brian

    2008-01-01

    DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…

  13. Assessing Trait Covariation and Morphological Integration on Phylogenies Using Evolutionary Covariance Matrices

    PubMed Central

    Adams, Dean C.; Felice, Ryan N.

    2014-01-01

    Morphological integration describes the degree to which sets of organismal traits covary with one another. Morphological covariation may be evaluated at various levels of biological organization, but when characterizing such patterns across species at the macroevolutionary level, phylogeny must be taken into account. We outline an analytical procedure based on the evolutionary covariance matrix that allows species-level patterns of morphological integration among structures defined by sets of traits to be evaluated while accounting for the phylogenetic relationships among taxa, providing a flexible and robust complement to related phylogenetic independent contrasts based approaches. Using computer simulations under a Brownian motion model we show that statistical tests based on the approach display appropriate Type I error rates and high statistical power for detecting known levels of integration, and these trends remain consistent for simulations using different numbers of species, and for simulations that differ in the number of trait dimensions. Thus, our procedure provides a useful means of testing hypotheses of morphological integration in a phylogenetic context. We illustrate the utility of this approach by evaluating evolutionary patterns of morphological integration in head shape for a lineage of Plethodon salamanders, and find significant integration between cranial shape and mandible shape. Finally, computer code written in R for implementing the procedure is provided. PMID:24728003
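
    The central computation can be sketched in a few lines of Python; this is the standard generalized-least-squares construction of an evolutionary covariance (rate) matrix under Brownian motion, assumed here to approximate the authors' procedure, with C the N x N phylogenetic covariance matrix implied by shared branch lengths:

      import numpy as np

      def evolutionary_covariance(X, C):
          # X: (N species, p traits); C: (N, N) phylogenetic covariance
          # implied by the tree under a Brownian motion model.
          N = X.shape[0]
          Cinv = np.linalg.inv(C)
          ones = np.ones((N, 1))
          # GLS estimate of the root (ancestral) state for each trait
          a = np.linalg.solve(ones.T @ Cinv @ ones, ones.T @ Cinv @ X)
          resid = X - ones @ a
          return resid.T @ Cinv @ resid / (N - 1)

    Integration between two blocks of traits (e.g., cranium and mandible) can then be assessed from the off-diagonal block of this matrix.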

  14. Analysis of the Covariance Structure of Digital Ridge Counts in the Offspring of Monozygotic Twins

    PubMed Central

    Cantor, Rita M.; Nance, Walter E.; Eaves, Lindon J.; Winter, Phyllis M.; Blanchard, Marsha M.

    1983-01-01

    Improved methods for analysis of covariance structures now permit the rigorous testing of multivariate genetic hypotheses. Using Jöreskog's LISREL IV computer program we have conducted a confirmatory factor analysis of dermal ridge counts on the individual fingers of 509 offspring of 107 monozygotic twin pairs. Prior to the initiation of the model-fitting procedure, the sex-adjusted ridge counts for the offspring of male and female twins were partitioned by a multivariate nested analysis of variance yielding five 10 x 10 variance-covariance matrices containing a total of 275 distinctly observed parameters with which to estimate latent sources of genetic and environmental variation and test hypotheses about the factor structure of those latent causes. To provide an adequate explanation for the observed patterns of covariation, it was necessary to include additive genetic, random environmental, epistatic and maternal effects in the model and a structure for the additive genetic effects which included a general factor and allowed for hand asymmetry and finger symmetry. The results illustrate the value of these methods for the analysis of interrelated metric traits. PMID:6682392

  15. Remotely Adjustable Hydraulic Pump

    NASA Technical Reports Server (NTRS)

    Kouns, H. H.; Gardner, L. D.

    1987-01-01

    Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.

  16. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
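
    The core of the adjustment is an ordinary weighted least-squares solve of the normal equations; a minimal Python sketch (dense algebra with hypothetical names, not the program's card-driven implementation):

      import numpy as np

      def weighted_adjustment(A, b, w):
          # A: linearized observation equations (directions, azimuths, distances)
          # about the preliminary positions; b: misclosures; w: observation weights.
          W = np.diag(w)
          N = A.T @ W @ A                            # normal-equation matrix
          shifts = np.linalg.solve(N, A.T @ W @ b)   # coordinate shifts at new stations
          cov = np.linalg.inv(N)                     # cofactor matrix of adjusted coordinates
          return shifts, cov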

  17. Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies

    ERIC Educational Resources Information Center

    Chen, Jianshen; Kaplan, David

    2015-01-01

    Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…

  18. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  19. Universal and phase-covariant superbroadcasting for mixed qubit states

    SciTech Connect

    Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara; Perinotti, Paolo

    2006-10-15

    We describe a general framework to study covariant symmetric broadcasting maps for mixed qubit states. We explicitly derive the optimal N→M superbroadcasting maps, achieving optimal purification of the single-site output copy, in both the universal and phase-covariant cases. We also study the bipartite entanglement properties of the superbroadcast states.

  20. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  1. Covariation and Quantifier Polarity: What Determines Causal Attribution in Vignettes?

    ERIC Educational Resources Information Center

    Majid, Asifa; Sanford, Anthony J.; Pickering, Martin J.

    2006-01-01

    Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information…

  2. The Regression Trunk Approach to Discover Treatment Covariate Interaction

    ERIC Educational Resources Information Center

    Dusseldorp, Elise; Meulman, Jacqueline J.

    2004-01-01

    The regression trunk approach (RTA) is an integration of regression trees and multiple linear regression analysis. In this paper RTA is used to discover treatment covariate interactions, in the regression of one continuous variable on a treatment variable with "multiple" covariates. The performance of RTA is compared to the classical method of…

  3. Adjustments to the correction for attenuation.

    PubMed

    Wetcher-Hendricks, Debra

    2006-06-01

    With respect to the often-present covariance between error terms of correlated variables, D. W. Zimmerman and R. H. Williams's (1977) adjusted correction for attenuation estimates the strength of the pairwise correlation between true scores without assuming independence of error scores. This article focuses on the derivation and analysis of formulas that perform the same function for partial and part correlation coefficients. Values produced by these formulas lie closer to the actual true-score coefficient than do the observed-score coefficients or those obtained by using C. Spearman's (1904) correction for attenuation. The new versions of the formulas thus allow analysts to use hypothetical values for error-score correlations to estimate values for the partial and part correlations between true scores while disregarding the independence-of-errors assumption.

  4. Supergeometry in Locally Covariant Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Hack, Thomas-Paul; Hanisch, Florian; Schenkel, Alexander

    2016-03-01

    In this paper we analyze supergeometric locally covariant quantum field theories. We develop suitable categories SLoc of super-Cartan supermanifolds, which generalize Lorentz manifolds in ordinary quantum field theory, and show that, starting from a few representation theoretic and geometric data, one can construct a functor A : SLoc → S*Alg to the category of super-*-algebras, which can be interpreted as a non-interacting super-quantum field theory. This construction turns out to disregard supersymmetry transformations as the morphism sets in the above categories are too small. We then solve this problem by using techniques from enriched category theory, which allows us to replace the morphism sets by suitable morphism supersets that contain supersymmetry transformations as their higher superpoints. We construct super-quantum field theories in terms of enriched functors eA : eSLoc → eS*Alg between the enriched categories and show that supersymmetry transformations are appropriately described within the enriched framework. As examples we analyze the superparticle in 1|1-dimensions and the free Wess-Zumino model in 3|2-dimensions.

  5. Holographic bound in covariant loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Tamaki, Takashi

    2016-07-01

    We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulas which relate physical quantities such as horizon area to the parameter characterizing holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulas. These results tell us that the holographic bound is satisfied in the large area limit and the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we also consider Bose-Einstein statistics and show that the above formulas are also useful in this case. By applying the formulas, we can understand intrinsic features of the Bose-Einstein condensate, which corresponds to the case when the horizon area almost consists of punctures in the ground state. When this phenomenon occurs, the area is approximately constant against the parameter characterizing the temperature. When it breaks down, the area increases rapidly, suggesting a phase transition from quantum to classical area.

  6. Super-sample covariance in simulations

    NASA Astrophysics Data System (ADS)

    Li, Yin; Hu, Wayne; Takada, Masahiro

    2014-04-01

    Using separate universe simulations, we accurately quantify super-sample covariance (SSC), the typically dominant sampling error for matter power spectrum estimators in a finite volume, which arises from the presence of super survey modes. By quantifying the power spectrum response to a background mode, this approach automatically captures the separate effects of beat coupling in the quasilinear regime, halo sample variance in the nonlinear regime and a new dilation effect which changes scales in the power spectrum coherently across the survey volume, including the baryon acoustic oscillation scale. It models these effects at typically the few percent level or better with a handful of small volume simulations for any survey geometry compared with directly using many thousands of survey volumes in a suite of large-volume simulations. The stochasticity of the response is sufficiently small that in the quasilinear regime, SSC can be alternately included by fitting the mean density in the volume with these fixed templates in parameter estimation. We also test the halo model prescription and find agreement typically at better than the 10% level for the response.

  7. Generalized Covariant Gyrokinetic Dynamics of Magnetoplasmas

    SciTech Connect

    Cremaschini, C.; Tessarotto, M.; Nicolini, P.; Beklemishev, A.

    2008-12-31

    A basic prerequisite for the investigation of relativistic astrophysical magnetoplasmas, occurring typically in the vicinity of massive stellar objects (black holes, neutron stars, active galactic nuclei, etc.), is the accurate description of single-particle covariant dynamics, based on gyrokinetic theory (Beklemishev et al., 1999-2005). Provided radiation-reaction effects are negligible, this is usually based on the assumption that both the space-time metric and the EM fields (in particular the magnetic field) are suitably prescribed and are considered independent of single-particle dynamics, while allowing for the possible presence of gravitational/EM perturbations driven by plasma collective interactions which may naturally arise in such systems. The purpose of this work is the formulation of a generalized gyrokinetic theory based on the synchronous variational principle recently pointed out (Tessarotto et al., 2007), which makes it possible to satisfy exactly the physical realizability condition for the four-velocity. The theory developed here includes the treatment of nonlinear perturbations (gravitational and/or EM) characterized locally, i.e., in the rest frame of a test particle, by short wavelength and high frequency. A basic feature of the approach is to ensure the validity of the theory both for large and vanishing parallel electric field. It is shown that the correct treatment of EM perturbations occurring in the presence of an intense background magnetic field generally implies the appearance of appropriate four-velocity corrections, which are essential for the description of single-particle gyrokinetic dynamics.

  8. Epigenetic Contribution to Covariance Between Relatives

    PubMed Central

    Tal, Omri; Kisdi, Eva; Jablonka, Eva

    2010-01-01

    Recent research has pointed to the ubiquity and abundance of between-generation epigenetic inheritance. This research has implications for assessing disease risk and the responses to ecological stresses and also for understanding evolutionary dynamics. An important step toward a general evaluation of these implications is the identification and estimation of the amount of heritable, epigenetic variation in populations. While methods for modeling the phenotypic heritable variance contributed by culture have already been developed, there are no comparable methods for nonbehavioral epigenetic inheritance systems. By introducing a model that takes epigenetic transmissibility (the probability of transmission of ancestral phenotypes) and environmental induction into account, we provide novel expressions for covariances between relatives. We have combined a classical quantitative genetics approach with information about the number of opportunities for epigenetic reset between generations and assumptions about environmental induction to estimate the heritable epigenetic variance and epigenetic transmissibility for both asexual and sexual populations. This assists us in the identification of phenotypes and populations in which epigenetic transmission occurs and enables a preliminary quantification of their transmissibility, which could then be followed by genomewide association and QTL studies. PMID:20100941

  9. Application of parametric and non-parametric statistics to sounding rocket dispersion including large sample and small sample theory

    NASA Technical Reports Server (NTRS)

    Mcgarvey, J. F.

    1976-01-01

    Analytical methods for obtaining large and small samples to be used in sounding rocket dispersion statistics are described. When the distribution of the parent population is assumed known, a method is called parametric. When no assumption is made about the parent population, the method is called nonparametric. Parametric and nonparametric methods are given for both large and small samples. The assumed distribution for the parametric case is normal, and it is shown that small-sample nonparametric theory is easier to apply in many cases, giving essentially the same results as parametric theory. The method is applied to the dispersion of NASA sounding rockets from 1959 to 1974.

  10. Two aspects of feedforward postural control: anticipatory postural adjustments and anticipatory synergy adjustments

    PubMed Central

    Klous, Miriam; Mikulic, Pavle

    2011-01-01

    We used the framework of the uncontrolled manifold hypothesis to explore the relations between anticipatory synergy adjustments (ASAs) and anticipatory postural adjustments (APAs) during feedforward control of vertical posture. ASAs represent a drop in the index of a multimuscle-mode synergy stabilizing the coordinate of the center of pressure in preparation to an action. ASAs reflect early changes of an index of covariation among variables reflecting muscle activation, whereas APAs reflect early changes in muscle activation levels averaged across trials. The assumed purpose of ASAs is to modify stability of performance variables, whereas the purpose of APAs is to change magnitudes of those variables. We hypothesized that ASAs would be seen before APAs and that this finding would be consistent with regard to the muscle-mode composition defined on the basis of different tasks and phases of action. Subjects performed a voluntary body sway task and a quick, bilateral shoulder flexion task under self-paced and reaction time conditions. Surface muscle activity of 12 leg and trunk muscles was analyzed to identify sets of 4 muscle modes for each task and for different phases within the shoulder flexion task. Variance components in the muscle-mode space and indexes of multimuscle-mode synergy stabilizing shift of the center of pressure were computed. ASAs were seen ∼100–150 ms prior to the task initiation, before APAs. The results were consistent with respect to different sets of muscle modes defined over the two tasks and different shoulder flexion phases. We conclude that the preparation for a self-triggered postural perturbation is associated with two types of anticipatory adjustments, ASAs and APAs. They reflect different feedforward processes within the hypothetical hierarchical control scheme, resulting in changes in patterns of covariation of elemental variables and in their patterns averaged across trials, respectively. The results show that synergies

  11. Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins

    NASA Astrophysics Data System (ADS)

    Tolwinski-Ward, S. E.; Wang, D.

    2015-12-01

    Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as: - Does the relationship between different categories of TCs differ statistically by basin? - Which climatic predictors have significant relationships with TC activity in each basin? - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability? - How can a portfolio of insured property be optimized across space to minimize risk? Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
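
    As a simplified, single-basin, frequentist analogue of the regression component (the paper's model is Bayesian, multivariate, and treats count and intensity uncertainty), a Poisson GLM of annual counts on hypothetical climate indices can be sketched with statsmodels:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      enso = rng.normal(size=40)     # hypothetical ENSO-like index
      amo = rng.normal(size=40)      # hypothetical AMO-like index
      counts = rng.poisson(np.exp(1.8 + 0.3 * enso - 0.2 * amo))  # simulated annual TC counts

      X = sm.add_constant(np.column_stack([enso, amo]))
      fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      print(fit.summary())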

  12. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis

    NASA Astrophysics Data System (ADS)

    Plis, Sergey M.; George, J. S.; Jun, S. C.; Paré-Blagoev, J.; Ranken, D. M.; Wood, C. C.; Schmidt, D. M.

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives closer approximation, is based on the assumption that time courses are statistically independent. The accuracy of the structural approximation is compared to an existing model, based on a single Kronecker product, using both Frobenius norm of the difference between spatiotemporal sample covariance and a model, and scatter plots. Performance of ours and previous models is compared in source analysis of a large number of single dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
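
    Structurally, the model assembles the covariance as a sum of Kronecker products, one term per spatial component with its own temporal covariance; a naive Python sketch of the assembly (the paper's contribution is a structured, computationally manageable inverse, which this direct construction does not exploit):

      import numpy as np

      def spatiotemporal_covariance(spatial_covs, temporal_covs):
          # C = sum_i kron(S_i, T_i), with S_i an (ns, ns) spatial component
          # covariance and T_i its (nt, nt) temporal covariance.
          return sum(np.kron(S, T) for S, T in zip(spatial_covs, temporal_covs))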

  13. Structural constraints identified with covariation analysis in ribosomal RNA.

    PubMed

    Shang, Lei; Xu, Weijia; Ozer, Stuart; Gutell, Robin R

    2012-01-01

    Covariation analysis is used to identify those positions with similar patterns of sequence variation in an alignment of RNA sequences. These constraints on the evolution of two positions are usually associated with a base pair in a helix. While mutual information (MI) has been used to accurately predict an RNA secondary structure and a few of its tertiary interactions, early studies revealed that phylogenetic event counting methods are more sensitive and provide extra confidence in the prediction of base pairs. We developed a novel and powerful phylogenetic event counting method (PEC) for quantifying positional covariation with the Gutell lab's new RNA Comparative Analysis Database (rCAD). The PEC and MI-based methods each identify unique base pairs, and jointly identify many other base pairs. In total, both methods in combination with an N-best and helix-extension strategy identify the maximal number of base pairs. While covariation methods have effectively and accurately predicted RNA secondary structure, only a few tertiary structure base pairs have been identified. Analysis presented herein and at the Gutell lab's Comparative RNA Web (CRW) Site reveals that the majority of these latter base pairs do not covary with one another. However, covariation analysis does reveal a weaker although significant covariation between sets of nucleotides that are in proximity in the three-dimensional RNA structure. This reveals that covariation analysis identifies other types of structural constraints beyond the two nucleotides that form a base pair.
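
    For reference, the MI side of such a comparison reduces to a few lines; a sketch for two alignment columns (one character per sequence), where a perfectly covarying base-paired pair of columns yields 2 bits:

      import math
      from collections import Counter

      def column_mi(col_i, col_j):
          # Mutual information between two alignment columns of equal length.
          n = len(col_i)
          pi, pj = Counter(col_i), Counter(col_j)
          pij = Counter(zip(col_i, col_j))
          return sum((c / n) * math.log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
                     for (a, b), c in pij.items())

      print(column_mi("AAGGCCUU", "UUCCGGAA"))   # 2.0 bits: perfect covariation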

  14. Recurrence Analysis of Eddy Covariance Fluxes

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael

    2015-04-01

    The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
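
    All of these measures derive from the recurrence matrix; a minimal Python sketch for scalar states (vector-valued, delay-embedded states would use a norm over the embedding dimension, and the threshold eps is a tuning choice):

      import numpy as np

      def recurrence_matrix(x, eps):
          # R[i, j] = 1 when states i and j are closer than eps.
          d = np.abs(x[:, None] - x[None, :])
          return (d < eps).astype(int)

      x = np.sin(np.linspace(0, 8 * np.pi, 400))
      R = recurrence_matrix(x, eps=0.1)
      print(R.mean())   # recurrence rate, a basic recurrence quantification measure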

  15. Structural covariance networks in the mouse brain.

    PubMed

    Pagani, Marco; Bifone, Angelo; Gozzi, Alessandro

    2016-04-01

    The presence of networks of correlation between regional gray matter volume as measured across subjects in a group of individuals has been consistently described in several human studies, an approach termed structural covariance MRI (scMRI). Complementary to prevalent brain mapping modalities like functional and diffusion-weighted imaging, the approach can provide precious insights into the mutual influence of trophic and plastic processes in health and pathological states. To investigate whether analogous scMRI networks are present in lower mammal species amenable to genetic and experimental manipulation such as the laboratory mouse, we employed high resolution morphoanatomical MRI in a large cohort of genetically-homogeneous wild-type mice (C57Bl6/J) and mapped scMRI networks using a seed-based approach. We show that the mouse brain exhibits robust homotopic scMRI networks in both primary and associative cortices, a finding corroborated by independent component analyses of cortical volumes. Subcortical structures also showed highly symmetric inter-hemispheric correlations, with evidence of distributed antero-posterior networks in diencephalic regions of the thalamus and hypothalamus. Hierarchical cluster analysis revealed six identifiable clusters of cortical and sub-cortical regions corresponding to previously described neuroanatomical systems. Our work documents the presence of homotopic cortical and subcortical scMRI networks in the mouse brain, thus supporting the use of this species to investigate the elusive biological and neuroanatomical underpinnings of scMRI network development and its derangement in neuropathological states. The identification of scMRI networks in genetically homogeneous inbred mice is consistent with the emerging view of a key role of environmental factors in shaping these correlational networks.

  16. Inflation in general covariant theory of gravity

    SciTech Connect

    Huang, Yongqing; Wang, Anzhong; Wu, Qiang E-mail: anzhong_wang@baylor.edu

    2012-10-01

    In this paper, we study inflation in the framework of the nonrelativistic general covariant theory of the Hořava-Lifshitz gravity with the projectability condition and an arbitrary coupling constant λ. We find that the Friedmann-Robertson-Walker (FRW) universe is necessarily flat in such a setup. We work out explicitly the linear perturbations of the flat FRW universe without specifying to a particular gauge, and find that the perturbations are different from those obtained in general relativity, because of the presence of the high-order spatial derivative terms. Applying the general formulas to a single scalar field, we show that in the sub-horizon regions, the metric and scalar field are tightly coupled and have the same oscillating frequencies. In the super-horizon regions, the perturbations become adiabatic, and the comoving curvature perturbation is constant. We also calculate the power spectra and indices of both the scalar and tensor perturbations, and express them explicitly in terms of the slow roll parameters and the coupling constants of the high-order spatial derivative terms. In particular, we find that the perturbations, of both scalar and tensor, are almost scale-invariant, and, with some reasonable assumptions on the coupling coefficients, the spectrum index of the tensor perturbation is the same as that given in the minimum scenario in general relativity (GR), whereas the index for scalar perturbation in general depends on λ and is different from the standard GR value. The ratio of the scalar and tensor power spectra depends on the high-order spatial derivative terms, and can be different from that of GR significantly.

  17. Hawking radiation, covariant boundary conditions, and vacuum states

    SciTech Connect

    Banerjee, Rabin; Kulkarni, Shailesh

    2009-04-15

    The basic characteristics of the covariant chiral current and the covariant chiral energy-momentum tensor are obtained from a chiral effective action. These results are used to justify the covariant boundary condition used in recent approaches of computing the Hawking flux from chiral gauge and gravitational anomalies. We also discuss a connection of our results with the conventional calculation of nonchiral currents and stress tensors in different (Unruh, Hartle-Hawking and Boulware) states.

  18. The importance of covariance in nuclear data uncertainty propagation studies

    SciTech Connect

    Benstead, J.

    2012-07-01

    A study has been undertaken to investigate what proportion of the uncertainty propagated through plutonium critical assembly calculations is due to the covariances between the fission cross section in different neutron energy groups. The calculated uncertainties on k_eff show that the presence of covariances between the cross section in different neutron energy groups accounts for approximately 27-37% of the propagated uncertainty due to the plutonium fission cross section. This study also confirmed the validity of employing the sandwich equation, with associated sensitivity and covariance data, instead of a Monte Carlo sampling approach to calculating uncertainties for linearly varying systems. (authors)
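
    The sandwich equation itself is a one-line quadratic form, var(k_eff) = s^T C s, with s the vector of group sensitivities and C the cross-section covariance matrix; the hypothetical three-group numbers below illustrate how zeroing the off-diagonal covariances changes the propagated variance:

      import numpy as np

      s = np.array([0.02, 0.05, 0.03])           # hypothetical group sensitivities
      C = np.array([[4.0, 1.2, 0.0],
                    [1.2, 9.0, 2.1],
                    [0.0, 2.1, 1.0]]) * 1e-6     # hypothetical covariance matrix
      var_full = s @ C @ s
      var_diag = s @ np.diag(np.diag(C)) @ s     # covariances between groups zeroed
      print(var_full, var_diag, 1 - var_diag / var_full)   # share due to off-diagonals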

  19. Non-Parametric Bayesian Human Motion Recognition Using a Single MEMS Tri-Axial Accelerometer

    PubMed Central

    Ahmed, M. Ejaz; Song, Ju Bin

    2012-01-01

    In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Means (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method. PMID:23201992
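
    A close off-the-shelf analogue is scikit-learn's truncated Dirichlet-process Gaussian mixture (variational inference rather than the paper's collapsed Gibbs sampler); the two-dimensional features below are hypothetical stand-ins for the accelerometer features:

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 2.0, 4.0)])

      # Unused components receive near-zero weight, so the effective number
      # of motions is inferred from the data rather than fixed in advance.
      dpgmm = BayesianGaussianMixture(
          n_components=10, weight_concentration_prior_type="dirichlet_process",
          random_state=0).fit(X)
      print(np.sum(dpgmm.weights_ > 0.01))   # effective number of clusters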

  1. Modelling SF-6D health state preference data using a nonparametric Bayesian method.

    PubMed

    Kharroubi, Samer A; Brazier, John E; Roberts, Jennifer; O'Hagan, Anthony

    2007-05-01

    This paper reports on the findings from applying a new approach to modelling health state valuation data. The approach applies a nonparametric model to estimate SF-6D health state utility values using Bayesian methods. The data set is the UK SF-6D valuation study where a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a representative sample of the UK general population using standard gamble. The paper presents the results from applying the nonparametric model and comparing it to the original model estimated using a conventional parametric random effects model. The two models are compared theoretically and in terms of empirical performance. The paper discusses the implications of these results for future applications of the SF-6D and further work in this field. PMID:17069909

  2. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
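
    A minimal sketch of the idea, assuming a k-NN smoother as the nonparametric regressor and a pairs-plus-residual bootstrap (the paper proves asymptotic fidelity for its own construction; the constants here are illustrative):

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      def bootstrap_pi(x, y, x0, B=500, alpha=0.05, k=15, seed=0):
          # Percentile bootstrap prediction interval at x0: resample (x, y)
          # pairs, refit the smoother, and add a resampled residual to each
          # bootstrap prediction to account for observation noise.
          rng = np.random.default_rng(seed)
          resid = y - KNeighborsRegressor(k).fit(x[:, None], y).predict(x[:, None])
          preds = []
          for _ in range(B):
              idx = rng.integers(0, len(x), len(x))
              m = KNeighborsRegressor(k).fit(x[idx, None], y[idx])
              preds.append(m.predict([[x0]])[0] + rng.choice(resid))
          return np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)])

      rng = np.random.default_rng(3)
      x = rng.uniform(0, 10, 300)
      y = np.sin(x) + rng.normal(0, 0.3, 300)
      print(bootstrap_pi(x, y, x0=5.0))   # observed outputs outside the interval are flagged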

  3. A fresh look at the use of nonparametric analysis in actimetry.

    PubMed

    Gonçalves, B S B; Adamowicz, Taísa; Louzada, Fernando Mazzilli; Moreno, Claudia Roberta; Araujo, John Fontenele

    2015-04-01

    Actimetry has been used to estimate the sleep-wake cycle instead of the rest-activity rhythm. Although algorithms for assessing sleep from actimetry data exist, it is useful to analyze the rest-activity rhythm using nonparametric methods. This would then allow rest-activity rhythm stability, fragmentation and amplitude to be quantified. In addition, sleep and wakefulness efficiency can be quantified separately. These variables have been used in studies analyzing the effect of age, diseases and their respective treatments on human circadian rhythmicity. In this study, we carried out a comprehensive analysis of the main results from published articles and devised a functional model of interaction among the several components involved in generating the sleep-wake cycle. The nonparametric variables render it possible to infer the main characteristics of circadian rhythms, such as synchronization with a zeitgeber, and its amplitude and robustness.
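
    The two most widely used nonparametric rest-activity variables, interdaily stability (IS) and intradaily variability (IV), follow directly from an activity series; a sketch using their standard definitions (assuming whole days of equally spaced samples):

      import numpy as np

      def nonparametric_actigraphy(x, per_day):
          # x: activity counts sampled per_day times per day, whole days only.
          x = np.asarray(x, dtype=float)
          n, mean = x.size, x.mean()
          ss = np.sum((x - mean) ** 2)
          # IS: variance of the average 24-h profile relative to total variance
          profile = x.reshape(-1, per_day).mean(axis=0)
          IS = n * np.sum((profile - mean) ** 2) / (per_day * ss)
          # IV: hour-to-hour fragmentation (mean square successive difference)
          IV = n * np.sum(np.diff(x) ** 2) / ((n - 1) * ss)
          return IS, IV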

  4. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to recommend when each method would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis-of-variance problems. These difficulties are discussed and guidelines are given for using the methods.

  5. Simple, Internally Adjustable Valve

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1990-01-01

    Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.

  6. Self Adjusting Sunglasses

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.

  7. Rural to Urban Adjustment

    ERIC Educational Resources Information Center

    Abramson, Jane A.

    Personal interviews with 100 former farm operators living in Saskatoon, Saskatchewan, were conducted in an attempt to understand the nature of the adjustment process caused by migration from rural to urban surroundings. Requirements for inclusion in the study were that respondents had owned or operated a farm for at least 3 years, had left their…

  8. Self adjusting inclinometer

    DOEpatents

    Hunter, Steven L.

    2002-01-01

    An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.

  9. Nonparametric simulation-based statistics for detecting linkage in general pedigrees

    SciTech Connect

    Davis, S.; Schroeder, M.; Weeks, D.E.; Goldin, L.R.

    1996-04-01

    We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. 32 refs., 5 figs., 6 tabs.

  10. Meta-Analysis of Candidate Gene Effects Using Bayesian Parametric and Non-Parametric Approaches

    PubMed Central

    Wu, Xiao-Lin; Gianola, Daniel; Rosa, Guilherme J. M.; Weigel, Kent A.

    2014-01-01

    Candidate gene (CG) approaches provide a strategy for identification and characterization of major genes underlying complex phenotypes such as production traits and susceptibility to diseases, but the conclusions tend to be inconsistent across individual studies. Meta-analysis approaches can deal with these situations, e.g., by pooling effect-size estimates or combining P values from multiple studies. In this paper, we evaluated the performance of two types of statistical models, parametric and non-parametric, for meta-analysis of CG effects using simulated data. Both models estimated a “central” effect size while taking into account heterogeneity over individual studies. The empirical distribution of study-specific CG effects was multi-modal. The parametric model assumed a normal distribution for the study-specific CG effects whereas the non-parametric model relaxed this assumption by positing a more general distribution with a Dirichlet process prior (DPP). Results indicated that the meta-analysis approaches could reduce false positive or false negative rates by pooling strengths from multiple studies, as compared to individual studies. In addition, the non-parametric, DPP model captured the variation of the “data” better than its parametric counterpart. PMID:25057320

  11. Model-free quantification of dynamic PET data using nonparametric deconvolution.

    PubMed

    Zanderigo, Francesca; Parsey, Ramin V; Ogden, R Todd

    2015-08-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test-retest clinical PET data with four reversible tracers well characterized by CMs ([(11)C]CUMI-101, [(11)C]DASB, [(11)C]PE2I, and [(11)C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test-retest performance than CMs outcomes.
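
    A minimal sketch of the approach on a uniform time grid (real PET frames are non-uniform, and the truncation rule below is an assumption): the tissue curve is modeled as tac ≈ dt * L @ irf, with L the lower-triangular convolution matrix built from the metabolite-corrected input function, and small singular values are discarded to suppress noise:

      import numpy as np

      def svd_deconvolve(input_fn, tac, dt, rel_cut=0.05):
          # Build the discrete convolution matrix from the input function.
          n = len(input_fn)
          L = dt * np.array([[input_fn[i - j] if i >= j else 0.0
                              for j in range(n)] for i in range(n)])
          U, s, Vt = np.linalg.svd(L)
          s_inv = np.where(s > rel_cut * s[0], 1.0 / s, 0.0)   # truncated pseudo-inverse
          irf = Vt.T @ (s_inv * (U.T @ tac))
          vt = irf.sum() * dt   # integral of the IRF, a volume-of-distribution functional
          return irf, vt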

  12. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improvement of hyperspectral images classification. Nonparametric feature extraction methods show better performance compared to parametric ones when the class distributions are not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for classification of hyperspectral images. The proposed method has no free parameter and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea for determining the local mean. Second, two new weighting functions are used. Samples close to class boundaries will have more weight in the between-class scatter matrix formation and samples close to the class mean will have more weight in the within-class scatter matrix formation. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method has better performance in comparison with some other nonparametric and parametric feature extraction methods.

  13. Nonparametric simulation-based statistics for detecting linkage in general pedigrees.

    PubMed Central

    Davis, S.; Schroeder, M.; Goldin, L. R.; Weeks, D. E.

    1996-01-01

    We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD status sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. PMID:8644751

  14. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
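
    Of the nonparametric methods reviewed, the Nadaraya-Watson estimator is the simplest to state: the predicted phenotype is a kernel-weighted average of the training phenotypes. A sketch with a Gaussian product kernel over marker scores (the bandwidth h is a tuning parameter):

      import numpy as np

      def nadaraya_watson(X_train, y_train, X_new, h=1.0):
          # X: (n, p) marker scores; y: phenotypes; h: kernel bandwidth.
          d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
          K = np.exp(-d2 / (2 * h ** 2))
          return (K @ y_train) / K.sum(axis=1)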

  15. Bayesian Nonparametric Model for the Validation of Peptide Identification in Shotgun Proteomics

    PubMed Central

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-01-01

    Tandem mass spectrometry combined with database searching allows high throughput identification of peptides in shotgun proteomics. However, validating database search results, a problem with a lot of solutions proposed, is still advancing in some aspects, such as the sensitivity, specificity, and generalizability of the validation algorithms. Here a Bayesian nonparametric (BNP) model for the validation of database search results was developed that incorporates several popular techniques in statistical learning, including the compression of feature space with a linear discriminant function, the flexible nonparametric probability density function estimation for the variable probability structure in complex problem, and the Bayesian method to calculate the posterior probability. Importantly the BNP model is compatible with the popular target-decoy database search strategy naturally. We tested the BNP model on standard proteins and real, complex sample data sets from multiple MS platforms and compared it with PeptideProphet, the cutoff-based method, and a simple nonparametric method (proposed by us previously). The performance of the BNP model was shown to be superior for all data sets searched on sensitivity and generalizability. Some high quality matches that had been filtered out by other methods were detected and assigned with high probability by the BNP model. Thus, the BNP model could be able to validate the database search results effectively and extract more information from MS/MS data. PMID:19005226
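
    A stripped-down version of the idea, assuming a single discriminant score per match: decoy matches stand in for the incorrect-match density, Gaussian KDEs supply the nonparametric density estimates, and the incorrect fraction is crudely estimated from the decoy/target count ratio (the published model is considerably richer).

      import numpy as np
      from scipy.stats import gaussian_kde

      def posterior_correct(scores, target_scores, decoy_scores):
          # Target scores are a mixture of correct and incorrect matches;
          # decoys model the incorrect component (target-decoy strategy).
          f_t = gaussian_kde(target_scores)(scores)
          f_0 = gaussian_kde(decoy_scores)(scores)
          pi0 = len(decoy_scores) / len(target_scores)   # crude incorrect fraction
          return np.clip(1.0 - pi0 * f_0 / f_t, 0.0, 1.0)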

  16. Randomization, matching, and propensity scores in the design and analysis of experimental studies with measured baseline covariates.

    PubMed

    Loux, Travis M

    2015-02-20

    In many experimental situations, researchers have information on a number of covariates prior to randomization. This information can be used to balance treatment assignment with respect to these covariates as well as in the analysis of the outcome data. In this paper, we investigate the use of propensity scores in both of these roles. We also introduce a randomization procedure in which the balance of all measured covariates is approximately indexed by the variance of the empirical propensity scores and randomization is restricted to those permutations with the least variable propensity scores. This procedure is compared with recently proposed methods in terms of resulting covariate balance and estimation efficiency. Properties of the estimators resulting from each procedure are compared with estimates which incorporate the propensity score in the analysis stage. Simulation results show that analytical adjustment for the propensity score yields results on par with those obtained through restricted randomization procedures and can be used in conjunction with such procedures to further improve inferential efficiency.
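
    A sketch of the restricted randomization step (illustrative constants; assumes an even sample size with 1:1 allocation): candidate assignments are scored by the variance of their empirical propensity scores, and randomization is confined to the least variable, i.e., best balanced, candidates.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def restricted_randomization(X, n_candidates=1000, keep=0.1, seed=0):
          # X: (n, p) baseline covariates, n even for 1:1 allocation.
          rng = np.random.default_rng(seed)
          base = np.array([0, 1] * (len(X) // 2))
          scored = []
          for _ in range(n_candidates):
              z = rng.permutation(base)
              ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
              scored.append((ps.var(), z))
          scored.sort(key=lambda pair: pair[0])
          accepted = [z for _, z in scored[:int(keep * n_candidates)]]
          return accepted[rng.integers(len(accepted))]   # draw one balanced assignment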

  17. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer problem with measurement error only.

  18. Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices

    PubMed Central

    Cai, Tony; Ma, Zongming; Wu, Yihong

    2014-01-01

    This paper considers a sparse spiked covariance matrix model in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace as well as the minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those for estimating other structured covariance matrices such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves the gap in a recent paper by Berthet and Rigollet [2] where the special case of rank one is considered. PMID:26257453

  19. Covariance Matrix Evaluations for Independent Mass Fission Yields

    SciTech Connect

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information, to allow improved uncertainty estimation for the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix are presented and discussed on physical grounds for the {sup 235}U(n{sub th}, f) and {sup 239}Pu(n{sub th}, f) reactions.

  20. Progress of Covariance Evaluation at the China Nuclear Data Center

    SciTech Connect

    Xu, R.; Zhang, Q.; Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B.; Tang, G.

    2015-01-15

    Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known least-squares approach, a method based on the analysis of the sources of experimental uncertainties is specifically introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and the example of n+{sup 90}Zr is given to illustrate the whole procedure. It is shown that the accuracy of the measurements can be properly incorporated into the covariance and that the long-standing small-uncertainty problem can be avoided.
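
    The uncertainty-source idea can be sketched in a few lines (a toy construction, not the CNDC evaluation code): uncorrelated statistical errors fill the diagonal, while a fully correlated systematic component, such as a common normalization, adds a rank-one term.

    ```python
    # Toy covariance built from experimental uncertainty sources; all numbers
    # are made up for illustration.
    import numpy as np

    sigma = np.array([310.0, 295.0, 305.0])  # measured cross sections (mb)
    stat = np.array([6.0, 5.0, 7.0])         # uncorrelated statistical errors
    norm = 0.02 * sigma                      # 2% fully correlated normalization

    V = np.diag(stat**2) + np.outer(norm, norm)            # total covariance
    corr = V / np.sqrt(np.outer(np.diag(V), np.diag(V)))   # correlation matrix
    ```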

  1. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Harman, R. R.

    1990-01-01

    This paper presents a covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft. The linearized dynamics and measurement equations of the error states are used in formulating the 'truth model' describing the real behavior of the systems involved. The 'design model' used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A 'true covariance analysis' has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  2. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  3. The effect of mood on detection of covariation.

    PubMed

    Braverman, Julia

    2005-11-01

    The purpose of this research is to explore the effect of mood on the detection of covariation. Predictions were based on the assumption that sad moods facilitate a data-driven information elaboration style and careful data scrutiny, whereas happy moods predispose individuals toward top-down information processing and decrease the attention given to cognitive tasks. The primary dependent variable is the detection of covariation between facial features and personal information and the use of this information for evaluating new target faces. The findings support the view that sad mood facilitates both conscious and unconscious detection of covariation because it increases motivation to engage in the task. Limiting available cognitive resources does not eliminate the effect of mood on the detection of covariation.

  4. Covariance Matrix Evaluations for Independent Mass Fission Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.; Sumini, M.

    2015-01-01

    Recent needs for more accurate fission product yields include covariance information, to allow improved uncertainty estimation for the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix are presented and discussed on physical grounds for the 235U(nth, f) and 239Pu(nth, f) reactions.

  5. Nonlinear effects in the correlation of tracks and covariance propagation

    NASA Astrophysics Data System (ADS)

    Sabol, C.; Hill, K.; Alfriend, K.; Sukut, T.

    2013-03-01

    Although there are methods for nonlinear propagation of the covariance, covariance propagation in current operational programs is based on the state transition matrix of the first variational equations and is therefore linear. If the measurement errors are zero-mean Gaussian, the orbit errors, statistically represented by the covariance, are Gaussian. When the orbit errors become too large they are no longer Gaussian and are no longer well represented by the covariance. One use of the covariance is the association of uncorrelated tracks (UCTs). A UCT is an object tracked by a space surveillance system that does not correlate to another object in the space object data base. For an object to be entered into the data base, three or more tracks must be correlated. Associating UCTs is a major challenge for a space surveillance system, since every object entered into the space object catalog begins as a UCT. It has been proved that if the orbit errors are Gaussian, the error ellipsoid represented by the covariance is the optimum association volume. When the time between tracks becomes large, hours or even days, the orbit errors can become large and are no longer Gaussian, and this has a negative effect on the association of UCTs. This paper further investigates the nonlinear effects on the accuracy of the covariance for use in correlation. The use of the best coordinate system and of the unscented Kalman filter (UKF) for providing a more accurate covariance is investigated, along with an assessment of how these approaches improve the ability to correlate tracks that are more widely separated in time.
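
    For reference, the linear scheme that the paper contrasts with nonlinear methods is a similarity transform of the covariance by the state transition matrix (STM); a toy sketch for a one-dimensional constant-velocity state (illustrative dynamics, not an orbit model):

    ```python
    # Linear covariance propagation P(t) = Phi P0 Phi^T via the STM of the
    # linearized dynamics, for a [position, velocity] state.
    import numpy as np

    def propagate(P0, dt):
        Phi = np.array([[1.0, dt],
                        [0.0, 1.0]])   # STM of constant-velocity dynamics
        return Phi @ P0 @ Phi.T

    P0 = np.diag([1.0, 0.01])          # initial position/velocity variances
    P6h = propagate(P0, dt=6 * 3600.0) # uncertainty grows over long track gaps,
                                       # eventually leaving the Gaussian regime
    ```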

  6. Bayesian latent structure models with space-time-dependent covariates.

    PubMed

    Cai, Bo; Lawson, Andrew B; Hossain, Md Monir; Choi, Jungsoon

    2012-04-01

    Spatio-temporal data require flexible regression models that can capture the dependence of responses on space- and time-dependent covariates. In this paper, we describe a semiparametric space-time model from a Bayesian perspective. Nonlinear time dependence of covariates and the interactions among the covariates are constructed by local linear and piecewise linear models, allowing for more flexible orientation and position of the covariate plane through time-varying basis functions. Space-varying covariate linkage coefficients are also incorporated to allow the space structure to vary across geographical locations. The formulation accommodates uncertainty in the number and locations of the piecewise basis functions used to characterize the global effects, as well as spatially structured and unstructured random effects in relation to covariates. The proposed approach relies on variable selection-type mixture priors for the uncertainty in the number and locations of basis functions and in the space-varying linkage coefficients. A simulation example is presented to evaluate the performance of the proposed approach against competing models. A real data example is used for illustration.

  7. Gaussian covariance matrices for anisotropic galaxy clustering measurements

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio

    2016-04-01

    Measurements of redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations, validated with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry, in the case of a Gaussian approximation of the clustering likelihood. As the main result of this paper, we give explicit formulae for the Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.

  8. Summary of the Workshop on Neutron Cross Section Covariances

    SciTech Connect

    Smith, Donald L.

    2008-12-15

    A Workshop on Neutron Cross Section Covariances was held from June 24-27, 2008, in Port Jefferson, New York. This Workshop was organized by the National Nuclear Data Center, Brookhaven National Laboratory, to provide a forum for reporting on the status of the growing field of neutron cross section covariances for applications and for discussing future directions of the work in this field. The Workshop focused on the following four major topical areas: covariance methodology, recent covariance evaluations, covariance applications, and user perspectives. Attention was given to the entire spectrum of neutron cross section covariance concerns ranging from light nuclei to the actinides, and from the thermal energy region to 20 MeV. The papers presented at this conference explored topics ranging from fundamental nuclear physics concerns to very specific applications in advanced reactor design and nuclear criticality safety. This paper provides a summary of this workshop. Brief comments on the highlights of each Workshop contribution are provided. In addition, a perspective on the achievements and shortcomings of the Workshop as well as on the future direction of research in this field is offered.

  9. A three domain covariance framework for EEG/MEG data.

    PubMed

    Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C

    2015-10-01

    In this paper we introduce a covariance framework for the analysis of single-subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components corresponding to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, as in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and to investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets.
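
    The structure of the model is easy to sketch with arbitrary small example matrices: the full covariance is the Kronecker product of the trial, time and space components, so only the small factors need to be estimated.

    ```python
    # Three-domain covariance as a Kronecker product (illustrative sizes).
    import numpy as np

    S_space = np.array([[1.0, 0.3], [0.3, 1.0]])   # spatial component
    S_time = np.array([[1.0, 0.5], [0.5, 1.0]])    # temporal component
    S_trial = np.eye(3)                            # epochs/trials component

    Sigma = np.kron(S_trial, np.kron(S_time, S_space))
    # Sigma is 12 x 12, but only the three small factors are estimated,
    # which is far cheaper than an unstructured covariance.
    ```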

  10. Covariance fitting of highly-correlated data in lattice QCD

    NASA Astrophysics Data System (ADS)

    Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong

    2013-07-01

    We address a frequently asked question on the covariance fitting of highly correlated data, such as our B_K data based on SU(2) staggered chiral perturbation theory. The essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of available prescriptions, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting but the rest work well. In our case of the B_K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense. However, the meaning of χ² is easier to interpret for the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool for checking the systematic error caused by the covariance fitting procedure.
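
    A minimal sketch of the cut-off method mentioned above (our illustration; the paper's prescriptions differ in detail): compute χ² in the eigenbasis of the covariance and drop eigenmodes whose eigenvalues fall below a relative cutoff.

    ```python
    # chi^2 = sum_k (v_k . r)^2 / lambda_k over the retained eigenmodes of the
    # covariance, where r is the residual vector (data minus fit).
    import numpy as np

    def chi2_cutoff(residual, cov, eps=1e-10):
        lam, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        keep = lam > eps * lam.max()           # drop the smallest eigenmodes
        proj = vecs[:, keep].T @ residual      # residual in the kept eigenbasis
        return np.sum(proj**2 / lam[keep])
    ```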

  11. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
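
    A minimal numpy sketch of the POET recipe (simplified; the paper's adaptive threshold choice is omitted): remove the top-K principal components from the sample covariance, soft-threshold the residual off-diagonal entries, and recombine.

    ```python
    # POET-style estimator: low-rank factor part plus thresholded residual.
    import numpy as np

    def poet(S, K, tau):
        """S: sample covariance, K: number of factors, tau: threshold."""
        lam, vecs = np.linalg.eigh(S)
        lam, vecs = lam[::-1], vecs[:, ::-1]   # descending order
        low_rank = (vecs[:, :K] * lam[:K]) @ vecs[:, :K].T
        R = S - low_rank                       # principal orthogonal complement
        thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)  # soft threshold
        np.fill_diagonal(thr, np.diag(R))      # keep the diagonal intact
        return low_rank + thr
    ```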

  12. The Performance Analysis Based on SAR Sample Covariance Matrix

    PubMed Central

    Erten, Esra

    2012-01-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas with regard to the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix for multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976

  13. Precision adjustable stage

    DOEpatents

    Cutburth, Ronald W.; Silva, Leonard L.

    1988-01-01

    An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.

  14. Adjustable vane windmills

    SciTech Connect

    Ducker, W.L.

    1982-09-14

    A system of rotatably and pivotally mounted, radially extended bent supports for radially extending windmill rotor vanes, in combination with axially movable, radially extended control struts connected to the vanes and with semi-automatic and automatic torque and other sensing and servo units, provides automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed, or electrical quantities dependent thereon.

  15. Adjustable vane windmills

    SciTech Connect

    Ducker, W.L.

    1980-01-15

    A system of rotatably and pivotally mounted, radially extended bent supports for radially extending windmill rotor vanes, in combination with axially movable, radially extended control struts connected to the vanes and with semi-automatic and automatic torque and other sensing and servo units, provides automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed, or electrical quantities dependent thereon.

  16. Adjustable vane windmills

    SciTech Connect

    Ducker, W.L.

    1982-09-07

    A system of rotatably and pivotally mounted, radially extended bent supports for radially extending windmill rotor vanes, in combination with axially movable, radially extended control struts connected to the vanes and with semi-automatic and automatic torque and other sensing and servo units, provides automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed, or electrical quantities dependent thereon.

  17. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schrenkenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low-level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  18. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    NASA Astrophysics Data System (ADS)

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
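
    The quantile-mapping core of the approach can be sketched as follows (our simplification: the Gaussian-weighted spatial interpolation between boxes is omitted and all names are illustrative): map each new satellite value through the empirical satellite CDF, then invert through the empirical gauge CDF.

    ```python
    # Nonparametric quantile mapping of satellite estimates onto the gauge
    # distribution, per calibration region.
    import numpy as np

    def quantile_map(sat_new, sat_hist, gauge_hist):
        sat_sorted = np.sort(sat_hist)
        gauge_sorted = np.sort(gauge_hist)
        # empirical non-exceedance probabilities of the new satellite values
        probs = np.searchsorted(sat_sorted, sat_new) / sat_sorted.size
        # invert the empirical gauge CDF at those probabilities
        return np.quantile(gauge_sorted, np.clip(probs, 0.0, 1.0))
    ```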

  19. Co-variables in first trimester maternal serum screening.

    PubMed

    de Graaf, I M; Cuckle, H S; Pajkrt, E; Leschot, N J; Bleker, O P; van Lith, J M

    2000-03-01

    The objective of this study was to determine the influence of maternal weight, maternal smoking habits, gravidity, parity and fetal gender on the levels of the maternal serum markers used in first trimester screening for Down syndrome. A total of 2449 singleton unaffected pregnancies from two centres were studied. Maternal serum free beta-human chorionic gonadotrophin (hCG) and alpha-fetoprotein (AFP) concentrations had been measured in all pregnancies, and pregnancy associated plasma protein (PAPP)-A levels had been measured in 924. All results were expressed as multiples of the gestation specific median (MoM) values after regression, using each centre's own medians. Information on maternal weight was available in 2259 pregnancies, on self-reported current cigarette smoking in 1364 (of whom 117 (8.6%) were smokers), on gravidity in 1371, parity in 1303 and fetal gender in 253. All three markers showed a statistically significant negative association with maternal weight (p<0.0005) and in the subsequent analyses MoM values were weight adjusted using standard methods. The median PAPP-A level in smokers was 0.81 MoM, a significant reduction (p<0.005); free beta-hCG was also reduced (median 0.89 MoM) but not significantly (p=0.17), and AFP was unaltered. The median AFP level in primigravidas was highly significantly greater than that in gravid women (p<0.0005). In PAPP-A the reverse effect was seen but it did not reach statistical significance (p=0.15) and there was no effect for free beta-hCG. Results of a similar magnitude and direction were found for parity. The median level of free beta-hCG was higher (p=0.0005), and the median AFP lower, in female pregnancies. Maternal weight and, for PAPP-A, maternal smoking are important first trimester screening co-variables. Gravidity, parity and fetal gender also seem to influence one or more first trimester markers. PMID:10719318

  1. A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates

    PubMed Central

    Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne

    2013-01-01

    The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implemented in an R package, SpatioTemporal, available on CRAN. The model is used by the EPA-funded Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) to produce estimates of ambient air pollution; MESA Air uses the estimates to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. In this paper we use the model to predict long-term average concentrations of NOx in the Los Angeles area over a ten-year period. Predictions are based on measurements from the EPA Air Quality System, MESA Air specific monitoring, and output from a source dispersion model for traffic-related air pollution (Caline3QHCR). Accuracy in predicting long-term average concentrations is evaluated using an elaborate cross-validation setup that accounts for a sparse spatio-temporal sampling pattern in the data and adjusts for temporal effects. The predictive ability of the model is good, with a cross-validated R² of approximately 0.7 at subject sites. Replacing four geographic covariate indicators of traffic density with the Caline3QHCR dispersion model output resulted in very similar prediction accuracy from a more parsimonious and more interpretable model. Adding traffic-related geographic covariates to the model that included Caline3QHCR did not further improve the prediction accuracy. PMID:25264424

  2. Differences in structural covariance brain networks between behavioral variant frontotemporal dementia and Alzheimer's disease.

    PubMed

    Hafkemeijer, Anne; Möller, Christiane; Dopper, Elise G P; Jiskoot, Lize C; van den Berg-Huysmans, Annette A; van Swieten, John C; van der Flier, Wiesje M; Vrenken, Hugo; Pijnenburg, Yolande A L; Barkhof, Frederik; Scheltens, Philip; van der Grond, Jeroen; Rombouts, Serge A R B

    2016-03-01

    Disease-specific patterns of gray matter atrophy in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) overlap with distinct structural covariance networks (SCNs) in cognitively healthy controls. This suggests that both types of dementia target specific structural networks. Here, we study SCNs in AD and bvFTD. We used structural magnetic resonance imaging data of 31 AD patients, 24 bvFTD patients, and 30 controls from two centers specialized in dementia. Ten SCNs were defined based on structural covariance of gray matter density using independent component analysis. We studied group differences in SCNs using F-tests, with Bonferroni corrected t-tests, adjusted for age, gender, and study center. Associations with cognitive performance were studied using linear regression analyses. Cross-sectional group differences were found in three SCNs (all P < 0.0025). In bvFTD, we observed decreased anterior cingulate network integrity compared with AD and controls. Patients with AD showed decreased precuneal network integrity compared with bvFTD and controls, and decreased hippocampal network and anterior cingulate network integrity compared with controls. In AD, we found an association between precuneal network integrity and global cognitive performance (P = 0.0043). Our findings show that AD and bvFTD target different SCNs. The comparison of both types of dementia showed decreased precuneal (i.e., default mode) network integrity in AD and decreased anterior cingulate (i.e., salience) network integrity in bvFTD. This confirms the hypothesis that AD and bvFTD have distinct anatomical networks of degeneration and shows that structural covariance gives valuable insights in the understanding of network pathology in dementia.

  3. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
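
    One plausible reading of this construction, as a hedged numpy sketch (the paper's exact derivation may differ): solve the batch weighted least squares problem, form the average weighted residual variance, and use it to scale the traditional covariance, so that the actual residuals, which carry all error sources, enter the uncertainty.

    ```python
    # Batch WLS with a residual-scaled, empirical covariance (sketch only).
    import numpy as np

    def batch_wls(H, y, W):
        N = H.T @ W @ H                      # normal matrix
        x = np.linalg.solve(N, H.T @ W @ y)  # state estimate
        r = y - H @ x                        # residuals carry ALL error sources
        m, n = H.shape
        s2 = (r @ W @ r) / (m - n)           # average weighted residual variance
        P_theory = np.linalg.inv(N)          # traditional covariance
        P_emp = s2 * P_theory                # empirical, residual-scaled version
        return x, P_theory, P_emp
    ```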

  4. Covariance matching based adaptive unscented Kalman filter for direct filtering in INS/GNSS integration

    NASA Astrophysics Data System (ADS)

    Meng, Yang; Gao, Shesheng; Zhong, Yongmin; Hu, Gaoge; Subic, Aleksandar

    2016-03-01

    The use of the direct filtering approach for INS/GNSS integrated navigation introduces nonlinearity into the system state equation. As the unscented Kalman filter (UKF) is a promising method for nonlinear problems, an obvious solution is to incorporate the UKF concept in the direct filtering approach to address the nonlinearity involved in INS/GNSS integrated navigation. However, the performance of the standard UKF is dependent on the accurate statistical characterizations of system noise. If the noise distributions of inertial instruments and GNSS receivers are not appropriately described, the standard UKF will produce deteriorated or even divergent navigation solutions. This paper presents an adaptive UKF with noise statistic estimator to overcome the limitation of the standard UKF. According to the covariance matching technique, the innovation and residual sequences are used to determine the covariance matrices of the process and measurement noises. The proposed algorithm can estimate and adjust the system noise statistics online, and thus enhance the adaptive capability of the standard UKF. Simulation and experimental results demonstrate that the performance of the proposed algorithm is significantly superior to that of the standard UKF and adaptive-robust UKF under the condition without accurate knowledge on system noise, leading to improved navigation precision.
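
    The covariance-matching step can be sketched independently of the UKF machinery (illustrative only): estimate the measurement-noise covariance from a window of innovations via R_hat = C_v - H P⁻ Hᵀ, where C_v is the sample innovation covariance.

    ```python
    # Innovation-based measurement-noise estimate for covariance matching.
    import numpy as np

    def estimate_R(innovations, H, P_minus):
        """innovations: (N, m) array of v_k = z_k - H x_k^- over a window."""
        V = np.atleast_2d(innovations)
        C_v = V.T @ V / V.shape[0]       # sample innovation covariance
        return C_v - H @ P_minus @ H.T   # matched measurement-noise covariance
    ```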

  5. Improving efficiency of inferences in randomized clinical trials using auxiliary covariates.

    PubMed

    Zhang, Min; Tsiatis, Anastasios A; Davidian, Marie

    2008-09-01

    The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds ratios or log odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods. PMID:18190618
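
    A simple special case of this idea (a sketch, not the paper's general semiparametric estimators): with a continuous response, adding baseline covariates to the outcome model typically shrinks the standard error of the treatment contrast, while randomization protects the estimate from confounding bias.

    ```python
    # Covariate adjustment in a simulated two-arm trial: compare the standard
    # error of the treatment effect with and without the auxiliary covariate.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    A = rng.integers(0, 2, n)                    # randomized treatment
    X = rng.normal(size=n)                       # auxiliary baseline covariate
    Y = 1.0 * A + 2.0 * X + rng.normal(size=n)   # continuous response

    unadj = sm.OLS(Y, sm.add_constant(A.astype(float))).fit()
    adj = sm.OLS(Y, sm.add_constant(np.column_stack([A, X]))).fit()
    print(unadj.bse[1], adj.bse[1])              # the adjusted SE is smaller
    ```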

  6. Newton law in covariant unimodular F(R) gravity

    NASA Astrophysics Data System (ADS)

    Nojiri, S.; Odintsov, S. D.; Oikonomou, V. K.

    2016-09-01

    We investigate the Newton law in unimodular F(R) gravity. In standard F(R) gravity, due to the extra scalar mode, large corrections to the Newton law often appear, and such models are excluded by experiments and/or observations. In unimodular F(R) gravity, however, the extra scalar mode becomes non-dynamical due to the unimodular constraint, and there is no correction to the Newton law. Even in unimodular Einstein gravity, the Newton law is reproduced, but the mechanism is slightly different from that in unimodular F(R) gravity. We also investigate unimodular F(R) gravity in the covariant formulation, in which we include a three-form field. We show that the three-form field does not introduce any unwanted properties, such as ghosts or corrections to the Newton law. In the covariant formulation, however, the above extra scalar mode becomes dynamical and could give a correction to the Newton law. We also show that there is no difference in the Friedmann-Robertson-Walker (FRW) dynamics between the non-covariant and covariant formulations.

  7. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515
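
    To illustrate selection under varying effects (using scikit-learn's plain L1-penalized QuantileRegressor as a stand-in for the paper's uniform adaptive LASSO), one can fit on a grid of quantile levels and retain covariates selected at any level, which keeps variables with only partial effects.

    ```python
    # Union-over-quantiles variable selection with l1-penalized quantile
    # regression; data are simulated so covariate 1 acts mainly on upper
    # quantiles (a "partial effect").
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(2)
    n, p = 300, 6
    X = rng.normal(size=(n, p))
    Y = X[:, 0] + X[:, 1] * np.maximum(rng.normal(size=n), 0) \
        + rng.normal(size=n)

    selected = set()
    for tau in (0.25, 0.5, 0.75):
        fit = QuantileRegressor(quantile=tau, alpha=0.05).fit(X, Y)
        selected |= {j for j, b in enumerate(fit.coef_) if abs(b) > 1e-8}
    print(sorted(selected))   # the union keeps covariates with partial effects
    ```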

  8. Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection.

    PubMed

    Xu, M; Paul, M R

    2016-06-01

    We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20 ≲ D_λ ≲ 50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. Our results yield that the entire spectrum of covariant Lyapunov vectors that we have computed are tangled as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization. PMID:27415256

  9. Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Xu, M.; Paul, M. R.

    2016-06-01

    We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20 ≲ D_λ ≲ 50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. Our results yield that the entire spectrum of covariant Lyapunov vectors that we have computed are tangled as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization.

  10. Subsea adjustable choke valves

    SciTech Connect

    Cyvas, M.K. )

    1989-08-01

    With emphasis on deepwater wells and marginal offshore fields growing, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remote-operated-vehicle (ROV) interfaces. These five facets are overviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.

  11. SimSeq: a nonparametric approach to simulation of RNA-sequence datasets

    PubMed Central

    Benidt, Sam; Nettleton, Dan

    2015-01-01

    Motivation: RNA sequencing analysis methods are often derived by relying on hypothetical parametric models for read counts that are not likely to be precisely satisfied in practice. Methods are often tested by analyzing data that have been simulated according to the assumed model. This testing strategy can result in an overly optimistic view of the performance of an RNA-seq analysis method. Results: We develop a data-based simulation algorithm for RNA-seq data. The vector of read counts simulated for a given experimental unit has a joint distribution that closely matches the distribution of a source RNA-seq dataset provided by the user. We conduct simulation experiments based on the negative binomial distribution and our proposed nonparametric simulation algorithm. We compare performance between the two simulation experiments over a small subset of statistical methods for RNA-seq analysis available in the literature. We use as a benchmark the ability of a method to control the false discovery rate. Not surprisingly, methods based on parametric modeling assumptions seem to perform better with respect to false discovery rate control when data are simulated from parametric models rather than using our more realistic nonparametric simulation strategy. Availability and implementation: The nonparametric simulation algorithm developed in this article is implemented in the R package SimSeq, which is freely available under the GNU General Public License (version 2 or later) from the Comprehensive R Archive Network (http://cran.r-project.org/). Contact: sgbenidt@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25725090

  12. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    NASA Astrophysics Data System (ADS)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting, and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting, based on a bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and on Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied, and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) information criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
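
    A bare-bones functional Nadaraya-Watson forecaster in this spirit (our sketch: a fixed bandwidth and plain L2 proximity instead of the semi-metric families used in the study):

    ```python
    # Forecast tomorrow's daily curve as a kernel-weighted average of the
    # curves that historically followed days similar to today.
    import numpy as np

    def npfda_forecast(days, today, h):
        """days: (n_days, 24) hourly curves; today: (24,) curve; h: bandwidth."""
        past, nxt = days[:-1], days[1:]                 # predictor/response pairs
        d = np.sqrt(((past - today) ** 2).sum(axis=1))  # L2 distances to today
        u = d / h
        w = np.where(u < 1.0, 1.0 - u**2, 0.0)          # asymmetrical quadratic kernel
        if w.sum() == 0.0:                              # bandwidth chosen too small
            w = np.ones_like(w)
        return (w[:, None] * nxt).sum(axis=0) / w.sum()
    ```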

  13. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment

    PubMed Central

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration

  14. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment.

    PubMed

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H; Maurits, Natasha M

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
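
    A toy non-parametric detector in this spirit (illustrative, not one of the nine methods evaluated; the band and threshold are assumptions): flag a segment as tremor when the Welch power spectrum concentrates enough energy in a tremor band.

    ```python
    # Welch-PSD band-power detector for accelerometer segments.
    import numpy as np
    from scipy.signal import welch

    def is_tremor(segment, fs, band=(3.0, 12.0), ratio=0.5):
        f, pxx = welch(segment, fs=fs, nperseg=min(256, len(segment)))
        in_band = (f >= band[0]) & (f <= band[1])
        return pxx[in_band].sum() / pxx.sum() > ratio  # tremor-band dominance
    ```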

  15. Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data

    NASA Astrophysics Data System (ADS)

    Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.

    2011-09-01

    Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economic alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to the high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate the drift date back to the early '90s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition, without the need for periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested with fair success. This paper presents and discusses an original methodology based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
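
    A minimal usage sketch, assuming the Python cma package and a placeholder objective (in practice the objective would score a drift-corrected PaRC model on recently acquired samples):

    ```python
    # CMA-ES adaptation of a drift-correction parameter vector.
    import numpy as np
    import cma

    def drift_objective(theta):
        # placeholder: apply the correction parameterized by theta to recent
        # sensor data and return the PaRC model's error rate
        return float(np.sum((theta - 0.1) ** 2))

    es = cma.CMAEvolutionStrategy(8 * [0.0], 0.5)  # x0, initial step size
    es.optimize(drift_objective)
    theta_best = es.result.xbest                   # adapted correction parameters
    ```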

  16. Non-parametric trend analysis of water quality data of rivers in Kansas

    USGS Publications Warehouse

    Yu, Y.-S.; Zou, S.; Whittemore, D.

    1993-01-01

    Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that specific conductance and the concentrations of total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basin-wide trends are non-homogeneous. © 1993.
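
    One standard non-parametric trend test of this kind is Mann-Kendall, equivalent to testing Kendall's tau between time and concentration (shown below on made-up data; the record does not list which four methods were used):

    ```python
    # Kendall's tau as a trend test: tau < 0 with a small p-value indicates a
    # downward trend.
    import numpy as np
    from scipy.stats import kendalltau

    years = np.arange(1970, 1990)
    chloride = 50.0 - 0.8 * (years - 1970) \
        + np.random.default_rng(3).normal(0.0, 3.0, years.size)

    tau, p = kendalltau(years, chloride)
    print(f"tau={tau:.2f}, p={p:.4f}")
    ```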

  17. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method.

    PubMed

    López Fontán, J L; Costa, J; Ruso, J M; Prieto, G; Sarmiento, F

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and by other methods were found.
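
    A minimal degree-one local polynomial (local linear) smoother in the LPRM spirit (our sketch; the data, bandwidth and kernel are illustrative): at each evaluation point, fit a kernel-weighted straight line, then locate the cmc where the local slope changes most abruptly.

    ```python
    # Local linear regression with Gaussian kernel weights; the cmc is taken
    # near the steepest change of the local slope.
    import numpy as np

    def local_linear(x, y, x0, h):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
        sw = np.sqrt(w)
        Xd = np.column_stack([np.ones_like(x), x - x0])
        beta, *_ = np.linalg.lstsq(Xd * sw[:, None], y * sw, rcond=None)
        return beta[0], beta[1]                  # local fitted value and slope

    x = np.linspace(0.0, 10.0, 120)              # concentration (arbitrary units)
    y = np.where(x < 4.0, 70.0 - 2.0 * x, 62.0 - 0.2 * (x - 4.0))  # break at 4
    y = y + np.random.default_rng(5).normal(0.0, 0.3, x.size)

    grid = np.linspace(0.5, 9.5, 200)
    slopes = np.array([local_linear(x, y, g, h=0.4)[1] for g in grid])
    cmc = grid[np.argmax(np.abs(np.diff(slopes)))]   # near the true break at 4
    ```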

  18. Comparison between scaling law and nonparametric Bayesian estimate for the recurrence time of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2009-04-01

    According to the unified scaling theory, the probability distribution function of the recurrence time T is a scaled version of a base function, and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and for different scale levels, the (truncated) generalized gamma distribution is the best model for Corral (2005), while the Weibull distribution is preferred by German (2006). The scaling approach should overcome the difficulty of estimating distribution functions over small areas, but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution, we have evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then gathered them by eight tectonically coherent regions, each dominated by a well characterized geodynamic process. To address problems such as the paucity of data, the presence of outliers, and uncertainty in the choice of the functional expression for the distribution of T, we have followed a nonparametric approach (Rotondi (2009)) in which: (a) maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; and (c) the Bayesian methodology allows different information sources to be exploited, so that the model fitting may be good even for scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approaches. References: Corral A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89

  19. Nonparametric estimation and classification using radial basis function nets and empirical risk minimization.

    PubMed

    Krzyzak, A; Linder, T; Lugosi, G

    1996-01-01

    Studies convergence properties of radial basis function (RBF) networks for a large class of basis functions, and reviews the methods and results related to this topic. The authors obtain the network parameters through empirical risk minimization. The authors show the optimal nets to be consistent in the problem of nonlinear function approximation and in nonparametric classification. For the classification problem the authors consider two approaches: the selection of the RBF classifier via nonlinear function estimation and the direct method of minimizing the empirical error probability. The tools used in the analysis include distribution-free nonasymptotic probability inequalities and covering numbers for classes of functions.
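
    A minimal member of the estimator class studied: a Gaussian RBF network with fixed centers whose output weights are obtained by least squares, i.e. empirical risk minimization under squared loss. Centers and width below are arbitrary choices for illustration.

    ```python
    # RBF network fitted by empirical risk minimization (least squares).
    import numpy as np

    def rbf_design(x, centers, width):
        # Gaussian basis evaluated at every (sample, center) pair
        return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                      / (2 * width ** 2))

    rng = np.random.default_rng(3)
    x = np.sort(rng.uniform(-3, 3, 120))
    y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

    centers = np.linspace(-3, 3, 15)
    Phi = rbf_design(x, centers, width=0.5)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # empirical risk minimizer
    print(f"training MSE: {np.mean((Phi @ w - y) ** 2):.4f}")
    ```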

  20. Locally-Based Kernal PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in a reproducing kernel Hilbert space. Our concern is to apply the methodology to smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  1. Multiple imputation methods for nonparametric inference on cumulative incidence with missing cause of failure

    PubMed Central

    Lee, Minjung; Dignam, James J.; Han, Junhee

    2014-01-01

    We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
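
    A toy version of the imputation scheme under the missing-at-random assumption: with two causes and no censoring, missing causes are drawn from the complete-case proportion, a crude empirical cumulative incidence is computed per imputed dataset, and the estimates are averaged. The paper's actual method handles censoring and combines variances across imputations; everything here is simplified for illustration.

    ```python
    # Multiple imputation for a missing cause of failure (toy, uncensored).
    import numpy as np

    rng = np.random.default_rng(4)
    times = rng.exponential(5, 200)
    cause = rng.choice([1, 2], 200, p=[0.7, 0.3])
    observed = rng.random(200) > 0.2               # 20% of causes missing
    cause_obs = np.where(observed, cause, 0)       # 0 codes "missing"

    p1 = np.mean(cause_obs[observed] == 1)         # P(cause 1 | observed)
    grid = np.linspace(0, 15, 50)
    cifs = []
    for _ in range(20):                            # 20 imputations
        imputed = cause_obs.copy()
        miss = imputed == 0
        imputed[miss] = np.where(rng.random(miss.sum()) < p1, 1, 2)
        # empirical cumulative incidence of cause 1 on the grid
        cifs.append([np.mean((times <= t) & (imputed == 1)) for t in grid])

    cif_hat = np.mean(cifs, axis=0)                # combined estimate
    print(f"CIF of cause 1 at t=5: "
          f"{cif_hat[np.argmin(np.abs(grid - 5))]:.3f}")
    ```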

  2. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  3. Nonparametric bayes shrinkage for assessing exposures to mixtures subject to limits of detection.

    PubMed

    Herring, Amy H

    2010-07-01

    Assessing potential associations between exposures to complex mixtures and health outcomes may be complicated by a lack of knowledge of causal components of the mixture, highly correlated mixture components, potential synergistic effects of mixture components, and difficulties in measurement. We extend recently proposed nonparametric Bayes shrinkage priors for model selection to investigations of complex mixtures by developing a formal hierarchical modeling framework to allow different degrees of shrinkage for main effects and interactions and to handle truncation of exposures at a limit of detection. The methods are used to shed light on data from a study of endometriosis and exposure to environmental polychlorinated biphenyl congeners.

  4. Identification of physiological systems: a robust method for non-parametric impulse response estimation.

    PubMed

    Westwick, D T; Kearney, R E

    1997-03-01

    The identification of non-parametric impulse response functions (IRFs) from noisy finite-length data records is analysed using the techniques of matrix perturbation theory. Based on these findings, a method for IRF estimation is developed that is more robust than existing techniques, particularly when the input is non-white. Furthermore, methods are developed for computing confidence bounds on the resulting IRF estimates. Monte Carlo simulations are used to assess the capabilities of this new method and to demonstrate its superiority over classical techniques. An application to the identification of dynamic ankle stiffness in humans is presented. PMID:9136198
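
    For context, the sketch below shows the generic regularized least-squares IRF estimate that such work builds on: the convolution is written as a Toeplitz design matrix and solved with a small ridge term for stability when the input is non-white. This is a standard baseline, not the authors' matrix-perturbation-based method.

    ```python
    # Non-parametric IRF estimation by regularized least squares.
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(5)
    n, m = 2000, 32                                # record and IRF lengths
    u = np.convolve(rng.standard_normal(n),
                    np.ones(5) / 5, 'same')        # colored (non-white) input
    h_true = np.exp(-np.arange(m) / 6.0)           # true impulse response
    y = np.convolve(u, h_true)[:n] + 0.1 * rng.standard_normal(n)

    U = toeplitz(u, np.zeros(m))                   # convolution as U @ h
    ridge = 1e-2 * np.eye(m)
    h_hat = np.linalg.solve(U.T @ U + ridge, U.T @ y)
    err = np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true)
    print(f"relative error: {err:.3f}")
    ```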

  5. A sequential nonparametric pattern classification algorithm based on the Wald SPRT. [Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1975-01-01

    A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
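
    A sketch of an estimated SPRT in this spirit: Wald's thresholds follow from nominal error rates, and kernel density estimates stand in for the paper's order-statistics-based density estimates (an assumption for illustration).

    ```python
    # Sequential classification with an estimated SPRT.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(6)
    train0 = rng.normal(0.0, 1.0, 300)             # class 0 training data
    train1 = rng.normal(1.0, 1.0, 300)             # class 1 training data
    kde0, kde1 = gaussian_kde(train0), gaussian_kde(train1)

    a = b = 0.05                                   # nominal error rates
    upper, lower = np.log((1 - b) / a), np.log(b / (1 - a))

    llr, n = 0.0, 0
    for x in rng.normal(1.0, 1.0, 100):            # stream from class 1
        n += 1
        llr += np.log(kde1(x)[0]) - np.log(kde0(x)[0])
        if llr >= upper or llr <= lower:           # Wald stopping rule
            break
    print("decide class", 1 if llr >= upper else 0, "after", n, "samples")
    ```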

  6. Noise and speckle reduction in synthetic aperture radar imagery by nonparametric Wiener filtering.

    PubMed

    Caprari, R S; Goh, A S; Moffatt, E K

    2000-12-10

    We present a Wiener filter that is especially suitable for speckle and noise reduction in multilook synthetic aperture radar (SAR) imagery. The proposed filter is nonparametric, not being based on parametrized analytical models of signal statistics. Instead, the Wiener-Hopf equation is expressed entirely in terms of observed signal statistics, with no reference to the possibly unobservable pure signal and noise. This Wiener filter is simple in concept and implementation, exactly minimum mean-square error, and directly applicable to signal-dependent and multiplicative noise. We demonstrate the filtering of a genuine two-look SAR image and show how a nonnegatively constrained version of the filter substantially reduces ringing.

  7. Testing power-law cross-correlations: rescaled covariance test

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2013-10-01

    We introduce a new test for the detection of power-law cross-correlations between a pair of time series - the rescaled covariance test. The test is based on the power-law divergence of the covariance of the partial sums of long-range cross-correlated processes. Utilizing a heteroskedasticity- and autocorrelation-robust estimator of the long-term covariance, we develop a test with desirable statistical properties that is well able to distinguish between short- and long-range cross-correlations. Such a test should be used as a starting point in the analysis of long-range cross-correlations, prior to the estimation of bivariate long-term memory parameters. As an application, we show that the relationships between volatility and traded volume, and between volatility and returns in financial markets, can be characterized as power-law cross-correlated.
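
    A simplified reading of the diagnostic behind the test: estimate how the covariance of partial sums grows with window length on a log-log scale. The sketch omits the heteroskedasticity- and autocorrelation-robust covariance estimator the authors use.

    ```python
    # Scaling of partial-sum covariances with window length.
    import numpy as np

    rng = np.random.default_rng(10)
    common = rng.standard_normal(10_000)           # shared short-memory part
    x = common + rng.standard_normal(10_000)
    y = common + rng.standard_normal(10_000)

    windows = [2 ** k for k in range(4, 10)]       # 16 ... 512
    covs = []
    for w in windows:
        n = (len(x) // w) * w
        sx = x[:n].reshape(-1, w).sum(axis=1)      # partial sums over blocks
        sy = y[:n].reshape(-1, w).sum(axis=1)
        covs.append(np.cov(sx, sy)[0, 1])
    slope = np.polyfit(np.log(windows), np.log(covs), 1)[0]
    print(f"scaling exponent ~ {slope:.2f}")       # ~1 here; >1 would signal
                                                   # long-range cross-correlation
    ```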

  8. Data Covariances from R-Matrix Analyses of Light Nuclei

    SciTech Connect

    Hale, G.M.; Paris, M.W.

    2015-01-15

    After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (⁵He) and with many resonances (¹³C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
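
    The first-order propagation step is compact enough to state directly: with parameter covariance C_p from the chi-square fit and sensitivity matrix S of cross-section derivatives with respect to parameters, the cross-section covariance is S C_p Sᵀ. The numbers below are made up for illustration, not EDA output.

    ```python
    # First-order error propagation: C_sigma = S @ C_p @ S.T
    import numpy as np

    C_p = np.array([[0.04, 0.01],                  # parameter covariance
                    [0.01, 0.09]])
    S = np.array([[1.2, 0.3],                      # d(cross section)/d(param),
                  [0.8, 0.5],                      # three energies,
                  [0.1, 1.1]])                     # two parameters
    C_sigma = S @ C_p @ S.T                        # cross-section covariance
    print(np.round(C_sigma, 4))
    ```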

  9. Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme

    NASA Astrophysics Data System (ADS)

    Hickmann, K. S.; Godinez, H. C.

    2015-12-01

    When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
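
    A drastically simplified, single-scale version of innovation-based inflation tuning (the paper works per wavelet scale): the factor is chosen so that the ensemble's predicted innovation variance matches the observed innovation magnitude, E[d²] ≈ HPHᵀ + R.

    ```python
    # Scalar adaptive covariance inflation from innovation statistics.
    import numpy as np

    rng = np.random.default_rng(7)
    truth = rng.normal(0, 2.0, 50)                 # true state
    forecast_err = rng.normal(0, 1.5, 50)          # forecast error
    ens = truth + forecast_err + rng.normal(0, 0.5, (20, 50))  # under-dispersed
    obs = truth + rng.normal(0, 1.0, 50)           # observations, R = I
    r_var = 1.0

    d = obs - ens.mean(axis=0)                     # innovation vector
    hpht = ens.var(axis=0, ddof=1).mean()          # mean ensemble variance
    lam = max(1.0, (np.mean(d ** 2) - r_var) / hpht)
    ens_infl = ens.mean(axis=0) + np.sqrt(lam) * (ens - ens.mean(axis=0))
    print(f"estimated inflation factor: {lam:.1f}")
    ```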

  10. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
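
    The Monte Carlo component can be illustrated with a toy two-dimensional encounter-plane calculation: sample the relative position from the combined covariance and count samples inside the combined hard-body radius. The numbers are made up; operational assessments use full three-dimensional states and validated covariances.

    ```python
    # Toy Monte Carlo collision probability in the encounter plane.
    import numpy as np

    rng = np.random.default_rng(8)
    miss = np.array([120.0, 40.0])                 # mean miss distance (m)
    cov = np.array([[8000.0, 2000.0],              # combined covariance (m^2)
                    [2000.0, 3000.0]])
    radius = 20.0                                  # combined hard-body radius (m)

    samples = rng.multivariate_normal(miss, cov, 1_000_000)
    p_collision = np.mean(np.linalg.norm(samples, axis=1) < radius)
    print(f"P(collision) ~ {p_collision:.2e}")
    ```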

  12. Evaluation of Covariances for Actinides and Light Elements at LANL

    SciTech Connect

    Kawano, T.; Talou, P.; Young, P.G.; Hale, G.; Chadwick, M.B.; Little, R.C.

    2008-12-15

    Los Alamos evaluates covariances for the evaluated nuclear data library (ENDF), mainly for actinides above the resonance region and for light elements over the entire energy range. We also develop techniques for evaluating covariance data, such as Bayesian and least-squares fitting methods, which are important for exploring uncertainty information on different types of physical quantities, such as elastic scattering angular distributions or prompt fission neutron spectra. This paper summarizes our current covariance evaluation activities at LANL, including the actinide and light-element data mainly for criticality safety studies and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and the experimental data, is discussed.

  13. Experience in using the covariances of some ENDF/B-V dosimetry cross sections: proposed improvements and addition of cross-reaction covariances

    SciTech Connect

    Fu, C.Y.; Hetrick, D.M.

    1982-01-01

    Recent ratio data, with carefully evaluated covariances, were combined with eleven of the ENDF/B-V dosimetry cross sections using the generalized least-squares method. The purpose was to improve these evaluated cross sections and covariances, as well as to generate values for the cross-reaction covariances. The results represent improved cross sections as well as realistic and usable covariances. The latter are necessary for meaningful integral-differential comparisons and for spectrum unfolding.

  14. Adolescent Mothers' Adjustment to Parenting.

    ERIC Educational Resources Information Center

    Samuels, Valerie Jarvis; And Others

    1994-01-01

    Examined adolescent mothers' adjustment to parenting, self-esteem, social support, and perceptions of baby. Subjects (n=52) responded to questionnaires at two time periods approximately six months apart. Mothers with higher self-esteem at Time 1 had better adjustment at Time 2. Adjustment was predicted by Time 2 variables; contact with baby's…

  15. Exploration of regularized covariance estimates with analytical shrinkage intensity for producing invertible covariance matrices in high dimensional hyperspectral data

    SciTech Connect

    Walsh, Stephen J.; Tardiff, Mark F.

    2007-10-01

    Removing background from hyperspectral scenes is a common step in the process of searching for materials of interest. Some approaches to background subtraction use spectral library data and require invertible covariance matrices for each member of the library. This is challenging because the covariance matrix can be calculated, but standard methods for estimating its inverse require that the data set for each library member have many more spectral measurements than spectral channels, which is rarely the case. An alternative approach is called shrinkage estimation. This method is investigated as an approach to providing an invertible covariance matrix estimate in the case where the number of spectral measurements is less than the number of spectral channels. The approach is an analytic method for arriving at a target matrix and a shrinkage parameter that together modify the existing covariance matrix of the data to make it invertible. The theory is discussed and used to develop different estimates. The resulting estimates are computed and inspected on a set of hyperspectral data. This technique shows some promise for arriving at an invertible covariance estimate for small hyperspectral data sets.
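
    Scikit-learn's Ledoit-Wolf estimator is one standard analytic-shrinkage recipe of the kind explored here; the report's own target matrix and intensity formula may differ. The sketch shows the key payoff: an invertible estimate even when channels outnumber measurements.

    ```python
    # Analytic shrinkage gives an invertible covariance when n < p.
    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(9)
    p, n = 200, 50                                 # more channels than samples
    X = rng.standard_normal((n, p))

    sample_cov = np.cov(X, rowvar=False)
    lw = LedoitWolf().fit(X)
    print("sample covariance invertible:",
          np.linalg.matrix_rank(sample_cov) == p)  # False: rank-deficient
    print("shrunk covariance invertible:",
          np.all(np.linalg.eigvalsh(lw.covariance_) > 0))  # True
    print(f"shrinkage intensity: {lw.shrinkage_:.2f}")
    ```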

  16. Abnormalities in structural covariance of cortical gyrification in schizophrenia.

    PubMed

    Palaniyappan, Lena; Park, Bert; Balain, Vijender; Dangi, Raj; Liddle, Peter

    2015-07-01

    The highly convoluted shape of the adult human brain results from several well-coordinated maturational events that start from embryonic development and extend through the adult life span. Disturbances in these maturational events can result in various neurological and psychiatric disorders, resulting in abnormal patterns of morphological relationship among cortical structures (structural covariance). Structural covariance can be studied using graph theory-based approaches that evaluate topological properties of brain networks. Covariance-based graph metrics allow cross-sectional study of coordinated maturational relationship among brain regions. Disrupted gyrification of focal brain regions is a consistent feature of schizophrenia. However, it is unclear if these localized disturbances result from a failure of coordinated development of brain regions in schizophrenia. We studied the structural covariance of gyrification in a sample of 41 patients with schizophrenia and 40 healthy controls by constructing gyrification-based networks using a 3-dimensional index. We found that several key regions including anterior insula and dorsolateral prefrontal cortex show increased segregation in schizophrenia, alongside reduced segregation in somato-sensory and occipital regions. Patients also showed a lack of prominence of the distributed covariance (hubness) of cingulate cortex. The abnormal segregated folding pattern in the right peri-sylvian regions (insula and fronto-temporal cortex) was associated with greater severity of illness. The study of structural covariance in cortical folding supports the presence of subtle deviation in the coordinated development of cortical convolutions in schizophrenia. The heterogeneity in the severity of schizophrenia could be explained in part by aberrant trajectories of neurodevelopment.

  17. Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*

    PubMed Central

    Katsevich, E.; Katsevich, A.; Singer, A.

    2015-01-01

    In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132

  18. High-precision covariant one-boson-exchange potentials for np scattering below 350 MeV

    SciTech Connect

    Franz Gross; Alfred Stadler

    2007-12-01

    All realistic potential models for the two-nucleon interaction are to some extent based on boson exchange. However, in order to achieve an essentially perfect fit to the scattering data, characterized by a χ²/N_data ~ 1, previous potentials have abandoned a pure one-boson-exchange (OBE) mechanism. Using a covariant theory, we have found a true OBE potential that fits the 2006 world np data below 350 MeV with a χ²/N_data = 1.00 for 3612 data. Our potential has fewer adjustable parameters than previous high-precision potentials, and also reproduces the experimental triton binding energy without introducing additional irreducible three-nucleon forces.

  19. Covariance and gauge invariance in relativistic theories of gravity

    NASA Astrophysics Data System (ADS)

    Papini, Giorgio

    2014-04-01

    Any metric theory of gravity whose interaction with quantum particles is described by a covariant wave equation is equivalent to a vector theory that satisfies Maxwell-type equations identically. This result does not depend on any particular set of field equations for the metric tensor, but only on covariance. It is derived in the linear case, but can be extended to any order of approximation in the metric deviation. In this formulation of the interaction of gravity with matter, angular momentum and momentum are conserved locally.

  20. Realization of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2005-12-15

    In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant quantum cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable 'fidelity'. We present a general scheme which realizes the 1→3 phase-covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization-encoded qubits, the first ever realized with photons, is reported.