Science.gov

Sample records for nonparametric covariate adjustment

  1. Nonparametric Covariate-Adjusted Association Tests Based on the Generalized Kendall’s Tau*

    PubMed Central

    Zhu, Wensheng; Jiang, Yuan; Zhang, Heping

    2012-01-01

    Identifying the risk factors for comorbidity is important in psychiatric research. Empirically, studies have shown that testing multiple, correlated traits simultaneously is more powerful than testing a single trait at a time in association analysis. Furthermore, for complex diseases, especially mental illnesses and behavioral disorders, the traits are often recorded in different scales such as dichotomous, ordinal and quantitative. In the absence of covariates, nonparametric association tests have been developed for multiple complex traits to study comorbidity. However, genetic studies generally contain measurements of some covariates that may affect the relationship between the risk factors of major interest (such as genes) and the outcomes. While it is relatively easy to adjust these covariates in a parametric model for quantitative traits, it is challenging for multiple complex traits with possibly different scales. In this article, we propose a nonparametric test for multiple complex traits that can adjust for covariate effects. The test aims to achieve an optimal scheme of adjustment by using a maximum statistic calculated from multiple adjusted test statistics. We derive the asymptotic null distribution of the maximum test statistic, and also propose a resampling approach, both of which can be used to assess the significance of our test. Simulations are conducted to compare the type I error and power of the nonparametric adjusted test to the unadjusted test and other existing adjusted tests. The empirical results suggest that our proposed test increases the power through adjustment for covariates when there exist environmental effects, and is more robust to model misspecifications than some existing parametric adjusted tests. We further demonstrate the advantage of our test by analyzing a data set on genetics of alcoholism. PMID:22745516
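
    A minimal sketch of the adjust-then-maximize scheme described above, with significance assessed by permutation resampling. This is NOT the authors' generalized Kendall's tau; it residualizes each trait on the covariates and uses an ordinary Kendall statistic per trait, purely to illustrate the max-statistic idea. All data and parameter choices are illustrative.

```python
# Schematic max-statistic association test with covariate adjustment.
import numpy as np
from scipy.stats import kendalltau

def max_adjusted_test(g, Y, Z, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    Z1 = np.column_stack([np.ones(len(g)), Z])               # add intercept
    resid = Y - Z1 @ np.linalg.lstsq(Z1, Y, rcond=None)[0]   # adjust traits
    def stat(gv):
        return max(abs(kendalltau(gv, resid[:, j])[0])
                   for j in range(resid.shape[1]))
    t_obs = stat(g)
    n_ge = sum(stat(rng.permutation(g)) >= t_obs for _ in range(n_perm))
    return t_obs, (1 + n_ge) / (1 + n_perm)                  # permutation p-value

# toy data: one quantitative and one dichotomous trait, two covariates
rng = np.random.default_rng(1)
n = 200
Z = rng.normal(size=(n, 2))
g = rng.integers(0, 3, n)                                    # genotype 0/1/2
Y = np.column_stack([0.3 * g + Z @ [1.0, -1.0] + rng.normal(size=n),
                     (rng.random(n) < 0.2 + 0.1 * g).astype(float)])
print(max_adjusted_test(g, Y, Z))
```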

  2. Partial covariate adjusted regression

    PubMed Central

    Şentürk, Damla; Nguyen, Danh V.

    2008-01-01

    Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed (Şentürk and Müller, 2005). The available data have been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set. PMID:20126296

  3. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    PubMed

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means of carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the effect of these modeling choices. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology. PMID:23687472

  4. Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data

    PubMed Central

    Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.

    2009-01-01

    Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053

  5. A Simulation-Based Comparison of Covariate Adjustment Methods for the Analysis of Randomized Controlled Trials

    PubMed Central

    Chaussé, Pierre; Liu, Jin; Luta, George

    2016-01-01

    Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuously updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870
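
    For reference, a bare-bones sketch of the recommended ANCOVA analysis with heteroscedasticity-robust standard errors, implemented directly in numpy; the GEL-based adjustments compared in the paper are not reproduced here, and the data are simulated.

```python
# ANCOVA for a two-arm trial: the treatment effect is the coefficient of
# the arm indicator in an OLS fit with centered baseline covariates, with
# an HC0 sandwich (robust) standard error.
import numpy as np

def ancova_robust(y, treat, X):
    Xc = X - X.mean(axis=0)                       # center covariates
    D = np.column_stack([np.ones_like(y), treat, Xc])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    e = y - D @ beta                              # residuals
    bread = np.linalg.inv(D.T @ D)
    meat = D.T @ (D * (e**2)[:, None])            # HC0 "meat" matrix
    V = bread @ meat @ bread                      # sandwich covariance
    est, se = beta[1], np.sqrt(V[1, 1])
    return est, se, (est - 1.96 * se, est + 1.96 * se)

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2))
y = 0.5 * treat + X @ [1.0, 0.5] + rng.normal(size=n) * (1 + 0.5 * treat)
print(ancova_robust(y, treat, X))
```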

  6. A Review of Nonparametric Alternatives to Analysis of Covariance.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1985-01-01

    Five distribution-free alternatives to parametric analysis of covariance are presented and demonstrated: Quade's distribution-free test, Puri and Sen's solution, McSweeney and Porter's rank transformation, Burnett and Barr's rank difference scores, and Shirley's general linear model solution. The results of simulation studies regarding Type I…

  7. A Review of Nonparametric Alternatives to Analysis of Covariance.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    Five distribution-free alternatives to parametric analysis of covariance (ANCOVA) are presented and demonstrated using a specific data example. The procedures considered are those suggested by Quade (1967); Puri and Sen (1969); McSweeney and Porter (1971); Burnett and Barr (1978); and Shirley (1981). The results of simulation studies investigating…

  8. An Investigation into the Dimensionality of TOEFL Using Conditional Covariance-Based Nonparametric Approach

    ERIC Educational Resources Information Center

    Jang, Eunice Eunhee; Roussos, Louis

    2007-01-01

    This article reports two studies to illustrate methodologies for conducting a conditional covariance-based nonparametric dimensionality assessment using data from two forms of the Test of English as a Foreign Language (TOEFL). Study 1 illustrates how to assess overall dimensionality of the TOEFL including all three subtests. Study 2 is aimed at…

  9. Covariate Adjusted Correlation Analysis with Application to FMR1 Premutation Female Carrier Data

    PubMed Central

    Şentürk, Damla; Nguyen, Danh V.; Tassone, Flora; Hagerman, Randi J.; Carroll, Raymond J.; Hagerman, Paul J.

    2009-01-01

    Motivated by molecular data on female premutation carriers of the fragile X mental retardation 1 (FMR1) gene, we present a new method of covariate adjusted correlation analysis to examine the association between messenger RNA (mRNA) levels and CGG repeat expansion in the FMR1 gene. The association between the molecular variables in female carriers needs to adjust for activation ratio (ActRatio), a measure which accounts for the protective effects of one normal X chromosome in female carriers. However, there are inherent uncertainties in the exact effects of ActRatio on the molecular measures of interest. To account for these uncertainties, we develop a flexible adjustment that accommodates both additive and multiplicative effects of ActRatio nonparametrically. The proposed adjusted correlation uses local conditional correlations, which are local method of moments estimators, to estimate the Pearson correlation between two variables adjusted for a third observable covariate. The local method of moments estimators are averaged to arrive at the final covariate adjusted correlation estimator, which is shown to be consistent. We also develop a test to check the nonparametric joint additive and multiplicative adjustment form. Simulation studies illustrate the efficacy of the proposed method. Application to FMR1 premutation data on 165 female carriers indicates that the association between mRNA and CGG repeat expansion is stronger after adjusting for ActRatio. Finally, the results provide independent support for a specific jointly additive and multiplicative adjustment form for ActRatio previously proposed in the literature. PMID:19173699
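
    A rough sketch of the local-averaging idea: estimate the correlation within slices of the observable covariate, then average the local estimates weighted by slice size. Fixed quantile bins stand in for the paper's local method-of-moments smoothing, and the distortion functions below are invented for illustration.

```python
# Covariate-adjusted correlation via binned local correlations.
import numpy as np

def covariate_adjusted_corr(x, y, u, n_bins=10):
    edges = np.quantile(u, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, u, side="right") - 1, 0, n_bins - 1)
    corrs, weights = [], []
    for b in range(n_bins):
        m = idx == b
        if m.sum() >= 3:                          # need a few points per bin
            corrs.append(np.corrcoef(x[m], y[m])[0, 1])
            weights.append(m.sum())
    return np.average(corrs, weights=weights)

# toy distortion model: observed = phi1(U) * latent + phi2(U)
rng = np.random.default_rng(2)
n = 2000
u = rng.uniform(0, 1, n)
lat = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], n)  # true corr 0.6
x = (1 + u) * lat[:, 0] + u
y = (2 - u) * lat[:, 1] - u**2
print(np.corrcoef(x, y)[0, 1], covariate_adjusted_corr(x, y, u))
```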

  10. The covariate-adjusted frequency plot.

    PubMed

    Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K

    2016-04-01

    Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included in the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter. PMID:23376964
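
    A hedged sketch of the marginal method for one concrete case, a Poisson regression: for each count value y, the fitted probabilities P(Y = y | x_i) are summed over all observations, giving a covariate-adjusted expected frequency directly comparable to the observed frequency of y. The IRLS fit and toy data are illustrative.

```python
# Covariate-adjusted frequency computation for a Poisson count model.
import numpy as np
from scipy.stats import poisson

def adjusted_frequencies(y, X):
    D = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(D.shape[1])
    for _ in range(25):                           # IRLS for log-link Poisson
        mu = np.exp(D @ beta)
        beta += np.linalg.solve(D.T @ (D * mu[:, None]), D.T @ (y - mu))
    mu = np.exp(D @ beta)
    counts = np.arange(y.max() + 1)
    # expected frequency of each count, summed over covariate patterns
    expected = poisson.pmf(counts[:, None], mu[None, :]).sum(axis=1)
    observed = np.bincount(y, minlength=len(counts))
    return counts, observed, expected

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 1))
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 0]))
for c, o, e in zip(*adjusted_frequencies(y, X)):
    print(f"count {c}: observed {o}, adjusted expected {e:.1f}")
```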

  11. On variance estimate for covariate adjustment by propensity score analysis.

    PubMed

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also propose a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated with a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
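
    A schematic of the two-stage structure with an empirical bootstrap, assuming a logistic propensity model and a linear outcome stage; the paper's closed-form variance estimator and R implementation are not reproduced. Resampling subjects and refitting BOTH stages propagates the PS-estimation uncertainty that the conventional model-based variance ignores.

```python
# Bootstrap variance for "covariate adjustment by PS".
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_adjusted_effect(y, t, X):
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    D = np.column_stack([np.ones_like(y), t, ps])    # outcome ~ 1 + t + PS
    return np.linalg.lstsq(D, y, rcond=None)[0][1]   # treatment coefficient

def bootstrap_se(y, t, X, B=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    est = [ps_adjusted_effect(y[i], t[i], X[i])      # refit both stages
           for i in (rng.integers(0, n, n) for _ in range(B))]
    return np.std(est, ddof=1)

rng = np.random.default_rng(4)
n = 600
X = rng.normal(size=(n, 3))
t = (rng.random(n) < 1 / (1 + np.exp(-X @ [1.0, -1.0, 0.5]))).astype(int)
y = 1.0 * t + X @ [0.5, 0.5, 0.5] + rng.normal(size=n)
print(ps_adjusted_effect(y, t, X), bootstrap_se(y, t, X))
```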

  12. Inference on treatment-covariate interaction based on a nonparametric measure of treatment effects and censored survival data.

    PubMed

    Jiang, Shan; Chen, Bingshu; Tu, Dongsheng

    2016-07-20

    The investigation of treatment-covariate interactions is of considerable interest in the design and analysis of clinical trials. With potentially censored data observed, non-parametric and semi-parametric estimates, and associated confidence intervals, are proposed in this paper to quantify the interactions between the treatment and a binary covariate. In addition, the comparison of interactions between the treatment and two covariates is also considered. The proposed approaches are evaluated and compared by Monte Carlo simulations and applied to a real data set from a cancer clinical trial. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26887976

  13. Role of Experiment Covariance in Cross Section Adjustments

    SciTech Connect

    Giuseppe Palmiotti; M. Salvatores

    2014-06-01

    This paper is dedicated to the memory of R. D. McKnight, who made a seminal contribution in establishing the methodology and rigorous approach used in evaluating the covariance of reactor physics integral experiments. His original assessment of the ZPPR experiment uncertainties and correlations has made nuclear data adjustments based on these experiments much more robust and reliable. In the present paper we show, with some numerical examples, the actual impact on an adjustment of accounting for or neglecting such correlations.

  14. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  15. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  16. Development and Validation of a Brief Version of the Dyadic Adjustment Scale With a Nonparametric Item Analysis Model

    ERIC Educational Resources Information Center

    Sabourin, Stephane; Valois, Pierre; Lussier, Yvan

    2005-01-01

    The main purpose of the current research was to develop an abbreviated form of the Dyadic Adjustment Scale (DAS) with nonparametric item response theory. The authors conducted 5 studies, with a total participation of 8,256 married or cohabiting individuals. Results showed that the item characteristic curves behaved in a monotonically increasing…

  17. Covariate-adjusted confidence interval for the intraclass correlation coefficient.

    PubMed

    Shoukri, Mohamed M; Donner, Allan; El-Dali, Abdelmoneim

    2013-09-01

    A crucial step in designing a new study is to estimate the required sample size. For a design involving cluster sampling, the appropriate sample size depends on the so-called design effect, which is a function of the average cluster size and the intracluster correlation coefficient (ICC). It is well known that under the framework of hierarchical and generalized linear models, a reduction in residual error may be achieved by including risk factors as covariates. In this paper we show that the covariate design, indicating whether the covariates are measured at the cluster level or at the within-cluster subject level, affects the estimation of the ICC, and hence the design effect. Therefore, the distinction between these two types of covariates should be made at the design stage. We use the nested-bootstrap method to assess the accuracy of the estimated ICC for continuous and binary response variables under different covariate structures. The code for two SAS macros is made available by the authors to help interested readers construct confidence intervals for the ICC. Moreover, using Monte Carlo simulations, we evaluate the relative efficiency of the estimators and the accuracy of the coverage probabilities of a 95% confidence interval on the population ICC. The methodology is illustrated using a published data set of blood pressure measurements taken on family members. PMID:23871746

  18. Validity of a Residualized Dependent Variable after Pretest Covariance Adjustments: Still the Same Variable?

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.

    2015-01-01

    The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…

  19. Covariate Adjustment Strategy Increases Power in the Randomized Controlled Trial With Discrete-Time Survival Endpoints

    ERIC Educational Resources Information Center

    Safarkhani, Maryam; Moerbeek, Mirjam

    2013-01-01

    In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power is studied for discrete-time…

  20. Adjusting for matching and covariates in linear discriminant analysis

    PubMed Central

    Asafu-Adjei, Josephine K.; Sampson, Allan R.; Sweet, Robert A.; Lewis, David A.

    2013-01-01

    In studies that compare several diagnostic or treatment groups, subjects may not only be measured on a certain set of feature variables, but also be matched on a number of demographic characteristics and measured on additional covariates. Linear discriminant analysis (LDA) is sometimes used to identify which feature variables best discriminate among groups, while accounting for the dependencies among the feature variables. We present a new approach to LDA for multivariate normal data that accounts for the subject matching used in a particular study design, as well as covariates not used in the matching. Applications are given for post-mortem tissue data with the aim of comparing neurobiological characteristics of subjects with schizophrenia with those of normal controls, and for a post-mortem tissue primate study comparing brain biomarker measurements across three treatment groups. We also investigate the performance of our approach using a simulation study. PMID:23640791

  1. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study

    PubMed Central

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    Background. The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard model coefficients in a simulation study. Results. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches under all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients. PMID:26413142

  2. Slope Estimation of Covariates that Influence Renal Outcome following Renal Transplant Adjusting for Informative Right Censoring

    PubMed Central

    Jaffa, Miran A.; Jaffa, Ayad A.; Lipsitz, Stuart R.

    2015-01-01

    A new statistical model is proposed to estimate population and individual slopes that are adjusted for covariates and informative right censoring. Individual slopes are assumed to have a mean that depends on the population slope for the covariates. The number of observations for each individual is modeled as a truncated discrete distribution with mean dependent on the individual subjects' slopes. Our simulation study results indicated that the associated bias and mean squared errors for the proposed model were comparable to those associated with the model that only adjusts for informative right censoring. The proposed model was illustrated using a renal transplant dataset to estimate population slopes for covariates that could impact the outcome of renal function following renal transplantation. PMID:25729124

  3. On the Importance of Reliable Covariate Measurement in Selection Bias Adjustments Using Propensity Scores

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Cook, Thomas D.; Shadish, William R.

    2011-01-01

    The effect of unreliability of measurement on propensity score (PS) adjusted treatment effects has not been previously studied. The authors report on a study simulating different degrees of unreliability in the multiple covariates that were used to estimate the PS. The simulation uses the same data as two prior studies. Shadish, Clark, and Steiner…

  4. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    NASA Astrophysics Data System (ADS)

    Kermarrec, Gaël; Schön, Steffen

    2016-05-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account via a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the row elements of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences, compared with the solutions computed with the commonly used diagonal elevation-dependent model, was reached for GPS relative positioning with double differences, single point positioning, as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the millimetre level for all simulated GPS cases and at the sub-millimetre level for relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation-dependent model…
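
    The row-sum equivalence for the mean estimator is easy to verify numerically. The sketch below uses an illustrative AR(1) covariance as a stand-in for a correlated GPS noise model; for the mean estimator, the diagonal matrix built from the row sums of W = Σ⁻¹ reproduces the GLS estimate exactly.

```python
# Numerical check: GLS mean with full weight matrix vs. DWLS mean with
# the equivalent diagonal (row sums of the weight matrix).
import numpy as np

rng = np.random.default_rng(5)
n = 50
rho = 0.8
# AR(1)-flavoured covariance to mimic temporally correlated observations
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
W = np.linalg.inv(Sigma)                 # fully populated weight matrix
d = W.sum(axis=1)                        # row sums -> equivalent diagonal

y = 3.0 + np.linalg.cholesky(Sigma) @ rng.normal(size=n)
ones = np.ones(n)

gls = (ones @ W @ y) / (ones @ W @ ones)     # fully populated matrix
dwls = (d @ y) / d.sum()                     # equivalent diagonal matrix
print(gls, dwls)                             # identical up to rounding
```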

  5. A consistent local linear estimator of the covariate adjusted correlation coefficient

    PubMed Central

    Nguyen, Danh V.; Şentürk, Damla

    2009-01-01

    Consider the correlation between two random variables (X, Y), both not directly observed. One only observes X̃ = φ1(U)X + φ2(U) and Ỹ = ψ1(U)Y + ψ2(U), where all four functions {φl(·),ψl(·), l = 1, 2} are unknown/unspecified smooth functions of an observable covariate U. We consider consistent estimation of the correlation between the unobserved variables X and Y, adjusted for the above general dual additive and multiplicative effects of U, based on the observed data (X̃, Ỹ, U). PMID:21720454

  6. Covariate-adjusted response-adaptive designs for longitudinal treatment responses: PEMF trial revisited.

    PubMed

    Biswas, Atanu; Park, Eunsik; Bhattacharya, Rahul

    2012-08-01

    Response-adaptive designs have become popular for allocation of the entering patients among two or more competing treatments in a phase III clinical trial. Although there are many designs for binary treatment responses, the number of designs involving covariates is very small. Sometimes the patients give repeated responses. The only available response-adaptive allocation design for repeated binary responses is the urn design by Biswas and Dewanji [Biswas A and Dewanji A. Randomized longitudinal play-the-winner design for repeated binary data. ANZJS 2004; 46: 675-684; Biswas A and Dewanji A. Inference for a RPW-type clinical trial with repeated monitoring for the treatment of rheumatoid arthritis. Biometr J 2004; 46: 769-779.], although it does not take care of the covariates of the patients in the allocation design. In this article, a covariate-adjusted response-adaptive randomisation procedure is developed using the log-odds ratio within the Bayesian framework for longitudinal binary responses. The small sample performance of the proposed allocation procedure is assessed through a simulation study. The proposed procedure is illustrated using a real data set. PMID:20974667

  7. Comparison of covariate adjustment methods using space-time scan statistics for food animal syndromic surveillance

    PubMed Central

    2013-01-01

    Background Abattoir condemnation data show promise as a rich source of data for syndromic surveillance of both animal and zoonotic diseases. However, inherent characteristics of abattoir condemnation data can bias results from space-time cluster detection methods for disease surveillance, and may need to be accounted for using various adjustment methods. The objective of this study was to compare the space-time scan statistics with different abilities to control for covariates and to assess their suitability for food animal syndromic surveillance. Four space-time scan statistic models were used including: animal class adjusted Poisson, space-time permutation, multi-level model adjusted Poisson, and a weighted normal scan statistic using model residuals. The scan statistics were applied to monthly bovine pneumonic lung and “parasitic liver” condemnation data from Ontario provincial abattoirs from 2001–2007. Results The number and space-time characteristics of identified clusters often varied between space-time scan tests for both “parasitic liver” and pneumonic lung condemnation data. While there were some similarities between isolated clusters in space, time and/or space-time, overall the results from space-time scan statistics differed substantially depending on the covariate adjustment approach used. Conclusions Variability in results among methods suggests that caution should be used in selecting space-time scan methods for abattoir surveillance. Furthermore, validation of different approaches with simulated or real outbreaks is required before conclusive decisions can be made concerning the best approach for conducting surveillance with these data. PMID:24246040

  8. A nonparametric stochastic method for generating daily climate-adjusted streamflows

    NASA Astrophysics Data System (ADS)

    Stagge, J. H.; Moglen, G. E.

    2013-10-01

    A daily stochastic streamflow generation model is presented, which successfully replicates statistics of the historical streamflow record and can produce climate-adjusted daily time series. A monthly climate model relates general circulation model (GCM)-scale climate indicators to discrete climate-streamflow states, which in turn control parameters in a daily streamflow generation model. Daily flow is generated by a two-state (increasing/decreasing) Markov chain, with rising limb increments randomly sampled from a Weibull distribution and the falling limb modeled as exponential recession. When applied to the Potomac River, a 38,000 km2 basin in the Mid-Atlantic United States, the model reproduces the daily, monthly, and annual distribution and dynamics of the historical streamflow record, including extreme low flows. This method can be used as part of water resources planning, vulnerability, and adaptation studies and offers the advantage of a parsimonious model, requiring only a sufficiently long historical streamflow record and large-scale climate data. Simulation of Potomac streamflows subject to the Special Report on Emissions Scenarios (SRES) A1b, A2, and B1 scenarios predicts a slight increase in mean annual flows over the next century, with the majority of this increase occurring during the winter and early spring. Conversely, mean summer flows are projected to decrease due to climate change, caused by a shift to shorter, more sporadic rain events. The date of the minimum annual flow is projected to shift 2-5 days earlier by the 2070-2099 period.
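
    A compressed sketch of the daily generator under invented parameter values; the fitted Potomac values and the monthly climate-state layer that modulates the parameters are omitted.

```python
# Two-state Markov-chain daily flow generator: rising limbs add
# Weibull-distributed increments, falling limbs decay exponentially.
import numpy as np

def generate_flow(n_days, q0=100.0, p_rise_given_rise=0.5,
                  p_rise_given_fall=0.25, weibull_shape=0.8,
                  weibull_scale=40.0, recession_rate=0.07, seed=0):
    rng = np.random.default_rng(seed)
    q, rising = [q0], False
    for _ in range(n_days - 1):
        p = p_rise_given_rise if rising else p_rise_given_fall
        rising = rng.random() < p                 # Markov state transition
        if rising:   # rising limb: Weibull-distributed increment
            q.append(q[-1] + weibull_scale * rng.weibull(weibull_shape))
        else:        # falling limb: exponential recession
            q.append(q[-1] * np.exp(-recession_rate))
    return np.array(q)

flow = generate_flow(365 * 10)
print(flow.mean(), flow.min(), flow.max())
```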

  9. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  10. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  11. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  12. Use and Impact of Covariance Data in the Japanese Latest Adjusted Library ADJ2010 Based on JENDL-4.0

    SciTech Connect

    Yokoyama, K.; Ishikawa, M.

    2015-01-15

    The current status of covariance applications to fast reactor analysis and design in Japan is summarized. In Japan, the covariance data are mainly used for three purposes: (1) to quantify the uncertainty of nuclear core parameters, (2) to identify important nuclides, reactions and energy ranges which are dominant to the uncertainty of core parameters, and (3) to improve the accuracy of core design values by adopting integral data such as critical experiments and power reactor operation data. For the last purpose, cross section adjustment based on the Bayesian theorem is used. After the release of JENDL-4.0, a development project for the new adjusted group-constant set ADJ2010 was started in 2010 and completed in 2013. In the present paper, the final results of ADJ2010 are briefly summarized. In addition, the adjustment results of ADJ2010 are discussed from the viewpoint of the use and impact of nuclear data covariances, focusing on 239Pu capture cross section alterations. For this purpose, three kinds of indices, called “degree of mobility,” “adjustment motive force,” and “adjustment potential,” are proposed.

  13. A-Priori and A-Posteriori Covariance Data in Nuclear Cross Section Adjustments: Issues and Challenges

    SciTech Connect

    Palmiotti, Giuseppe; Salvatores, Massimo; Aliberti, G.

    2015-01-01

    In order to provide useful feedback to evaluators, a set of criteria is established for assessing the robustness and reliability of cross section adjustments that make use of integral experiment information. Criteria are also provided for accepting the “a posteriori” cross sections, both as new “nominal” values and as “trends”. Some indications on the use of the “a posteriori” covariance matrix are also given, even though more investigation is needed to settle this complex subject.

  14. Effects of Participation in a Post-Secondary Honors Program with Covariate Adjustment Using Propensity Score

    ERIC Educational Resources Information Center

    Furtwengler, Scott R.

    2015-01-01

    The present study sought to determine the extent to which participation in a post-secondary honors program affected academic achievement. Archival data were collected on three cohorts of high-achieving students at a large public university. Propensity scores were calculated on factors predicting participation in honors and used as the covariate.…

  15. Adjusting head circumference for covariates in autism: clinical correlates of a highly heritable continuous trait

    PubMed Central

    Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J.; Murtha, Michael T.; Hus, Vanessa; Lowe, Jennifer K.; Willsey, A. Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W.; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E.; Ledbetter, David H.; Lord, Catherine; Mane, Shrikant M.; Martin, Christa Lese; Martin, Donna M.; Morrow, Eric M.; Walsh, Christopher A.; Sutcliffe, James S.; State, Matthew W.; Devlin, Bernie; Cook, Edwin H.; Kim, Soo-Jeong

    2013-01-01

    BACKGROUND Brain development follows a different trajectory in children with Autism Spectrum Disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. METHODS We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. RESULTS Gender, age, height, weight, genetic ancestry and ASD status were significant predictors of HC (estimate of the ASD effect = 0.2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. CONCLUSIONS Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait and population norms for HC would be far more accurate if covariates including genetic ancestry, height and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. PMID:23746936

  16. Covariate adjustment of cumulative incidence functions for competing risks data using inverse probability of treatment weighting.

    PubMed

    Neumann, Anke; Billionnet, Cécile

    2016-06-01

    In observational studies without random assignment of the treatment, the unadjusted comparison between treatment groups may be misleading due to confounding. One method to adjust for measured confounders is inverse probability of treatment weighting. This method can also be used in the analysis of time to event data with competing risks. Competing risks arise if for some individuals the event of interest is precluded by a different type of event occurring before, or if only the earliest of several times to event, corresponding to different event types, is observed or is of interest. In the presence of competing risks, time to event data are often characterized by cumulative incidence functions, one for each event type of interest. We describe the use of inverse probability of treatment weighting to create adjusted cumulative incidence functions. This method is equivalent to direct standardization when the weight model is saturated. No assumptions about the form of the cumulative incidence functions are required. The method allows studying associations between treatment and the different types of event under study, while focusing on the earliest event only. We present a SAS macro implementing this method and we provide a worked example. PMID:27084321
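
    A simplified sketch of the weighting step, assuming a logistic propensity model and, for brevity, no censoring: each subject is weighted by the inverse probability of the treatment actually received, and the adjusted cumulative incidence for event type k at time t is the weighted proportion of subjects whose earliest event is of type k by t. With censoring, the weighted analogue of a proper competing-risks estimator (as in the paper) is required, and the SAS macro described there is not reproduced here.

```python
# IPTW-adjusted cumulative incidence functions (no censoring, for brevity).
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_cif(time, cause, treat, X, event_type, grid):
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))   # IPT weights
    out = {}
    for arm in (0, 1):
        m = treat == arm
        hit = cause[m] == event_type                 # earliest event is type k
        out[arm] = [(w[m] * (hit & (time[m] <= t))).sum() / w[m].sum()
                    for t in grid]
    return out

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 2))
treat = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
t1 = rng.exponential(1 / (0.5 + 0.3 * treat))        # cause-1 latent time
t2 = rng.exponential(1.0, n)                          # competing cause
time, cause = np.minimum(t1, t2), np.where(t1 <= t2, 1, 2)
print(iptw_cif(time, cause, treat, X, 1, [0.5, 1.0, 2.0]))
```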

  17. Sequential BART for imputation of missing covariates.

    PubMed

    Xu, Dandan; Daniels, Michael J; Winterstein, Almut G

    2016-07-01

    To conduct comparative effectiveness research using electronic health records (EHR), many covariates are typically needed to adjust for selection and confounding biases. Unfortunately, it is typical to have missingness in these covariates. Just using cases with complete covariates will result in considerable efficiency losses and likely bias. Here, we consider the covariates missing at random with missing data mechanism either depending on the response or not. Standard methods for multiple imputation can either fail to capture nonlinear relationships or suffer from the incompatibility and uncongeniality issues. We explore a flexible Bayesian nonparametric approach to impute the missing covariates, which involves factoring the joint distribution of the covariates with missingness into a set of sequential conditionals and applying Bayesian additive regression trees to model each of these univariate conditionals. Using data augmentation, the posterior for each conditional can be sampled simultaneously. We provide details on the computational algorithm and make comparisons to other methods, including parametric sequential imputation and two versions of multiple imputation by chained equations. We illustrate the proposed approach on EHR data from an affiliated tertiary care institution to examine factors related to hyperglycemia. PMID:26980459
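
    A schematic of the sequential-conditional factorization: order the incomplete covariates, model each one given the covariates handled before it, and impute the missing entries from the fitted conditional. A random forest stands in for BART purely so the sketch is self-contained; the actual method draws imputations from BART posteriors inside a data-augmentation loop.

```python
# Sequential conditional imputation sketch (random forest as BART stand-in).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sequential_impute(X, seed=0):
    X = X.copy()
    order = np.argsort(np.isnan(X).sum(axis=0))    # least missing first
    done = []                                      # columns already handled
    for j in order:
        miss = np.isnan(X[:, j])
        if miss.any() and done:
            Z = X[:, done]                         # fully imputed by now
            rf = RandomForestRegressor(n_estimators=200, random_state=seed)
            rf.fit(Z[~miss], X[~miss, j])
            X[miss, j] = rf.predict(Z[miss])
        elif miss.any():                           # first column: mean-fill
            X[miss, j] = np.nanmean(X[:, j])
        done.append(j)
    return X

rng = np.random.default_rng(7)
n = 500
x1 = rng.normal(size=n)
x2 = np.sin(x1) + 0.3 * rng.normal(size=n)         # nonlinear dependence
X = np.column_stack([x1, x2])
X[rng.random(n) < 0.3, 1] = np.nan                 # knock out 30% of x2
print(np.isnan(sequential_impute(X)).sum())        # 0 -> all imputed
```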

  18. Nonparametric identification experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung

    1988-01-01

    The following constitutes a summary of this paper: on-orbit identification methodology starts with nonparametric techniques for a priori system identification; development of the nonparametric identification and model determination experiment software has been completed; and the validation experiments to be performed in the JPL Control and Identification Technology Validation Laboratory have been designed.

  19. Nonparametric Methods in Reliability

    PubMed Central

    Hollander, Myles; Peña, Edsel A.

    2005-01-01

    Probabilistic and statistical models for the occurrence of a recurrent event over time are described. These models have applicability in reliability, engineering, biomedical and other areas where a series of events occurs for an experimental unit as time progresses. Nonparametric inference methods, in particular the estimation of a relevant distribution function, are described. PMID:16710444

  20. Covariant Transform

    NASA Astrophysics Data System (ADS)

    Kisil, Vladimir V.

    2011-03-01

    Dedicated to the memory of Cora Sadosky. The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable, or from vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H2, Banach spaces, covariant functional calculus and many others.

  1. Nonparametric conditional estimation

    SciTech Connect

    Owen, A.B.

    1987-01-01

    Many nonparametric regression techniques (such as kernels, nearest neighbors, and smoothing splines) estimate the conditional mean of Y given X = x by a weighted sum of observed Y values, where observations with X values near x tend to have larger weights. In this report the weights are taken to represent a finite signed measure on the space of Y values. This measure is studied as an estimate of the conditional distribution of Y given X = x. From estimates of the conditional distribution, estimates of conditional means, standard deviations, quantiles and other statistical functionals may be computed. Chapter 1 illustrates the computation of conditional quantiles and conditional survival probabilities on the Stanford Heart Transplant data. Chapter 2 contains a survey of nonparametric regression methods and introduces statistical metrics and von Mises' method for later use. Chapter 3 proves some consistency results. Chapter 4 provides conditions under which the suitably normalized errors in estimating the conditional distribution of Y have a Brownian limit. Using von Mises' method, asymptotic normality is obtained for nonparametric conditional estimates of compactly differentiable statistical functionals.
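
    The weights-as-measure viewpoint is easy to illustrate: a kernel weight vector at X = x defines an estimated conditional distribution of Y, from which conditional means, CDF values, and quantiles all follow. A minimal Nadaraya-Watson sketch on simulated data (the bandwidth and toy model are illustrative):

```python
# Kernel weights as an estimated conditional distribution of Y given X = x.
import numpy as np

def conditional_weights(X, x, bandwidth):
    k = np.exp(-0.5 * ((X - x) / bandwidth) ** 2)   # Gaussian kernel
    return k / k.sum()                              # weights sum to one

def conditional_quantile(Y, w, q):
    order = np.argsort(Y)
    cum = np.cumsum(w[order])                       # estimated cond. CDF
    return Y[order][np.searchsorted(cum, q)]

rng = np.random.default_rng(8)
n = 2000
X = rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X) + 0.2 * rng.normal(size=n)
w = conditional_weights(X, 0.25, 0.05)
print("cond. mean:", w @ Y)                         # ~ sin(pi/2) = 1
print("cond. median:", conditional_quantile(Y, w, 0.5))
print("cond. 90th pct:", conditional_quantile(Y, w, 0.9))
```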

  2. Identifying Genetic Variants for Addiction via Propensity Score Adjusted Generalized Kendall's Tau.

    PubMed

    Jiang, Yuan; Li, Ni; Zhang, Heping

    2014-01-01

    Identifying replicable genetic variants for addiction has been extremely challenging. Besides the common difficulties with genome-wide association studies (GWAS), environmental factors are known to be critical to addiction, and comorbidity is widely observed. Despite the importance of environmental factors and comorbidity for addiction study, few GWAS analyses adequately considered them due to the limitations of the existing statistical methods. Although parametric methods have been developed to adjust for covariates in association analysis, difficulties arise when the traits are multivariate because there is no ready-to-use model for them. Recent nonparametric development includes U-statistics to measure the phenotype-genotype association weighted by a similarity score of covariates. However, it is not clear how to optimize the similarity score. Therefore, we propose a semiparametric method to measure the association adjusted by covariates. In our approach, the nonparametric U-statistic is adjusted by parametric estimates of propensity scores using the idea of inverse probability weighting. The new measurement is shown to be asymptotically unbiased under our null hypothesis while the previous non-weighted and weighted ones are not. Simulation results show that our test improves power as opposed to the non-weighted and two other weighted U-statistic methods, and it is particularly powerful for detecting gene-environment interactions. Finally, we apply our proposed test to the Study of Addiction: Genetics and Environment (SAGE) to identify genetic variants for addiction. Novel genetic variants are found from our analysis, which warrant further investigation in the future. PMID:25382885

  3. Identifying Genetic Variants for Addiction via Propensity Score Adjusted Generalized Kendall’s Tau

    PubMed Central

    Jiang, Yuan; Li, Ni; Zhang, Heping

    2014-01-01

    Identifying replicable genetic variants for addiction has been extremely challenging. Besides the common difficulties with genome-wide association studies (GWAS), environmental factors are known to be critical to addiction, and comorbidity is widely observed. Despite the importance of environmental factors and comorbidity for addiction study, few GWAS analyses adequately considered them due to the limitations of the existing statistical methods. Although parametric methods have been developed to adjust for covariates in association analysis, difficulties arise when the traits are multivariate because there is no ready-to-use model for them. Recent nonparametric development includes U-statistics to measure the phenotype-genotype association weighted by a similarity score of covariates. However, it is not clear how to optimize the similarity score. Therefore, we propose a semiparametric method to measure the association adjusted by covariates. In our approach, the nonparametric U-statistic is adjusted by parametric estimates of propensity scores using the idea of inverse probability weighting. The new measurement is shown to be asymptotically unbiased under our null hypothesis while the previous non-weighted and weighted ones are not. Simulation results show that our test improves power as opposed to the non-weighted and two other weighted U-statistic methods, and it is particularly powerful for detecting gene-environment interactions. Finally, we apply our proposed test to the Study of Addiction: Genetics and Environment (SAGE) to identify genetic variants for addiction. Novel genetic variants are found from our analysis, which warrant further investigation in the future. PMID:25382885
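
    A schematic rendering of the inverse-probability-weighting idea from the two records above: weight each subject by the inverse of the estimated probability of his or her observed genotype given covariates, then average a pairwise Kendall kernel using products of these weights. The kernel, normalization, and variance details all differ from the paper's U-statistic; this only conveys the weighting scheme, on invented data.

```python
# Propensity-weighted Kendall-type association between genotype and trait.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_kendall(g, y, Z):
    model = LogisticRegression(max_iter=1000).fit(Z, g)  # P(G = g | Z)
    cols = np.searchsorted(model.classes_, g)            # column of own genotype
    p = model.predict_proba(Z)[np.arange(len(g)), cols]
    w = 1.0 / p                                          # inverse-probability weights
    sg = np.sign(np.subtract.outer(g, g))
    sy = np.sign(np.subtract.outer(y, y))
    W = np.outer(w, w)
    iu = np.triu_indices(len(g), k=1)                    # pairs i < j
    return (W[iu] * sg[iu] * sy[iu]).sum() / W[iu].sum()

rng = np.random.default_rng(9)
n = 500
Z = rng.normal(size=(n, 2))
# genotype depends on covariates (a stand-in for environmental confounding)
g = (rng.random(n) < 1 / (1 + np.exp(-Z @ [1.0, -1.0]))).astype(int) + \
    (rng.random(n) < 0.3).astype(int)                    # values 0/1/2
y = 0.4 * g + Z @ [0.8, -0.8] + rng.normal(size=n)
print(ipw_kendall(g, y, Z))
```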

  4. Effect modification by time-varying covariates.

    PubMed

    Robins, James M; Hernán, Miguel A; Rotnitzky, Andrea

    2007-11-01

    Marginal structural models (MSMs) allow estimation of effect modification by baseline covariates, but they are less useful for estimating effect modification by evolving time-varying covariates. Rather, structural nested models (SNMs) were specifically designed to estimate effect modification by time-varying covariates. In their paper, Petersen et al. (Am J Epidemiol 2007;166:985-993) describe history-adjusted MSMs as a generalized form of MSM and argue that history-adjusted MSMs allow a researcher to easily estimate effect modification by time-varying covariates. However, history-adjusted MSMs can result in logically incompatible parameter estimates and hence in contradictory substantive conclusions. Here the authors propose a more restrictive definition of history-adjusted MSMs than the one provided by Petersen et al. and compare the advantages and disadvantages of using history-adjusted MSMs, as opposed to SNMs, to examine effect modification by time-dependent covariates. PMID:17875581

  5. Nonparametric Bayes analysis of social science data

    NASA Astrophysics Data System (ADS)

    Kunihama, Tsuyoshi

    Social science data often contain complex characteristics that standard statistical methods fail to capture. Social surveys assign many questions to respondents, which often consist of mixed-scale variables. Each of the variables can follow a complex distribution outside parametric families, and associations among variables may have more complicated structures than standard linear dependence. Therefore, it is not straightforward to develop a statistical model which can approximate these structures well. In addition, many social surveys have collected data over time and therefore we need to incorporate dynamic dependence into the models. Also, it is standard to observe a massive number of missing values in social science data. To address these challenging problems, this thesis develops flexible nonparametric Bayesian methods for the analysis of social science data. Chapter 1 briefly explains the backgrounds and motivations of the projects in the following chapters. Chapter 2 develops a nonparametric Bayesian modeling of temporal dependence in large sparse contingency tables, relying on a probabilistic factorization of the joint pmf. Chapter 3 proposes nonparametric Bayes inference on conditional independence, with conditional mutual information used as a measure of the strength of conditional dependence. Chapter 4 proposes a novel Bayesian density estimation method for social surveys with complex designs where there is a gap between sample and population. We correct for the bias by adjusting mixture weights in Bayesian mixture models. Chapter 5 develops a nonparametric model for mixed-scale longitudinal surveys, in which various types of variables can be induced through latent continuous variables and dynamic latent factors lead to flexibly time-varying associations among variables.

  6. Bayesian Nonparametric Models for Multiway Data Analysis.

    PubMed

    Xu, Zenglin; Yan, Feng; Qi, Yuan

    2015-02-01

    Tensor decomposition is a powerful computational tool for multiway data analysis. Many popular tensor decomposition approaches, such as the Tucker decomposition and CANDECOMP/PARAFAC (CP), amount to multi-linear factorization. They are insufficient to model (i) complex interactions between data entities, (ii) various data types (e.g., missing data and binary data), and (iii) noisy observations and outliers. To address these issues, we propose tensor-variate latent nonparametric Bayesian models for multiway data analysis. We name these models InfTucker. These new models essentially conduct Tucker decomposition in an infinite feature space. Unlike classical tensor decomposition models, our new approaches handle both continuous and binary data in a probabilistic framework. Unlike previous Bayesian models on matrices and tensors, our models are based on latent Gaussian or t processes with nonlinear covariance functions. Moreover, on network data, our models reduce to nonparametric stochastic blockmodels and can be used to discover latent groups and predict missing interactions. To learn the models efficiently from data, we develop a variational inference technique and explore properties of the Kronecker product for computational efficiency. Compared with a classical variational implementation, this technique reduces both time and space complexities by several orders of magnitude. On real multiway and network data, our new models achieved significantly higher prediction accuracy than state-of-the-art tensor decomposition methods and blockmodels. PMID:26353255

  7. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  8. Addiction Severity Index Recent and Lifetime Summary Indexes Based on Nonparametric Item Response Theory Methods

    ERIC Educational Resources Information Center

    Alterman, Arthur I.; Cacciola, John S.; Habing, Brian; Lynch, Kevin G.

    2007-01-01

    Baseline Addiction Severity Index (5th ed.; ASI-5) data of 2,142 substance abuse patients were analyzed with two nonparametric item response theory (NIRT) methods: Mokken scaling and conditional covariance techniques. Nine reliable and dimensionally homogeneous Recent Problem indexes emerged in the ASI-5's seven areas, including two each in the…

  9. Conditional Covariance-based Representation of Multidimensional Test Structure.

    ERIC Educational Resources Information Center

    Bolt, Daniel M.

    2001-01-01

    Presents a new nonparametric method for constructing a spatial representation of multidimensional test structure, the Conditional Covariance-based SCALing (CCSCAL) method. Describes an index to measure the accuracy of the representation. Uses simulation and real-life data analyses to show that the method provides a suitable approximation to…

  10. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Varying Coefficient Models

    PubMed Central

    Fan, Jianqing; Ma, Yunbei; Dai, Wei

    2014-01-01

    The varying-coefficient model is an important class of nonparametric statistical model that allows us to examine how the effects of covariates vary with exposure variables. When the number of covariates is large, the issue of variable selection arises. In this paper, we propose and investigate marginal nonparametric screening methods to screen variables in sparse ultra-high dimensional varying-coefficient models. The proposed nonparametric independence screening (NIS) selects variables by ranking a measure of the nonparametric marginal contributions of each covariate given the exposure variable. The sure independent screening property is established under some mild technical conditions when the dimensionality is of nonpolynomial order, and the dimensionality reduction of NIS is quantified. To enhance the practical utility and finite sample performance, two data-driven iterative NIS methods are proposed for selecting thresholding parameters and variables: conditional permutation and greedy methods, resulting in Conditional-INIS and Greedy-INIS. The effectiveness and flexibility of the proposed methods are further illustrated by simulation studies and real data applications. PMID:25309009
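
    A minimal sketch of the ranking idea behind marginal screening (a crude stand-in, not the authors' implementation: it uses a marginal polynomial smoother and ignores the exposure-variable conditioning of the actual NIS method). Each covariate is scored by how much signal its marginal nonparametric fit carries, and the top-ranked covariates survive screening.

        import numpy as np

        def marginal_screen(X, y, degree=3, top_k=10):
            # Score covariate j by the variance explained by a marginal
            # polynomial fit of y on X[:, j] (a stand-in for B-spline smoothers).
            scores = []
            for j in range(X.shape[1]):
                coef = np.polyfit(X[:, j], y, degree)
                fit = np.polyval(coef, X[:, j])
                scores.append(np.mean((fit - y.mean()) ** 2))
            order = np.argsort(scores)[::-1]
            return order[:top_k]

        # Toy example: only covariates 0 and 3 carry signal.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 500))
        y = np.sin(X[:, 0]) + X[:, 3] ** 2 + rng.normal(scale=0.5, size=200)
        print(marginal_screen(X, y))   # 0 and 3 should rank near the top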

  11. Bayesian non-parametrics and the probabilistic approach to modelling

    PubMed Central

    Ghahramani, Zoubin

    2013-01-01

    Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609

  12. FINE: Fisher information nonparametric embedding.

    PubMed

    Carter, Kevin M; Raich, Raviv; Finn, William G; Hero, Alfred O

    2009-11-01

    We consider the problems of clustering, classification, and visualization of high-dimensional data when no straightforward Euclidean representation exists. In this paper, we propose using the properties of information geometry and statistical manifolds in order to define similarities between data sets using the Fisher information distance. We will show that this metric can be approximated using entirely nonparametric methods, as the parameterization and geometry of the manifold are generally unknown. Furthermore, by using multidimensional scaling methods, we are able to reconstruct the statistical manifold in a low-dimensional Euclidean space, enabling effective learning on the data. As a whole, we refer to our framework as Fisher Information Nonparametric Embedding (FINE) and illustrate its uses on practical problems, including a biomedical application and document classification. PMID:19762935
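
    A hedged sketch of the FINE pipeline under simplifying assumptions (histogram density estimates per data set and a symmetrized Kullback-Leibler divergence as the Fisher-distance approximation; the paper's exact estimators may differ), followed by classical multidimensional scaling to embed the data sets in the plane.

        import numpy as np

        def symmetric_kl(p, q, eps=1e-12):
            # Symmetrized KL divergence between two normalized histograms;
            # locally it approximates the (squared) Fisher information distance.
            p, q = (p + eps) / (p + eps).sum(), (q + eps) / (q + eps).sum()
            return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

        def classical_mds(D2, dim=2):
            # Embed a matrix of squared distances via double centering.
            n = D2.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            w, V = np.linalg.eigh(-0.5 * J @ D2 @ J)
            idx = np.argsort(w)[::-1][:dim]
            return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

        # Each data set -> histogram on a shared grid -> pairwise distances -> embedding.
        rng = np.random.default_rng(1)
        sets = [rng.normal(loc=m, size=500) for m in (0.0, 1.0, 3.0)]
        hists = [np.histogram(s, bins=30, range=(-5, 8), density=True)[0] for s in sets]
        D2 = np.array([[symmetric_kl(p, q) for q in hists] for p in hists])
        coords = classical_mds(D2)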

  13. Two general methods for population pharmacokinetic modeling: non-parametric adaptive grid and non-parametric Bayesian

    PubMed Central

    Neely, Michael; Bartroff, Jay; van Guilder, Michael; Yamada, Walter; Bayard, David; Jelliffe, Roger; Leary, Robert; Chubatiuk, Alyona; Schumitzky, Alan

    2013-01-01

    Population pharmacokinetic (PK) modeling methods can be statistically classified as either parametric or nonparametric (NP). Each classification can be divided into maximum likelihood (ML) or Bayesian (B) approaches. In this paper we discuss the nonparametric case using both maximum likelihood and Bayesian approaches. We present two nonparametric methods for estimating the unknown joint population distribution of model parameter values in a pharmacokinetic/pharmacodynamic (PK/PD) dataset. The first method is the NP Adaptive Grid (NPAG). The second is the NP Bayesian (NPB) algorithm with a stick-breaking process to construct a Dirichlet prior. Our objective is to compare the performance of these two methods using a simulated PK/PD dataset. Our results showed excellent performance of NPAG and NPB in a realistically simulated PK study. This simulation allowed us to have benchmarks in the form of the true population parameters to compare with the estimates produced by the two methods, while incorporating challenges like unbalanced sample times and sample numbers as well as the ability to include the covariate of patient weight. We conclude that both NPML and NPB can be used in realistic PK/PD population analysis problems. The advantages of one versus the other are discussed in the paper. NPAG and NPB are implemented in R and freely available for download within the Pmetrics package from www.lapk.org. PMID:23404393
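
    The stick-breaking construction mentioned for the NPB algorithm is easy to state in code. The sketch below (illustrative parameter choices and base measure, not the Pmetrics implementation) draws a truncated Dirichlet process prior over support points of the population distribution.

        import numpy as np

        def stick_breaking(alpha, n_atoms, base_sampler, rng):
            # Truncated stick-breaking draw from a Dirichlet process:
            # w_k = beta_k * prod_{j<k} (1 - beta_j), with beta_k ~ Beta(1, alpha).
            betas = rng.beta(1.0, alpha, size=n_atoms)
            remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
            weights = betas * remaining
            atoms = base_sampler(n_atoms)   # support points from the base measure
            return weights, atoms

        rng = np.random.default_rng(0)
        # Hypothetical base measure over a PK clearance parameter.
        w, a = stick_breaking(alpha=2.0, n_atoms=100,
                              base_sampler=lambda k: rng.lognormal(0.0, 1.0, k),
                              rng=rng)
        print(w.sum())   # close to 1 for a long enough truncation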

  14. Statistical sirens: the allure of nonparametrics

    USGS Publications Warehouse

    Johnson, D.H.

    1995-01-01

    Although nonparametric statistical methods have a role to play in the analysis of data, often their virtues are overstated and their deficiencies overlooked. A recent Special Feature in Ecology advocated nonparametric methods because of an erroneously stated advantage that they require no assumptions regarding the distribution underlying the observations. The present paper points out some often-ignored features of nonparametric tests comparing two means, and advocates parameter estimation as a preferred alternative to hypothesis testing in many situations.

  15. Multiatlas segmentation as nonparametric regression.

    PubMed

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

  16. A Survey of Non-Exchangeable Priors for Bayesian Nonparametric Models.

    PubMed

    Foti, Nicholas J; Williamson, Sinead A

    2015-02-01

    Dependent nonparametric processes extend distributions over measures, such as the Dirichlet process and the beta process, to give distributions over collections of measures, typically indexed by values in some covariate space. Such models are appropriate priors when exchangeability assumptions do not hold, and instead we want our model to vary fluidly with some set of covariates. Since the concept of dependent nonparametric processes was formalized by MacEachern, there have been a number of models proposed and used in the statistics and machine learning literatures. Many of these models exhibit underlying similarities, an understanding of which, we hope, will help in selecting an appropriate prior, developing new models, and leveraging inference techniques. PMID:26353247

  17. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
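
    The L1-median itself has a simple fixed-point algorithm, Weiszfeld's method. A minimal sketch (standard construction, not the authors' estimator), with a guard for observations sitting exactly at the current iterate:

        import numpy as np

        def l1_median(X, n_iter=100, tol=1e-8):
            # Weiszfeld's algorithm for the L1-median (geometric median),
            # a robust location estimator for multivariate samples.
            y = X.mean(axis=0)
            for _ in range(n_iter):
                d = np.linalg.norm(X - y, axis=1)
                d = np.where(d < tol, tol, d)   # avoid division by zero
                y_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
                if np.linalg.norm(y_new - y) < tol:
                    break
                y = y_new
            return y

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(size=(95, 2)), rng.normal(10, 1, size=(5, 2))])
        print(X.mean(axis=0))   # pulled toward the outliers
        print(l1_median(X))     # stays near the bulk of the data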

  18. Computerized Adaptive Testing under Nonparametric IRT Models

    ERIC Educational Resources Information Center

    Xu, Xueli; Douglas, Jeff

    2006-01-01

    Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using…

  19. Non-Parametric Collision Probability for Low-Velocity Encounters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2007-01-01

    An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.

  1. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
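
    In software, the same computation is usually done by "slotting" lagged products of the Poisson-sampled signal into time-lag bins and normalizing by the interarrival-time histogram, mirroring the hardware description above. A sketch with illustrative parameters (4096 samples, 512 slots):

        import numpy as np

        def slotted_autocovariance(t, u, max_lag, n_slots):
            # Estimate the autocovariance of an irregularly (Poisson) sampled
            # signal: accumulate lagged products into time-lag slots and
            # normalize by the count histogram of lag occurrences.
            u = u - u.mean()
            acov = np.zeros(n_slots)
            counts = np.zeros(n_slots, dtype=int)
            width = max_lag / n_slots
            for i in range(len(t)):
                lags = t[i:] - t[i]
                keep = lags < max_lag
                slots = (lags[keep] / width).astype(int)
                np.add.at(acov, slots, u[i] * u[i:][keep])
                np.add.at(counts, slots, 1)
            return acov / np.maximum(counts, 1), counts

        # Poisson-sampled synthetic signal standing in for turbulence data.
        rng = np.random.default_rng(0)
        t = np.cumsum(rng.exponential(0.01, size=4096))
        u = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)
        acov, hist = slotted_autocovariance(t, u, max_lag=2.0, n_slots=512)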

  2. Nonparametric Methods in Molecular Biology

    PubMed Central

    Wittkowski, Knut M.; Song, Tingting

    2010-01-01

    In 2003, the completion of the Human Genome Project[1] together with advances in computational resources[2] were expected to launch an era where the genetic and genomic contributions to many common diseases would be found. In the years following, however, researchers became increasingly frustrated as most reported ‘findings’ could not be replicated in independent studies[3]. To improve the signal/noise ratio, it was suggested to increase the number of cases to be included to tens of thousands[4], a requirement that would dramatically restrict the scope of personalized medicine. Similarly, there was little success in elucidating the gene–gene interactions involved in complex diseases or even in developing criteria for assessing their phenotypes. As a partial solution to these enigmata, we here introduce a class of statistical methods as the ‘missing link’ between advances in genetics and informatics. As a first step, we provide a unifying view of a plethora of non-parametric tests developed mainly in the 1940s, all of which can be expressed as u-statistics. Then, we will extend this approach to reflect categorical and ordinal relationships between variables, resulting in a flexible and powerful approach to deal with the impact of (1) multi-allelic genetic loci, (2) poly-locus genetic regions, and (3) oligo-genetic and oligo-genomic collaborative interactions on complex phenotypes. PMID:20652502
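
    To make the u-statistic view concrete: the Mann-Whitney statistic is the u-statistic with kernel 1[x > y], summed over all pairs from the two samples. A small sketch (the textbook construction, not the authors' package):

        import numpy as np

        def mann_whitney_u(x, y):
            # U-statistic form: count pairs with x_i > y_j, ties counted as 1/2.
            x, y = np.asarray(x), np.asarray(y)
            return ((x[:, None] > y[None, :]).sum()
                    + 0.5 * (x[:, None] == y[None, :]).sum())

        print(mann_whitney_u([1.2, 3.4, 2.2, 5.0], [0.8, 2.0, 1.9]))   # U = 10.0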

  3. An Empirical Investigation of Four Tests for Interaction in the Context of Factorial Analysis of Covariance.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Vineyard, George

    The Type I error and power properties of the parametric F test and three nonparametric competitors were compared in a 3 x 4 factorial analysis of covariance layout. The focus of the study was on the test for interaction either in the presence or absence of main effects. A variety of conditional distributions, sample sizes, levels of variate…

  4. A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package

    ERIC Educational Resources Information Center

    Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.

    2013-01-01

    DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…

  5. Galilean covariant harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Horzela, Andrzej; Kapuscik, Edward

    1993-01-01

    A Galilean covariant approach to classical mechanics of a single particle is described. Within the proposed formalism, all non-covariant force laws are rejected; acting forces are instead defined covariantly by differential equations. Such an approach leads out of the standard classical mechanics and gives an example of non-Newtonian mechanics. It is shown that the exactly solvable linear system of differential equations defining forces contains the Galilean covariant description of the harmonic oscillator as its particular case. Additionally, it is demonstrated that in Galilean covariant classical mechanics the validity of the second Newton law of dynamics implies the Hooke law and vice versa. It is shown that the kinetic and total energies transform differently with respect to the Galilean transformations.

  6. Nonparametric survival analysis using Bayesian Additive Regression Trees (BART).

    PubMed

    Sparapani, Rodney A; Logan, Brent R; McCulloch, Robert E; Laud, Purushottam W

    2016-07-20

    Bayesian additive regression trees (BART) provide a framework for flexible nonparametric modeling of relationships of covariates to outcomes. Recently, BART models have been shown to provide excellent predictive performance for both continuous and binary outcomes, exceeding that of competing methods. Software is also readily available for such outcomes. In this article, we introduce modeling that extends the usefulness of BART in medical applications by addressing needs arising in survival analysis. Simulation studies of one-sample and two-sample scenarios, in comparison with long-standing traditional methods, establish face validity of the new approach. We then demonstrate the model's ability to accommodate data from complex regression models with a simulation study of a nonproportional hazards scenario with crossing survival functions and survival function estimation in a scenario where hazards are multiplicatively modified by a highly nonlinear function of the covariates. Using data from a recently published study of patients undergoing hematopoietic stem cell transplantation, we illustrate the use and some advantages of the proposed method in medical investigations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26854022

  7. Covariant mutually unbiased bases

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro

    2016-06-01

    The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which are the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2^r covariance can still be achieved by restricting to proper subgroups of the symplectic group, that constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.

  8. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis that assumes linearity of the covariate on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect is approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used to select the smoothing parameter in the penalized spline model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and the significance level was set at 0.05. Results: Based on AIC selection of the smoothing parameter, the penalized spline method had consistently lower mean square error than the other methods. The same result was obtained with real data. Conclusion: Penalized spline smoothing, with AIC to select the smoothing parameter, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809

  9. AFCI-2.0 Library of Neutron Cross Section Covariances

    SciTech Connect

    Herman, M.; Herman,M.; Oblozinsky,P.; Mattoon,C.; Pigni,M.; Hoblit,S.; Mughabghab,S.F.; Sonzogni,A.; Talou,P.; Chadwick,M.B.; Hale.G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G.

    2011-06-26

    A neutron cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to the users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for 238,240Pu and 241Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.

  10. A Comparison of Bias Correction Adjustments for the DETECT Procedure

    ERIC Educational Resources Information Center

    Nandakumar, Ratna; Yu, Feng; Zhang, Yanwei

    2011-01-01

    DETECT is a nonparametric methodology to identify the dimensional structure underlying test data. The associated DETECT index, D_max, denotes the degree of multidimensionality in data. Conditional covariances (CCOV) are the building blocks of this index. In specifying population CCOVs, the latent test composite theta_TT…

  11. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
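
    The basic covariance map, and the single-parameter partial variant that suppresses a common fluctuation such as shot-to-shot pulse energy, are short computations. A sketch under the usual definitions (one spectrum per laser shot; array shapes and data here are illustrative):

        import numpy as np

        def covariance_map(shots):
            # C(x, y) = <S(x)S(y)> - <S(x)><S(y)> over repeated shots;
            # each row of `shots` is one spectrum.
            S = shots - shots.mean(axis=0)
            return S.T @ S / shots.shape[0]

        def partial_covariance_map(shots, param):
            # Remove correlations induced by one fluctuating parameter
            # (e.g., FEL pulse energy): pC = C - cov(S,p) cov(S,p)^T / var(p).
            S = shots - shots.mean(axis=0)
            p = param - param.mean()
            cov_sp = S.T @ p / len(p)
            return covariance_map(shots) - np.outer(cov_sp, cov_sp) / p.var()

        rng = np.random.default_rng(0)
        pulse = rng.gamma(5.0, size=1000)                   # shot-to-shot fluctuation
        shots = np.outer(pulse, rng.random(256)) + rng.normal(size=(1000, 256))
        pC = partial_covariance_map(shots, pulse)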

  12. Misunderstanding analysis of covariance.

    PubMed

    Miller, G A; Chapman, J P

    2001-02-01

    Despite numerous technical treatments in many venues, analysis of covariance (ANCOVA) remains a widely misused approach to dealing with substantive group differences on potential covariates, particularly in psychopathology research. Published articles reach unfounded conclusions, and some statistics texts neglect the issue. The problem with ANCOVA in such cases is reviewed. In many cases, there is no means of achieving the superficially appealing goal of "correcting" or "controlling for" real group differences on a potential covariate. In hopes of curtailing misuse of ANCOVA and promoting appropriate use, a nontechnical discussion is provided, emphasizing a substantive confound rarely articulated in textbooks and other general presentations, to complement the mathematical critiques already available. Some alternatives are discussed for contexts in which ANCOVA is inappropriate or questionable. PMID:11261398

  13. Nonparametric Methods Instruction in Quantitative Geology.

    ERIC Educational Resources Information Center

    Kemmerly, Phillip Randall

    1990-01-01

    Presented is an approach to introducing upper division, undergraduate geology students to nonparametric statistics and their application to geologic data. Discussed are the use of the Mann-Whitney U and the Kolmogorov-Smirnov tests and a class assignment which illustrates their use. (CW)

  14. A Comparison of Parametric versus Nonparametric Statistics.

    ERIC Educational Resources Information Center

    Royeen, Charlotte Brasic

    In order to examine the possible effects of violation of assumptions using parametric procedures, this study is an exploratory investigation into the use of parametric versus nonparametric procedures using a multiple case study design. The case study investigation guidelines outlined by Yin served as the methodology. The following univariate…

  15. Nonparametric analysis of high wind speed data

    NASA Astrophysics Data System (ADS)

    Francisco-Fernández, Mario; Quintela-del-Río, Alejandro

    2013-01-01

    In this paper, nonparametric curve estimation methods are applied to analyze time series of wind speeds, focusing on the extreme events exceeding a chosen threshold. Classical parametric statistical approaches in this context consist of fitting a generalized Pareto distribution (GPD) to the tail of the empirical cumulative distribution, using maximum likelihood or the method of moments to estimate the parameters of this distribution. Additionally, confidence intervals are usually computed to assess the uncertainty of the estimates. Nonparametric methods to estimate directly some quantities of interest, such as the probability of exceedance, the quantiles or return levels, or the return periods, are proposed. Moreover, bootstrap techniques are used to develop pointwise and simultaneous confidence intervals for these functions. The proposed models are applied to wind speed data from the US Gulf Coast, comparing the results with those using the GPD approach by means of a split-sample test. Results show that nonparametric methods are competitive with respect to the standard GPD approximations. The study is completed by generating synthetic data sets and comparing the behavior of the parametric and the nonparametric estimates in this framework.
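
    For the simplest of the quantities mentioned, the probability of exceeding a threshold, a nonparametric estimate and its bootstrap confidence interval fit in a few lines. This is an empirical-frequency sketch with synthetic data; the paper uses kernel-smoothed estimators.

        import numpy as np

        def exceedance_prob(sample, threshold):
            # Plain nonparametric estimate of P(X > threshold).
            return np.mean(sample > threshold)

        def bootstrap_ci(sample, threshold, n_boot=2000, level=0.95, seed=0):
            # Pointwise percentile-bootstrap interval for the exceedance probability.
            rng = np.random.default_rng(seed)
            stats = [exceedance_prob(rng.choice(sample, size=len(sample)), threshold)
                     for _ in range(n_boot)]
            return np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])

        wind = np.random.default_rng(1).weibull(2.0, size=5000) * 10  # stand-in data
        print(exceedance_prob(wind, 15.0), bootstrap_ci(wind, 15.0))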

  16. How Are Teachers Teaching? A Nonparametric Approach

    ERIC Educational Resources Information Center

    De Witte, Kristof; Van Klaveren, Chris

    2014-01-01

    This paper examines which configuration of teaching activities maximizes student performance. For this purpose a nonparametric efficiency model is formulated that accounts for (1) self-selection of students and teachers in better schools and (2) complementary teaching activities. The analysis distinguishes both individual teaching (i.e., a…

  17. The covariant chiral ring

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Troost, Jan

    2016-03-01

    We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N = (4 , 4) supersymmetry in two dimensions. For seed target spaces K3 and T 4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.

  18. A Basic Computer Program for Calculating Simultaneous Pairwise Comparisons in Analysis of Covariance.

    ERIC Educational Resources Information Center

    Powers, Stephen; Jones, Patricia

    1986-01-01

    This paper describes a computer program which tests all pairwise comparisons of adjusted means in analysis of covariance using the Tukey-Kramer test. The program contains: means of the covariate, adjusted means of the criterion measure, sample size, mean square error, and the desired percentile point on the Studentized range distribution. (JAZ)
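
    A sketch of the same computation in Python rather than BASIC (one factor, one covariate; the standard error below uses the simple Tukey-Kramer form and omits the covariate-imbalance refinement some ANCOVA texts add):

        import numpy as np
        from scipy import stats

        def ancova_fit(y, x, g):
            # Pooled within-group slope, adjusted means, and error mean square.
            ks = np.unique(g)
            sxx = sum(((x[g == k] - x[g == k].mean()) ** 2).sum() for k in ks)
            sxy = sum(((x[g == k] - x[g == k].mean())
                       * (y[g == k] - y[g == k].mean())).sum() for k in ks)
            b = sxy / sxx
            adj = {k: y[g == k].mean() - b * (x[g == k].mean() - x.mean()) for k in ks}
            sse = sum((((y[g == k] - y[g == k].mean())
                        - b * (x[g == k] - x[g == k].mean())) ** 2).sum() for k in ks)
            df = len(y) - len(ks) - 1
            return adj, sse / df, df

        def tukey_kramer(adj, mse, df, n, k1, k2, n_groups):
            # Studentized-range test for one pair of adjusted means.
            se = np.sqrt(mse * 0.5 * (1.0 / n[k1] + 1.0 / n[k2]))
            q = abs(adj[k1] - adj[k2]) / se
            return q, stats.studentized_range.sf(q, n_groups, df)

        rng = np.random.default_rng(0)
        g = np.repeat(np.array(['a', 'b', 'c']), 20)
        x = rng.normal(size=60)
        y = 0.8 * x + (g == 'b') * 1.5 + rng.normal(size=60)
        adj, mse, df = ancova_fit(y, x, g)
        n = {k: np.sum(g == k) for k in adj}
        print(tukey_kramer(adj, mse, df, n, 'a', 'b', n_groups=3))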

  19. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  20. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  1. An Evaluation of Parametric and Nonparametric Models of Fish Population Response.

    SciTech Connect

    Haas, Timothy C.; Peterson, James T.; Lee, Danny C.

    1999-11-01

    Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land-use. The techniques included generalized logit modeling, classification trees, a nearest neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation and introduce a computer-intensive Monte Carlo hypothesis testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor techniques to be the most accurate, but were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable, but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more methods may provide greater biological insight without sacrificing prediction accuracy.

  2. Using Analysis of Covariance (ANCOVA) with Fallible Covariates

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Aguinis, Herman

    2011-01-01

    Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…

  3. Nonparametric Bayesian Variable Selection With Applications to Multiple Quantitative Trait Loci Mapping With Epistasis and Gene–Environment Interaction

    PubMed Central

    Zou, Fei; Huang, Hanwen; Lee, Seunggeun; Hoeschele, Ina

    2010-01-01

    The joint action of multiple genes is an important source of variation for complex traits and human diseases. However, mapping genes with epistatic effects and gene–environment interactions is a difficult problem because of relatively small sample sizes and very large parameter spaces for quantitative trait locus models that include such interactions. Here we present a nonparametric Bayesian method to map multiple quantitative trait loci (QTL) by considering epistatic and gene–environment interactions. The proposed method is not restricted to pairwise interactions among genes, as is typically done in parametric QTL analysis. Rather than modeling each main and interaction term explicitly, our nonparametric Bayesian method measures the importance of each QTL, irrespective of whether it is mostly due to a main effect or due to some interaction effect(s), via an unspecified function of the genotypes at all candidate QTL. A Gaussian process prior is assigned to this unknown function. In addition to the candidate QTL, nongenetic factors and covariates, such as age, gender, and environmental conditions, can also be included in the unspecified function. The importance of each genetic factor (QTL) and each nongenetic factor/covariate included in the function is estimated by a single hyperparameter, which enters the covariance function and captures any main or interaction effect associated with a given factor/covariate. An initial evaluation of the performance of the proposed method is obtained via analysis of simulated and real data. PMID:20551445
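
    The "single hyperparameter per factor" idea is the automatic relevance determination (ARD) form of a Gaussian process covariance. A minimal sketch (squared-exponential kernel assumed for illustration; the paper's exact covariance may differ), where a factor whose relevance shrinks to zero drops out of the model whether its effect is main or interactive:

        import numpy as np

        def ard_covariance(X1, X2, eta, sigma2=1.0):
            # k(x, x') = sigma2 * exp(-sum_d eta_d (x_d - x'_d)^2);
            # eta_d measures the importance of factor/covariate d.
            diff2 = (X1[:, None, :] - X2[None, :, :]) ** 2
            return sigma2 * np.exp(-(diff2 * eta).sum(axis=-1))

        rng = np.random.default_rng(0)
        G = rng.integers(0, 3, size=(50, 8)).astype(float)  # 8 candidate QTL, genotypes 0/1/2
        eta = np.array([1.0, 0, 0, 0.5, 0, 0, 0, 0])        # only QTL 1 and 4 matter
        K = ard_covariance(G, G, eta)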

  4. Adaptive Confidence Bands for Nonparametric Regression Functions

    PubMed Central

    Cai, T. Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661

  5. Nonparametric estimation with recurrent competing risks data

    PubMed Central

    Peña, Edsel A.

    2014-01-01

    Nonparametric estimators of component and system life distributions are developed and presented for situations where recurrent competing risks data from series systems are available. The use of recurrences of components’ failures leads to improved efficiencies in statistical inference, thereby leading to resource-efficient experimental or study designs or improved inferences about the distributions governing the event times. Finite and asymptotic properties of the estimators are obtained through simulation studies and analytically. The detrimental impact of parametric model misspecification is also vividly demonstrated, lending credence to the virtue of adopting nonparametric or semiparametric models, especially in biomedical settings. The estimators are illustrated by applying them to a data set pertaining to car repairs for vehicles that were under warranty. PMID:24072583

  6. A nonparametric software reliability growth model

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.; Sofer, Ariela

    1988-01-01

    Miller and Sofer have presented a nonparametric method for estimating the failure rate of a software program. The method is based on the complete monotonicity property of the failure rate function, and uses a regression approach to obtain estimates of the current software failure rate. This completely monotone software model is extended. It is shown how it can also provide long-range predictions of future reliability growth. Preliminary testing indicates that the method is competitive with parametric approaches, while being more robust.

  7. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in N_f = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.

  8. Covariant deformed oscillator algebras

    NASA Technical Reports Server (NTRS)

    Quesne, Christiane

    1995-01-01

    The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU_q(n) x SU_q(m)-covariant q-bosonic algebra is discussed in some detail.

  9. The Bayesian Covariance Lasso

    PubMed Central

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G.

    2012-01-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data. PMID:24551316

  10. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
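
    A sketch of one plausible reading of the One-in-N idea (Gaussian kernel density estimate, threshold at the lower-tail probability corresponding to one day in N winters; the thesis's exact definition and winter length may differ):

        import numpy as np
        from scipy.stats import gaussian_kde

        def one_in_n_threshold(temps, n_years, days_per_winter=90):
            # Temperature whose estimated lower-tail probability is
            # 1 / (n_years * days_per_winter) of daily observations.
            kde = gaussian_kde(temps)
            target = 1.0 / (n_years * days_per_winter)
            grid = np.linspace(temps.min() - 20, temps.max(), 4000)
            cdf = np.array([kde.integrate_box_1d(-np.inf, t) for t in grid])
            return grid[np.searchsorted(cdf, target)]

        temps = np.random.default_rng(0).normal(-5, 8, size=30 * 90)  # 30 synthetic winters
        print(one_in_n_threshold(temps, n_years=30))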

  11. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests against a 60% minimum passing threshold, a quite satisfactory and useful result.
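
    The test-statistic step is compact: squared Mahalanobis distances of propagated-state position errors should follow a chi-squared distribution with 3 degrees of freedom if the covariance is sized correctly. A sketch using a Kolmogorov-Smirnov comparison as the ECDF goodness-of-fit test (the paper's specific GOF statistic may differ):

        import numpy as np
        from scipy import stats

        def covariance_realism_stats(errors, covariances):
            # Squared Mahalanobis distance of each state error under its
            # predicted covariance; compare the ECDF to chi2 with 3 DoF.
            d2 = np.array([e @ np.linalg.solve(P, e)
                           for e, P in zip(errors, covariances)])
            ks_stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
            return d2, ks_stat, p_value

        rng = np.random.default_rng(0)
        P = np.diag([4.0, 1.0, 0.25])
        errs = rng.multivariate_normal(np.zeros(3), P, size=500)
        _, ks, p = covariance_realism_stats(errs, [P] * 500)
        print(ks, p)   # large p: the covariance is realistically sized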

  12. Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials

    DOE Data Explorer

    The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOE's Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity / uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release). [Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10^-5 eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for 7Li, 232Th, 233,235,238U and 239Pu as well as 223,224,225,226Ra, while natZn is replaced by 64,66,67,68,70Zn."

  13. Covariant magnetic connection hypersurfaces

    NASA Astrophysics Data System (ADS)

    Pegoraro, F.

    2016-04-01

    In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved 'magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when…

  14. Flexible estimation of covariance function by penalized spline with application to longitudinal family data

    PubMed Central

    Wang, Yuanjia

    2011-01-01

    Longitudinal data are routinely collected in biomedical research studies. A natural model describing longitudinal data decomposes an individual’s outcome as the sum of a population mean function and random subject-specific deviations. When parametric assumptions are too restrictive, methods modeling the population mean function and the random subject-specific functions nonparametrically are in demand. In some applications, it is desirable to estimate a covariance function of random subject-specific deviations. In this work, flexible yet computationally efficient methods are developed for a general class of semiparametric mixed effects models, where the functional forms of the population mean and the subject-specific curves are unspecified. We estimate nonparametric components of the model by penalized spline (P-spline, [1]), and reparametrize the random curve covariance function by a modified Cholesky decomposition [2] which allows for unconstrained estimation of a positive semidefinite matrix. To provide smooth estimates, we penalize roughness of fitted curves and derive closed form solutions in the maximization step of an EM algorithm. In addition, we present models and methods for longitudinal family data where subjects in a family are correlated and we decompose the covariance function into a subject-level source and observation-level source. We apply these methods to the multi-level Framingham Heart Study data to estimate age-specific heritability of systolic blood pressure (SBP) nonparametrically. PMID:21491474
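
    The point of a Cholesky-type reparametrization is that a positive semidefinite covariance matrix can be optimized over without constraints. A minimal sketch of that idea (plain unconstrained Cholesky with a log-diagonal; the modified Cholesky of [2] used in the paper additionally gives entries an autoregressive interpretation):

        import numpy as np

        def covariance_from_unconstrained(params, dim):
            # Build a PSD covariance from an unconstrained vector:
            # exp() keeps the Cholesky diagonal positive, the strictly
            # lower triangle is free, and L @ L.T is PSD by construction.
            L = np.zeros((dim, dim))
            L[np.diag_indices(dim)] = np.exp(params[:dim])
            L[np.tril_indices(dim, k=-1)] = params[dim:]
            return L @ L.T

        theta = np.random.default_rng(0).normal(size=3 + 3)  # dim=3: 3 diag + 3 lower
        Sigma = covariance_from_unconstrained(theta, dim=3)
        print(np.linalg.eigvalsh(Sigma))   # all nonnegative by construction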

  15. Nonparametric, nonnegative deconvolution of large time series

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.

    2006-12-01

    There is a long tradition of characterizing hydrologic systems by linear models, in which the response of the system to a time-varying stimulus is computed by convolution of a system-specific transfer function with the input signal. Despite its limitations, the transfer-function concept has been shown valuable for many situations such as the precipitation/run-off relationships of catchments and solute transport in agricultural soils and aquifers. A practical difficulty lies in the identification of the transfer function. A common approach is to fit a parametric function, enforcing a particular shape of the transfer function, which may be in contradiction to the real behavior (e.g., multimodal transfer functions, long tails, etc.). In our nonparametric deconvolution, the transfer function is assumed an auto-correlated random time function, which is conditioned on the data by a Bayesian approach. Nonnegativity, which is a vital constraint for solute-transport applications, is enforced by the method of Lagrange multipliers. This makes the inverse problem nonlinear. In nonparametric deconvolution, identifying the auto-correlation parameters is crucial. Enforcing too much smoothness prohibits the identification of important features, whereas insufficient smoothing leads to physically meaningless transfer functions, mapping noise components in the two data series onto each other. We identify optimal smoothness parameters by the expectation-maximization method, which requires the repeated generation of many conditional realizations. The overall approach, however, is still significantly faster than Markov-Chain Monte-Carlo methods presented recently. We apply our approach to electric-conductivity time series measured in a river and monitoring wells in the adjacent aquifer. The data cover 1.5 years with a temporal resolution of 1h. The identified transfer functions have lengths of up to 60 days, making up 1440 parameters. We believe that nonparametric deconvolution is an
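
    For comparison with the Bayesian approach above, the simplest nonnegative deconvolution drops the smoothness prior entirely and solves a nonnegative least-squares problem on the convolution matrix. A sketch (no autocorrelation model, synthetic data standing in for the conductivity series):

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.optimize import nnls

        def nonnegative_deconvolution(x, y, tf_length):
            # Recover g >= 0 from y = conv(x, g) by min ||Xg - y|| s.t. g >= 0,
            # where X is the discrete convolution (Toeplitz) matrix of x.
            col = np.r_[x, np.zeros(tf_length - 1)]
            X = toeplitz(col, np.zeros(tf_length))
            g, _ = nnls(X, y)
            return g

        rng = np.random.default_rng(0)
        g_true = np.exp(-np.arange(40) / 8.0)
        g_true /= g_true.sum()
        x = rng.random(300)                            # input signal (e.g., river EC)
        y = np.convolve(x, g_true) + 0.01 * rng.normal(size=300 + 39)
        g_hat = nonnegative_deconvolution(x, y, tf_length=40)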

  16. Lottery spending: a non-parametric analysis.

    PubMed

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699

  17. Bayesian Nonparametric Inference – Why and How

    PubMed Central

    Müller, Peter; Mitra, Riten

    2013-01-01

    We review inference under models with nonparametric Bayesian (BNP) priors. The discussion follows a set of examples for some common inference problems. The examples are chosen to highlight problems that are challenging for standard parametric inference. We discuss inference for density estimation, clustering, regression and for mixed effects models with random effects distributions. While we focus on arguing for the need for the flexibility of BNP models, we also review some of the more commonly used BNP models, thus hopefully answering a bit of both questions, why and how to use BNP. PMID:24368932

  18. A nonparametric software-reliability growth model

    NASA Technical Reports Server (NTRS)

    Sofer, Ariela; Miller, Douglas R.

    1991-01-01

    The authors (1985) previously introduced a nonparametric model for software-reliability growth which is based on complete monotonicity of the failure rate. The authors extend the completely monotone software model by developing a method for providing long-range predictions of reliability growth, based on the model. They derive upper and lower bounds on extrapolation of the failure rate and the mean function. These are then used to obtain estimates for the future software failure rate and the mean future number of failures. Preliminary evaluation indicates that the method is competitive with parametric approaches, while being more robust.

  19. Bayesian nonparametric models for ranked set sampling.

    PubMed

    Gemayel, Nader; Stasny, Elizabeth A; Wolfe, Douglas A

    2015-04-01

    Ranked set sampling (RSS) is a data collection technique that combines measurement with judgment ranking for statistical inference. This paper lays out a formal and natural Bayesian framework for RSS that is analogous to its frequentist justification, and that does not require the assumption of perfect ranking or use of any imperfect ranking models. Prior beliefs about the judgment order statistic distributions and their interdependence are embodied by a nonparametric prior distribution. Posterior inference is carried out by means of Markov chain Monte Carlo techniques, and yields estimators of the judgment order statistic distributions (and of functionals of those distributions). PMID:25326663

  1. Correcting eddy-covariance flux underestimates over a grassland.

    SciTech Connect

    Twine, T. E.; Kustas, W. P.; Norman, J. M.; Cook, D. R.; Houser, P. R.; Meyers, T. P.; Prueger, J. H.; Starks, P. J.; Wesely, M. L.; Environmental Research; Univ. of Wisconsin at Madison; DOE; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administration

    2000-06-08

    Independent measurements of the major energy balance flux components are often inconsistent with the principle of conservation of energy. This is referred to as a lack of closure of the surface energy balance. Most results in the literature have shown the sum of sensible and latent heat fluxes measured by eddy covariance to be less than the difference between net radiation and soil heat fluxes. This under-measurement of sensible and latent heat fluxes by eddy-covariance instruments has occurred in numerous field experiments and across instruments from many different manufacturers. Four eddy-covariance systems consisting of the same models of instruments were set up side-by-side during the Southern Great Plains 1997 Hydrology Experiment, and all systems under-measured fluxes by similar amounts. One of these eddy-covariance systems was collocated with three other types of eddy-covariance systems at different sites; all of these systems under-measured the sensible and latent heat fluxes. The net radiometers and soil heat flux plates used in conjunction with the eddy-covariance systems were calibrated independently, and measurements of net radiation and soil heat flux showed little scatter across sites. The 10% absolute uncertainty in available-energy measurements was considerably smaller than the systematic closure problem in the surface energy budget, which varied from 10 to 30%. When available-energy measurement errors are known and modest, eddy-covariance measurements of sensible and latent heat fluxes should be adjusted for closure. Although the preferred method of energy balance closure is to maintain the Bowen ratio, the method for obtaining closure appears to be less important than ensuring that eddy-covariance measurements are consistent with conservation of energy. Based on numerous measurements over a sorghum canopy, carbon dioxide fluxes measured by eddy covariance are underestimated by the same factor as eddy-covariance evaporation measurements.
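
    The Bowen-ratio closure the abstract recommends has a simple closed form: scale the measured sensible (H) and latent (LE) heat fluxes so their sum equals the available energy Rn - G while their ratio is preserved. Below is a minimal numpy sketch of that adjustment; the function name and the W m^-2 example values are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def bowen_ratio_closure(H, LE, Rn, G):
        """Force H + LE = Rn - G while preserving the Bowen ratio beta = H/LE.

        A sketch of the closure adjustment described in the abstract;
        assumes LE != 0 and fluxes in consistent units (e.g., W m^-2).
        """
        H, LE, Rn, G = map(np.asarray, (H, LE, Rn, G))
        available = Rn - G                     # available energy
        beta = H / LE                          # Bowen ratio, held fixed
        LE_adj = available / (1.0 + beta)
        H_adj = available - LE_adj             # equals beta * LE_adj
        return H_adj, LE_adj

    # Example: raw fluxes sum to 320 W m^-2 against 400 W m^-2 available energy
    H_adj, LE_adj = bowen_ratio_closure(H=120.0, LE=200.0, Rn=450.0, G=50.0)
    print(H_adj, LE_adj)                       # 150.0 250.0
    ```

    The same multiplicative factor (here 400/320 = 1.25) would also be applied to the eddy-covariance CO2 flux, per the abstract's closing remark.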

  2. Nonparametric discriminant analysis for face recognition.

    PubMed

    Li, Zhifeng; Lin, Dahua; Tang, Xiaoou

    2009-04-01

    In this paper, we develop a new framework for face recognition based on nonparametric discriminant analysis (NDA) and multi-classifier integration. Traditional LDA-based methods suffer a fundamental limitation originating from the parametric nature of scatter matrices, which rest on the Gaussian distribution assumption. The performance of these methods notably degrades when the actual distribution is non-Gaussian. To address this problem, we propose a new formulation of scatter matrices to extend two-class nonparametric discriminant analysis to multi-class cases. We then develop two improved multi-class NDA-based algorithms (NSA and NFA), each with two complementary variants based on the principal space and the null space of the intra-class scatter matrix, respectively. Compared with NSA, NFA makes more effective use of classification boundary information. To exploit the complementary nature of the two kinds of NFA (PNFA and NNFA), we finally develop a dual NFA-based multi-classifier fusion framework, employing the overcomplete Gabor representation to boost recognition performance. We show the improvements of the new algorithms over traditional subspace methods through comparative experiments on two challenging face databases, the Purdue AR database and the XM2VTS database. PMID:19229090

  3. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    PubMed

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision, such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult to choose as the dataset grows. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry but rather lie on a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. PMID:27046838

  4. Stardust Navigation Covariance Analysis

    NASA Astrophysics Data System (ADS)

    Menon, Premkumar R.

    2000-01-01

    The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by the Jet Propulsion Laboratory (JPL). This paper will describe the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and its trajectory will be provided first, followed by a discussion of the covariance procedure and models. Predicted accuracies will be examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter, Stardust will return to Earth, where the Sample Return Capsule (SRC) will separate and land at the Utah Test and Training Range (UTTR) on January 15, 2006. The spacecraft itself will, however, be deflected into a heliocentric orbit. The mission is divided into three phases for the covariance analysis. They are 1) Launch to EGA, 2) EGA to Wild-2 encounter and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a-priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers.

  5. Nonparametric identification and maximum likelihood estimation for hidden Markov models

    PubMed Central

    Alexandrovich, G.; Holzmann, H.; Leister, A.

    2016-01-01

    Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop a nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback–Leibler divergence of the hidden Markov model, also identifies the true parameter vector nonparametrically. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we establish the consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates and of nonparametric goodness-of-fit tests are investigated in a simulation study.

  6. Nonparametric dark energy reconstruction from supernova data.

    PubMed

    Holsclaw, Tracy; Alam, Ujjaini; Sansó, Bruno; Lee, Herbert; Heitmann, Katrin; Habib, Salman; Higdon, David

    2010-12-10

    Understanding the origin of the accelerated expansion of the Universe poses one of the greatest challenges in physics today. Lacking a compelling fundamental theory to test, observational efforts are targeted at a better characterization of the underlying cause. If a new form of mass-energy, dark energy, is driving the acceleration, the redshift evolution of the equation of state parameter w(z) will hold essential clues as to its origin. To best exploit data from observations it is necessary to develop a robust and accurate reconstruction approach, with controlled errors, for w(z). We introduce a new, nonparametric method for solving the associated statistical inverse problem based on Gaussian process modeling and Markov chain Monte Carlo sampling. Applying this method to recent supernova measurements, we reconstruct the continuous history of w out to redshift z=1.5. PMID:21231517
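
    The core ingredient of this reconstruction is Gaussian process regression. The sketch below shows only that ingredient on mock w(z) "data"; the paper itself embeds the GP in a Markov chain Monte Carlo analysis of the supernova likelihood, so the kernel choice, hyperparameter values, and variable names here are illustrative assumptions.

    ```python
    import numpy as np

    def gp_posterior_mean(x_tr, y_tr, x_te, ell=0.3, sig_f=1.0, sig_n=0.1):
        """Posterior mean of a zero-mean GP with a squared-exponential kernel."""
        def k(a, b):
            return sig_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
        K = k(x_tr, x_tr) + sig_n**2 * np.eye(len(x_tr))   # noisy training kernel
        return k(x_te, x_tr) @ np.linalg.solve(K, y_tr)

    z = np.linspace(0.0, 1.5, 30)
    w_obs = -1.0 + 0.1 * np.random.randn(30)     # mock w(z) values near LambdaCDM
    z_grid = np.linspace(0.0, 1.5, 200)
    w_rec = gp_posterior_mean(z, w_obs, z_grid)  # smooth reconstruction of w(z)
    ```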

  7. Nonparametric k -nearest-neighbor entropy estimator

    NASA Astrophysics Data System (ADS)

    Lombardi, Damiano; Pant, Sanjay

    2016-01-01

    A nonparametric k -nearest-neighbor-based entropy estimator is proposed. It improves on the classical Kozachenko-Leonenko estimator by considering nonuniform probability densities in the region of k -nearest neighbors around each sample point. It aims to improve the classical estimators in three situations: first, when the dimensionality of the random variable is large; second, when near-functional relationships leading to high correlation between components of the random variable are present; and third, when the marginal variances of random variable components vary significantly with respect to each other. Heuristics on the error of the proposed and classical estimators are presented. Finally, the proposed estimator is tested for a variety of distributions in successively increasing dimensions and in the presence of a near-functional relationship. Its performance is compared with a classical estimator, and a significant improvement is demonstrated.

  8. Nonparametric k-nearest-neighbor entropy estimator.

    PubMed

    Lombardi, Damiano; Pant, Sanjay

    2016-01-01

    A nonparametric k-nearest-neighbor-based entropy estimator is proposed. It improves on the classical Kozachenko-Leonenko estimator by considering nonuniform probability densities in the region of k-nearest neighbors around each sample point. It aims to improve the classical estimators in three situations: first, when the dimensionality of the random variable is large; second, when near-functional relationships leading to high correlation between components of the random variable are present; and third, when the marginal variances of random variable components vary significantly with respect to each other. Heuristics on the error of the proposed and classical estimators are presented. Finally, the proposed estimator is tested for a variety of distributions in successively increasing dimensions and in the presence of a near-functional relationship. Its performance is compared with a classical estimator, and a significant improvement is demonstrated. PMID:26871193
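
    For reference, the classical Kozachenko-Leonenko estimator that both records take as their baseline can be written in a few lines. This sketch implements the classical estimator, not the improved one proposed in the paper; the sanity-check value is the known entropy of a 2-D standard normal.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def kl_entropy(x, k=3):
        """Classical Kozachenko-Leonenko k-NN entropy estimate in nats.

        x is an (N, d) array of samples from a continuous distribution.
        """
        x = np.asarray(x, dtype=float)
        n, d = x.shape
        # distance from each point to its k-th nearest neighbour (self excluded)
        eps = cKDTree(x).query(x, k=k + 1)[0][:, -1]
        log_vol = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)  # unit d-ball
        return digamma(n) - digamma(k) + log_vol + d * np.mean(np.log(eps))

    # 2-D standard normal has entropy 1 + log(2*pi) ~ 2.838 nats
    print(kl_entropy(np.random.randn(5000, 2)))
    ```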

  9. A nonparametric approach for European option valuation

    NASA Astrophysics Data System (ADS)

    Huang, Guanghui; Wan, Jianping

    2008-04-01

    A nonparametric approach for European option valuation is proposed in this paper, which adopts a purely jump model to describe the price dynamics of the underlying asset, and uses the minimal entropy martingale measure for those jumps as the pricing measure of this market. A simple Monte Carlo simulation method is proposed to calculate the price of derivatives under this risk-neutral measure. The volatility of the spot market is updated automatically in the proposed method, without requiring a particular specification. The performance of the proposed method is compared to that of the Black-Scholes formula on both artificial and real-world data. The results of our investigations suggest that the proposed method is a valuable alternative.
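
    The pricing step described here reduces to discounting the average payoff over simulated jump paths. The sketch below prices a European call when the log-price moves only by compound-Poisson jumps with Gaussian sizes; it uses an ordinary martingale drift correction rather than the paper's minimal entropy martingale measure, and all parameter values are invented for illustration.

    ```python
    import numpy as np

    def mc_call_price(s0, strike, r, T, lam, mu_j, sig_j, n_paths=100_000, seed=0):
        """Monte Carlo price of a European call under a pure-jump log-price."""
        rng = np.random.default_rng(seed)
        n_jumps = rng.poisson(lam * T, n_paths)            # jump count per path
        # sum of n_jumps iid N(mu_j, sig_j^2) jump sizes, per path
        jumps = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
        kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0        # mean relative jump size
        log_st = np.log(s0) + (r - lam * kappa) * T + jumps  # risk-neutral drift
        payoff = np.maximum(np.exp(log_st) - strike, 0.0)
        return np.exp(-r * T) * payoff.mean()

    print(mc_call_price(s0=100.0, strike=100.0, r=0.02, T=1.0,
                        lam=1.0, mu_j=0.0, sig_j=0.1))
    ```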

  10. A Nonparametric Bayesian Model for Nested Clustering.

    PubMed

    Lee, Juhee; Müller, Peter; Zhu, Yitan; Ji, Yuan

    2016-01-01

    We propose a nonparametric Bayesian model for clustering where clusters of experimental units are determined by a shared pattern of clustering another set of experimental units. The proposed model is motivated by the analysis of protein activation data, where we cluster proteins such that all proteins in one cluster give rise to the same clustering of patients. That is, we define clusters of proteins by the way that patients group with respect to the corresponding protein activations. This is in contrast to (almost) all currently available models that use shared parameters in the sampling model to define clusters. This includes in particular model based clustering, Dirichlet process mixtures, product partition models, and more. We show results for two typical biostatistical inference problems that give rise to clustering. PMID:26519174

  11. Nonparametric Bayesian evaluation of differential protein quantification

    PubMed Central

    Cansizoglu, A. Ertugrul; Käll, Lukas; Steen, Hanno

    2013-01-01

    Arbitrary cutoffs are ubiquitous in quantitative computational proteomics: maximum acceptable MS/MS PSM or peptide q-value, minimum ion intensity to calculate a fold change, the minimum number of peptides that must be available to trust the estimated protein fold change (or the minimum number of PSMs that must be available to trust the estimated peptide fold change), and the “significant” fold change cutoff. Here we introduce a novel experimental setup and nonparametric Bayesian algorithm for determining the statistical quality of a proposed differential set of proteins or peptides. By comparing putatively non-changing case-control evidence to an empirical null distribution derived from a control-control experiment, we successfully avoid some of these common parameters. We then apply our method to evaluating different fold change rules and find that, for our data, a 1.2-fold change is the most permissive of the plausible fold change rules. PMID:24024742

  12. Nonparametric inference of network structure and dynamics

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks, such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, and citations among scientific papers.

  13. Covariance specification and estimation to improve top-down Greenhouse Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    accuracy, we perform a sensitivity study to further tune covariance parameters. Finally, we introduce a shrinkage-based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically, in a more efficient and automated way.

  14. Essays in applied macroeconomics: Asymmetric price adjustment, exchange rate and treatment effect

    NASA Astrophysics Data System (ADS)

    Gu, Jingping

    This dissertation consists of three essays. Chapter II examines the possible asymmetric response of gasoline prices to crude oil price changes using an error correction model with GARCH errors. Recent papers have looked at this issue. Some of these papers estimate a form of error correction model, but none of them accounts for autoregressive conditional heteroskedasticity in estimation and testing for asymmetry, and none of them takes the response of the crude oil price into consideration. We find that time-varying volatility of gasoline price disturbances is an important feature of the data, and when we allow for asymmetric GARCH errors and investigate the system-wide impulse response function, we find evidence of asymmetric adjustment to crude oil price changes in weekly retail gasoline prices. Chapter III discusses the relationship between the fiscal deficit and the exchange rate. Economic theory predicts that fiscal deficits can significantly affect real exchange rate movements, but existing empirical evidence reports only a weak impact of fiscal deficits on exchange rates. Based on US dollar-based real exchange rates in G5 countries and a flexible varying coefficient model, we show that the previously documented weak relationship between fiscal deficits and exchange rates may be the result of additive specifications, and that the relationship is stronger if we allow fiscal deficits to impact real exchange rates non-additively as well as nonlinearly. We find that the speed of exchange rate adjustment toward equilibrium depends on the state of the fiscal deficit; a fiscal contraction in the US can lead to less persistence in the deviation of exchange rates from fundamentals, and faster mean reversion to the equilibrium. Chapter IV proposes a kernel method to deal with the nonparametric regression model with only discrete covariates as regressors. This new approach is based on the recently developed least-squares cross-validation kernel smoothing method. It can not only automatically smooth

  15. The incredible shrinking covariance estimator

    NASA Astrophysics Data System (ADS)

    Theiler, James

    2012-05-01

    Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
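
    The shrinkage estimator itself is one line once the target is chosen; the expense discussed in the paper lies in selecting the shrinkage parameter by (approximate) leave-one-out cross-validation. Below is a minimal sketch with a diagonal target and a fixed, hand-picked shrinkage parameter; the paper's fast LOOC approximation is not reproduced here.

    ```python
    import numpy as np

    def shrink_covariance(X, alpha):
        """Linear shrinkage of the sample covariance toward its diagonal.

        alpha in [0, 1]; in practice it would be chosen by cross-validation,
        e.g. the leave-one-out approximation the paper discusses.
        """
        S = np.cov(X, rowvar=False)     # overfit: unstructured sample covariance
        T = np.diag(np.diag(S))         # underfit: diagonal target
        return (1.0 - alpha) * S + alpha * T

    X = np.random.randn(25, 40)         # n < d, so the sample covariance is singular
    sigma_hat = shrink_covariance(X, alpha=0.3)
    print(np.linalg.cond(sigma_hat))    # finite, unlike the unshrunk estimate
    ```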

  16. Nonparametric regression of state occupation, entry, exit, and waiting times with multistate right-censored data.

    PubMed

    Mostajabi, Farida; Datta, Somnath

    2013-07-30

    We construct nonparametric regression estimators of a number of temporal functions in a multistate system based on a continuous univariate baseline covariate. These estimators include state occupation probabilities and state entry, exit, and waiting (sojourn) time distribution functions of a general progressive (e.g., acyclic) multistate model. The data are subject to right censoring, and the censoring mechanism is explainable by observable covariates that may be time dependent. The resulting estimators are valid even if the multistate process is non-Markov. We establish large sample consistency of these estimators and study their performance in two simulation settings. We illustrate our estimators using a data set on bone marrow transplant recipients. PMID:23225570

  17. Covariant Electrodynamics in Vacuum

    NASA Astrophysics Data System (ADS)

    Wilhelm, H. E.

    1990-05-01

    The generalized Galilei covariant Maxwell equations and their EM field transformations are applied to the vacuum electrodynamics of a charged particle moving with an arbitrary velocity v in an inertial frame with EM carrier (ether) of velocity w. In accordance with the Galilean relativity principle, all velocities have absolute meaning (relative to the ether frame with isotropic light propagation), and the relative velocity of two bodies is defined by the linear relation uG = v1 - v2. It is shown that the electric equipotential surfaces of a charged particle are compressed in the direction parallel to its relative velocity v - w (mechanism for physical length contraction of bodies). The magnetic field H(r, t) excited in the ether by a charge e moving uniformly with velocity v is related to its electric field E(r, t) by the equation H = ε0 (v - w) × E / [1 + w · (v - w) / c0²], which shows that (i) a magnetic field is excited only if the charge moves relative to the ether, and (ii) the magnetic field is weak if |v - w| is not comparable to the velocity of light c0. It is remarkable that a charged particle can excite EM shock waves in the ether if |v - w| > c0. This condition is realizable for anti-parallel charge and ether velocities if |v - w| > c0 - |w|, i.e., even if |v| is subluminal. The possibility of this Cerenkov effect in the ether is discussed for terrestrial and galactic situations.

  18. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    PubMed

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline. PMID:24400941

  19. A fuzzy, nonparametric segmentation framework for DTI and MRI analysis: with applications to DTI-tract extraction.

    PubMed

    Awate, Suyash P; Zhang, Hui; Gee, James C

    2007-11-01

    This paper presents a novel fuzzy-segmentation method for diffusion tensor (DT) and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g., those based on fuzzy C means (FCM), incorporate Gaussian class models that are inherently biased towards ellipsoidal clusters characterized by a mean element and a covariance matrix. Tensors in fiber bundles, however, inherently lie on specific manifolds in Riemannian spaces. Unlike FCM-based schemes, the proposed method represents these manifolds using nonparametric data-driven statistical models. The paper describes a statistically-sound (consistent) technique for nonparametric modeling in Riemannian DT spaces. The proposed method produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. Results on synthetic and real, DT and MR images, show that the proposed method provides information about the uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. By enhancing the nonparametric model to capture the spatial continuity and structure of the fiber bundle, we exploit the framework to extract the cingulum fiber bundle. Typical tractography methods for tract delineation, incorporating thresholds on fractional anisotropy and fiber curvature to terminate tracking, can face serious problems arising from partial voluming and noise. For these reasons, tractography often fails to extract thin tracts with sharp changes in orientation, such as the cingulum. The results demonstrate that the proposed method extracts this structure significantly more accurately as compared to tractography. PMID:18041267

  20. Covariant Closed String Coherent States

    SciTech Connect

    Hindmarsh, Mark; Skliros, Dimitri

    2011-02-25

    We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops.

  1. Covariant closed string coherent states.

    PubMed

    Hindmarsh, Mark; Skliros, Dimitri

    2011-02-25

    We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops. PMID:21405564

  2. Bayesian nonparametric regression with varying residual density.

    PubMed

    Pati, Debdeep; Dunson, David B

    2014-02-01

    We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
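
    The stick-breaking construction at the heart of these priors is easy to state: weights are broken off a unit stick one at a time. The sketch below draws truncated Dirichlet-process weights with Beta(1, alpha) sticks; the paper's probit stick-breaking instead passes Gaussians (possibly with GP-distributed means) through a probit link, which is not reproduced here.

    ```python
    import numpy as np

    def stick_breaking_weights(alpha, n_atoms, rng):
        """Truncated stick-breaking weights w_k = v_k * prod_{j<k} (1 - v_j)."""
        v = rng.beta(1.0, alpha, size=n_atoms)
        return v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])

    rng = np.random.default_rng(0)
    w = stick_breaking_weights(alpha=2.0, n_atoms=50, rng=rng)
    print(w.sum())   # close to 1 for a long enough truncation
    ```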

  3. Nonparametric methods in actigraphy: An update

    PubMed Central

    Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.

    2014-01-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), which describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by varying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) over the set of time intervals. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, while no difference was identified using IV60. Rhythmic synchronization of activity and rest was significantly higher in young adults than in adults with Parkinson's disease when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization. PMID:26483921
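
    The standard hourly-bin definitions of the two variables are short enough to state in code. This is a sketch of the conventional IS and IV formulas only (the paper's modification, varying the time interval of analysis, is not shown); the week of uniform random "activity" is a placeholder.

    ```python
    import numpy as np

    def interdaily_stability(x, period=24):
        """IS: variance of the average daily profile over total variance.

        Assumes x holds hourly activity counts and len(x) is a multiple
        of `period`. Higher values mean stronger day-to-day synchrony.
        """
        x = np.asarray(x, dtype=float)
        profile = x.reshape(-1, period).mean(axis=0)       # average daily profile
        num = len(x) * np.sum((profile - x.mean()) ** 2)
        return num / (period * np.sum((x - x.mean()) ** 2))

    def intradaily_variability(x):
        """IV: first-difference variance over total variance (fragmentation)."""
        x = np.asarray(x, dtype=float)
        num = len(x) * np.sum(np.diff(x) ** 2)
        return num / ((len(x) - 1) * np.sum((x - x.mean()) ** 2))

    activity = np.random.rand(24 * 7)                      # one mock week, hourly
    print(interdaily_stability(activity), intradaily_variability(activity))
    ```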

  4. NONPARAMETRIC BAYESIAN ESTIMATION OF PERIODIC LIGHT CURVES

    SciTech Connect

    Wang Yuyang; Khardon, Roni; Protopapas, Pavlos

    2012-09-01

    Many astronomical phenomena exhibit patterns that have periodic behavior. An important step when analyzing data from such processes is the problem of identifying the period: estimating the period of a periodic function based on noisy observations made at irregularly spaced time points. This problem is still a difficult challenge despite extensive study in different disciplines. This paper makes several contributions toward solving this problem. First, we present a nonparametric Bayesian model for period finding, based on Gaussian Processes (GPs), that does not make assumptions about the shape of the periodic function. As our experiments demonstrate, the new model leads to significantly better results in period estimation, especially when the light curve does not exhibit a sinusoidal shape. Second, we develop a new algorithm for parameter optimization for GPs, which is useful when the likelihood function is very sensitive to the parameters and has numerous local minima, as in the case of period estimation. The algorithm combines gradient optimization with grid search and incorporates several mechanisms to overcome the high computational complexity of GPs. Third, we develop a novel approach for using domain knowledge, in the form of a probabilistic generative model, and incorporate it into the period estimation algorithm. Experimental results validate our approach, showing significant improvement over existing methods.

  5. Non-parametric estimation of morphological lopsidedness

    NASA Astrophysics Data System (ADS)

    Giese, Nadine; van der Hulst, Thijs; Serra, Paolo; Oosterloo, Tom

    2016-09-01

    Asymmetries in the neutral hydrogen gas distribution and kinematics of galaxies are thought to be indicators of both gas accretion and gas removal processes. These are of fundamental importance for galaxy formation and evolution. Upcoming large blind H I surveys will provide tens of thousands of galaxies for a study of these asymmetries in a proper statistical way. Due to the large number of expected sources and the limited resolution of the majority of objects, detailed modelling is not feasible for most detections. We need fast, automatic, and sensitive methods to classify these objects in an objective way. Existing non-parametric methods suffer from effects like the dependence on signal-to-noise ratio, resolution, and inclination. Here we show how to correctly take these effects into account and show ways to estimate the precision of the methods. We will use existing and modelled data to give an outlook on the performance expected for galaxies observed in the various sky surveys planned for e.g. WSRT/APERTIF and ASKAP.

  6. Nonparametric estimation of the rediscovery rate.

    PubMed

    Lee, Donghwan; Ganna, Andrea; Pawitan, Yudi; Lee, Woojoo

    2016-08-15

    Validation studies have been used to increase the reliability of the statistical conclusions for scientific discoveries; such studies improve the reproducibility of the findings and reduce the possibility of false positives. Here, one of the important roles of statistics is to quantify reproducibility rigorously. Two concepts were recently defined for this purpose: (i) rediscovery rate (RDR), which is the expected proportion of statistically significant findings in a study that can be replicated in the validation study and (ii) false discovery rate in the validation study (vFDR). In this paper, we aim to develop a nonparametric approach to estimate the RDR and vFDR and show an explicit link between the RDR and the FDR. Among other things, the link explains why reproducing statistically significant results even with low FDR level may be difficult. Two metabolomics datasets are considered to illustrate the application of the RDR and vFDR concepts in high-throughput data analysis. PMID:26910365

  7. White matter microstructure from nonparametric axon diameter distribution mapping.

    PubMed

    Benjamini, Dan; Komlosh, Michal E; Holtzclaw, Lynne A; Nevo, Uri; Basser, Peter J

    2016-07-15

    We report the development of a double diffusion encoding (DDE) MRI method to estimate and map the axon diameter distribution (ADD) within an imaging volume. A variety of biological processes, ranging from development to disease and trauma, may lead to changes in the ADD in the central and peripheral nervous systems. Unlike previously proposed methods, this ADD experimental design and estimation framework employs a more general, nonparametric approach, without a priori assumptions about the underlying form of the ADD, making it suitable to analyze abnormal tissue. In the current study, this framework was used on an ex vivo ferret spinal cord, while emphasizing the way in which the ADD can be weighted by either the number or the volume of the axons. The different weightings, which result in different spatial contrasts, were considered throughout this work. DDE data were analyzed to derive spatially resolved maps of average axon diameter, ADD variance, and extra-axonal volume fraction, along with a novel sub-micron restricted structures map. The morphological information contained in these maps was then used to segment white matter into distinct domains by using a proposed k-means clustering algorithm with spatial contiguity and left-right symmetry constraints, resulting in identifiable white matter tracks. The method was validated by comparing histological measures to the estimated ADDs using a quantitative similarity metric, resulting in good agreement. With further acquisition acceleration and experimental parameters adjustments, this ADD estimation framework could be first used preclinically, and eventually clinically, enabling a wide range of neuroimaging applications for improved understanding of neurodegenerative pathologies and assessing microstructural changes resulting from trauma. PMID:27126002

  8. Evaluation of Tungsten Nuclear Reaction Data with Covariances

    SciTech Connect

    Trkov, A.; Capote, R.; Kodeli, I.; Leal, L.

    2008-12-15

    As a follow-up of the work presented at the ND-2007 conference in Nice, additional fast reactor benchmarks were analyzed. Adjustment to the cross sections in the keV region was necessary. Evaluated neutron cross section data files for 180,182,183,184,186W isotopes were produced. Covariances were generated for all isotopes except 180W. In the resonance range the retro-active method was used. Above the resolved resonance range the covariance prior was generated by the Monte Carlo technique from nuclear model calculations with the Empire-II code. Experimental data were taken into account through the GANDR system using the generalized least-squares technique. Introducing experimental data results in relatively small changes in the cross sections, but greatly constrains the uncertainties. The covariance files are currently undergoing testing.

  9. Evaluation of Tungsten Nuclear Reaction Data with Covariances

    SciTech Connect

    Trkov, A.; Capote, R.; Kodeli, I.; Leal, Luiz C.

    2008-12-01

    As a follow-up of the work presented at the ND-2007 conference in Nice, additional fast reactor benchmarks were analyzed. Adjustment to the cross sections in the keV region was necessary. Evaluated neutron cross section data files for 180,182,183,184,186W isotopes were produced. Covariances were generated for all isotopes except 180W. In the resonance range the retro-active method was used. Above the resolved resonance range the covariance prior was generated by the Monte Carlo technique from nuclear model calculations with the Empire-II code. Experimental data were taken into account through the GANDR system using the generalized least-squares technique. Introducing experimental data results in relatively small changes in the cross sections, but greatly constrains the uncertainties. The covariance files are currently undergoing testing.

  10. Shrinkage estimators for covariance matrices.

    PubMed

    Daniels, M J; Kass, R E

    2001-12-01

    Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically depend on the estimated covariance matrix.

  11. A nonparametric spatial model for periodontal data with non-random missingness.

    PubMed

    Reich, Brian J; Bandyopadhyay, Dipankar; Bondell, Howard D

    2013-09-01

    Periodontal disease progression is often quantified by clinical attachment level (CAL), defined as the distance down a tooth's root that is detached from the surrounding bone. Measured at 6 locations per tooth throughout the mouth (excluding the molars), it gives rise to a dependent data set-up. These data are often reduced to a one-number summary, such as the whole-mouth average or the number of observations greater than a threshold, to be used as the response in a regression to identify important covariates related to the current state of a subject's periodontal health. Rather than a simple one-number summary, we set out to analyze all available CAL data for each subject, exploiting the presence of spatial dependence, non-stationarity, and non-normality. Also, many subjects have a considerable proportion of missing teeth, which cannot be considered missing at random because periodontal disease is the leading cause of adult tooth loss. Under a Bayesian paradigm, we propose a nonparametric, flexible spatial (joint) model of observed CAL and the locations of missing teeth via kernel convolution methods, incorporating the aforementioned features of CAL data under a unified framework. Application of this methodology to a data set recording the periodontal health of an African-American population, as well as simulation studies, reveals the gain in model fit and inference and provides a new perspective on unraveling covariate-response relationships in the presence of the complexities posed by these data. PMID:24288421

  12. Unveiling acoustic physics of the CMB using nonparametric estimation of the temperature angular power spectrum for Planck

    SciTech Connect

    Aghamousa, Amir; Shafieloo, Arman; Arjunwadkar, Mihir; Souradeep, Tarun

    2015-02-01

    Estimation of the angular power spectrum is one of the important steps in Cosmic Microwave Background (CMB) data analysis. Here, we present a nonparametric estimate of the temperature angular power spectrum for the Planck 2013 CMB data. The method implemented in this work is model-independent, and allows the data, rather than the model, to dictate the fit. Since one of the main targets of our analysis is to test the consistency of the ΛCDM model with Planck 2013 data, we use the nuisance parameters associated with the best-fit ΛCDM angular power spectrum to remove foreground contributions from the data at multipoles ℓ ≥50. We thus obtain a combined angular power spectrum data set together with the full covariance matrix, appropriately weighted over frequency channels. Our subsequent nonparametric analysis resolves six peaks (and five dips) up to ℓ ∼1850 in the temperature angular power spectrum. We present uncertainties in the peak/dip locations and heights at the 95% confidence level. We further show how these reflect the harmonicity of acoustic peaks, and can be used for acoustic scale estimation. Based on this nonparametric formalism, we found the best-fit ΛCDM model to be at 36% confidence distance from the center of the nonparametric confidence set—this is considerably larger than the confidence distance (9%) derived earlier from a similar analysis of the WMAP 7-year data. Another interesting result of our analysis is that at low multipoles, the Planck data do not suggest any upturn, contrary to the expectation based on the integrated Sachs-Wolfe contribution in the best-fit ΛCDM cosmology.

  13. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative estimation.

  14. AFCI-2.0 Neutron Cross Section Covariance Library

    SciTech Connect

    Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.

    2011-03-01

    The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity - for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA) version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which is focusing on the needs of fast advanced burner reactors. The responsibility of BNL was defined as developing covariances for structural materials and fission products, management of the library, and coordination of the work; the responsibility of LANL was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides.

  15. Adjustment disorder

    MedlinePlus

    American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington, Va: American Psychiatric Publishing. 2013. Powell AD. Grief, bereavement, and adjustment disorders. In: Stern TA, Rosenbaum ...

  16. Fully Bayesian inference under ignorable missingness in the presence of auxiliary covariates

    PubMed Central

    Daniels, M.J.; Wang, C.; Marcus, B.H.

    2014-01-01

    In order to make a missing at random (MAR) or ignorability assumption realistic, auxiliary covariates are often required. However, the auxiliary covariates are not desired in the model for inference. Typical multiple imputation approaches do not assume that the imputation model marginalizes to the inference model. This has been termed ‘uncongenial’ (Meng, 1994). In order to make the two models congenial (or compatible), we would rather not assume a parametric model for the marginal distribution of the auxiliary covariates, but we typically do not have enough data to estimate the joint distribution well non-parametrically. In addition, when the imputation model uses a non-linear link function (e.g., the logistic link for a binary response), the marginalization over the auxiliary covariates to derive the inference model typically results in a difficult-to-interpret form for the effect of covariates. In this article, we propose a fully Bayesian approach to ensure that the models are compatible for incomplete longitudinal data by embedding an interpretable inference model within an imputation model; the approach also addresses the two complications described above. We evaluate the approach via simulations and implement it on a recent clinical trial. PMID:24571539

  17. A class of covariate-dependent spatiotemporal covariance functions

    PubMed Central

    Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.

    2014-01-01

    In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude, and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way of allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and methods to assess the dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199

  18. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. PMID:26224325
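
    The frequentist side of the comparison is compact: the Good-Turing estimate of the discovery probability is the fraction of the sample made up of species seen exactly once. A minimal sketch follows; the Bayesian nonparametric smoothing under the two-parameter Poisson-Dirichlet prior is not attempted here.

    ```python
    from collections import Counter

    def gt_new_species_prob(sample):
        """Good-Turing discovery probability: m1 / n.

        m1 is the number of species observed exactly once and n the
        sample size; this estimates the chance the next draw is new.
        """
        counts = Counter(sample)
        m1 = sum(1 for c in counts.values() if c == 1)
        return m1 / len(sample)

    sample = list("aaabbcddddefg")       # c, e, f, g are singletons
    print(gt_new_species_prob(sample))   # 4/13 ~ 0.308
    ```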

  19. Are Eddy Covariance series stationary?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral analysis via a discrete Fourier transform is used often to examine eddy covariance series for cycles (eddies) of interest. Generally the analysis is performed on hourly or half-hourly data sets collected at 10 or 20 Hz. Each original series is often assumed to be stationary. Also automated ...
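
    One informal way to probe the stationarity assumption questioned here is to compare periodograms computed on different halves of a record; a stationary series should give broadly similar spectra. A rough numpy sketch, assuming a 10 Hz half-hour record (the sampling rate and the random array are placeholders):

    ```python
    import numpy as np

    def half_series_periodograms(x, fs=10.0):
        """Periodograms of the first and second halves of a series."""
        x = np.asarray(x, dtype=float)
        half = len(x) // 2
        out = []
        for seg in (x[:half], x[half:2 * half]):
            seg = seg - seg.mean()                         # remove the mean
            power = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)
            freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
            out.append((freqs, power))
        return out

    # mock half-hour record of vertical wind speed sampled at 10 Hz
    (f1, p1), (f2, p2) = half_series_periodograms(np.random.randn(18000))
    ```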

  20. Covariance Modifications to Subspace Bases

    SciTech Connect

    Harris, D B

    2008-11-19

    Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVDs). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary-rank updates to an SVD. The purpose of this note is to describe a closely related method for applications where right singular vectors are not required. This note also describes the SVD updates for a particular scenario of interest in seismic array signal processing. The particular application involves updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d ≥ 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window onto a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors, or a sensor network. The template design process entails constructing a data matrix whose columns contain the normalized, aligned master event waveforms.
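
    Below is a sketch of the kind of update the note describes, in the style of Brand [2006] but keeping only the left singular subspace (no right singular vectors). The seismic-template dimensions are invented for illustration.

    ```python
    import numpy as np

    def svd_append_column(U, s, c):
        """Update the left singular subspace (U, s) after appending column c.

        Exact for the left subspace because V does not affect the left
        singular vectors of [U diag(s) V^T, c]; truncate the returned
        factors to keep the template rank fixed.
        """
        p = U.T @ c                          # component inside the subspace
        r = c - U @ p                        # residual orthogonal to it
        rho = np.linalg.norm(r)
        K = np.block([[np.diag(s), p[:, None]],
                      [np.zeros((1, len(s))), np.array([[rho]])]])
        Uk, sk, _ = np.linalg.svd(K)         # small (d+1) x (d+1) SVD
        U_new = np.hstack([U, r[:, None] / (rho if rho > 0 else 1.0)]) @ Uk
        return U_new, sk

    X = np.random.randn(64, 10)              # 10 mock master-event waveforms
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    U_new, s_new = svd_append_column(U[:, :3], s[:3], np.random.randn(64))
    ```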

  1. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
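    A common concrete instance of such nonparametric estimation (an assumption here, since the record does not fix one) is the perpendicular-distance estimator D = n f(0) / (2L), with f(0) estimated by a boundary-corrected kernel density:

```python
import numpy as np
from scipy.stats import gaussian_kde

def line_transect_density(y, L):
    """Nonparametric line-transect density estimate D = n * f(0) / (2 * L).

    `y` are observed right-angle distances, `L` is total transect length.
    f(0) is estimated by a kernel density with reflection about zero,
    since distances are nonnegative; only g(0) = 1 is assumed.
    """
    y = np.asarray(y, dtype=float)
    kde = gaussian_kde(np.concatenate([y, -y]))  # reflect to correct boundary bias
    f0 = 2.0 * kde(0.0)[0]                       # fold the reflected mass back
    return len(y) * f0 / (2.0 * L)

D_hat = line_transect_density([1.2, 0.4, 2.5, 0.9, 1.7, 0.2, 3.1], L=1000.0)
```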

  2. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    PubMed

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the largest component of implicit transaction costs, yet it cannot be measured directly; reducing it can substantially lower the overall transaction cost. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of input variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, on four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235
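    Of the four models named, the Gaussian process is the quickest to sketch. Below is a minimal, hypothetical setup with scikit-learn; the three input variables and the synthetic cost function are placeholders, not the paper's Bloomberg-derived features.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-ins for the paper's three inputs (e.g. normalized trade size,
# volatility, participation rate; the exact variables are assumptions).
X = np.random.rand(200, 3)
y = 0.5 * np.sqrt(X[:, 0]) * X[:, 1] + 0.05 * np.random.randn(200)  # synthetic impact cost

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)
cost_pred, cost_std = gpr.predict(np.random.rand(5, 3), return_std=True)
```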

  3. Graph embedded nonparametric mutual information for supervised dimensionality reduction.

    PubMed

    Bouzas, Dimitrios; Arvanitopoulos, Nikolaos; Tefas, Anastasios

    2015-05-01

    In this paper, we propose a novel algorithm for dimensionality reduction that uses as a criterion the mutual information (MI) between the transformed data and their corresponding class labels. MI is a powerful criterion that can be used as a proxy to the Bayes error rate. Furthermore, recent quadratic nonparametric implementations of MI are computationally efficient and do not require any prior assumptions about the class densities. We show that the quadratic nonparametric MI can be formulated as a kernel objective in the graph embedding framework. Moreover, we propose its linear equivalent as a novel linear dimensionality reduction algorithm. The derived methods are compared against state-of-the-art dimensionality reduction algorithms with various classifiers and on various benchmark and real-life datasets. The experimental results show that nonparametric MI as an optimization objective for dimensionality reduction gives comparable, and in most cases better, results than other dimensionality reduction methods. PMID:25881367

  4. Predicting Market Impact Costs Using Nonparametric Machine Learning Models

    PubMed Central

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the largest component of implicit transaction costs, yet it cannot be measured directly; reducing it can substantially lower the overall transaction cost. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of input variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, on four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance. PMID:26926235

  5. A new approach to modeling covariate effects and individualization in population pharmacokinetics-pharmacodynamics.

    PubMed

    Lai, Tze Leung; Shih, Mei-Chiung; Wong, Samuel P

    2006-02-01

    By combining Laplace's approximation and Monte Carlo methods to evaluate multiple integrals, this paper develops a new approach to estimation in nonlinear mixed effects models that are widely used in population pharmacokinetics and pharmacodynamics. Estimation here involves not only estimating the model parameters from Phase I and II studies but also using the fitted model to estimate the concentration versus time curve or the drug effects of a subject who has covariate information but sparse measurements. Because of its computational tractability, the proposed approach can model the covariate effects nonparametrically by using (i) regression splines or neural networks as basis functions and (ii) AIC or BIC for model selection. Its computational and statistical advantages are illustrated in simulation studies and in Phase I trials. PMID:16402288
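    A much-simplified fixed-effects illustration of the covariate-modeling idea (regression-spline basis functions scored by AIC) is sketched below; the single age covariate, knot placements, and data are my own assumptions, and the paper's mixed-effects and Monte Carlo machinery is omitted.

```python
import numpy as np
import statsmodels.api as sm

def truncated_power_basis(x, knots):
    """Linear regression-spline basis: [1, x, (x - k)_+ for each knot]."""
    cols = [np.ones_like(x), x] + [np.clip(x - k, 0, None) for k in knots]
    return np.column_stack(cols)

# Toy covariate effect: pick the number of knots by AIC, in the spirit of
# the paper's spline-plus-AIC/BIC covariate modeling.
age = np.sort(np.random.uniform(20, 80, 150))
clearance = 10 - 0.05 * np.clip(age - 50, 0, None) + np.random.randn(150)

fits = {}
for knots in ([40.0], [35.0, 50.0], [30.0, 45.0, 60.0]):
    X = truncated_power_basis(age, knots)
    fits[len(knots)] = sm.OLS(clearance, X).fit()
best = min(fits.values(), key=lambda f: f.aic)   # swap .aic for .bic if preferred
```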

  6. Adjustable microforceps.

    PubMed

    Bao, J Y

    1991-04-01

    The commonly used microforceps have a much greater opening distance and spring resistance than needed. A plastic ring or a piece of rubber band can be used to adjust the opening distance and reduce most of the spring resistance, making the instrument more comfortable and less fatiguing to use. PMID:2051437

  7. Neutron Cross Section Covariances for Structural Materials and Fission Products

    NASA Astrophysics Data System (ADS)

    Hoblit, S.; Cho, Y.-S.; Herman, M.; Mattoon, C. M.; Mughabghab, S. F.; Obložinský, P.; Pigni, M. T.; Sonzogni, A. A.

    2011-12-01

    We describe neutron cross section covariances for 78 structural materials and fission products produced for the new US evaluated nuclear reaction library ENDF/B-VII.1. Neutron incident energies cover the full range from 10⁻⁵ eV to 20 MeV, and covariances are primarily provided for capture, elastic and inelastic scattering as well as (n,2n). The list of materials follows priorities defined by the Advanced Fuel Cycle Initiative, the major application being data adjustment for advanced fast reactor systems. Thus, in addition to 28 structural materials and 49 fission products, the list also includes ²³Na, an important fast reactor coolant. Due to the extensive number of materials, we adopted a variety of methodologies depending on the priority of a specific material. In the resolved resonance region we primarily used resonance parameter uncertainties given in the Atlas of Neutron Resonances and either applied the kernel approximation to propagate these uncertainties into cross section uncertainties or resorted to simplified estimates based on integral quantities. For several priority materials we adopted MF32 covariances produced by SAMMY at ORNL, modified by us by adding MF33 covariances to account for systematic uncertainties. In the fast neutron region we resorted to three methods. The most sophisticated was the EMPIRE-KALMAN method, which combines experimental data from the EXFOR library with nuclear reaction modeling and least-squares fitting. The two other methods used simplified estimates, based either on the propagation of nuclear reaction model parameter uncertainties or on a dispersion analysis of central cross section values in recent evaluated data files. All covariances were subject to quality assurance procedures adopted recently by CSEWG. In addition, tools were developed to allow inspection of processed covariances and computed integral quantities, and to compare these values to data from the Atlas and the astrophysics database KADoNiS.

  8. Neutron Cross Section Covariances for Structural Materials and Fission Products

    SciTech Connect

    Hoblit, S.; Cho, Y.-S.; Herman, M.; Mattoon, C. M.; Mughabghab, S. F.; Oblozinsky, P.; Pigni, M. T.; Sonzogni, A. A.

    2011-12-01

    We describe neutron cross section covariances for 78 structural materials and fission products produced for the new US evaluated nuclear reaction library ENDF/B-VII.1. Neutron incident energies cover the full range from 10⁻⁵ eV to 20 MeV, and covariances are primarily provided for capture, elastic and inelastic scattering as well as (n,2n). The list of materials follows priorities defined by the Advanced Fuel Cycle Initiative, the major application being data adjustment for advanced fast reactor systems. Thus, in addition to 28 structural materials and 49 fission products, the list also includes ²³Na, an important fast reactor coolant. Due to the extensive number of materials, we adopted a variety of methodologies depending on the priority of a specific material. In the resolved resonance region we primarily used resonance parameter uncertainties given in the Atlas of Neutron Resonances and either applied the kernel approximation to propagate these uncertainties into cross section uncertainties or resorted to simplified estimates based on integral quantities. For several priority materials we adopted MF32 covariances produced by SAMMY at ORNL, modified by us by adding MF33 covariances to account for systematic uncertainties. In the fast neutron region we resorted to three methods. The most sophisticated was the EMPIRE-KALMAN method, which combines experimental data from the EXFOR library with nuclear reaction modeling and least-squares fitting. The two other methods used simplified estimates, based either on the propagation of nuclear reaction model parameter uncertainties or on a dispersion analysis of central cross section values in recent evaluated data files. All covariances were subject to quality assurance procedures adopted recently by CSEWG. In addition, tools were developed to allow inspection of processed covariances and computed integral quantities, and to compare these values to data from the Atlas and the astrophysics database KADoNiS.

  9. Covariate analysis of survival data: a small-sample study of Cox's model

    SciTech Connect

    Johnson, M.E.; Tolley, H.D.; Bryson, M.C.; Goldman, A.S.

    1982-09-01

    Cox's proportional-hazards model is frequently used to adjust for covariate effects in survival-data analysis. The small-sample performances of the maximum partial likelihood estimators of the regression parameters in a two-covariate hazard function model are evaluated with respect to bias, variance, and power in hypothesis tests. Previous Monte Carlo work on the two-sample problem is reviewed.
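    For readers who want to reproduce the flavor of such a study, a two-covariate Cox fit by maximum partial likelihood can be set up as follows (a sketch using the lifelines package and a data-generating process of my own choosing, not the paper's design):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# A small simulated two-covariate survival data set in the spirit of the
# study's Monte Carlo setup (the data-generating choices here are mine).
n = 50
df = pd.DataFrame({"x1": np.random.binomial(1, 0.5, n),
                   "x2": np.random.randn(n)})
df["T"] = np.random.exponential(scale=np.exp(-0.7 * df.x1 - 0.3 * df.x2))
df["E"] = (df["T"] < 2.0).astype(int)          # administrative censoring at t = 2
df.loc[df.E == 0, "T"] = 2.0

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")   # maximum partial likelihood
cph.print_summary()                            # small-n estimates can be biased
```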

  10. Minimal unitary (covariant) scattering theory

    SciTech Connect

    Lindesay, J.V.; Markevich, A.

    1983-06-01

    In the minimal three particle equations developed by Lindesay, the two body input amplitude was an on-shell relativistic generalization of the non-relativistic scattering model characterized by a single mass parameter μ which in the two body (m + m) system looks like an s-channel bound state (μ < 2m) or virtual state (μ > 2m). Using this driving term in covariant Faddeev equations generates a rich covariant and unitary three particle dynamics. However, the simplest way of writing the relativistic generalization of the Faddeev equations can take the on-shell Mandelstam parameter s = 4(q² + m²), in terms of which the two particle input is expressed, to negative values in the range of integration required by the dynamics. This problem was met in the original treatment by multiplying the two particle input amplitude by Θ(s). This paper provides what we hope to be a more direct way of meeting the problem.

  11. Covariant jump conditions in electromagnetism

    NASA Astrophysics Data System (ADS)

    Itin, Yakov

    2012-02-01

    A generally covariant four-dimensional representation of Maxwell's electrodynamics in a generic material medium can be achieved straightforwardly in the metric-free formulation of electromagnetism. In this setup, the electromagnetic phenomena are described by two tensor fields, which satisfy Maxwell's equations. A generic tensorial constitutive relation between these fields is an independent ingredient of the theory. By use of different constitutive relations (local and non-local, linear and non-linear, etc.), a wide area of applications can be covered. In the current paper, we present the jump conditions for the fields and for the energy-momentum tensor on an arbitrarily moving surface between two media. From the differential and integral Maxwell equations, we derive the covariant boundary conditions, which are independent of any metric and connection. These conditions include the covariantly defined surface current and are applicable to an arbitrarily moving smooth curved boundary surface. As an application of the presented jump formulas, we derive a Lorentzian type metric as a condition for existence of the wave front in isotropic media. This result holds for ordinary materials as well as for metamaterials with negative material constants.

  12. Determination of Resonance Parameters and their Covariances from Neutron Induced Reaction Cross Section Data

    SciTech Connect

    Schillebeeckx, P.; Becker, B.; Danon, Y.; Guber, K.; Harada, H.; Heyse, J.; Junghans, A.R.; Kopecky, S.; Massimi, C.; Moxon, M.C.; Otuka, N.; Sirakov, I.; Volev, K.

    2012-12-15

    Cross section data in the resolved and unresolved resonance region are represented by nuclear reaction formalisms using parameters which are determined by fitting them to experimental data. Therefore, the quality of evaluated cross sections in the resonance region strongly depends on the experimental data used in the adjustment process and an assessment of the experimental covariance data is of primary importance in determining the accuracy of evaluated cross section data. In this contribution, uncertainty components of experimental observables resulting from total and reaction cross section experiments are quantified by identifying the metrological parameters involved in the measurement, data reduction and analysis process. In addition, different methods that can be applied to propagate the covariance of the experimental observables (i.e. transmission and reaction yields) to the covariance of the resonance parameters are discussed and compared. The methods being discussed are: conventional uncertainty propagation, Monte Carlo sampling and marginalization. It is demonstrated that the final covariance matrix of the resonance parameters not only strongly depends on the type of experimental observables used in the adjustment process, the experimental conditions and the characteristics of the resonance structure, but also on the method that is used to propagate the covariances. Finally, a special data reduction concept and format is presented, which offers the possibility to store the full covariance information of experimental data in the EXFOR library and provides the information required to perform a full covariance evaluation.

  13. Shaft adjuster

    DOEpatents

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned; when rotated, they introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow a shaft, such as the center conductor in a pulse line machine, to be offset in any desired alignment position within the range of the apparatus.

  14. Covariance matrices for use in criticality safety predictability studies

    SciTech Connect

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes' method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for ²³⁵U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the ²³⁵U resonance parameters.
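    The first goal amounts to the standard "sandwich rule" for propagating parameter covariances through sensitivities. A minimal sketch, with random placeholder matrices in place of SAMMY output:

```python
import numpy as np

def group_cross_section_covariance(S, C_p):
    """Propagate resonance-parameter covariances to group cross sections.

    S is the sensitivity matrix d(sigma_g)/d(p) and C_p the parameter
    covariance; the sandwich rule gives C_sigma = S C_p S^T.
    """
    return S @ C_p @ S.T

# Toy dimensions: 199 groups, 3000 resonance parameters (shapes only).
S = np.random.randn(199, 3000) * 1e-3
C_p = np.eye(3000) * 1e-4
C_sigma = group_cross_section_covariance(S, C_p)   # 199 x 199 group covariance
```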

  15. Three Classes of Nonparametric Differential Step Functioning Effect Estimators

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2008-01-01

    The examination of measurement invariance in polytomous items is complicated by the possibility that the magnitude and sign of lack of invariance may vary across the steps underlying the set of polytomous response options, a concept referred to as differential step functioning (DSF). This article describes three classes of nonparametric DSF effect…

  16. Nonparametric identification of petrogenic and pyrogenic hydrocarbons in aquatic ecosystems.

    PubMed

    Carls, Mark G

    2006-07-01

    Novel nonparametric models developed herein discriminated between oiled and nonoiled or pyrogenic and oiled sources better than traditionally used diagnostic ratios and can outperform previously published oil identification models. These methods were compared using experimental and environmental hydrocarbon data (sediment, mussels, water, and fish) associated with the Exxon Valdez oil spill. Several nonparametric models were investigated, one designed to detect petroleum in general, one specific to Alaska North Slope crude oil (ANS), and one designed to detect pyrogenic PAH. These ideas are intended as guidance; nonparametric models can easily be adapted to fit the specific needs of a variety of petrogenic and pyrogenic sources. Oil identification was clearly difficult where composition was modified by physical or biological processes; model results differed most in these cases, suggesting that a multiple model approach to source discrimination may be useful where data interpretation is contentious. However, a combined nonparametric model best described a broad range of hydrocarbon sources, thus providing a useful new analytical assessment tool. PMID:16856740

  17. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  18. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  19. A Unifying Framework for Teaching Nonparametric Statistical Tests

    ERIC Educational Resources Information Center

    Bargagliotti, Anna E.; Orrison, Michael E.

    2014-01-01

    Increased importance is being placed on statistics at both the K-12 and undergraduate levels. Research identifying effective methods for teaching specific statistical concepts is still widely sought. In this paper, we focus on best practices for teaching topics in nonparametric statistics at the undergraduate level. To motivate the work, we…

  20. Coefficients of Association Analogous to Pearson's r for Nonparametric Statistics.

    ERIC Educational Resources Information Center

    Stavig, Gordon; Acock, Alan C.

    1980-01-01

    Two r coefficients of association are discussed. One of the coefficients can be applied to any nonparametric test statistic (NTS) in which a normal approximation equation is appropriate. The other coefficient is applicable to any NTS in which exact probabilities are known. (Author/RL)

  1. Statistical Evidence in Salary Discrimination Studies: Nonparametric Inferential Conditions.

    ERIC Educational Resources Information Center

    Millsap, Roger E.; Meredith, William

    1994-01-01

    Theoretical nonparametric conditions under which evidence from salary studies using observed merit measures can provide a basis for inferences of fairness are discussed. Latent variable models as parametric special cases of the general conditions presented are illustrated with real salary data. Implications for empirical studies of salary equity…

  2. A Simulation Comparison of Parametric and Nonparametric Dimensionality Detection Procedures

    ERIC Educational Resources Information Center

    Mroch, Andrew A.; Bolt, Daniel M.

    2006-01-01

    Recently, nonparametric methods have been proposed that provide a dimensionally based description of test structure for tests with dichotomous items. Because such methods are based on different notions of dimensionality than are assumed when using a psychometric model, it remains unclear whether these procedures might lead to a different…

  3. Estimation of Spatial Dynamic Nonparametric Durbin Models with Fixed Effects

    ERIC Educational Resources Information Center

    Qian, Minghui; Hu, Ridong; Chen, Jianwei

    2016-01-01

    Spatial panel data models have been widely studied and applied in both scientific and social science disciplines, especially in the analysis of spatial influence. In this paper, we consider the spatial dynamic nonparametric Durbin model (SDNDM) with fixed effects, which takes the nonlinear factors into account based on the spatial dynamic panel…

  4. Adjusted Rasch person-fit statistics.

    PubMed

    Dimitrov, Dimiter M; Smith, Richard M

    2006-01-01

    Two frequently used parametric statistics of person-fit with the dichotomous Rasch model (RM) are adjusted and compared to each other and to their original counterparts in terms of power to detect aberrant response patterns in short tests (10, 20, and 30 items). Specifically, the cube root transformation of the mean square for the unweighted person-fit statistic, t, and the standardized likelihood-based person-fit statistic Z3 were adjusted by estimating the probability for correct item response through the use of symmetric functions in the dichotomous Rasch model. The results for simulated unidimensional Rasch data indicate that t and Z3 are consistently, yet not greatly, outperformed by their adjusted counterparts, denoted t* and Z3*, respectively. The four parametric statistics, t, Z3, t*, and Z3*, were also compared to a non-parametric statistic, HT, identified in recent research as outperforming numerous parametric and non-parametric person-fit statistics. The results show that HT substantially outperforms t, Z3, t*, and Z3* in detecting aberrant response patterns for 20-item and 30-item tests, but not for very short tests of 10 items. The detection power of t, Z3, t*, and Z3*, and HT at two specific levels of Type I error, .10 and .05 (i.e., up to 10% and 5% false alarm rate, respectively), is also reported. PMID:16632900

  5. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes that the baseline function and the unimportant covariates are known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  6. Topics in data adjustment theory and applications

    SciTech Connect

    Hwang, R.N.

    1988-01-01

    The methodologies of uncertainty analysis and data adjustment have been well developed and widely used abroad since the early 1970s. With the limited amount of covariance data on the differential cross sections and the integral experiments available at the time, their accomplishments are, indeed, astounding. The fundamental adjustment equations, however, remain qualitatively unchanged. For the past few years, extensive efforts on these subjects have also begun at ANL in order to utilize the massive amount of integral experiments accumulated over the years to provide the basis for improving the reactor parameters encountered in various design calculations. Pertinent covariance matrices and sensitivity matrices of the existing integral experiments have been evaluated and systematically compiled in the data files, along with the cross section covariance data derived from ENDF/B-V for the 21-group structure currently under consideration. A production code, GMADJ, that provides the adjusted quantities for a large number of cross section types has been developed by Poenitz for routine applications. The primary purpose of the present paper is to improve understanding of the application-oriented issues important to the data adjustment theory and the subsequent usage of the adjusted quantities in the design calculations in support of these activities. 30 refs., 12 figs., 5 tabs.
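    The fundamental adjustment equations referred to have a compact generalized-least-squares form. The sketch below uses common textbook notation rather than the paper's, and toy matrices in place of the compiled covariance and sensitivity files:

```python
import numpy as np

def gls_adjust(p, C, S, m, V):
    """Generalized-least-squares data adjustment.

    p : prior parameters (e.g. group cross sections)
    C : prior parameter covariance
    S : sensitivities of integral experiments to the parameters
    m : measured integral values; V : their covariance
    """
    K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)   # gain matrix
    p_adj = p + K @ (m - S @ p)                    # adjusted parameters
    C_adj = C - K @ S @ C                          # reduced posterior covariance
    return p_adj, C_adj

# Toy usage: 21 group cross sections adjusted by 5 integral experiments.
p_adj, C_adj = gls_adjust(np.ones(21), np.eye(21) * 0.01,
                          np.random.rand(5, 21), np.ones(5) * 1.02,
                          np.eye(5) * 0.001)
```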

  7. Connecting Math and Motion: A Covariational Approach

    NASA Astrophysics Data System (ADS)

    Culbertson, Robert J.; Thompson, A. S.

    2006-12-01

    We define covariational reasoning as the ability to correlate changes in two connected variables. For example, the ability to describe the height of fluid in an odd-shaped vessel as a function of fluid volume requires covariational reasoning skills. Covariational reasoning ability is an essential resource for gaining a deep understanding of the physics of motion. We have developed an approach for teaching physical science to in-service math and science high school teachers that emphasizes covariational reasoning. Several examples of covariation and results from a small cohort of local teachers will be presented.

  8. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect

    Herman, M.; Arcilla, R.; Mattoon, C. M.; Mughabghab, S. F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A. A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application with examples, including relatively detailed evaluations of covariances for two individual nuclei and the massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  9. A Nonparametric Simulator for Multivariate Random Variables with Differing Marginal Densities and Non-linear Dependence with Hydroclimatic Applications

    NASA Astrophysics Data System (ADS)

    Farnham, D.; Lall, U.; Devineni, N.; Rahill-Marier, B.

    2013-12-01

    Hydrologic models often require as inputs stochastic simulations of meteorological variables that are mutually consistent and spatially coherent, i.e., that have marginal and joint probability densities corresponding to those estimated from physically realized states, and that have the appropriate spatial structure. These inputs may come from historical meteorological data, or from relatively small ensembles of integrations of numerical climate and weather models. Often, empirical modeling or simulation of multiple hydroclimatic variables, or simulation of hydrologic variables at multiple sites that respects the spatial co-variability, may also be desired. A nonparametric simulation strategy is presented that is capable of 1) addressing marginal probability density functions that differ for each variable of interest and 2) reproducing the joint probability distribution across a potentially large set of variables or spatial instances. The application to rainfall simulations developed from historic rain gauge and radar data is explored. Such simulations are useful for urban hydrological modelers seeking more spatially resolved precipitation forcings.

  10. Parametric approaches to quality-adjusted survival analysis. International Breast Cancer Study Group.

    PubMed

    Cole, B F; Gelber, R D; Anderson, K M

    1994-09-01

    We present a parametric methodology for performing quality-of-life-adjusted survival analysis using multivariate censored survival data. It represents a generalization of the nonparametric Q-TWiST method (Quality-adjusted Time without Symptoms and Toxicity). The event times correspond to transitions between states of health that differ in terms of quality of life. Each transition is governed by a competing risks model where the health states are the competing risks. Overall survival is the sum of the amount of time spent in each health state. The first step of the proposed methodology consists of defining a quality function that assigns a "score" to a life having given health state transitions. It is a composite measure of both quantity and quality of life. In general, the quality function assigns a small value to a short life with poor quality and a high value to a long life with good quality. In the second step, parametric survival models are fit to the data. This is done by repeatedly modeling the conditional cause-specific hazard functions given the previous transitions. Covariates are incorporated by accelerated failure time regression, and the model parameters are estimated by maximum likelihood. Lastly, the modeling results are used to estimate the expectation of quality functions. Standard errors and confidence intervals are computed using the bootstrap and delta methods. The results are useful for simultaneously evaluating treatments in terms of quantity and quality of life. To demonstrate the proposed methods, we perform an analysis of data from the International Breast Cancer Study Group Trial V, which compared short-duration chemotherapy versus long-duration chemotherapy in the treatment of node-positive breast cancer. The events studied are: (1) the end of treatment toxicity, (2) disease recurrence, and (3) overall survival. PMID:7981389
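    The quality-function step has a simple special case worth spelling out: the classical Q-TWiST score, a utility-weighted sum of time spent in each health state. The utility values below are placeholders, not the trial's.

```python
def q_twist(tox, twist, rel, u_tox=0.5, u_rel=0.5):
    """Quality-adjusted survival in the Q-TWiST style: time in each health
    state weighted by a utility (TWiST itself gets weight 1). The utility
    values here are illustrative placeholders."""
    return u_tox * tox + twist + u_rel * rel

# One patient: 3 months of toxicity, 24 months symptom/toxicity-free,
# 6 months after relapse, giving quality-adjusted months:
q = q_twist(tox=3.0, twist=24.0, rel=6.0)   # 0.5*3 + 24 + 0.5*6 = 28.5
```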

  11. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  12. Bayesian Nonparametric Estimation of Targeted Agent Effects on Biomarker Change to Predict Clinical Outcome

    PubMed Central

    Graziani, Rebecca; Guindani, Michele; Thall, Peter F.

    2015-01-01

    The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post-treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212

  13. Nonparametric estimation receiver operating characteristic analysis for performance evaluation on combined detection and estimation tasks

    PubMed Central

    Wunderlich, Adam; Goossens, Bart

    2014-01-01

    In an effort to generalize task-based assessment beyond traditional signal detection, there is a growing interest in performance evaluation for combined detection and estimation tasks, in which signal parameters, such as size, orientation, and contrast are unknown and must be estimated. One motivation for studying such tasks is their rich complexity, which offers potential advantages for imaging system optimization. To evaluate observer performance on combined detection and estimation tasks, Clarkson introduced the estimation receiver operating characteristic (EROC) curve and the area under the EROC curve as a summary figure of merit. This work provides practical tools for EROC analysis of experimental data. In particular, we propose nonparametric estimators for the EROC curve, the area under the EROC curve, and for the variance/covariance matrix of a vector of correlated EROC area estimates. In addition, we show that reliable confidence intervals can be obtained for EROC area, and we validate these intervals with Monte Carlo simulation. Application of our methodology is illustrated with an example comparing magnetic resonance imaging k-space sampling trajectories. MATLAB® software implementing the EROC analysis estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:26158044
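    The EROC area has a natural Wilcoxon-style empirical estimate: average, over signal-present/signal-absent trial pairs, the estimation utility times an indicator that the present trial out-rates the absent one. The sketch below is in that spirit; the authors' exact estimators are given in the paper and its linked MATLAB code.

```python
import numpy as np

def eroc_area(ratings_present, utilities, ratings_absent):
    """Pairwise (Wilcoxon-style) EROC area estimate.

    For each signal-present trial i with detection rating lambda_i and
    estimation utility u_i, and each signal-absent trial j with rating
    lambda_j, average u_i * 1[lambda_i > lambda_j].
    """
    lp = np.asarray(ratings_present)[:, None]
    u = np.asarray(utilities)[:, None]
    la = np.asarray(ratings_absent)[None, :]
    return float(np.mean(u * (lp > la)))

# Toy usage: 3 signal-present trials, 4 signal-absent trials.
area = eroc_area([2.1, 3.4, 1.8], [0.9, 0.7, 1.0], [0.5, 1.9, 2.2, 0.1])
```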

  14. Bayesian nonparametric estimation of targeted agent effects on biomarker change to predict clinical outcome.

    PubMed

    Graziani, Rebecca; Guindani, Michele; Thall, Peter F

    2015-03-01

    The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post-treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212

  15. Nonparametric estimation receiver operating characteristic analysis for performance evaluation on combined detection and estimation tasks.

    PubMed

    Wunderlich, Adam; Goossens, Bart

    2014-10-01

    In an effort to generalize task-based assessment beyond traditional signal detection, there is a growing interest in performance evaluation for combined detection and estimation tasks, in which signal parameters, such as size, orientation, and contrast are unknown and must be estimated. One motivation for studying such tasks is their rich complexity, which offers potential advantages for imaging system optimization. To evaluate observer performance on combined detection and estimation tasks, Clarkson introduced the estimation receiver operating characteristic (EROC) curve and the area under the EROC curve as a summary figure of merit. This work provides practical tools for EROC analysis of experimental data. In particular, we propose nonparametric estimators for the EROC curve, the area under the EROC curve, and for the variance/covariance matrix of a vector of correlated EROC area estimates. In addition, we show that reliable confidence intervals can be obtained for EROC area, and we validate these intervals with Monte Carlo simulation. Application of our methodology is illustrated with an example comparing magnetic resonance imaging k-space sampling trajectories. MATLAB® software implementing the EROC analysis estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:26158044

  16. Nonlinear and multiresolution error covariance estimation in ensemble data assimilation

    NASA Astrophysics Data System (ADS)

    Rainwater, Sabrina

    Ensemble Kalman Filters perform data assimilation by forming a background covariance matrix from an ensemble forecast. The spread of the ensemble is intended to represent the algorithm's uncertainty about the state of the physical system that produces the data. Usually the ensemble members are evolved with the same model. The first part of my dissertation presents and tests a modified Local Ensemble Transform Kalman Filter (LETKF) that takes its background covariance from a combination of a high resolution ensemble and a low resolution ensemble. The computational time and the accuracy of this mixed-resolution LETKF are explored and compared to the standard LETKF on a high resolution ensemble, using simulated observation experiments with the Lorenz Models II and III (more complex versions of the Lorenz 96 model). The results show that, for the same computation time, mixed-resolution ensemble analysis achieves higher accuracy than standard ensemble analysis. The second part of my dissertation demonstrates that it can be fruitful to rescale the ensemble spread prior to the forecast and then reverse this rescaling after the forecast. This technique, denoted "forecast spread adjustment", provides a tunable parameter that is complementary to covariance inflation, which cumulatively increases the ensemble spread to compensate for underestimation of uncertainty. As the adjustable parameter approaches zero, the filter approaches the Extended Kalman Filter when the ensemble size is sufficiently large. The improvement provided by forecast spread adjustment depends on ensemble size, observation error, and model error. The results indicate that it is most effective for smaller ensembles, smaller observation errors, and larger model error, though the effectiveness depends significantly on the type of model error.
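    The forecast spread adjustment itself is a small amount of code. A minimal sketch under a toy linear model of my own choosing (the dissertation's experiments use the Lorenz models and a full LETKF):

```python
import numpy as np

def forecast_with_spread_adjustment(ensemble, model_step, alpha):
    """Rescale ensemble perturbations by alpha before the forecast and
    undo the rescaling afterwards ("forecast spread adjustment").

    `ensemble` is (n_members, n_state); `model_step` advances one state
    vector; `alpha` is the tunable parameter.
    """
    mean = ensemble.mean(axis=0)
    rescaled = mean + alpha * (ensemble - mean)        # adjust spread
    forecast = np.array([model_step(x) for x in rescaled])
    f_mean = forecast.mean(axis=0)
    return f_mean + (forecast - f_mean) / alpha        # reverse rescaling

# As alpha -> 0 the scheme tends toward a linearization about the ensemble
# mean, consistent with the Extended-Kalman-Filter limit described above.
ens = np.random.randn(20, 40)
out = forecast_with_spread_adjustment(ens, lambda x: 0.95 * x + 0.1, alpha=0.5)
```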

  17. A generalized framework for deriving nonparametric standardized drought indicators

    NASA Astrophysics Data System (ADS)

    Farahmand, Alireza; AghaKouchak, Amir

    2015-02-01

    This paper introduces the Standardized Drought Analysis Toolbox (SDAT), which offers a generalized framework for deriving nonparametric univariate and multivariate standardized indices. Current indicators suffer from deficiencies including temporal inconsistency and statistical incomparability. Different indicators have varying scales and ranges, and their values cannot be compared with each other directly. Most drought indicators rely on a representative parametric probability distribution function that fits the data. However, a parametric distribution function may not fit the data, especially in continental/global scale studies. SDAT is based on a nonparametric framework that can be applied to different climatic variables, including precipitation, soil moisture and relative humidity, without having to assume representative parametric distributions. The most attractive feature of the framework is that it leads to statistically consistent drought indicators based on different variables.
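    A nonparametric standardized index of this kind can be sketched in a few lines: empirical probabilities from a plotting position mapped through the inverse standard normal. The Gringorten position used below is an assumption on my part about the specific choice, and the data are synthetic.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nonparametric_standardized_index(x):
    """Nonparametric standardized index: empirical probabilities from a
    plotting position (Gringorten here) mapped through the inverse standard
    normal. No parametric distribution is fitted to the data."""
    x = np.asarray(x, dtype=float)
    ranks = rankdata(x)                      # 1 .. n, ties averaged
    p = (ranks - 0.44) / (len(x) + 0.12)     # Gringorten plotting position
    return norm.ppf(p)                       # standardized, comparable index

# e.g. 30 years of monthly precipitation totals (synthetic)
si = nonparametric_standardized_index(np.random.gamma(2.0, 50.0, 360))
```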

  18. Nonparametric instrumental regression with non-convex constraints

    NASA Astrophysics Data System (ADS)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  19. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

  20. A Bayesian Nonparametric Approach to Image Super-Resolution.

    PubMed

    Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid

    2015-02-01

    Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler. PMID:26353246

  1. On nonparametric comparison of images and regression surfaces

    PubMed Central

    Wang, Xiao-Feng; Ye, Deping

    2010-01-01

    Multivariate local regression is an important tool for image processing and analysis. In many practical biomedical problems, one is often interested in comparing a group of images or regression surfaces. In this paper, we extend the existing method of testing the equality of nonparametric curves by Dette and Neumeyer (2001) and consider a test statistic by means of an ℒ2-distance in the multi-dimensional case under a completely heteroscedastic nonparametric model. The test statistic is also extended to be used in the case of spatial correlated errors. Two bootstrap procedures are described in order to approximate the critical values of the test depending on the nature of random errors. The resulting algorithms and analyses are illustrated from both simulation studies and a real medical example. PMID:20543891

  2. Point matching based on non-parametric model

    NASA Astrophysics Data System (ADS)

    Liu, Renfeng; Zhang, Cong; Tian, Jinwen

    2015-12-01

    Establishing reliable feature correspondence between two images is a fundamental problem in vision analysis and a critical prerequisite in a wide range of applications including structure-from-motion, 3D reconstruction, tracking, image retrieval, registration, and object recognition. The feature could be a point, line, curve or surface, among which the point feature is primary and is the foundation of all the others. Numerous point matching techniques have been proposed in a rich and extensive literature; they are typically studied under rigid/affine or non-rigid motion, corresponding to parametric and nonparametric models of the underlying image relation. In this paper, we provide a review of our previous work on point matching, focusing on nonparametric models. We also make an experimental comparison of the introduced methods and discuss their advantages and disadvantages.

  3. A Nonparametric Approach for Mapping Quantitative Trait Loci

    PubMed Central

    Kruglyak, L.; Lander, E. S.

    1995-01-01

    Genetic mapping of quantitative trait loci (QTLs) is typically performed using a parametric approach, based on the assumption that the phenotype follows a normal distribution. Many traits of interest, however, are not normally distributed. In this paper, we present a nonparametric approach to QTL mapping applicable to any phenotypic distribution. The method is based on a statistic Z(w), which generalizes the nonparametric Wilcoxon rank-sum test to the situation of whole-genome search by interval mapping. We determine the appropriate significance level for the statistic Z(w) by showing that its asymptotic null distribution follows an Ornstein-Uhlenbeck process. These results provide a robust, distribution-free method for mapping QTLs. PMID:7768449
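    The single-marker ingredient of Z(w) is the Wilcoxon rank-sum comparison of phenotypes across genotype groups. A minimal sketch at one marker (the genome-wide statistic extends this via interval mapping, which is not attempted here):

```python
import numpy as np
from scipy.stats import ranksums

# Compare the phenotype distributions of the two genotype groups at a
# single marker with a Wilcoxon rank-sum test; data are synthetic.
genotype = np.random.binomial(1, 0.5, 200)             # 0/1 at one marker
phenotype = np.random.exponential(1 + 0.4 * genotype)  # non-normal trait
stat, p_value = ranksums(phenotype[genotype == 1], phenotype[genotype == 0])
```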

  4. Supervised nonparametric sparse discriminant analysis for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Wu, Longfei; Sun, Hao; Ji, Kefeng

    2016-03-01

    Owing to the high spectral sampling, the spectral information in hyperspectral imagery (HSI) is often highly correlated and contains redundancy. Motivated by the recent success of sparsity-preserving dimensionality reduction (DR) techniques in both the computer vision and remote sensing image analysis communities, a novel supervised nonparametric sparse discriminant analysis (NSDA) algorithm is presented for HSI classification. The objective function of NSDA aims at preserving the within-class sparse reconstructive relationship for within-class compactness characterization while simultaneously maximizing the nonparametric between-class scatter to enhance the discriminative ability of the features in the projected space. Essentially, it seeks the optimal projection matrix to identify the underlying discriminative manifold structure of a multiclass dataset. Experimental results on one visualization dataset and three recorded HSI datasets demonstrate that NSDA outperforms several state-of-the-art feature extraction methods for HSI classification.

  5. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    PubMed

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular for inferring the clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of the major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick-breaking (TSSB) prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor… PMID:25592565
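    For readers unfamiliar with the CRP building block, a minimal sampler of its cluster assignments is below; the treeCRP layers tree structure and genotype constraints on top of this, which the sketch does not attempt.

```python
import numpy as np

def crp_assignments(n, alpha):
    """Draw cluster labels for n items from a Chinese restaurant process:
    item i joins an existing cluster with probability proportional to its
    size, or opens a new cluster with probability proportional to alpha."""
    labels = [0]
    for _ in range(1, n):
        counts = np.bincount(labels).astype(float)
        probs = np.append(counts, alpha)   # existing clusters, then a new one
        probs /= probs.sum()
        labels.append(np.random.choice(len(probs), p=probs))
    return labels

# e.g. subclone labels for 100 mutations (purely illustrative)
clusters = crp_assignments(100, alpha=1.0)
```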

  6. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension is given to line-transect sampling.

  7. Nonparametric estimation of Fisher information from real data

    NASA Astrophysics Data System (ADS)

    Har-Shemesh, Omri; Quax, Rick; Miñano, Borja; Hoekstra, Alfons G.; Sloot, Peter M. A.

    2016-02-01

    The Fisher information matrix (FIM) is a widely used measure for applications including statistical inference, information geometry, experiment design, and the study of criticality in biological systems. The FIM is defined for a parametric family of probability distributions and its estimation from data follows one of two paths: either the distribution is assumed to be known and the parameters are estimated from the data, or the parameters are known and the distribution is estimated from the data. We consider the latter case, which is applicable, for example, to experiments where the parameters are controlled by the experimenter and a complicated relation exists between the input parameters and the resulting distribution of the data. Since we assume that the distribution is unknown, we use a nonparametric density estimate computed from the data and then obtain the FIM directly from that estimate, using a finite-difference approximation for the derivatives in its definition. The accuracy of the estimate depends on both the method of nonparametric estimation and the difference Δθ between the densities used in the finite-difference formula. We develop an approach for choosing the optimal parameter difference Δθ based on large deviations theory and compare two nonparametric density estimation methods: the Gaussian kernel density estimator and a novel method, density estimation using field theory. We also compare these two methods to a recently published approach that circumvents the need for density estimation by estimating a nonparametric f-divergence and using it to approximate the FIM. We use the Fisher information of the normal distribution to validate our method, and as a more involved example we compute the temperature component of the FIM in the two-dimensional Ising model and show that it obeys the expected relation to the heat capacity and therefore peaks at the phase transition at the correct critical temperature.
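    A stripped-down, one-parameter version of the procedure can be written directly; the Gaussian kernel density estimator stands in for either of the paper's two density estimators, and the step size is fixed rather than optimized as the paper proposes.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fisher_information_fd(sampler, theta, dtheta, n=5000):
    """Finite-difference Fisher information estimate from data alone.

    `sampler(theta, n)` draws data at parameter theta; densities at
    theta +/- dtheta are estimated nonparametrically (Gaussian KDE) and
    the score is approximated by a central difference of log-densities.
    """
    kde_plus = gaussian_kde(sampler(theta + dtheta, n))
    kde_minus = gaussian_kde(sampler(theta - dtheta, n))
    x = sampler(theta, n)                         # expectation under p_theta
    score = (np.log(kde_plus(x)) - np.log(kde_minus(x))) / (2 * dtheta)
    return np.mean(score ** 2)

# Validation on the normal location family, where I(mu) = 1/sigma^2 = 1:
draw = lambda mu, n: np.random.normal(mu, 1.0, n)
print(fisher_information_fd(draw, theta=0.0, dtheta=0.2))  # approximately 1
```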

  8. Parameter inference with estimated covariance matrices

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heavens, Alan F.

    2016-02-01

    When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
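    In code, the change from the Gaussian likelihood is one line. A sketch of the modified log-likelihood under my reading of the result (constants independent of the model parameters are dropped):

```python
import numpy as np

def log_like_estimated_cov(x, mu, S_hat, n_sims):
    """Log-likelihood when the covariance S_hat is itself estimated from
    n_sims simulations: marginalizing over the true covariance turns the
    Gaussian into a multivariate-t-like form,
        log L = const - (n_sims / 2) * log(1 + chi2 / (n_sims - 1)),
    with chi2 = (x - mu)^T S_hat^{-1} (x - mu)."""
    r = x - mu
    chi2 = r @ np.linalg.solve(S_hat, r)
    return -0.5 * n_sims * np.log1p(chi2 / (n_sims - 1))

# Compared with the Gaussian -chi2/2, this form has heavier tails,
# reflecting the extra uncertainty in the estimated covariance.
x, mu, S_hat = np.random.randn(10), np.zeros(10), np.eye(10)
print(log_like_estimated_cov(x, mu, S_hat, n_sims=100))
```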

  9. A comparison of confounding adjustment methods with an application to early life determinants of childhood obesity.

    PubMed

    Li, L; Kleinman, K; Gillman, M W

    2014-12-01

    We implemented six confounding adjustment methods: (1) covariate-adjusted regression, (2) propensity score (PS) regression, (3) PS stratification, (4) PS matching with two calipers, (5) inverse probability weighting and (6) doubly robust estimation to examine the associations between the body mass index (BMI) z-score at 3 years and two separate dichotomous exposure measures: exclusive breastfeeding v. formula only (n=437) and cesarean section v. vaginal delivery (n=1236). Data were drawn from a prospective pre-birth cohort study, Project Viva. The goal is to demonstrate the necessity and usefulness of applying multiple confounding adjustment methods when analyzing observational data. Unadjusted (univariate) and covariate-adjusted linear regression associations of breastfeeding with BMI z-score were -0.33 (95% CI -0.53, -0.13) and -0.24 (-0.46, -0.02), respectively. The other approaches resulted in smaller n (204-276) because of poor overlap of covariates, but CIs were of similar width except for inverse probability weighting (75% wider) and PS matching with a wider caliper (76% wider). Point estimates ranged widely, however, from -0.01 to -0.38. For cesarean section, because of better covariate overlap, the covariate-adjusted regression estimate (0.20) was remarkably robust to all adjustment methods, and the widths of the 95% CIs differed less than in the breastfeeding example. Choice of covariate adjustment method can matter. Lack of overlap in covariate structure between exposed and unexposed participants in observational studies can lead to erroneous covariate-adjusted estimates and confidence intervals. We recommend inspecting covariate overlap and using multiple confounding adjustment methods. Similar results bring reassurance. Contradictory results suggest issues with either the data or the analytic method. PMID:25171142
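
    As one concrete instance, here is a minimal sketch of method (5), inverse probability weighting, for a continuous outcome such as the BMI z-score. The logistic propensity model is an assumption, and weight trimming, overlap diagnostics, and variance estimation are omitted.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_mean_difference(X, treated, y):
            # Propensity score: P(treated = 1 | covariates) from a logistic model.
            ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
            # Inverse-probability weights for treated and untreated units.
            w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
            mu1 = np.average(y[treated == 1], weights=w[treated == 1])
            mu0 = np.average(y[treated == 0], weights=w[treated == 0])
            return mu1 - mu0  # weighted (Hajek-style) mean outcome difference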

  11. COVARIANCE ASSISTED SCREENING AND ESTIMATION

    PubMed Central

    Ke, Tracy; Jin, Jiashun; Fan, Jianqing

    2014-01-01

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X′X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model. PMID:25541567

  12. Nonparametric hemodynamic deconvolution of FMRI using homomorphic filtering.

    PubMed

    Sreenivasan, Karthik Ramakrishnan; Havlicek, Martin; Deshpande, Gopikrishna

    2015-05-01

    Functional magnetic resonance imaging (fMRI) is an indirect measure of neural activity which is modeled as a convolution of the latent neuronal response and the hemodynamic response function (HRF). Since the sources of HRF variability can be nonneural in nature, the measured fMRI signal does not faithfully represent underlying neural activity. Therefore, it is advantageous to deconvolve the HRF from the fMRI signal. However, since both the latent neural activity and the voxel-specific HRF are unknown, the deconvolution must be blind. Existing blind deconvolution approaches employ highly parameterized models, and it is unclear whether these models have an overfitting problem. In order to address these issues, we (1) present a nonparametric deconvolution method based on homomorphic filtering to obtain the latent neuronal response from the fMRI signal and (2) compare our approach to the best-performing existing parametric model, based on the estimation of the biophysical hemodynamic model using the Cubature Kalman Filter/Smoother. We hypothesized that if the results from nonparametric deconvolution closely resembled those obtained from parametric deconvolution, then the problem of overfitting during estimation in highly parameterized deconvolution models of fMRI could possibly be overstated. Both simulations and experimental results support our hypothesis, since the latent neural responses estimated by the parametric and nonparametric methods were highly correlated in the visual cortex. Further, simulations showed that both methods were effective in recovering the simulated ground truth of the latent neural response. PMID:25531878
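
    A generic homomorphic-filtering sketch (not the authors' exact pipeline): convolution with the HRF is multiplication in the frequency domain, so low-pass "liftering" of the cepstrum of the log-magnitude spectrum can peel off a smooth, HRF-like component. The cutoff is a free parameter and the HRF phase is not recovered here.

        import numpy as np

        def homomorphic_deconvolve(bold, cutoff):
            F = np.fft.fft(bold)
            log_mag = np.log(np.abs(F) + 1e-12)
            ceps = np.fft.ifft(log_mag).real           # cepstrum of the log-magnitude
            lifter = np.zeros(len(ceps))
            lifter[:cutoff] = 1.0                      # keep low quefrencies...
            if cutoff > 1:
                lifter[-(cutoff - 1):] = 1.0           # ...and their mirror half
            hrf_log_mag = np.fft.fft(ceps * lifter).real
            # Divide out the smooth HRF-like magnitude; keep the measured phase.
            neural_spec = np.exp(log_mag - hrf_log_mag) * np.exp(1j * np.angle(F))
            return np.fft.ifft(neural_spec).real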

  13. Nonparametric Analysis of Bivariate Gap Time with Competing Risks

    PubMed Central

    Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng

    2016-01-01

    Summary This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrent disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall’s tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring of the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. PMID:26990686

  14. Covariance-enhanced discriminant analysis

    PubMed Central

    XU, PEIRONG; ZHU, JI; ZHU, LIXING; LI, YI

    2016-01-01

    Summary Linear discriminant analysis has been widely used to characterize or separate multiple classes via linear combinations of features. However, the high dimensionality of features from modern biological experiments defies traditional discriminant analysis techniques. Possible interfeature correlations present additional challenges and are often underused in modelling. In this paper, by incorporating possible interfeature correlations, we propose a covariance-enhanced discriminant analysis method that simultaneously and consistently selects informative features and identifies the corresponding discriminable classes. Under mild regularity conditions, we show that the method can achieve consistent parameter estimation and model selection, and can attain an asymptotically optimal misclassification rate. Extensive simulations have verified the utility of the method, which we apply to a renal transplantation trial.

  15. Nonparametric methods for microarray data based on exchangeability and borrowed power.

    PubMed

    Lee, Mei-Ling Ting; Whitmore, G A; Björkbacka, Harry; Freeman, Mason W

    2005-01-01

    This article proposes nonparametric inference procedures for analyzing microarray gene expression data that are reliable, robust, and simple to implement. They are conceptually transparent and require no special-purpose software. The analysis begins by normalizing gene expression data in a unique way. The resulting adjusted observations consist of gene-treatment interaction terms (representing differential expression) and error terms. The error terms are considered to be exchangeable, which is the only substantial assumption. Thus, under a family null hypothesis of no differential expression, the adjusted observations are exchangeable and all permutations of the observations are equally probable. The investigator may use the adjusted observations directly in a distribution-free test method or use their ranks in a rank-based method, where the ranking is taken over the whole data set. For the latter, the essential steps are as follows: (1) Calculate a Wilcoxon rank-sum difference or a corresponding Kruskal-Wallis rank statistic for each gene. (2) Randomly permute the observations and repeat the previous step. (3) Independently repeat the random permutation a suitable number of times. Under the exchangeability assumption, the permutation statistics are independent random draws from a null cumulative distribution function (c.d.f.) approximated by the empirical c.d.f. Reference to the empirical c.d.f. tells whether the test statistic for a gene is outlying and, hence, shows differential expression. This feature is judged by using an appropriate rejection region or computing a p-value for each test statistic, taking into account multiple testing. The distribution-free analog of the rank-based approach is also available and has parallel steps which are described in the article. The proposed nonparametric analysis tends to give good results with no additional refinement, although a few refinements are presented that may interest some investigators. The implementation is…
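
    A minimal sketch of the rank-based variant for two treatment groups, assuming the matrix of adjusted observations (genes by samples) has already been computed; ranks are taken over the whole data set as prescribed, and multiple-testing control is left out.

        import numpy as np
        from scipy.stats import rankdata

        def permutation_ranksum(adjusted, groups, n_perm=1000, seed=0):
            rng = np.random.default_rng(seed)
            # Rank over the whole data set, not gene by gene.
            ranks = rankdata(adjusted).reshape(adjusted.shape)
            def stat(g):
                return ranks[:, g == 1].sum(axis=1) - ranks[:, g == 0].sum(axis=1)
            obs = stat(groups)
            # Pooled empirical null c.d.f. from random label permutations.
            null = np.concatenate([stat(rng.permutation(groups))
                                   for _ in range(n_perm)])
            # Two-sided p-value per gene against the pooled empirical null.
            return np.array([(np.abs(null) >= abs(t)).mean() for t in obs])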

  16. Assessing Covariates of Adolescent Delinquency Trajectories: A Latent Growth Mixture Modeling Approach.

    ERIC Educational Resources Information Center

    Wiesner, Margit; Windle, Michael

    2004-01-01

    Using data from a community sample of 1218 boys and girls (mean age at the first wave was 15.5 years), this longitudinal study examined several covariates (adjustment problems, poor academic achievement, negative life events, and unsupportive family environments) of distinctive trajectories of juvenile delinquency. Latent growth mixture…

  17. Particle emission from covariant phase space

    SciTech Connect

    Bambah, B.A.

    1992-12-01

    Using Lorentz-covariant sources, we calculate the multiplicity distribution of n pair-correlated particles emerging from a Lorentz-covariant phase-space volume. We use the Kim-Wigner formalism and identify these sources as the squeezed states of a relativistic harmonic oscillator. The applications of this to multiplicity distributions in particle physics are discussed.

  18. Group Theory of Covariant Harmonic Oscillators

    ERIC Educational Resources Information Center

    Kim, Y. S.; Noz, Marilyn E.

    1978-01-01

    A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…

  19. Quality Quantification of Evaluated Cross Section Covariances

    SciTech Connect

    Varet, S.; Dossantos-Uzarralde, P.

    2015-01-15

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can differ according to the method used and to its assumptions, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is the computation of the criterion. In this paper the Kullback-Leibler distance is proposed for quantifying the quality of a covariance matrix estimate and of its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without knowledge of the true covariance matrix. The full approach is illustrated on the 85Rb nucleus evaluations, and the results are then used for a discussion of scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
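
    For zero-mean Gaussians, the criterion has a closed form; the sketch below measures how far an estimated covariance S_est is from a reference covariance S_ref. In the article's bootstrap setting the reference itself must be estimated, since the true matrix is unknown.

        import numpy as np

        def kl_gaussian(S_est, S_ref):
            # Kullback-Leibler distance D( N(0, S_est) || N(0, S_ref) ).
            k = S_est.shape[0]
            M = np.linalg.solve(S_ref, S_est)       # S_ref^{-1} S_est
            _, logdet = np.linalg.slogdet(M)
            return 0.5 * (np.trace(M) - k - logdet)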

  20. REGRESSION METHODS FOR DATA WITH INCOMPLETE COVARIATES

    EPA Science Inventory

    Modern statistical methods in chronic disease epidemiology allow simultaneous regression of disease status on several covariates. These methods permit examination of the effects of one covariate while controlling for those of others that may be causally related to the disease. Howe...

  1. To adjust or not to adjust for baseline when analyzing repeated binary responses? The case of complete data when treatment comparison at study end is of interest.

    PubMed

    Jiang, Honghua; Kulkarni, Pandurang M; Mallinckrodt, Craig H; Shurzinske, Linda; Molenberghs, Geert; Lipkovich, Ilya

    2015-01-01

    The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. Methods compared included the chi-square test, Fisher's exact test, covariate adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and covariate adjusted/unadjusted generalized linear mixed models (Adj.GLMM/Unadj.GLMM). All these methods preserved the type I error close to the nominal level. Covariate-adjusted methods improved power compared with the unadjusted methods because of the increased treatment effect estimates, especially when the correlation between the baseline and the outcome was strong, even though there was an apparent increase in standard errors. Results of the chi-square test were identical to those for the unadjusted logistic regression. Fisher's exact test was the most conservative test regarding the type I error rate and also had the lowest power. Without missing data, there was no gain in using a repeated measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate-adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. PMID:25866149
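
    A sketch of the covariate-adjusted GEE analysis (Adj.GEE in the abstract's notation) using statsmodels; the data frame columns y, treat, baseline, visit, and subject are illustrative names, and an exchangeable working correlation is assumed.

        import statsmodels.api as sm

        def fit_adjusted_gee(df):
            model = sm.GEE.from_formula(
                "y ~ treat + baseline + C(visit)",   # adjust for baseline severity
                groups="subject",                    # repeated measures per subject
                data=df,
                family=sm.families.Binomial(),       # binary response
                cov_struct=sm.cov_struct.Exchangeable(),
            )
            return model.fit()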

  2. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.

  3. Combined Use of Integral Experiments and Covariance Data

    NASA Astrophysics Data System (ADS)

    Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.

    2014-04-01

    In the frame of a US-DOE sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity to explore the combined use of integral experiments and covariance data, with the objective of both giving quantitative indications of possible improvements to the ENDF evaluated data files and, at the same time, reducing crucial reactor design parameter uncertainties. Methods that have been developed over the last four decades for the purposes indicated above have been improved by new developments that also benefited from continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are to be found in several specific domains: (a) new science-based covariance data; (b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; (c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; (d) a critical approach to the analysis of statistical adjustment performance, both a priori and a posteriori; (e) generalization of the assimilation method, now applied for the first time not only to multigroup cross section data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large-scale nuclear data adjustment, based on the use of approximately one hundred high-accuracy integral experiments, is reported along with a significant example of the application of the new "consistent" method of data assimilation.

  4. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Barat, Éric; Dautremer, Thomas

    2007-11-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution—normalized emission intensity of the spatial Poisson process—is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals such as the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.
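
    To make the prior concrete, here is a truncated stick-breaking draw from a one-dimensional DP mixture of normals. The article's PET model uses multivariate normals with a Normal-Inverse Wishart base measure, so the base-measure draws below are simplified placeholders.

        import numpy as np

        def draw_dpm_density(alpha=1.0, n_atoms=50, seed=0):
            rng = np.random.default_rng(seed)
            v = rng.beta(1.0, alpha, n_atoms)                 # stick-breaking fractions
            w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
            mu = rng.normal(0.0, 3.0, n_atoms)                # simplified base measure
            sd = 1.0 / np.sqrt(rng.gamma(2.0, 1.0, n_atoms))
            def pdf(x):
                x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
                comp = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
                return comp @ w                               # mixture density at x
            return pdf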

  5. Generalized additive models with interval-censored data and time-varying covariates: application to human immunodeficiency virus infection in hemophiliacs.

    PubMed

    Bacchetti, Peter; Quale, Christopher

    2002-06-01

    We describe a method for extending smooth nonparametric modeling methods to time-to-event data where the event may be known only to lie within a window of time. Maximum penalized likelihood is used to fit a discrete proportional hazards model that also models the baseline hazard, and left-truncation and time-varying covariates are accommodated. The implementation follows generalized additive modeling conventions, allowing both parametric and smooth terms and specifying the amount of smoothness in terms of the effective degrees of freedom. We illustrate the method on a well-known interval-censored data set on time of human immunodeficiency virus infection in a multicenter study of hemophiliacs. The ability to examine time-varying covariates, not available with previous methods, allows detection and modeling of nonproportional hazards and use of a time-varying covariate that fits the data better and is more plausible than a fixed alternative. PMID:12071419

  6. Approach to nonparametric cooperative multiband segmentation with adaptive threshold.

    PubMed

    Sebari, Imane; He, Dong-Chen

    2009-07-10

    We present a new nonparametric cooperative approach to multiband image segmentation. It is based on cooperation between region-growing segmentation and edge segmentation. This approach requires no input data other than the images to be processed. It uses a spectral homogeneity criterion whose threshold is determined automatically. The threshold is adaptive and varies depending on the objects to be segmented. Applying this new approach to very high resolution satellite imagery has yielded satisfactory results. The approach demonstrated its performance on images of varied complexity and was able to detect objects of great spatial and spectral heterogeneity. PMID:19593349

  7. Computation of nonparametric convex hazard estimators via profile methods

    PubMed Central

    Jankowski, Hanna K.; Wellner, Jon A.

    2010-01-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females. PMID:20300560

  8. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014

  9. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  10. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods

    USGS Publications Warehouse

    Johnson, H.O.; Gupta, S.C.; Vecchia, A.V.; Zvomuya, F.

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO3-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO3-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
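
    A minimal Seasonal Kendall sketch in the spirit of SEAKEN: the Mann-Kendall statistic is computed within each season and summed. Corrections for ties and serial correlation, and the flow adjustment applied in the study, are all omitted.

        import numpy as np
        from scipy.stats import norm

        def seasonal_kendall(values, seasons):
            S, var = 0.0, 0.0
            for s in np.unique(seasons):
                x = values[seasons == s]                 # one season's time series
                n = len(x)
                S += sum(np.sign(x[j] - x[i])
                         for i in range(n) for j in range(i + 1, n))
                var += n * (n - 1) * (2 * n + 5) / 18.0  # no-ties MK variance
            z = (S - np.sign(S)) / np.sqrt(var)          # continuity correction
            return S, 2.0 * norm.sf(abs(z))              # statistic, two-sided p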

  11. Doubly robust and efficient estimators for heteroscedastic partially linear single-index models allowing high dimensional covariates

    PubMed Central

    Ma, Yanyuan; Zhu, Liping

    2013-01-01

    Summary We study the heteroscedastic partially linear single-index model with an unspecified error variance function, which allows for high dimensional covariates in both the linear and the single-index components of the mean function. We propose a class of consistent estimators of the parameters by using a proper weighting strategy. An interesting finding is that the linearity condition which is widely assumed in the dimension reduction literature is not necessary for methodological or theoretical development: it contributes only to the simplification of non-optimal consistent estimation. We also find that the performance of the usual weighted least squares type of estimators deteriorates when the non-parametric component is badly estimated. However, estimators in our family automatically provide protection against such deterioration, in that the consistency can be achieved even if the baseline non-parametric function is completely misspecified. We further show that the most efficient estimator is a member of this family and can be easily obtained by using non-parametric estimation. Properties of the estimators proposed are presented through theoretical illustration and numerical simulations. An example on gender discrimination is used to demonstrate and to compare the practical performance of the estimators. PMID:23970823

  12. Estimation of Data Uncertainty Adjustment Parameters for Multivariate Earth Rotation Series

    NASA Technical Reports Server (NTRS)

    Sung, Li-yu; Steppe, J. Alan

    1994-01-01

    We have developed a maximum likelihood method to estimate a set of data uncertainty adjustment parameters, including scaling factors and additive variances and covariances, for multivariate Earth rotation series.
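
    A single-series analogue of the idea, assuming Gaussian residuals r with formal uncertainties s: the scale factor and additive variance that maximize the likelihood of the adjusted variances f*s^2 + a are found numerically. The full multivariate, cross-series covariance version is more involved.

        import numpy as np
        from scipy.optimize import minimize

        def fit_adjustment(r, s):
            def nll(p):
                f, a = np.exp(p)               # log-parameters keep f, a positive
                v = f * s**2 + a               # adjusted variances
                return 0.5 * np.sum(np.log(v) + r**2 / v)
            res = minimize(nll, x0=np.array([0.0, -2.0]), method="Nelder-Mead")
            return np.exp(res.x)               # (scale factor, additive variance)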

  13. Nonparametric meta-analysis for diagnostic accuracy studies.

    PubMed

    Zapf, Antonia; Hoyer, Annika; Kramer, Katharina; Kuss, Oliver

    2015-12-20

    Summarizing the information from many studies in a meta-analysis is becoming more and more important, including in the field of diagnostic studies. The special challenge in meta-analysis of diagnostic accuracy studies is that, in general, sensitivity and specificity are co-primary endpoints. Across the studies both endpoints are correlated, and this correlation has to be considered in the analysis. The standard approach for such a meta-analysis is the bivariate logistic random effects model. An alternative approach is to use marginal beta-binomial distributions for the true positives and the true negatives, linked by copula distributions. In this article, we propose a new, nonparametric approach of analysis, which has greater flexibility with respect to the correlation structure, and always converges. In a simulation study, it becomes apparent that the empirical coverage of all three approaches is in general below the nominal level. Regarding bias, empirical coverage, and mean squared error the nonparametric model is often superior to the standard model, and comparable with the copula model. The three approaches are also applied to two example meta-analyses. PMID:26174020

  14. A fuzzy, nonparametric segmentation framework for DTI and MRI analysis.

    PubMed

    Awate, Suyash P; Gee, James C

    2007-01-01

    This paper presents a novel statistical fuzzy-segmentation method for diffusion tensor (DT) images and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g. those based on fuzzy-C-means (FCM), incorporate Gaussian class models which are inherently biased towards ellipsoidal clusters. Fiber bundles in DT images, however, comprise tensors that can inherently lie on more-complex manifolds. Unlike FCM-based schemes, the proposed method relies on modeling the manifolds underlying the classes by incorporating nonparametric data-driven statistical models. It produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. For DT images, the paper describes a consistent statistical technique for nonparametric modeling in Riemannian DT spaces that incorporates two very recent works. In this way, the proposed method provides uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. The paper shows results on synthetic and real, DT as well as MR images. PMID:17633708

  15. Covariation bias in panic-prone individuals.

    PubMed

    Pauli, P; Montoya, P; Martz, G E

    1996-11-01

    Covariation estimates between fear-relevant (FR; emergency situations) or fear-irrelevant (FI; mushrooms and nudes) stimuli and an aversive outcome (electrical shock) were examined in 10 high-fear (panic-prone) and 10 low-fear respondents. When the relation between slide category and outcome was random (illusory correlation), only high-fear participants markedly overestimated the contingency between FR slides and shocks. However, when there was a high contingency of shocks following FR stimuli (83%) and a low contingency of shocks following FI stimuli (17%), the group difference vanished. Reversal of contingencies back to random induced a covariation bias for FR slides in high- and low-fear respondents. Results indicate that panic-prone respondents show a covariation bias for FR stimuli and that the experience of a high contingency between FR slides and aversive outcomes may foster such a covariation bias even in low-fear respondents. PMID:8952200

  16. Reconciling Covariances with Reliable Orbital Uncertainty

    NASA Astrophysics Data System (ADS)

    Folcik, Z.; Lue, A.; Vatsky, J.

    2011-09-01

    There is a common suspicion that formal covariances do not represent a realistic measure of orbital uncertainties. By devising metrics for measuring the representations of orbit error, we assess under what circumstances such lore is justified as well as the root cause of the discrepancy between the mathematics of orbital uncertainty and its practical implementation. We offer a scheme by which formal covariances may be adapted to be an accurate measure of orbital uncertainties and show how that adaptation performs against both simulated and real space-object data. We also apply these covariance adaptation methods to the process of observation association using many simulated and real data test cases. We demonstrate that covariance-informed observation association can be reliable, even in the case when only two tracks are available. Satellite breakup and collision event catalog maintenance could benefit from the automation made possible with these association methods.

  17. Noncommutative Gauge Theory with Covariant Star Product

    SciTech Connect

    Zet, G.

    2010-08-04

    We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.

  18. Covariant action for type IIB supergravity

    NASA Astrophysics Data System (ADS)

    Sen, Ashoke

    2016-07-01

    Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.

  19. Covariate analysis of bivariate survival data

    SciTech Connect

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  20. Phase-covariant quantum cloning of qudits

    SciTech Connect

    Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin

    2003-02-01

    We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between the input state and the single-qudit reduced density operator of the output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.

  1. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  2. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of a 2PL through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs that are similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…

  3. Nonparametric Estimation of Item and Respondent Locations from Unfolding-Type Items

    ERIC Educational Resources Information Center

    Johnson, Matthew S.

    2006-01-01

    Unlike their monotone counterparts, nonparametric unfolding response models, which assume the item response function is unimodal, have seen little attention in the psychometric literature. This paper studies the nonparametric behavior of unfolding models by building on the work of Post (1992). The paper provides rigorous justification for a class…

  4. Gradient-based manipulation of nonparametric entropy estimates.

    PubMed

    Schraudolph, Nicol N

    2004-07-01

    This paper derives a family of differential learning rules that optimize the Shannon entropy at the output of an adaptive system via kernel density estimation. In contrast to parametric formulations of entropy, this nonparametric approach assumes no particular functional form of the output density. We address problems associated with quantized data and finite sample size, and implement efficient maximum likelihood techniques for optimizing the regularizer. We also develop a normalized entropy estimate that is invariant with respect to affine transformations, facilitating optimization of the shape, rather than the scale, of the output density. Kernel density estimates are smooth and differentiable; this makes the derived entropy estimates amenable to manipulation by gradient descent. The resulting weight updates are surprisingly simple and efficient learning rules that operate on pairs of input samples. They can be tuned for data-limited or memory-limited situations, or modified to give a fully online implementation. PMID:15461076
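
    The core quantity can be sketched directly: a leave-one-out Gaussian kernel estimate of Shannon entropy from samples x with bandwidth h. Every term is smooth in the samples, so the estimate can be differentiated and adjusted by gradient descent, which is the manipulation the paper derives in closed form.

        import numpy as np

        def loo_kde_entropy(x, h):
            d2 = (x[:, None] - x[None, :]) ** 2
            K = np.exp(-0.5 * d2 / h**2) / (h * np.sqrt(2.0 * np.pi))
            np.fill_diagonal(K, 0.0)             # leave-one-out: drop self-kernel
            p = K.sum(axis=1) / (len(x) - 1)     # density estimate at each sample
            return -np.mean(np.log(p))           # resubstitution entropy estimate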

  5. Nonparametric forecasting of low-dimensional dynamical systems

    NASA Astrophysics Data System (ADS)

    Berry, Tyrus; Giannakis, Dimitrios; Harlim, John

    2015-03-01

    This paper presents a nonparametric modeling approach for forecasting stochastic dynamical systems on low-dimensional manifolds. The key idea is to represent the discrete shift maps on a smooth basis which can be obtained by the diffusion maps algorithm. In the limit of large data, this approach converges to a Galerkin projection of the semigroup solution to the underlying dynamics on a basis adapted to the invariant measure. This approach allows one to quantify uncertainties (in fact, evolve the probability distribution) for nontrivial dynamical systems with equation-free modeling. We verify our approach on various examples, ranging from an inhomogeneous anisotropic stochastic differential equation on a torus, the chaotic Lorenz three-dimensional model, and the Niño-3.4 data set which is used as a proxy of the El Niño Southern Oscillation.
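
    A compact sketch of the basis construction, assuming data points in the rows of X and a fixed kernel bandwidth eps: the Gaussian kernel is density-normalized and the leading eigenvectors of the resulting Markov operator supply the smooth basis adapted to the invariant measure. Bandwidth selection and the variable-bandwidth kernels used in practice are omitted.

        import numpy as np

        def diffusion_basis(X, eps, n_eig):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / eps)
            q = K.sum(axis=1)
            K = K / np.outer(q, q)                   # alpha = 1: undo density bias
            d = K.sum(axis=1)
            A = K / np.sqrt(np.outer(d, d))          # symmetric conjugate of D^-1 K
            vals, vecs = np.linalg.eigh(A)
            order = np.argsort(vals)[::-1][:n_eig]   # leading eigenpairs
            return vecs[:, order] / np.sqrt(d)[:, None]  # Markov eigenfunctions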

  6. Nonparametric Facial Feature Localization Using Segment-Based Eigenfeatures.

    PubMed

    Choi, Hyun-Chul; Sibbing, Dominik; Kobbelt, Leif

    2016-01-01

    We present a nonparametric facial feature localization method using relative directional information between regularly sampled image segments and facial feature points. Instead of using any iterative parameter optimization technique or search algorithm, our method finds the location of facial feature points by using a weighted concentration of the directional vectors originating from the image segments pointing to the expected facial feature positions. Each directional vector is calculated by a linear combination of eigendirectional vectors, which are obtained by a principal component analysis of training facial segments in the feature space of the histogram of oriented gradients (HOG). Our method finds facial feature points very quickly and accurately, since it utilizes statistical reasoning from all the training data and requires neither the extraction of local patterns at the estimated positions of facial features, nor any iterative parameter optimization algorithm, nor any search algorithm. In addition, we can reduce the storage size for the trained model by controlling the energy-preserving level of the HOG pattern space. PMID:26819588

  7. Analyzing multiple spike trains with nonparametric Granger causality.

    PubMed

    Nedungadi, Aatira G; Rangarajan, Govindan; Jain, Neeraj; Ding, Mingzhou

    2009-08-01

    Simultaneous recordings of spike trains from multiple single neurons are becoming commonplace. Understanding the interaction patterns among these spike trains remains a key research area. A question of interest is the evaluation of information flow between neurons through the analysis of whether one spike train exerts causal influence on another. For continuous-valued time series data, Granger causality has proven an effective method for this purpose. However, the basis for Granger causality estimation is autoregressive data modeling, which is not directly applicable to spike trains. Various filtering options distort the properties of spike trains as point processes. Here we propose a new nonparametric approach to estimate Granger causality directly from the Fourier transforms of spike train data. We validate the method on synthetic spike trains generated by model networks of neurons with known connectivity patterns and then apply it to neurons simultaneously recorded from the thalamus and the primary somatosensory cortex of a squirrel monkey undergoing tactile stimulation. PMID:19137420

  8. Fast Nonparametric Clustering of Structured Time-Series.

    PubMed

    Hensman, James; Rattray, Magnus; Lawrence, Neil D

    2015-02-01

    In this publication, we combine two Bayesian nonparametric models: the Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP model is to introduce a variation on the GP prior which enables us to model structured time-series data, i.e., data containing groups where we wish to model inter- and intra-group variability. Our innovation in the DP model is an implementation of a new fast collapsed variational inference procedure which enables us to optimize our variational approximation significantly faster than standard VB approaches. In a biological time series application we show how our model better captures salient features of the data, leading to better consistency with existing biological classifications, while the associated inference algorithm provides a significant speed-up over EM-based variational inference. PMID:26353249

  10. Binary Classifier Calibration Using a Bayesian Non-Parametric Approach

    PubMed Central

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.; Hauskrecht, Milos

    2015-01-01

    Learning probabilistic predictive models that are well calibrated is critical for many prediction and decision-making tasks in data mining. This paper presents two new non-parametric methods for calibrating the outputs of binary classification models: a method based on Bayes optimal selection and a method based on Bayesian model averaging. The advantage of these methods is that they are independent of the algorithm used to learn a predictive model, and they can be applied in a post-processing step, after the model is learned. This makes them applicable to a wide variety of machine learning models and methods. These calibration methods, as well as other methods, are tested on a variety of datasets in terms of both discrimination and calibration performance. The results show that the methods either outperform or are comparable in performance to the state-of-the-art calibration methods. PMID:26613068

  11. Analyzing Single-Molecule Time Series via Nonparametric Bayesian Inference

    PubMed Central

    Hines, Keegan E.; Bankston, John R.; Aldrich, Richard W.

    2015-01-01

    The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. PMID:25650922

  12. Bayesian Nonparametric Shrinkage Applied to Cepheid Star Oscillations.

    PubMed

    Berger, James; Jefferys, William; Müller, Peter

    2012-01-01

    Bayesian nonparametric regression with dependent wavelets has dual shrinkage properties: there is shrinkage through a dependent prior put on functional differences, and shrinkage through the setting of most of the wavelet coefficients to zero through Bayesian variable selection methods. The methodology can deal with unequally spaced data and is efficient because of the existence of fast moves in model space for the MCMC computation. The methodology is illustrated on the problem of modeling the oscillations of Cepheid variable stars; these are a class of pulsating variable stars with the useful property that their periods of variability are strongly correlated with their absolute luminosity. Once this relationship has been calibrated, knowledge of the period gives knowledge of the luminosity. This makes these stars useful as "standard candles" for estimating distances in the universe. PMID:24368873

  13. A non-parametric segmentation methodology for oral videocapillaroscopic images.

    PubMed

    Bellavia, Fabio; Cacioppo, Antonino; Lupaşcu, Carmen Alina; Messina, Pietro; Scardina, Giuseppe; Tegolo, Domenico; Valenti, Cesare

    2014-05-01

    We aim to describe a new non-parametric methodology to support the clinician during the diagnostic process of oral videocapillaroscopy to evaluate peripheral microcirculation. Our methodology, mainly based on wavelet analysis and mathematical morphology to preprocess the images, segments them by minimizing the within-class luminosity variance of both capillaries and background. Experiments were carried out on a set of real microphotographs to validate this approach against handmade segmentations provided by physicians. By using a leave-one-patient-out approach, we found that our methodology is robust according to precision-recall criteria (average precision and recall equal to 0.924 and 0.923, respectively) and that it agrees with the physicians in terms of the Jaccard index (mean and standard deviation equal to 0.858 and 0.064, respectively). PMID:24657094
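
    The thresholding criterion can be illustrated in isolation: an exhaustive search for the gray level that minimizes the weighted within-class luminosity variance of capillaries versus background, in the spirit of Otsu's method. The wavelet and morphological preprocessing applied beforehand is not shown.

        import numpy as np

        def min_within_class_threshold(gray):
            levels = np.unique(gray)
            best_t, best_v = levels[0], np.inf
            for t in levels[:-1]:
                fg, bg = gray[gray <= t], gray[gray > t]
                # Weighted within-class variance of the two-class split.
                v = fg.size * fg.var() + bg.size * bg.var()
                if v < best_v:
                    best_t, best_v = t, v
            return best_t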

  14. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is said to be true when this probability is maximal. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF) which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines for its use with the similarity constraint and also in different environments where other features and attributes are more suitable. PMID:18238122

  16. Non-parametric transient classification using adaptive wavelets

    NASA Astrophysics Data System (ADS)

    Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.

    2015-11-01

Classifying transients based on multiband light curves is a challenging but crucial problem in the era of GAIA and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, correctly classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our 'model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.

  17. Hyperspectral image segmentation using a cooperative nonparametric approach

    NASA Astrophysics Data System (ADS)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduce various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications: a multispectral image for cedar tree detection in the region of Baabdat (Lebanon), and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.

  18. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    SciTech Connect

    Williams, Mark L; Rearden, Bradley T

    2008-01-01

Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.

  19. A Covariance Generation Methodology for Fission Product Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.

    2016-03-01

Recent safety and economic concerns for modern nuclear reactor applications have fed an outstanding interest in improving and completing basic nuclear data evaluations. It has become clear that the accuracy of our predictive simulation models is strongly affected by our knowledge of the input data. Therefore, strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) give no correlations for fission yields, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered by a convolution of different models representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution, and the Madland-England model for the isomeric ratio. Some results are presented for the thermal fission of U-235, Pu-239 and Pu-241.

  20. Spacetime states and covariant quantum theory

    NASA Astrophysics Data System (ADS)

    Reisenberger, Michael; Rovelli, Carlo

    2002-06-01

In its usual presentation, classical mechanics appears to give time a very special role. But it is well known that mechanics can be formulated so as to treat the time variable on the same footing as the other variables in the extended configuration space. Such covariant formulations are natural for relativistic gravitational systems, where general covariance conflicts with the notion of a preferred physical-time variable. The standard presentation of quantum mechanics, in turn, again gives time a very special role, raising well known difficulties for quantum gravity. Is there a covariant form of (canonical) quantum mechanics? We observe that the preferred role of time in quantum theory is the consequence of an idealization: that measurements are instantaneous. Canonical quantum theory can be given a covariant form by dropping this idealization. States prepared by noninstantaneous measurements are described by "spacetime smeared states." The theory can be formulated in terms of these states, without making any reference to a special time variable. The quantum dynamics is expressed in terms of the propagator, an object covariantly defined on the extended configuration space.

  1. Isavuconazole Population Pharmacokinetic Analysis Using Nonparametric Estimation in Patients with Invasive Fungal Disease (Results from the VITAL Study)

    PubMed Central

    Kovanda, Laura L.; Desai, Amit V.; Lu, Qiaoyang; Townsend, Robert W.; Akhtar, Shahzad; Bonate, Peter

    2016-01-01

    Isavuconazonium sulfate (Cresemba; Astellas Pharma Inc.), a water-soluble prodrug of the triazole antifungal agent isavuconazole, is available for the treatment of invasive aspergillosis (IA) and invasive mucormycosis. A population pharmacokinetic (PPK) model was constructed using nonparametric estimation to compare the pharmacokinetic (PK) behaviors of isavuconazole in patients treated in the phase 3 VITAL open-label clinical trial, which evaluated the efficacy and safety of the drug for treatment of renally impaired IA patients and patients with invasive fungal disease (IFD) caused by emerging molds, yeasts, and dimorphic fungi. Covariates examined were body mass index (BMI), weight, race, impact of estimated glomerular filtration rate (eGFR) on clearance (CL), and impact of weight on volume. PK parameters were compared based on IFD type and other patient characteristics. Simulations were performed to describe the MICs covered by the clinical dosing regimen. Concentrations (n = 458) from 136 patients were used to construct a 2-compartment model (first-order absorption compartment and central compartment). Weight-related covariates affected clearance, but eGFR did not. PK parameters and intersubject variability of CL were similar across different IFD groups and populations. Target attainment analyses demonstrated that the clinical dosing regimen would be sufficient for total drug area under the concentration-time curve (AUC)/MIC targets ranging from 50.5 for Aspergillus spp. (up to the CLSI MIC of 0.5 mg/liter) to 270 and 5,053 for Candida albicans (up to MICs of 0.125 and 0.004 mg/liter, respectively) and 312 for non-albicans Candida spp. (up to a MIC of 0.125 mg/liter). The estimations for Candida spp. were exploratory considering that no patients with Candida infections were included in the current analyses. (The VITAL trial is registered at ClinicalTrials.gov under number NCT00634049.) PMID:27185799

  2. Isavuconazole Population Pharmacokinetic Analysis Using Nonparametric Estimation in Patients with Invasive Fungal Disease (Results from the VITAL Study).

    PubMed

    Kovanda, Laura L; Desai, Amit V; Lu, Qiaoyang; Townsend, Robert W; Akhtar, Shahzad; Bonate, Peter; Hope, William W

    2016-08-01

    Isavuconazonium sulfate (Cresemba; Astellas Pharma Inc.), a water-soluble prodrug of the triazole antifungal agent isavuconazole, is available for the treatment of invasive aspergillosis (IA) and invasive mucormycosis. A population pharmacokinetic (PPK) model was constructed using nonparametric estimation to compare the pharmacokinetic (PK) behaviors of isavuconazole in patients treated in the phase 3 VITAL open-label clinical trial, which evaluated the efficacy and safety of the drug for treatment of renally impaired IA patients and patients with invasive fungal disease (IFD) caused by emerging molds, yeasts, and dimorphic fungi. Covariates examined were body mass index (BMI), weight, race, impact of estimated glomerular filtration rate (eGFR) on clearance (CL), and impact of weight on volume. PK parameters were compared based on IFD type and other patient characteristics. Simulations were performed to describe the MICs covered by the clinical dosing regimen. Concentrations (n = 458) from 136 patients were used to construct a 2-compartment model (first-order absorption compartment and central compartment). Weight-related covariates affected clearance, but eGFR did not. PK parameters and intersubject variability of CL were similar across different IFD groups and populations. Target attainment analyses demonstrated that the clinical dosing regimen would be sufficient for total drug area under the concentration-time curve (AUC)/MIC targets ranging from 50.5 for Aspergillus spp. (up to the CLSI MIC of 0.5 mg/liter) to 270 and 5,053 for Candida albicans (up to MICs of 0.125 and 0.004 mg/liter, respectively) and 312 for non-albicans Candida spp. (up to a MIC of 0.125 mg/liter). The estimations for Candida spp. were exploratory considering that no patients with Candida infections were included in the current analyses. (The VITAL trial is registered at ClinicalTrials.gov under number NCT00634049.). PMID:27185799
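For orientation, the target-attainment logic in these two records reduces to comparing the steady-state exposure index AUC/MIC against a target value, since at steady state the 24-hour AUC of an intravenous drug equals the daily dose divided by clearance. A minimal Monte Carlo sketch, with purely illustrative dose, clearance distribution, and target values (not the study's estimates):

```python
import numpy as np

# All numeric values below are illustrative assumptions, not study results.
rng = np.random.default_rng(42)
daily_dose_mg = 200.0                                   # assumed maintenance dose
cl_L_per_h = rng.lognormal(np.log(2.5), 0.25, 10_000)   # simulated clearance draws

def fraction_attaining(target, mic):
    """Probability of target attainment: steady-state AUC(0-24) =
    daily dose / CL (mg*h/L), compared as AUC/MIC >= target."""
    auc = daily_dose_mg / cl_L_per_h
    return float(np.mean(auc / mic >= target))

print(fraction_attaining(target=50.5, mic=0.5))   # Aspergillus-like target
```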

  3. FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.

    SciTech Connect

    HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.

    2006-06-05

    We describe implementation of the KALMAN code in the EMPIRE system and present first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 Gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from the model-generated covariances that tend to show strong positive long-range correlations and those determined solely from experimental data that result in nearly diagonal matrices. We have studied shapes of correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).

  4. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1977-01-01

This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UD(transpose of U), where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.

  5. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1975-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UDU/T/, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
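Both records above revolve around the square-root-free factorization P = UDUᵀ. A small sketch of computing that factorization is given below; the time-propagation step via modified Gram-Schmidt that the papers derive is more involved and is omitted here:

```python
import numpy as np

def ud_factorize(P):
    """Square-root-free factorization P = U @ diag(d) @ U.T, with U unit
    upper triangular and d the diagonal of D. Assumes P is symmetric
    positive definite."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Downdate the leading submatrix before processing column j-1.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

# Check: U, d = ud_factorize(P); np.allclose(U @ np.diag(d) @ U.T, P)
```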

  6. Parametric number covariance in quantum chaotic spectra

    NASA Astrophysics Data System (ADS)

Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.

  7. Covariant Spectator Theory and Hadron Structure

    NASA Astrophysics Data System (ADS)

    Peña, M. T.; Leitão, Sofia; Biernat, Elmar P.; Stadler, Alfred; Ribeiro, J. E.; Gross, Franz

    2016-06-01

We present the first results of a study on meson spectroscopy using a covariant formalism based on the Covariant Spectator Theory. Our approach is derived directly in Minkowski space and it approximates the Bethe-Salpeter equation by taking effectively into account the contributions from both ladder and crossed-ladder diagrams in the qq̄ interaction kernel. A general Lorentz structure of the kernel is tested and chiral constraints on the kernel are discussed. Results for the pion form factor are also presented.

  8. A violation of the covariant entropy bound?

    NASA Astrophysics Data System (ADS)

    Masoumi, Ali; Mathur, Samir D.

    2015-04-01

Several arguments suggest that the entropy density at high energy density ρ should be given by the expression s = K√(ρ/G), where K is a constant of order unity. On the other hand, the covariant entropy bound requires that the entropy on a light sheet be bounded by A/4G, where A is the area of the boundary of the sheet. We find that in a suitably chosen cosmological geometry, the above expression for s violates the covariant entropy bound. We consider different possible explanations for this fact, in particular, the possibility that entropy bounds should be defined in terms of volumes of regions rather than areas of surfaces.

  9. Covariance Analysis of Gamma Ray Spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-01

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.

  10. Covariance analysis of gamma ray spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-15

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.
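A minimal sketch of the covariance idea in these two records: acquire many short spectra of the same source and inspect the off-diagonal structure of the channel-by-channel covariance matrix, which links channels whose counts fluctuate together. The array layout is an assumption:

```python
import numpy as np

def spectral_covariance(runs):
    """Covariance matrix across repeated short acquisitions of the same
    spectrum. runs: (n_runs, n_channels). Off-diagonal entries relate
    channels whose counts fluctuate together (e.g. a photopeak and its
    escape peaks), information that signal averaging alone discards."""
    runs = np.asarray(runs, dtype=float)
    return np.cov(runs, rowvar=False)   # n_channels x n_channels
```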

  11. Sparse Multivariate Regression With Covariance Estimation

    PubMed Central

    Rothman, Adam J.; Levina, Elizaveta; Zhu, Ji

    2014-01-01

    We propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. This method, which we call multivariate regression with covariance estimation (MRCE), involves penalized likelihood with simultaneous estimation of the regression coefficients and the covariance structure. An efficient optimization algorithm and a fast approximation are developed for computing MRCE. Using simulation studies, we show that the proposed method outperforms relevant competitors when the responses are highly correlated. We also apply the new method to a finance example on predicting asset returns. An R-package containing this dataset and code for computing MRCE and its approximation are available online. PMID:24963268
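The MRCE criterion combines a Gaussian negative log-likelihood with l1 penalties on the coefficient matrix and on the off-diagonal of the inverse error covariance. A sketch of evaluating that objective under stated assumptions (the paper's actual solver alternates a lasso step for B with a graphical-lasso step for Omega; this only computes the criterion):

```python
import numpy as np

def mrce_objective(X, Y, B, Omega, lam1, lam2):
    """MRCE-style penalized negative log-likelihood:
    tr[(Y - XB) Omega (Y - XB)^T]/n - log det(Omega)
    + lam1 * ||off-diag(Omega)||_1 + lam2 * ||B||_1.
    Omega is assumed symmetric positive definite."""
    n = X.shape[0]
    R = Y - X @ B                                  # residual matrix
    nll = np.trace(R @ Omega @ R.T) / n - np.log(np.linalg.det(Omega))
    pen1 = lam1 * (np.abs(Omega).sum() - np.abs(np.diag(Omega)).sum())
    pen2 = lam2 * np.abs(B).sum()
    return nll + pen1 + pen2
```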

  12. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated. PMID:27078354

  13. Covariances of Evaluated Nuclear Cross Section Data for (232)Th, (180,182,183,184,186)W and (55)Mn

    SciTech Connect

    Trkov, A.; Capote, R.; Soukhovitskii, E; Leal, Luiz C; Sin, M; Kodeli, I.; Muir, D W

    2011-01-01

The EMPIRE code system is a versatile package for nuclear model calculations that is often used for nuclear data evaluation. Its capabilities include random sampling of model parameters, which can be utilized to generate a full covariance matrix of all scattering cross sections, including cross-reaction correlations. The EMPIRE system was used to prepare the prior covariance matrices of reaction cross sections of (232)Th, (180,182,183,184,186)W and (55)Mn nuclei for incident neutron energies up to 60 MeV. The obtained modeling prior was fed to the GANDR system, a package for a global assessment of nuclear data based on the Generalized Least-Squares method. By introducing experimental data from the EXFOR database into GANDR, the constrained covariance matrices and cross section adjustment functions were obtained. Applying the correction functions to the cross sections and formatting the covariance matrices, the final evaluations in ENDF-6 format including covariances were derived. In the resonance energy range, separate analyses were performed to determine the resonance parameters with their respective covariances. The data files thus obtained were then subjected to detailed testing and validation. The described evaluations with covariances of (232)Th, (180,182,183,184,186)W and (55)Mn nuclei are included in the ENDF/B-VII.1 library release.

  14. Uncertainty Assessment for Fast Reactors Based on Nuclear Data Adjustment

    NASA Astrophysics Data System (ADS)

    Ivanova, T.; Ivanov, E.; Ecrabet, F.

    2014-04-01

The paper presents IRSN's results for the OECD/NEA WPEC Subgroup 33 benchmark exercise, which focuses on the combined use of differential and integral data via an adjustment technique. The results are generated by the BERING code using different sets of input data: integral parameters and sensitivity coefficients for fast benchmark experiments and applications computed by the deterministic ERANOS code and Monte Carlo SCALE sequences, COMMARA-2.0 and JENDL-4.0 cross-section covariance data, and integral correlations provided by JAEA. The paper demonstrates the results of the adjustment when using different input data and two adjustment algorithms implemented in BERING.

  15. Genetic and Environmental Components of Adolescent Adjustment and Parental Behavior: A Multivariate Analysis

    ERIC Educational Resources Information Center

    Loehlin, John C.; Neiderhiser, Jenae M.; Reiss, David

    2005-01-01

Adolescent adjustment measures may be related to each other and to the social environment in various ways. Are these relationships similar in genetic and environmental sources of covariation, or different? A multivariate behavior-genetic analysis was made of 6 adjustment and 3 treatment composites from the study Nonshared Environment in Adolescent…

  16. Covariates of Sesame Street Viewing by Preschoolers.

    ERIC Educational Resources Information Center

    Spaner, Steven D.

Nine covariates were examined for their power to discriminate between preschoolers who watch Sesame Street regularly and preschoolers who do not. Surveyed were 372 3- to 4-year-old children on 9 variables: race, socioeconomic status, number of siblings, child's birth order, maternal age, maternal…

  17. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  18. Invariance of covariances arises out of noise

    NASA Astrophysics Data System (ADS)

    Grytskyy, D.; Tetzlaff, T.; Diesmann, M.; Helias, M.

    2013-01-01

Correlated neural activity is a known feature of the brain [1], and evidence is increasing that it is closely linked to information processing [2]. The temporal shape of covariances was related early on to synaptic interactions and to common input shared by pairs of neurons [3]. Recent theoretical work explains the small magnitude of covariances in inhibition-dominated recurrent networks by active decorrelation [4, 5, 6]. For binary neurons, the mean-field approach takes random fluctuations into account to accurately predict the average activity in such networks [7], and expressions for covariances follow from a master equation [8]; both are briefly reviewed here for completeness. In our recent work we have shown how to map different network models, including binary networks, onto linear dynamics [9]. Binary neurons with a strong non-linear Heaviside gain function are inaccessible to the classical treatment [8]. Here we show how random fluctuations generated by the network effectively linearize the system and implement a self-regulating mechanism that renders population-averaged covariances independent of the interaction strength and keeps the system away from instability.

  19. Covariant Photon Quantization in the SME

    NASA Astrophysics Data System (ADS)

    Colladay, D.

    2014-01-01

    The Gupta-Bleuler quantization procedure is applied to the SME photon sector. A direct application of the method to the massless case fails due to an unavoidable incompleteness in the polarization states. A mass term can be included into the photon lagrangian to rescue the quantization procedure and maintain covariance.

  20. Economical phase-covariant cloning of qudits

    SciTech Connect

    Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara

    2005-04-01

    We derive the optimal N{yields}M phase-covariant quantum cloning for equatorial states in dimension d with M=kd+N, k integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an 'economical' cloning machine, which works without ancilla.

  1. Gauge field theory of covariant strings

    NASA Astrophysics Data System (ADS)

    Kaku, Michio

    1986-03-01

We present a gauge covariant second-quantized field theory of strings which is explicitly invariant under the gauge transformations generated by the Virasoro algebra. Unlike the old field theory of strings [1], this new formulation is Lorentz covariant as well as gauge covariant under the continuous group Diff(S¹) and its central extension. We derive the free action, L = Φ(X)† P [i∂_τ − (L_0 − 1)] P Φ(X), in the same way that Feynman derived the Schrödinger equation from the path integral formalism. The action is manifestly invariant under the gauge transformation δΦ(X) = ∑_{n=1}^{∞} ε_{−n} L_{−n} Φ(X), where P is a projection operator which annihilates spurious states. We give three distinct formulations of this operator P to all orders, the first based on extracting the operator from the functional formulation of the Nambu-Goto action, and the second and third based on inverting the Shapovalov matrix on a Verma module. This gauge covariant formulation can be easily extended to the Green-Schwarz superstring [2,3]. One immediate application of these methods is to re-express the old Neveu-Schwarz-Ramond model as a field theory which is manifestly invariant under space-time supersymmetric transformations.

  2. Nuclear moments in covariant density functional theory

    NASA Astrophysics Data System (ADS)

    Meng, J.; Zhao, P. W.; Zhang, S. Q.; Hu, J. N.; Li, J.

    2014-05-01

    Recent progresses on microscopic and self-consistent description of the nuclear moments in covariant density functional theory based on a point-coupling interaction are briefly reviewed. In particular, the electric quadrupole moments of Cd isotopes and the magnetic moments of Pb isotopes are discussed.

  3. Hawking fluxes, back reaction and covariant anomalies

    NASA Astrophysics Data System (ADS)

    Kulkarni, Shailesh

    2008-11-01

    Starting from the chiral covariant effective action approach of Banerjee and Kulkarni (2008 Phys. Lett. B 659 827), we provide a derivation of the Hawking radiation from a charged black hole in the presence of gravitational back reaction. The modified expressions for charge and energy flux, due to the effect of one-loop back reaction are obtained.

  4. A Covariance NMR Toolbox for MATLAB and OCTAVE

    NASA Astrophysics Data System (ADS)

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE.

  5. A covariance NMR toolbox for MATLAB and OCTAVE.

    PubMed

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. PMID:21215669
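As a rough sketch of the direct covariance processing mentioned in these two records, the covariance spectrum of a real 2D data matrix S is commonly computed as (SᵀS)^(1/2); the function below assumes data already Fourier-transformed along the direct dimension, and is illustrative rather than the toolbox's implementation:

```python
import numpy as np
from scipy.linalg import sqrtm

def direct_covariance(S):
    """Direct covariance processing of a real 2D spectrum S (rows: t1/f1
    increments, columns: f2 points): C = (S^T S)^(1/2). The symmetric
    matrix square root yields a spectrum-like f2 x f2 correlation map."""
    S = np.asarray(S, dtype=float)
    return np.real(sqrtm(S.T @ S))
```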

  6. Population pharmacokinetics of docetaxel during phase I studies using nonlinear mixed-effect modeling and nonparametric maximum-likelihood estimation.

    PubMed

    Launay-Iliadis, M C; Bruno, R; Cosson, V; Vergniol, J C; Oulid-Aissa, D; Marty, M; Clavel, M; Aapro, M; Le Bail, N; Iliadis, A

    1995-01-01

Docetaxel, a novel anticancer agent, was given to 26 patients by short i.v. infusion (1-2 h) at various dose levels (70-115 mg/m2, the maximum tolerated dose) during 2 phase I studies. Two population analyses, one using NONMEM (nonlinear mixed-effect modeling) and the other using NPML (nonparametric maximum-likelihood), were performed sequentially to determine the structural model; estimate the mean population parameters, including clearance (Cl) and interindividual variability; and find influences of demographic covariates on them. Nine covariates were included in the analyses: age, height, weight, body surface area, sex, performance status, presence of liver metastasis, dose level, and type of formulation. A three-compartment model gave the best fit to the data, and the final NONMEM regression model for Cl was Cl = BSA(Theta1 + Theta2 x AGE), expressing Cl (in liters per hour) directly as a function of body surface area. Only these two covariates were considered in the NPML analysis to confirm the results found by NONMEM. Using NONMEM [for a patient with mean AGE (52.3 years) and mean BSA (1.68 m2)] and NPML, docetaxel Cl was estimated to be 35.6 l/h (21.2 l h-1 m-2) and 37.2 l/h, with interpatient coefficients of variation (CVs) of 17.4% and 24.8%, respectively. The intraindividual CV was estimated at 23.8% by NONMEM; the corresponding variability was fixed in NPML in an additive Gaussian variance error model with a 20% CV. Discrepancies were found in the mean volume at steady state (Vss: 83.2 l for NPML versus 124 l for NONMEM) and in the terminal half-lives, notably the mean t1/2 gamma, which was shorter as determined by NPML (7.89 versus 12.2 h), although the interindividual CV was 89.1% and 62.7% for Vss and t1/2 gamma, respectively. However, the NPML-estimated probability density function (pdf) of t1/2 gamma was bimodal (5 and 11.4 h), probably due to the imbalance of the data. Both analyses suggest a similar magnitude of mean Cl decrease with small BSA and

  7. Nonparametric directionality measures for time series and point process data.

    PubMed

    Halliday, David M

    2015-06-01

    The need to determine the directionality of interactions between neural signals is a key requirement for analysis of multichannel recordings. Approaches most commonly used are parametric, typically relying on autoregressive models. A number of concerns have been expressed regarding parametric approaches, thus there is a need to consider alternatives. We present an alternative nonparametric approach for construction of directionality measures for bivariate random processes. The method combines time and frequency domain representations of bivariate data to decompose the correlation by direction. Our framework generates two sets of complementary measures, a set of scalar measures, which decompose the total product moment correlation coefficient summatively into three terms by direction and a set of functions which decompose the coherence summatively at each frequency into three terms by direction: forward direction, reverse direction and instantaneous interaction. It can be undertaken as an addition to a standard bivariate spectral and coherence analysis, and applied to either time series or point-process (spike train) data or mixtures of the two (hybrid data). In this paper, we demonstrate application to spike train data using simulated cortical neurone networks and application to experimental data from isolated muscle spindle sensory endings subject to random efferent stimulation. PMID:25958923

  8. Nonparametric classification of subpixel materials in multispectral imagery

    NASA Astrophysics Data System (ADS)

    Boudreau, Eric R.; Huguenin, Robert L.; Karaska, Mark A.

    1996-06-01

An effective process for the automatic classification of subpixel materials in multispectral imagery has been developed. The applied analysis spectral analytical process (AASAP) isolates the contribution of specific materials of interest (MOI) within mixed pixels. AASAP consists of a suite of algorithms that perform environmental correction, signature derivation, and subpixel classification. Atmospheric and sun angle correction factors are extracted directly from imagery, allowing signatures produced from a given image to be applied to other images. AASAP signature derivation extracts a component of the pixel spectra that is most common to the training set to produce a signature spectrum and nonparametric feature space. The subpixel classifier applies a background estimation technique to a given pixel under test to produce a residual. A detection occurs when the residual falls within the signature feature space. AASAP was employed to detect stands of loblolly pine in a Landsat TM scene that contained a variety of species of southern yellow pine. An independent field evaluation indicated that 85% of the detections contained over 20% loblolly, and that 91% of the known loblolly stands were detected. For another application, a crop signature derived from a scene in Texas detected occurrences of the same crop in scenes from Kansas and Mexico. AASAP has also been used to locate subpixel occurrences of soil contamination, wetlands species, and lines of communication.

  9. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(−τ·m), where a plot of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0) values are smoothed and interpolated with median and mean moving window filters.
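A minimal least-squares Langley regression, one of the conventional techniques this record compares; the variable names are illustrative:

```python
import numpy as np

def langley_calibration(voltage, airmass):
    """Least-squares Langley plot: fit ln(V) = ln(V0) - tau * m and
    return the extraterrestrial signal V0 and the mean optical depth tau."""
    m = np.asarray(airmass, dtype=float)
    y = np.log(np.asarray(voltage, dtype=float))
    slope, intercept = np.polyfit(m, y, 1)
    return np.exp(intercept), -slope   # V0, tau

# With V0 known, any later measurement gives tau = ln(V0 / V) / m.
```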

  10. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate the camera motion between consecutive frames. Unlike previous methods, our algorithm does not need any preprocessing step to compute dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporally coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy while being much faster than the state-of-the-art competing methods.
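The sample-consensus appearance test at the core of such background models can be sketched per pixel as below, in the spirit of ViBe-like consensus models; the paper's motion compensation and MRF optimization are omitted, and all thresholds are illustrative:

```python
import numpy as np

def consensus_foreground(frame, samples, radius=20.0, min_matches=2):
    """Per-pixel sample-consensus test: a pixel is background when at
    least `min_matches` of its stored appearance samples lie within
    `radius` in intensity. frame: (H, W) grayscale; samples: (N, H, W)."""
    close = np.abs(samples - frame[None]) < radius   # (N, H, W) matches
    return close.sum(axis=0) < min_matches           # True where foreground
```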

  11. Bayesian Nonparametric Inference of Population Size Changes from Sequential Genealogies.

    PubMed

    Palacios, Julia A; Wakeley, John; Ramachandran, Sohini

    2015-09-01

    Sophisticated inferential tools coupled with the coalescent model have recently emerged for estimating past population sizes from genomic data. Recent methods that model recombination require small sample sizes, make constraining assumptions about population size changes, and do not report measures of uncertainty for estimates. Here, we develop a Gaussian process-based Bayesian nonparametric method coupled with a sequentially Markov coalescent model that allows accurate inference of population sizes over time from a set of genealogies. In contrast to current methods, our approach considers a broad class of recombination events, including those that do not change local genealogies. We show that our method outperforms recent likelihood-based methods that rely on discretization of the parameter space. We illustrate the application of our method to multiple demographic histories, including population bottlenecks and exponential growth. In simulation, our Bayesian approach produces point estimates four times more accurate than maximum-likelihood estimation (based on the sum of absolute differences between the truth and the estimated values). Further, our method's credible intervals for population size as a function of time cover 90% of true values across multiple demographic scenarios, enabling formal hypothesis testing about population size differences over time. Using genealogies estimated with ARGweaver, we apply our method to European and Yoruban samples from the 1000 Genomes Project and confirm key known aspects of population size history over the past 150,000 years. PMID:26224734

  12. Nonparametric estimation of quantum states, processes and measurements

    NASA Astrophysics Data System (ADS)

    Lougovski, Pavel; Bennink, Ryan

    Quantum state, process, and measurement estimation methods traditionally use parametric models, in which the number and role of relevant parameters is assumed to be known. When such an assumption cannot be justified, a common approach in many disciplines is to fit the experimental data to multiple models with different sets of parameters and utilize an information criterion to select the best fitting model. However, it is not always possible to assume a model with a finite (countable) number of parameters. This typically happens when there are unobserved variables that stem from hidden correlations that can only be unveiled after collecting experimental data. How does one perform quantum characterization in this situation? We present a novel nonparametric method of experimental quantum system characterization based on the Dirichlet Process (DP) that addresses this problem. Using DP as a prior in conjunction with Bayesian estimation methods allows us to increase model complexity (number of parameters) adaptively as the number of experimental observations grows. We illustrate our approach for the one-qubit case and show how a probability density function for an unknown quantum process can be estimated.
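A small illustration of why a Dirichlet process prior lets model complexity grow with the data: truncated stick-breaking weights concentrate on a few components for small alpha and spread over more components as alpha grows. This is a generic DP sketch, not the authors' estimator:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Draw truncated stick-breaking weights of a Dirichlet process prior.
    Larger alpha spreads mass over more mixture components, which is how
    effective model complexity grows with the data."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

rng = np.random.default_rng(7)
w = stick_breaking(alpha=2.0, n_atoms=25, rng=rng)
print(w.round(3), w.sum())   # weights decay; the sum approaches 1 as n_atoms grows
```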

  13. Non-parametric reconstruction of cosmological matter perturbations

    NASA Astrophysics Data System (ADS)

    González, J. E.; Alcaniz, J. S.; Carvalho, J. C.

    2016-04-01

    Perturbative quantities, such as the growth rate (f) and index (γ), are powerful tools to distinguish different dark energy models or modified gravity theories even if they produce the same cosmic expansion history. In this work, without any assumption about the dynamics of the Universe, we apply a non-parametric method to current measurements of the expansion rate H(z) from cosmic chronometers and high-z quasar data and reconstruct the growth factor and rate of linearised density perturbations in the non-relativistic matter component. Assuming realistic values for the matter density parameter Ωm0, as provided by current CMB experiments, we also reconstruct the evolution of the growth index γ with redshift. We show that the reconstruction of current H(z) data constrains the growth index to γ=0.56 ± 0.12 (2σ) at z = 0.09, which is in full agreement with the prediction of the ΛCDM model and some of its extensions.

  14. Iranian rainfall series analysis by means of nonparametric tests

    NASA Astrophysics Data System (ADS)

    Talaee, P. Hosseinzadeh

    2014-05-01

    The study of the trends and fluctuations in rainfall has received a great deal of attention, since changes in rainfall patterns may lead to floods or droughts. The objective of this study was to analyze the annual, seasonal, and monthly rainfall time series at seven rain gauge stations in the west of Iran for a 40-year period (from October 1969 to September 2009). The homogeneity of the rainfall data sets at the rain gauge stations was checked by using the cumulative deviations test. Three nonparametric tests, namely Kendall, Spearman, and Mann-Kendall, at the 95 % confidence level were used for the trend analysis and the Theil-Sen estimator was applied for determining the magnitudes of the trends. According to the homogeneity analysis, all of the rainfall series except the September series at Vasaj station were found to be homogeneous. The obtained results showed an insignificant trend in the annual and seasonal rainfall series at the majority of the considered stations. Moreover, only three significant trends were observed at the February rainfall of Aghajanbolaghi station, the November series of Vasaj station, and the March rainfall series of Khomigan station. The findings of this study on the temporal trends of rainfall can be implemented to improve the water resources strategies in the study region.
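A compact sketch of the Mann-Kendall test statistic and the Theil-Sen slope used in this record (no tie correction, equally spaced observations assumed, illustrative only):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) plus Theil-Sen slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity corr.
    p = 2 * (1 - norm.cdf(abs(z)))                            # two-sided p-value
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return z, p, np.median(slopes)   # z-score, p, Theil-Sen trend magnitude
```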

  15. Transition redshift: new constraints from parametric and nonparametric methods

    NASA Astrophysics Data System (ADS)

    Rani, Nisha; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Pires, Nilza

    2015-12-01

In this paper, we use the cosmokinematics approach to study the accelerated expansion of the Universe. This is a model-independent approach and depends only on the assumption that the Universe is homogeneous and isotropic and is described by the FRW metric. We parametrize the deceleration parameter, q(z), to constrain the transition redshift (zt) at which the expansion of the Universe goes from a decelerating to an accelerating phase. We use three different parametrizations of q(z), namely q_I(z) = q1 + q2·z, q_II(z) = q3 + q4·ln(1 + z) and q_III(z) = 1/2 + q5/(1 + z)². A joint analysis of the age of galaxies, strong lensing and supernovae Ia data indicates that the transition redshift is less than unity, i.e. zt < 1. We also use a nonparametric approach (LOESS+SIMEX) to constrain zt. This too gives zt < 1, which is consistent with the value obtained by the parametric approach.
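Given fitted parameters, the transition redshift is simply the root of q(z) = 0 for each parametrization. A sketch with illustrative parameter values (not the paper's best fits):

```python
import numpy as np
from scipy.optimize import brentq

# Transition redshift z_t solves q(z_t) = 0 for each parametrization.
q_I   = lambda z, q1, q2: q1 + q2 * z
q_II  = lambda z, q3, q4: q3 + q4 * np.log(1.0 + z)
q_III = lambda z, q5:     0.5 + q5 / (1.0 + z) ** 2

# Illustrative parameter values only:
print(brentq(q_I, 0.0, 3.0, args=(-0.6, 0.8)))    # z_t = 0.75 analytically
print(brentq(q_II, 0.0, 3.0, args=(-0.6, 1.0)))   # z_t = e^0.6 - 1
print(brentq(q_III, 0.0, 3.0, args=(-1.0,)))      # z_t = sqrt(2) - 1
```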

  16. Nonparametric illumination correction for scanned document images via convex hulls.

    PubMed

    Meng, Gaofeng; Xiang, Shiming; Zheng, Nanning; Pan, Chunhong

    2013-07-01

A scanned image of an opened book page often suffers from various scanning artifacts known as scanning shading and dark border noise. These artifacts degrade the quality of the scanned images and cause many problems for the subsequent process of document image analysis. In this paper, we propose an effective method to rectify these scanning artifacts. Our method comes from two observations: that the shading surface of most scanned book pages is quasi-concave, and that the document contents are usually printed on a sheet of plain and bright paper. Based on these observations, a shading image can be accurately extracted via convex hull-based image reconstruction. The proposed method proves to be surprisingly effective for image shading correction and dark border removal. It can restore a desired shading-free image and meanwhile yield an illumination surface of high quality. More importantly, the proposed method is nonparametric and thus does not involve any user interactions or parameter fine-tuning. This makes it very appealing to nonexpert users in applications. Extensive experiments based on synthetic and real-scanned document images demonstrate the efficiency of the proposed method. PMID:23681999

  17. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(−τ·m), where a plot of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0) values are smoothed and interpolated with median and mean moving window filters.

  18. Nonparametric Spatial Models for Extremes: Application to Extreme Temperature Data.

    PubMed

    Fuentes, Montserrat; Henry, John; Reich, Brian

    2013-03-01

Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well-developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States. PMID:24058280

  19. Nonparametric Bayes Classification and Hypothesis Testing on Manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David

    2012-01-01

    Our first focus is prediction of a categorical response variable using features that lie on a general manifold. For example, the manifold may correspond to the surface of a hypersphere. We propose a general kernel mixture model for the joint distribution of the response and predictors, with the kernel expressed in product form and dependence induced through the unknown mixing measure. We provide simple sufficient conditions for large support and weak and strong posterior consistency in estimating both the joint distribution of the response and predictors and the conditional distribution of the response. Focusing on a Dirichlet process prior for the mixing measure, these conditions hold using von Mises-Fisher kernels when the manifold is the unit hypersphere. In this case, Bayesian methods are developed for efficient posterior computation using slice sampling. Next we develop Bayesian nonparametric methods for testing whether there is a difference in distributions between groups of observations on the manifold having unknown densities. We prove consistency of the Bayes factor and develop efficient computational methods for its calculation. The proposed classification and testing methods are evaluated using simulation examples and applied to spherical data applications. PMID:22754028

  20. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians. PMID:19041946

  1. Non-parametric extraction of implied asset price distributions

    NASA Astrophysics Data System (ADS)

    Healy, Jerome V.; Dixon, Maurice; Read, Brian J.; Cai, Fang Fang

    2007-08-01

    We present a fully non-parametric method for extracting risk neutral densities (RNDs) from observed option prices. The aim is to obtain a continuous, smooth, monotonic, and convex pricing function that is twice differentiable. Thus, irregularities such as negative probabilities that afflict many existing RND estimation techniques are reduced. Our method employs neural networks to obtain a smoothed pricing function, and a central finite difference approximation to the second derivative to extract the required gradients. This novel technique was successfully applied to a large set of FTSE 100 daily European exercise (ESX) put options data and as an Ansatz to the corresponding set of American exercise (SEI) put options. The results of paired t-tests showed significant differences between RNDs extracted from ESX and SEI option data, reflecting the distorting impact of early exercise possibility for the latter. In particular, the results for skewness and kurtosis suggested different shapes for the RNDs implied by the two types of put options. However, both ESX and SEI data gave an unbiased estimate of the realised FTSE 100 closing prices on the options’ expiration date. We confirmed that estimates of volatility from the RNDs of both types of option were biased estimates of the realised volatility at expiration, but less so than the LIFFE tabulated at-the-money implied volatility.
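The second-derivative step this record describes is the Breeden-Litzenberger relation: the risk neutral density is the discounted second strike-derivative of the pricing function. A sketch using central finite differences on a uniform strike grid (the smoothed pricing curve itself would come from something like the neural-network fit the abstract mentions):

```python
import numpy as np

def risk_neutral_density(strikes, put_prices, r, T):
    """Breeden-Litzenberger relation: f(K) = e^{rT} * d2P/dK2, here
    approximated with central finite differences. Assumes a uniform
    strike grid and an already-smoothed pricing curve."""
    K = np.asarray(strikes, dtype=float)
    P = np.asarray(put_prices, dtype=float)
    h = K[1] - K[0]
    d2P = (P[2:] - 2 * P[1:-1] + P[:-2]) / h**2   # second strike-derivative
    return K[1:-1], np.exp(r * T) * d2P            # interior strikes, density
```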

  2. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

The collocation method is widely applied in geodesy for estimating/interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions is addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.

  3. Nonparametric statistical tests for the continuous data: the basic concept and the practical use.

    PubMed

    Nahm, Francis Sahngun

    2016-02-01

    Conventional statistical tests are usually called parametric tests. Parametric tests are used more frequently than nonparametric tests in many medical articles because most medical researchers are familiar with them and statistical software packages strongly support parametric tests. Parametric tests require an important assumption: the assumption of normality, which means that the distribution of sample means is normally distributed. However, parametric tests can be misleading when this assumption is not satisfied. In this circumstance, nonparametric tests are the alternative methods available, because they do not require the normality assumption. Nonparametric tests are statistical methods based on signs and ranks. In this article, we discuss the basic concepts and practical use of nonparametric tests as a guide to their proper use. PMID:26885295
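
    As a concrete illustration (not code from the article), a hedged sketch of two standard rank-based tests using SciPy; the lognormal samples are fabricated to show a case where normality fails:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.lognormal(mean=0.0, sigma=1.0, size=15)   # clearly non-normal data
    b = rng.lognormal(mean=0.7, sigma=1.0, size=15)

    # Mann-Whitney U: rank-based alternative to the two-sample t-test.
    u, p_u = stats.mannwhitneyu(a, b, alternative="two-sided")

    # Wilcoxon signed-rank: rank-based alternative to the paired t-test.
    w, p_w = stats.wilcoxon(a, b)

    print(f"Mann-Whitney U={u:.1f}, p={p_u:.4f}")
    print(f"Wilcoxon      W={w:.1f}, p={p_w:.4f}")
    ```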

  4. Nonparametric statistical tests for the continuous data: the basic concept and the practical use

    PubMed Central

    2016-01-01

    Conventional statistical tests are usually called parametric tests. Parametric tests are used more frequently than nonparametric tests in many medical articles because most medical researchers are familiar with them and statistical software packages strongly support parametric tests. Parametric tests require an important assumption: the assumption of normality, which means that the distribution of sample means is normally distributed. However, parametric tests can be misleading when this assumption is not satisfied. In this circumstance, nonparametric tests are the alternative methods available, because they do not require the normality assumption. Nonparametric tests are statistical methods based on signs and ranks. In this article, we discuss the basic concepts and practical use of nonparametric tests as a guide to their proper use. PMID:26885295

  5. ADJUSTABLE DOUBLE PULSE GENERATOR

    DOEpatents

    Gratian, J.W.; Gratian, A.C.

    1961-08-01

    A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)

  6. Adjustable sutures in children.

    PubMed

    Engel, J Mark; Guyton, David L; Hunter, David G

    2014-06-01

    Although adjustable sutures are considered a standard technique in adult strabismus surgery, most surgeons are hesitant to attempt the technique in children, who are believed to be unlikely to cooperate for postoperative assessment and adjustment. Interest in using adjustable sutures in pediatric patients has increased with the development of surgical techniques specific to infants and children. This workshop briefly reviews the literature supporting the use of adjustable sutures in children and presents the approaches currently used by three experienced strabismus surgeons. PMID:24924284

  7. Nonparametric analysis of Minnesota spruce and aspen tree data and LANDSAT data

    NASA Technical Reports Server (NTRS)

    Scott, D. W.; Jee, R.

    1984-01-01

    The application of nonparametric methods in data-intensive problems faced by NASA is described. The theoretical development of efficient multivariate density estimators and the novel use of color graphics workstations are reviewed. The use of nonparametric density estimates for data representation and for Bayesian classification is described and illustrated. Progress in building a data analysis system in a workstation environment is reviewed and preliminary runs are presented.
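
    A small sketch of the flavor of this approach: kernel density estimates plugged into a Bayes classifier. The two-band Gaussian training clouds are fabricated stand-ins for the spruce/aspen LANDSAT channels:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    spruce = rng.normal([2.0, 3.0], 0.5, size=(200, 2))   # two spectral bands
    aspen = rng.normal([3.0, 2.0], 0.5, size=(200, 2))

    # Nonparametric class-conditional densities (gaussian_kde expects (d, n)).
    kde_spruce = gaussian_kde(spruce.T)
    kde_aspen = gaussian_kde(aspen.T)

    # Bayes rule with equal priors: pick the class with the larger density.
    query = np.array([[2.4, 2.9], [3.1, 1.8]]).T
    post = np.vstack([kde_spruce(query), kde_aspen(query)])
    print(np.array(["spruce", "aspen"])[post.argmax(axis=0)])
    ```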

  8. Construction of Covariance Functions with Variable Length Fields

    NASA Technical Reports Server (NTRS)

    Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven

    2005-01-01

    This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere to stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level powerlaw covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single and multi-level covariance functions. A multi-level powerlaw covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than in the baseline GEOS-4 system.
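
    One building block mentioned above fits in a few lines: the Schur (elementwise) product of two valid covariance matrices is again a valid covariance matrix. The 1-D grid and the particular exponential and Gaussian models are illustrative choices, not the GEOS-4 functions:

    ```python
    import numpy as np

    z = np.linspace(0.0, 10.0, 80)
    r = np.abs(z[:, None] - z[None, :])
    C1 = np.exp(-r / 1.0)            # first-order autoregressive (exponential)
    C2 = np.exp(-(r / 4.0) ** 2)     # smooth model with a longer length scale

    # Schur product theorem: C1 * C2 (elementwise) is positive semidefinite.
    C = C1 * C2
    print("min eigenvalue (>= 0 up to roundoff):", np.linalg.eigvalsh(C).min())
    ```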

  9. Linear covariance analysis for gimbaled pointing systems

    NASA Astrophysics Data System (ADS)

    Christensen, Randall S.

    Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems where the information needs are different from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important to gimbaled pointing systems where not just the variance but also the spectrum of the pointing error impacts the performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitory with Monte Carlo analysis are used to gain further insight into the system. These include the creation

  10. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
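
    A hedged sketch of the kind of nonparametric local regression the paper relies on, a simple 1-D LOESS with a tricube kernel; the site coordinates, gas volumes, and span are fabricated for illustration:

    ```python
    import numpy as np

    def loess_predict(x_train, y_train, x_query, span=0.4):
        """Local linear regression with tricube weights (simple 1-D LOESS)."""
        k = max(2, int(span * len(x_train)))
        preds = np.empty(len(x_query))
        for i, x0 in enumerate(x_query):
            d = np.abs(x_train - x0)
            h = np.sort(d)[k - 1]                         # k-th nearest distance
            w = np.clip(1 - (d / h) ** 3, 0, None) ** 3   # tricube weights
            X = np.column_stack([np.ones_like(x_train), x_train - x0])
            sw = np.sqrt(w)
            beta = np.linalg.lstsq(sw[:, None] * X, sw * y_train, rcond=None)[0]
            preds[i] = beta[0]                            # local fit at x0
        return preds

    rng = np.random.default_rng(2)
    sites = np.sort(rng.uniform(0, 10, 120))              # e.g. well locations
    volume = 50 * np.exp(-0.3 * sites) + rng.normal(0, 2, sites.size)
    print(loess_predict(sites, volume, np.array([1.0, 5.0, 9.0])))
    ```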

  11. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  12. A robust nonparametric method for quantifying undetected extinctions.

    PubMed

    Chisholm, Ryan A; Giam, Xingli; Sadanandan, Keren R; Fung, Tak; Rheindt, Frank E

    2016-06-01

    How many species have gone extinct in modern times before being described by science? To answer this question, and thereby get a full assessment of humanity's impact on biodiversity, statistical methods that quantify undetected extinctions are required. Such methods have been developed recently, but they are limited by their reliance on parametric assumptions; specifically, they assume the pools of extant and undetected species decay exponentially, whereas real detection rates vary temporally with survey effort and real extinction rates vary with the waxing and waning of threatening processes. We devised a new, nonparametric method for estimating undetected extinctions. As inputs, the method requires only the first and last date at which each species in an ensemble was recorded. As outputs, the method provides estimates of the proportion of species that have gone extinct, detected, or undetected and, in the special case where the number of undetected extant species in the present day is assumed close to zero, of the absolute number of undetected extinct species. The main assumption of the method is that the per-species extinction rate is independent of whether a species has been detected or not. We applied the method to the resident native bird fauna of Singapore. Of 195 recorded species, 58 (29.7%) have gone extinct in the last 200 years. Our method projected that an additional 9.6 species (95% CI 3.4, 19.8) have gone extinct without first being recorded, implying a true extinction rate of 33.0% (95% CI 31.0%, 36.2%). We provide R code for implementing our method. Because our method does not depend on strong assumptions, we expect it to be broadly useful for quantifying undetected extinctions. PMID:27153528

  13. Non-parametric combination and related permutation tests for neuroimaging.

    PubMed

    Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E

    2016-04-01

    In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, including non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. Hum Brain Mapp 37:1486-1511, 2016. © 2016 Wiley Periodicals, Inc. PMID:26848101
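
    A minimal sketch of the synchronized-permutation idea behind NPC: two tests share the same subjects, each permutation reuses one relabeling for both, and the per-test permutation p-values are combined (Fisher's method here; the paper highlights Tippett's minimum-p rule). The data, statistic, and permutation count are toy assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 40
    group = np.repeat([0, 1], n // 2)
    y1 = rng.normal(0, 1, n) + 0.4 * group   # "modality" 1, weak group effect
    y2 = rng.normal(0, 1, n) + 0.4 * group   # "modality" 2, same subjects

    def stat(y, g):
        return y[g == 1].mean() - y[g == 0].mean()

    B = 2000
    T = np.empty((B + 1, 2))
    T[0] = stat(y1, group), stat(y2, group)
    for b in range(1, B + 1):
        gp = rng.permutation(group)           # ONE relabeling, reused below
        T[b] = stat(y1, gp), stat(y2, gp)     # synchronized permutations

    # Permutation p-values for every row, Fisher combining, joint p-value.
    p = (np.abs(T)[None, :, :] >= np.abs(T)[:, None, :]).mean(axis=1)
    fisher = -2.0 * np.log(p).sum(axis=1)     # Tippett would be p.min(axis=1)
    print("joint NPC p-value:", (fisher >= fisher[0]).mean())
    ```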

  14. Covariance and the hierarchy of frame bundles

    NASA Technical Reports Server (NTRS)

    Estabrook, Frank B.

    1987-01-01

    This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.

  15. On covariance structure in noisy, big data

    NASA Astrophysics Data System (ADS)

    Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.

    2013-09-01

    Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
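
    A compact sketch of the low-rank-plus-sparse decomposition used above, via a basic inexact augmented-Lagrangian iteration for principal component pursuit; the parameter choices and toy corrupted matrix are illustrative, and the paper's randomized, partially observed variants are not reproduced:

    ```python
    import numpy as np

    def rpca(M, iters=300):
        """Split M into low-rank L and sparse S (simple inexact ALM)."""
        n1, n2 = M.shape
        lam = 1.0 / np.sqrt(max(n1, n2))
        mu = 0.25 * n1 * n2 / np.abs(M).sum()
        S = np.zeros_like(M)
        Y = np.zeros_like(M)
        for _ in range(iters):
            # Singular-value thresholding for the low-rank part.
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # Entrywise soft thresholding for the sparse corruptions.
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
            Y += mu * (M - L - S)
        return L, S

    rng = np.random.default_rng(4)
    truth = rng.normal(size=(80, 5)) @ rng.normal(size=(5, 80))   # rank 5
    noisy = truth.copy()
    mask = rng.random(truth.shape) < 0.05
    noisy[mask] += rng.normal(0, 10, mask.sum())                  # gross outliers
    L, S = rpca(noisy)
    print("relative recovery error:", np.linalg.norm(L - truth) / np.linalg.norm(truth))
    ```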

  16. Covariance Spectroscopy Applied to Nuclear Radiation Detection

    SciTech Connect

    Trainham, R.; Tinsley, J.; Keegan, R.; Quam, W.

    2011-09-01

    Covariance spectroscopy is a method of processing second order moments of data to obtain information that is usually absent from average spectra. In nuclear radiation detection it represents a generalization of nuclear coincidence techniques. Correlations and fluctuations in data encode valuable information about radiation sources, transport media, and detection systems. Gaining access to the extra information can help to untangle complicated spectra, uncover overlapping peaks, accelerate source identification, and even sense directionality. Correlations existing at the source level are particularly valuable since many radioactive isotopes emit correlated gammas and neutrons. Correlations also arise from interactions within detector systems, and from scattering in the environment. In particular, correlations from Compton scattering and pair production within a detector array can be usefully exploited in scenarios where direct measurement of source correlations would be unfeasible. We present a covariance analysis of a few experimental data sets to illustrate the utility of the concept.
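
    The core second-order-moment computation is short enough to sketch; the synthetic shot-by-shot spectra, with one correlated peak pair mimicking correlated gamma emission, are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    shots, channels = 2000, 64
    amp = rng.poisson(5.0, size=shots).astype(float)   # shared source term
    spectra = rng.normal(0.0, 0.3, size=(shots, channels))
    spectra[:, 12] += amp                              # two peaks that fluctuate
    spectra[:, 40] += amp                              # together, shot to shot

    # Covariance map: <x_i x_j> - <x_i><x_j> across shots; the off-diagonal
    # structure reveals the correlated pair that the average spectrum hides.
    cmap = np.cov(spectra, rowvar=False)
    off = np.abs(cmap - np.diag(np.diag(cmap)))
    i, j = np.unravel_index(off.argmax(), cmap.shape)
    print(f"strongest correlation: channels {i} and {j}")
    ```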

  17. RNA sequence analysis using covariance models.

    PubMed Central

    Eddy, S R; Durbin, R

    1994-01-01

    We describe a general approach to several RNA sequence analysis problems using probabilistic models that flexibly describe the secondary structure and primary sequence consensus of an RNA sequence family. We call these models 'covariance models'. A covariance model of tRNA sequences is an extremely sensitive and discriminative tool for searching for additional tRNAs and tRNA-related sequences in sequence databases. A model can be built automatically from an existing sequence alignment. We also describe an algorithm for learning a model and hence a consensus secondary structure from initially unaligned example sequences and no prior structural information. Models trained on unaligned tRNA examples correctly predict tRNA secondary structure and produce high-quality multiple alignments. The approach may be applied to any family of small RNA sequences. PMID:8029015

  18. Covariant quantum mechanics applied to noncommutative geometry

    NASA Astrophysics Data System (ADS)

    Astuti, Valerio

    2015-08-01

    We here report a result obtained in collaboration with Giovanni Amelino-Camelia, first shown in the paper [1]. Applying the manifestly covariant formalism of quantum mechanics to the much-studied Snyder spacetime [2], we show that it is trivial in every physical observable, meaning that every measurement in this spacetime gives the same result that would be obtained in flat Minkowski spacetime.

  19. Economical phase-covariant cloning with multiclones

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Hai; Ye, Liu

    2009-09-01

    This paper presents a very simple method to derive the explicit transformations of the optimal economical 1 to M phase-covariant cloning. The fidelity of clones reaches the theoretic bound [D'Ariano G M and Macchiavello C 2003 Phys. Rev. A 67 042306]. The derived transformations cover the previous contributions [Delgado Y, Lamata L et al., 2007 Phys. Rev. Lett. 98 150502] in which M must be odd.

  20. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
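
    The standard first-order perturbation result behind such Jacobians can be sketched directly; this is the textbook formula for a simple eigenvalue, not necessarily the thesis's exact expressions, and the 5x5 random matrix and noise level are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.normal(size=(5, 5))

    w, U = np.linalg.eig(A)
    wl, V = np.linalg.eig(A.T)                # columns: left eigenvectors of A
    u = U[:, 0]
    v = V[:, np.argmin(np.abs(wl - w[0]))]    # left vector paired with w[0]

    # First-order perturbation: d(lambda)/dA_ij = v_i u_j / (v . u).
    J = np.outer(v, u) / (v @ u)

    # Validate one entry against a forward finite difference.
    eps = 1e-7
    Ap = A.copy()
    Ap[1, 2] += eps
    wp = np.linalg.eigvals(Ap)
    fd = (wp[np.argmin(np.abs(wp - w[0]))] - w[0]) / eps
    print("analytic:", J[1, 2], "finite difference:", fd)

    # With iid entry uncertainty sigma, a first-order eigenvalue scale follows.
    sigma = 1e-3
    print("eigenvalue std (first order):", sigma * np.sqrt((np.abs(J) ** 2).sum()))
    ```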

  1. Partial covariance mapping techniques at FELs

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek

    2014-05-01

    The development of free-electron lasers (FELs) is driven by the desire to access the structure and chemical dynamics of biomolecules with atomic resolution. Short, intense FEL pulses have the potential to record x-ray diffraction images before the molecular structure is destroyed by radiation damage. However, even during the shortest, few-femtosecond pulses currently available, there are some significant changes induced by massive ionisation and onset of Coulomb explosion. To interpret the diffraction images it is vital to gain insight into the electronic and nuclear dynamics during multiple core and valence ionisations that compete with Auger cascades. This paper focuses on a technique that is capable of probing these processes. The covariance mapping technique is well suited to the high intensity and low repetition rate of FEL pulses. While the multitude of charges ejected at each pulse overwhelms conventional coincidence methods, an improved technique of partial covariance mapping can cope with hundreds of photoelectrons or photoions detected at each FEL shot. The technique, however, often reveals spurious, uninteresting correlations that spoil the maps. This work will discuss the strengths and limitations of various forms of covariance mapping techniques. Quantitative information extracted from the maps will be linked to theoretical modelling of ionisation and fragmentation paths. Special attention will be given to critical experimental parameters, such as counting rate, FEL intensity fluctuations, vacuum impurities or detector efficiency and nonlinearities. Methods of assessing and optimising signal-to-noise ratio will be described. Emphasis will be put on possible future developments such as multidimensional covariance mapping, compensation for various experimental instabilities and improvements in the detector response. This work has been supported by the EPSRC, UK (grants EP/F021232/1 and EP/I032517/1).
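
    A minimal sketch of the partial covariance correction itself, removing the spurious correlations driven by a fluctuating parameter such as the FEL intensity; the shot model is fabricated:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    shots = 5000
    intensity = rng.gamma(5.0, 1.0, shots)               # FEL pulse energy
    common = rng.poisson(2.0, shots).astype(float)       # genuinely correlated pair
    x = 0.8 * intensity + common + rng.normal(0, 0.5, shots)
    y = 0.6 * intensity + common + rng.normal(0, 0.5, shots)
    z = 0.7 * intensity + rng.normal(0, 0.5, shots)      # intensity-driven only

    def pcov(a, b, i):
        """Partial covariance: cov(a,b) - cov(a,i) cov(b,i) / var(i)."""
        c = np.cov(np.vstack([a, b, i]))
        return c[0, 1] - c[0, 2] * c[1, 2] / c[2, 2]

    print("cov(x,y) =", np.cov(x, y)[0, 1], " pcov =", pcov(x, y, intensity))
    print("cov(x,z) =", np.cov(x, z)[0, 1], " pcov =", pcov(x, z, intensity))
    ```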

  2. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
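
    A toy sketch of the core mechanism: a single covariance propagation via the discrete Lyapunov update reproduces the statistics that a Monte Carlo campaign estimates from thousands of samples. The two-state pointing model and noise levels are invented for illustration, not SIM parameters:

    ```python
    import numpy as np

    dt = 0.1
    Phi = np.array([[1.0 - 0.5 * dt, dt],    # angle error, damped by feedback
                    [0.0, 1.0]])             # slowly wandering bias state
    Q = np.diag([1e-8, 1e-10])               # process noise per step
    P = np.diag([1e-6, 1e-8])                # initial error covariance

    # Linear covariance analysis: one deterministic propagation of P.
    P_lin = P.copy()
    for _ in range(600):
        P_lin = Phi @ P_lin @ Phi.T + Q

    # Monte Carlo check: many random trajectories, same second moments.
    rng = np.random.default_rng(8)
    X = rng.multivariate_normal([0, 0], P, size=20000)
    for _ in range(600):
        X = X @ Phi.T + rng.multivariate_normal([0, 0], Q, size=X.shape[0])
    print("linear covariance:", P_lin[0, 0], " Monte Carlo:", X[:, 0].var())
    ```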

  3. Covariance tracking: architecture optimizations for embedded systems

    NASA Astrophysics Data System (ADS)

    Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan

    2014-12-01

    Covariance matching techniques have recently grown in interest due to their good performances for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, it can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review on different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on two dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15 and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
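
    A sketch of the region covariance descriptor at the heart of the algorithm, in its plain per-region form (the integral-image and SIMD optimizations studied in the paper are not shown); the feature set (x, y, intensity, gradient magnitudes) is one common choice:

    ```python
    import numpy as np

    def region_covariance(img, x0, y0, w, h):
        """5x5 covariance of per-pixel features (x, y, I, |dI/dx|, |dI/dy|)."""
        gy, gx = np.gradient(img.astype(float))
        ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
        patch = (slice(y0, y0 + h), slice(x0, x0 + w))
        F = np.stack([xs.ravel(), ys.ravel(), img[patch].ravel(),
                      np.abs(gx[patch]).ravel(), np.abs(gy[patch]).ravel()])
        return np.cov(F)

    rng = np.random.default_rng(9)
    img = rng.random((120, 160))
    C = region_covariance(img, 40, 30, 32, 32)
    print(C.shape)   # (5, 5): a compact signature, independent of region size
    ```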

  4. Shrinkage covariance matrix approach for microarray data

    NASA Astrophysics Data System (ADS)

    Karjanto, Suryaefiza; Aripin, Rasimah

    2013-04-01

    Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples, due to various constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate in this setting. One commonly used technique is Hotelling's T2 statistic, a multivariate test statistic for comparing means between two groups. It requires that the number of observations (n) exceeds the number of genes (p) in the set, but in microarray studies it is common that n < p. This leads to a biased estimate of the covariance matrix. In this study, Hotelling's T2 statistic with a shrinkage approach is proposed to estimate the covariance matrix for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests, using a widely analysed diabetes data set as an illustration. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
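
    A hedged sketch of the shrinkage idea using the off-the-shelf Ledoit-Wolf estimator (one standard shrinkage estimator; the abstract does not specify the authors' exact shrinkage target), on fabricated n < p expression data:

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(10)
    n, p = 20, 200                       # far more "genes" than samples
    X = rng.normal(size=(n, p))
    Y = rng.normal(size=(n, p)) + 0.3    # shifted second group

    # The sample covariance is singular when n < p; Ledoit-Wolf shrinkage
    # yields a well-conditioned, invertible estimate from the pooled residuals.
    pooled = np.vstack([X - X.mean(0), Y - Y.mean(0)])
    S = LedoitWolf().fit(pooled).covariance_

    # Hotelling-style T2 with the shrunken covariance (n1 = n2 = n).
    d = X.mean(0) - Y.mean(0)
    T2 = (n / 2.0) * d @ np.linalg.solve(S, d)
    print("shrinkage-based T2 =", T2)
    ```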

  5. Development of covariance capabilities in EMPIRE code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  6. Development of Covariance Capabilities in EMPIRE Code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-12-15

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  7. Influence of neglected covariances on the estimation of Earth rotation parameters, geophysical excitation functions and second degree gravity field coefficients

    NASA Astrophysics Data System (ADS)

    Heiker, Andrea; Kutterer, Hansjörg

    2010-05-01

    The Earth rotation variability is redundantly described by the combination of Earth rotation parameters (polar motion and length of day), geophysical excitation functions and second degree gravity field coefficients. There exist some publications regarding the comparison of the Earth rotation parameters and excitation functions. However, most authors do not make use of the redundancy. In addition, existing covariances between the input parameters are not considered. As shown in previous publications, we use the redundancy for the independent mutual validation of the Earth rotation parameters, excitation functions and second degree gravity field coefficients, based on an extended Gauss-Markov model and least-squares adjustment. The work regarding the mutual validation is performed within the project P9 "Combined analysis and validation of Earth rotation models and observations" of the Research Unit FOR 584 ("Earth rotation and global dynamic processes"), which is funded by the German Research Foundation (DFG); see also the abstract "Combined Analysis and Validation of Earth Rotation Models and Observations". The adjustment model is determined first by the joint functional relations between the parameters and second by the stochastic model of the input data. A variance-covariance component estimation is included in the adjustment model. The functional model is based on the linearized Euler-Liouville equation. The construction of an appropriate stochastic model is prevented in practice by insufficient knowledge of variances and covariances. However, some numerical results derived from arbitrarily chosen stochastic models indicate that the stochastic model may be crucial for a correct estimation. The missing information is approximated by analyzing the input data. Synthetic variance-covariance matrices are constructed by considering empirical auto- and cross-correlation functions. The influence of neglected covariances is quantified and discussed by comparing the results derived

  8. Estimation of Covariances on Prompt Fission Neutron Spectra and Impact of the PFNS Model on the Vessel Fluence

    NASA Astrophysics Data System (ADS)

    Berge, Léonie; Litaize, Olivier; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Pénéliau, Yannick; Regnier, David

    2016-02-01

    As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although it is present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, within the framework of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They strongly influence the final covariance matrix and, therefore, the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the importance of the choice of fission spectrum model for a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particles emission during fission.

  9. ANALYSIS OF COVARIANCE WITH SPATIALLY CORRELATED SECONDARY VARIABLES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data sets which contain measurements on a spatially referenced response and covariate are analyzed using either co-kriging or spatial analysis of covariance. While co-kriging accounts for the correlation structure of the covariate, it is purely a predictive tool. Alternatively, spatial analysis of c...

  10. Covariate Selection in Propensity Scores Using Outcome Proxies

    ERIC Educational Resources Information Center

    Kelcey, Ben

    2011-01-01

    This study examined the practical problem of covariate selection in propensity scores (PSs) given a predetermined set of covariates. Because the bias reduction capacity of a confounding covariate is proportional to the concurrent relationships it has with the outcome and treatment, particular focus is set on how we might approximate…

  11. Eliciting Systematic Rule Use in Covariation Judgment [the Early Years].

    ERIC Educational Resources Information Center

    Shaklee, Harriet; Paszek, Donald

    Related research suggests that children may show some simple understanding of event covariations by the early elementary school years. The present experiments use a rule analysis methodology to investigate covariation judgments of children in this age range. In Experiment 1, children in second, third and fourth grade judged covariations on 12…

  12. Covariant Perturbation Expansion of Off-Diagonal Heat Kernel

    NASA Astrophysics Data System (ADS)

    Gou, Yu-Zi; Li, Wen-Du; Zhang, Ping; Dai, Wu-Sheng

    2016-07-01

    Covariant perturbation expansion is an important method in quantum field theory. In this paper an expansion up to arbitrary order for off-diagonal heat kernels in flat space based on the covariant perturbation expansion is given. In the literature, only diagonal heat kernels are calculated based on the covariant perturbation expansion.

  13. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.

  14. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  15. Covariant balance laws in continua with microstructure

    NASA Astrophysics Data System (ADS)

    Yavari, Arash; Marsden, Jerrold E.

    2009-02-01

    The purpose of this paper is to extend the Green-Naghdi-Rivlin balance of energy method to continua with microstructure. The key idea is to replace the group of Galilean transformations with the group of diffeomorphisms of the ambient space. A key advantage is that one obtains in a natural way all the needed balance laws on both the macro and micro levels along with two Doyle-Erickson formulas. We model a structured continuum as a triplet of Riemannian manifolds: a material manifold, the ambient space manifold of material particles and a director field manifold. The Green-Naghdi-Rivlin theorem and its extensions for structured continua are critically reviewed. We show that when the ambient space is Euclidean and when the microstructure manifold is the tangent space of the ambient space manifold, postulating a single balance of energy law and its invariance under time-dependent isometries of the ambient space, one obtains conservation of mass, balances of linear and angular momenta but not a separate balance of linear momentum. We develop a covariant elasticity theory for structured continua by postulating that energy balance is invariant under time-dependent spatial diffeomorphisms of the ambient space, which in this case is the product of two Riemannian manifolds. We then introduce two types of constrained continua in which microstructure manifold is linked to the reference and ambient space manifolds. In the case when at every material point, the microstructure manifold is the tangent space of the ambient space manifold at the image of the material point, we show that the assumption of covariance leads to balances of linear and angular momenta with contributions from both forces and micro-forces along with two Doyle-Ericksen formulas. We show that generalized covariance leads to two balances of linear momentum and a single coupled balance of angular momentum. Using this theory, we covariantly obtain the balance laws for two specific examples, namely elastic

  16. Jointly Modeling Event Time and Skewed-Longitudinal Data with Missing Response and Mismeasured Covariate for AIDS Studies.

    PubMed

    Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian

    2015-01-01

    In longitudinal studies it is often of interest to investigate how a repeatedly measured marker in time is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of the model errors in the longitudinal model is a routine assumption, but it may unrealistically obscure important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates, such as CD4 cell count, may be measured with substantial errors. Moreover, the responses may be subject to nonignorable missingness. Statistical analysis may be complicated dramatically in longitudinal-survival joint models where longitudinal data with skewness, missing values, and measurement errors are observed. In this article, we relax the distributional assumptions for the longitudinal models using a skewed (parametric) distribution and an unspecified (nonparametric) distribution given by a Dirichlet process prior, and we address the simultaneous influence of skewness, missingness, covariate measurement error, and the time-to-event process by jointly modeling three components (the response process with missing values, the covariate process with measurement errors, and the time-to-event process) linked through the random effects that characterize the underlying individual-specific longitudinal processes in a Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound, in comparison with potential models under various scenarios and different distributional specifications. PMID:24905593

  17. Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2011-01-01

    This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…

  18. USE OF HISTORICAL CONTROLS TO ADJUST FOR COVARIATES IN TREND TESTS FOR BINARY DATA. (R824757)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  19. Carbon Balance in the Amazon Basin: Factors Influencing the Accuracy of Eddy Covariance Measurements

    NASA Astrophysics Data System (ADS)

    Saleska, S. R.; Wofsy, S. C.; Daube, B. C.; Munger, J. W.; Kirchhoff, V. W.

    2001-12-01

    To assess the role of Amazonia as a source or a sink for atmospheric CO2, we are using ground-based biometry measurements together with whole-system CO2 fluxes (via eddy covariance) to explore the ecological and climatic controls on the carbon balance at an old-growth Amazon forest (Tapajos National Forest, Santarem, Para, Brazil). Here, we focus on the initial 6 months of eddy covariance results from a new system that measures eddy covariance at two levels (47 m and 58 m) and measures canopy CO2 storage in two ways: by cycling through tower-mounted sample inlets at 8 levels (over 16 minutes), and by making a single integrated measurement of all 8 levels simultaneously. The integrated canopy storage measurement gives a storage flux estimate with much lower noise than the more conventional method of interpolating through separate measurements staggered in time, allowing for improved estimates of net ecosystem exchange (eddy flux + storage flux). This provides a cleaner way to identify nighttime periods when turbulent mixing was weak. We show that such periods of weak mixing clearly cause underestimates in flux, but that a continuous low-noise storage flux measurement can make adjustments for the effects of such periods more robust. After adjusting for periods of weak mixing, this tropical forest site appears to be in approximate carbon balance during this 6-month period -- in agreement with the biometry observations over the first two years that woody biomass increment is in approximate balance with tree mortality.
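
    The two flux terms combined into the net ecosystem exchange estimate can be sketched in a few lines; the sampling rate, wind and CO2 series, and profile levels below are fabricated stand-ins for the tower data:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    hz, minutes = 10, 30
    n = hz * minutes * 60                          # one 30-minute block
    w = rng.normal(0.0, 0.3, n)                    # vertical wind [m s-1]
    c = 15.0 - 0.5 * (w / 0.3) + rng.normal(0.0, 0.2, n)   # CO2 [umol m-3]

    # Eddy flux = covariance of the fluctuations over the averaging block.
    eddy_flux = np.mean((w - w.mean()) * (c - c.mean()))   # [umol m-2 s-1]

    # Storage flux: rate of change of column-integrated CO2 below the sensor.
    z = np.array([1.0, 5.0, 10.0, 20.0, 30.0, 40.0, 47.0, 58.0])  # levels [m]
    dC_dt = rng.normal(0.0, 0.001, z.size)         # [umol m-3 s-1] per level
    storage_flux = np.trapz(dC_dt, z)              # [umol m-2 s-1]

    print("NEE =", eddy_flux + storage_flux)
    ```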

  20. Lasso adjustments of treatment effect estimates in randomized experiments

    PubMed Central

    Bloniarz, Adam; Liu, Hanzhong; Zhang, Cun-Hui; Sekhon, Jasjeet S.; Yu, Bin

    2016-01-01

    We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman–Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS. PMID:27382153
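
    A hedged sketch of the Lasso-plus-OLS variant highlighted above, in a simplified pooled form rather than the authors' exact procedure; the design, sparsity, and effect size are simulated:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(12)
    n, p = 200, 150
    X = rng.normal(size=(n, p))               # pre-treatment covariates
    treat = rng.integers(0, 2, n)             # randomized assignment
    y = 2.0 * treat + X[:, :5] @ np.ones(5) + rng.normal(size=n)

    # Step 1: Lasso selects a sparse set of prognostic covariates.
    sel = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)

    # Step 2: OLS on treatment plus the selected covariates; the treatment
    # coefficient is the covariate-adjusted effect estimate.
    Z = np.column_stack([np.ones(n), treat, X[:, sel]])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    print(f"{sel.size} covariates selected; adjusted effect = {beta[1]:.3f}")
    print(f"difference in means = {y[treat == 1].mean() - y[treat == 0].mean():.3f}")
    ```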

  1. Lasso adjustments of treatment effect estimates in randomized experiments.

    PubMed

    Bloniarz, Adam; Liu, Hanzhong; Zhang, Cun-Hui; Sekhon, Jasjeet S; Yu, Bin

    2016-07-01

    We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman-Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS. PMID:27382153

  2. An adaptive distance measure for use with nonparametric models

    SciTech Connect

    Garvey, D. R.; Hines, J. W.

    2006-07-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that using the standard Euclidean distance for data with failed inputs can result in significant errors in the AAKR predictions. By using the Adaptive Euclidean distance it is shown that high fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
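
    A minimal sketch of AAKR with the adaptive distance described above: query components outside the training range are simply dropped from the distance, so a failed channel cannot corrupt the similarity weights. The three-sensor exemplar set and bandwidth are illustrative assumptions:

    ```python
    import numpy as np

    def aakr_predict(exemplars, query, bandwidth=1.0):
        """Kernel-weighted average of exemplars; out-of-range inputs ignored."""
        lo, hi = exemplars.min(axis=0), exemplars.max(axis=0)
        ok = (query >= lo) & (query <= hi)             # in-range channels only
        d = np.sqrt(((exemplars[:, ok] - query[ok]) ** 2).sum(axis=1))
        w = np.exp(-d ** 2 / (2 * bandwidth ** 2))     # Gaussian kernel weights
        return w @ exemplars / w.sum()                 # corrected observation

    rng = np.random.default_rng(13)
    t = rng.uniform(0, 1, 500)
    exemplars = np.column_stack([t, 2 * t, 3 * t]) + rng.normal(0, 0.02, (500, 3))

    query = np.array([0.5, 1.0, 9.9])    # third sensor has failed high
    print("corrected estimate:", aakr_predict(exemplars, query))
    ```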

  3. Quantum energy inequalities and local covariance II: categorical formulation

    NASA Astrophysics Data System (ADS)

    Fewster, Christopher J.

    2007-11-01

    We formulate quantum energy inequalities (QEIs) in the framework of locally covariant quantum field theory developed by Brunetti, Fredenhagen and Verch, which is based on notions taken from category theory. This leads to a new viewpoint on the QEIs, and also to the identification of a new structural property of locally covariant quantum field theory, which we call local physical equivalence. Covariant formulations of the numerical range and spectrum of locally covariant fields are given and investigated, and a new algebra of fields is identified, in which fields are treated independently of their realisation on particular spacetimes and manifestly covariant versions of the functional calculus may be formulated.

  4. EVALUATION OF TUNGSTEN ISOTOPES IN THE FAST NEUTRON RANGE INCLUDING CROSS-SECTION COVARIANCE ESTIMATION.

    SciTech Connect

    CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.

    2007-04-22

    New evaluations for the tungsten isotopes {sup 180,182,183,184,186}W in the neutron energy range up to 60 MeV were produced. In the resonance range only minor adjustments to the resonance parameters were made due to a lack of adequate experimental data. Evaluations in the fast energy region were based on nuclear model calculations using the EMPIRE-2.19 code. Recently derived dispersive coupled-channel optical model potentials for W and Ta isotopes were instrumental in achieving a very good description of the available microscopic cross-section database. Model covariance data were generated with the Monte Carlo technique to produce a prior estimate for the covariance matrix. Experimental data were introduced through the GANDR system. The evaluated files were tested on selected fusion neutronics benchmarks and showed marked improvement compared to other existing evaluations.

  5. Adjusting the Chain Gear

    NASA Astrophysics Data System (ADS)

    Koloc, Z.; Korf, J.; Kavan, P.

    This adjustment (modification) concerns gear chains that transmit motion between sprocket wheels on parallel shafts. The purpose of the chain-gear adjustment is to remove unwanted effects by using a chain guide on the links (a sliding guide rail), ensuring a smooth fit of the chain rollers into the wheel tooth gaps.

  6. Adjustment to Recruit Training.

    ERIC Educational Resources Information Center

    Anderson, Betty S.

    The thesis examines problems of adjustment encountered by new recruits entering the military services. Factors affecting adjustment are discussed: the recruit training staff and environment, recruit background characteristics, the military's image, the changing values and motivations of today's youth, and the recruiting process. Sources of…

  7. Covariant constraints in ghost free massive gravity

    SciTech Connect

    Deffayet, C.; Mourad, J.; Zahariade, G.

    2013-01-01

    We show that the reformulation of the de Rham-Gabadadze-Tolley massive gravity theory using vielbeins leads to a very simple and covariant way to count constraints, and hence degrees of freedom. Our method singles out a subset of theories, in the de Rham-Gabadadze-Tolley family, where an extra constraint, needed to eliminate the Boulware-Deser ghost, is easily seen to appear. As a side result, we also introduce a new method, different from the Stuckelberg trick, to extract kinetic terms for the polarizations propagating in addition to those of the massless graviton.

  8. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  9. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  10. Covariant harmonic oscillators and coupled harmonic oscillators

    NASA Technical Reports Server (NTRS)

    Han, Daesoo; Kim, Young S.; Noz, Marilyn E.

    1995-01-01

    It is shown that the system of two coupled harmonic oscillators shares the basic symmetry properties with the covariant harmonic oscillator formalism, which provides a concise description of the basic features of relativistic hadrons observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.

  11. Inferring Meta-covariates in Classification

    NASA Astrophysics Data System (ADS)

    Harris, Keith; McMillan, Lisa; Girolami, Mark

    This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
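
    A simplified two-stage analogue of the meta-covariate idea, sketched in Python under the assumption that clustering and classification may be decoupled (the paper optimises both jointly through a single EM likelihood): genes are clustered, each cluster is averaged into a meta-covariate, and a probit model is fitted on the meta-covariates. All data and names here are illustrative.

    ```python
    # Two-stage sketch, not the authors' joint EM algorithm.
    import numpy as np
    from sklearn.cluster import KMeans
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(72, 500))           # 72 samples x 500 genes (synthetic)
    y = rng.integers(0, 2, size=72)          # binary class labels

    k = 10                                   # number of gene clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)

    # Meta-covariate j = average expression of the genes in cluster j.
    M = np.column_stack([X[:, labels == j].mean(axis=1) for j in range(k)])

    probit = sm.Probit(y, sm.add_constant(M)).fit(disp=0)
    print(probit.params)                     # one coefficient per gene cluster
    ```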

  12. Cosmology of a Covariant Galileon Field

    NASA Astrophysics Data System (ADS)

    de Felice, Antonio; Tsujikawa, Shinji

    2010-09-01

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.

  13. Methods and Issues for the Combined Use of Integral Experiments and Covariance Data: Results of a NEA International Collaborative Study

    SciTech Connect

    Palmiotti, Giuseppe; Salvatores, Massimo

    2014-04-01

    The Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD) established a Subgroup (called “Subgroup 33”) in 2009 on “Methods and issues for the combined use of integral experiments and covariance data.” The first stage was devoted to producing the description of different adjustment methodologies and assessing their merits. A detailed document related to this first stage has been issued. Nine leading organizations (often with a long and recognized expertise in the field) have contributed: ANL, CEA, INL, IPPE, JAEA, JSI, NRG, IRSN and ORNL. In the second stage a practical benchmark exercise was defined in order to test the reliability of the nuclear data adjustment methodology. A comparison of the results obtained by the participants and major lessons learned in the exercise are discussed in the present paper that summarizes individual contributions which often include several original developments not reported separately. The paper provides the analysis of the most important results of the adjustment of the main nuclear data of 11 major isotopes in a 33-group energy structure. This benchmark exercise was based on a set of 20 well defined integral parameters from 7 fast assembly experiments. The exercise showed that using a common shared set of integral experiments but different starting evaluated libraries and/or different covariance matrices, there is a good convergence of trends for adjustments. Moreover, a significant reduction of the original uncertainties is often observed. Using the a posteriori covariance data, there is a strong reduction of the uncertainties of integral parameters for reference reactor designs, mainly due to the new correlations in the a posteriori covariance matrix. Furthermore, criteria have been proposed and applied to verify the consistency of differential and integral data used in the adjustment. Finally, recommendations are given.
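
    For orientation, the generalized least-squares update that underlies most of the adjustment methodologies compared in this exercise can be written as follows; this is the standard textbook form, not any one participant's implementation. Here σ is the prior nuclear data vector, M its covariance, S the sensitivity matrix of the integral parameters, V the covariance of the integral experiments, and E and C the measured and calculated integral values.

    ```latex
    % Standard GLS adjustment update (textbook form, not any participant's
    % specific implementation).
    \begin{align}
      \boldsymbol{\sigma}' &= \boldsymbol{\sigma}
        + \mathbf{M}\mathbf{S}^{\mathsf{T}}
          \left(\mathbf{S}\mathbf{M}\mathbf{S}^{\mathsf{T}} + \mathbf{V}\right)^{-1}
          (\mathbf{E} - \mathbf{C}), \\
      \mathbf{M}' &= \mathbf{M}
        - \mathbf{M}\mathbf{S}^{\mathsf{T}}
          \left(\mathbf{S}\mathbf{M}\mathbf{S}^{\mathsf{T}} + \mathbf{V}\right)^{-1}
          \mathbf{S}\mathbf{M}.
    \end{align}
    ```

    The updated matrix M' is the a posteriori covariance; the new correlations it acquires are what drive the reduction of integral-parameter uncertainties reported above.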

  14. Methods and Issues for the Combined Use of Integral Experiments and Covariance Data: Results of a NEA International Collaborative Study

    NASA Astrophysics Data System (ADS)

    Salvatores, M.; Palmiotti, G.; Aliberti, G.; Archier, P.; De Saint Jean, C.; Dupont, E.; Herman, M.; Ishikawa, M.; Ivanova, T.; Ivanov, E.; Kim, S.-J.; Kodeli, I.; Manturov, G.; McKnight, R.; Pelloni, S.; Perfetti, C.; Plompen, A. J. M.; Rearden, B. T.; Rochman, D.; Sugino, K.; Trkov, A.; Wang, W.; Wu, H.; Yang, W.-S.

    2014-04-01

    The Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD) established a Subgroup (called "Subgroup 33") in 2009 on "Methods and issues for the combined use of integral experiments and covariance data." The first stage was devoted to producing the description of different adjustment methodologies and assessing their merits. A detailed document related to this first stage has been issued. Nine leading organizations (often with a long and recognized expertise in the field) have contributed: ANL, CEA, INL, IPPE, JAEA, JSI, NRG, IRSN and ORNL. In the second stage a practical benchmark exercise was defined in order to test the reliability of the nuclear data adjustment methodology. A comparison of the results obtained by the participants and major lessons learned in the exercise are discussed in the present paper that summarizes individual contributions which often include several original developments not reported separately. The paper provides the analysis of the most important results of the adjustment of the main nuclear data of 11 major isotopes in a 33-group energy structure. This benchmark exercise was based on a set of 20 well defined integral parameters from 7 fast assembly experiments. The exercise showed that using a common shared set of integral experiments but different starting evaluated libraries and/or different covariance matrices, there is a good convergence of trends for adjustments. Moreover, a significant reduction of the original uncertainties is often observed. Using the a-posteriori covariance data, there is a strong reduction of the uncertainties of integral parameters for reference reactor designs, mainly due to the new correlations in the a-posteriori covariance matrix. Furthermore, criteria have been proposed and applied to verify the consistency of differential and integral data used in the adjustment. Finally, recommendations are given for an

  15. Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study

    PubMed Central

    Marmarelis, Vasilis Z.; Berger, Theodore W.

    2009-01-01

    Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to synergistically use the two approaches, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609
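
    For reference, the second-order discrete Volterra expansion that such non-parametric models take (standard form; the kernels are what is estimated from input-output data):

    ```latex
    % Second-order discrete Volterra expansion (standard form); the kernels
    % k_0, k_1, k_2 are estimated from input-output data and M is the memory
    % length.
    \[
      y(n) = k_0
           + \sum_{m=0}^{M} k_1(m)\, x(n-m)
           + \sum_{m_1=0}^{M}\sum_{m_2=0}^{M} k_2(m_1,m_2)\, x(n-m_1)\, x(n-m_2)
    \]
    ```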

  16. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
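
    Under this framework, the classical nonparametric line-transect estimator takes the form below: n detections on a transect of total length L, with f the probability density of right-angle distances. Any nonparametric estimate of f(0) from the observed distances then yields a density estimate.

    ```latex
    % Classical nonparametric line-transect density estimator: n detections
    % on a transect of total length L, f the pdf of right-angle distances,
    % valid under the single assumption g(0) = 1.
    \[
      \hat{D} = \frac{n\,\hat{f}(0)}{2L}
    \]
    ```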

  17. Non-parametric seismic hazard analysis in the presence of incomplete data

    NASA Astrophysics Data System (ADS)

    Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush

    2016-07-01

    The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distribution, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study aims to introduce an imputation procedure for completing earthquake catalog data that will allow the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate earthquake magnitude distribution in Tehran, the capital city of Iran.

  18. A nonparametric method of multi-step ahead forecasting in diffusion processes

    NASA Astrophysics Data System (ADS)

    Yamamura, Mariko; Shoji, Isao

    2010-06-01

    This paper provides a nonparametric model of multi-step ahead forecasting in diffusion processes. The model is constructed from the local linear model with the Gaussian kernel. Simulation studies evaluate its multi-step ahead forecasting performance against the global linear model and show that the nonparametric model forecasts better. The paper also conducts an empirical analysis of intraday data on the Japanese stock price index and of heart-rate time series. The forecasting performance of the two models differs little for the stock price index, but the nonparametric model performs significantly better in the analysis of the heart rates.
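
    A minimal sketch of the ingredients named in the abstract, assuming a univariate series and a fixed bandwidth h (both the data and the bandwidth are made up): a local linear fit with Gaussian kernel weights produces a one-step forecast, which is then iterated for multi-step ahead forecasting.

    ```python
    # Not the authors' code: local linear model with a Gaussian kernel,
    # iterated forward for multi-step ahead forecasts.
    import numpy as np

    def local_linear_step(x0, X, Y, h):
        """One-step forecast at x0 from training pairs (X[t] -> Y[t])."""
        w = np.exp(-0.5 * ((X - x0) / h) ** 2)          # Gaussian kernel weights
        A = np.column_stack([np.ones_like(X), X - x0])  # local linear design
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ Y)
        return beta[0]                                  # fitted value at x0

    def forecast(series, steps, h):
        X, Y = series[:-1], series[1:]
        x = series[-1]
        path = []
        for _ in range(steps):                          # iterate the one-step model
            x = local_linear_step(x, X, Y, h)
            path.append(x)
        return np.array(path)

    rng = np.random.default_rng(1)
    s = np.cumsum(rng.normal(size=500))                 # synthetic diffusion-like path
    print(forecast(s, steps=5, h=0.5))
    ```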

  19. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including

  20. Computer codes for checking, plotting and processing of neutron cross-section covariance data and their application

    SciTech Connect

    Sartori, E.

    1992-12-31

    This paper presents a brief review of computer codes concerned with checking, plotting, processing, and use of covariances of neutron cross-section data. It concentrates on those available from the computer code information centers of the United States and the OECD/Nuclear Energy Agency. Emphasis is also placed on codes using covariances for specific applications such as uncertainty analysis, data adjustment and data consistency analysis. Recent evaluations contain neutron cross section covariance information for all isotopes of major importance for technological applications of nuclear energy. It is therefore important that the available software tools needed for taking advantage of this information are widely known, as they permit the determination of better safety margins and allow the optimization of more economical designs of nuclear energy systems.

  1. Marital, reproductive, and educational behaviors covary with life expectancy.

    PubMed

    Krupp, Daniel Brian

    2012-12-01

    Theories of "life history evolution" suggest that individuals might adjust the timing of marriage and reproduction, as well as their propensity to terminate a marriage or pregnancy and invest in skill development, in response to indicators of the locally prevailing level of life expectancy. In particular, such theories generate the hypothesis that foreshortened time horizons lead to hastened reproduction and marriage whereas lengthier time horizons increase the likelihood of reproductive and marital termination and lead to greater investment in education. Here, I show that the scheduling and occurrence of marital and reproductive behavior (including both initiation and termination), as well as levels of educational attainment and investment, covary with life expectancy, even after controlling for the effects of affluence. In analyses of variation in marital, reproductive, and educational behaviors at two jurisdictional levels in Canada, life expectancy was positively correlated with patterns of age-specific fertility, age at first marriage, divorce, abortion, conferral of high school and higher education degrees (with the exception of the trades) and mean number of years of schooling. The large and highly consistent relationships observed between life expectancy and the behaviors under investigation suggest that these associations may be mediated by individual "perceptions" of life expectancy, though more research is needed before conclusions can be firmly reached. PMID:22484517

  2. Environmental dynamics modulate covariation of choice and timing.

    PubMed

    Subramaniam, Shrinidhi; Kyonka, Elizabeth G E

    2016-03-01

    Response allocation between delayed reinforcers is presumably a function of the discrimination of those delays. In the present experiment, we analyzed the functional relation between response allocation and temporal discrimination across different environmental dynamics. Three pigeons pecked for food in a concurrent-chain schedule. Concurrent variable-interval initial links produced fixed-interval (FI) terminal links. Start and stop times, single-trial measures of temporal discrimination, were obtained from occasional 'no-food' terminal links. In dynamic, rapid-acquisition conditions, terminal links were FI 10s and 20s and the location of the initial link leading to the shorter terminal link varied unpredictably across sessions. In the static conditions, both terminal links were either "uniform" FI 15-s schedules or one terminal link was "fixed" at FI 10s and the other at 20s. Response allocation and start and stop times adjusted within sessions in dynamic conditions and across sessions of static conditions. Residuals from regressions of expected on programmed immediacy ratios were positively correlated to a greater magnitude in dynamic than static conditions. This change in residual covariation demonstrated that environmental dynamics modulated the relation between choice and timing. PMID:26802787

  3. Comparative Analysis of Evapotranspiration Using Eddy Covariance

    NASA Astrophysics Data System (ADS)

    BAE, H.; Ji, H.; Lee, B.; Nam, K.; Jang, B.; Lee, C.; Jung, H.

    2013-12-01

    The eddy covariance method has been widely used to quantify evapotranspiration. However, independent measurements of energy components such as the latent and sensible heat fluxes often lead to under-measurement, a problem commonly known as a lack of closure of the surface energy balance. In response to this methodological problem, this study specifically addresses the correction of the latent and sensible heat fluxes. The energy components in agricultural land and grassland were measured using the eddy covariance method from January 2013. Comparing the available energy (Rn-G) with the sum of the latent and sensible heat fluxes gave R-squared values of 0.72 in the agricultural land and 0.78 in the grassland, indicating that the latent and sensible heat fluxes were under-measured. The obtained latent and sensible heat fluxes were then modified using the Bowen-ratio closure method. After this correction, the sum of the latent and sensible heat fluxes increased by 39.7 percent in the agricultural land and 32.2 percent in the grassland. Evapotranspiration will be calculated with both the unmodified and modified latent heat flux values, and the results will then be thoroughly compared. The results will finally be verified by comparison with evapotranspiration obtained from an energy-balance-based model.
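
    The Bowen-ratio closure correction described above admits a compact sketch: the measured sensible (H) and latent (LE) heat fluxes are rescaled so that their sum matches the available energy Rn - G while their ratio is preserved. The flux values below are illustrative only.

    ```python
    # Bowen-ratio closure: scale H and LE to close the energy balance
    # while preserving the Bowen ratio beta = H / LE.
    def bowen_ratio_closure(H, LE, Rn, G):
        beta = H / LE                      # Bowen ratio, preserved by the correction
        LE_corr = (Rn - G) / (1.0 + beta)
        H_corr = beta * LE_corr
        return H_corr, LE_corr

    # Illustrative fluxes in W m^-2 (not measured values from this study).
    H_corr, LE_corr = bowen_ratio_closure(H=120.0, LE=200.0, Rn=420.0, G=30.0)
    print(H_corr + LE_corr)                # equals Rn - G = 390 W m^-2
    ```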

  4. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings [bouchaud, stanley], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [bouchaud], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [bouchaud, stanley] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
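
    A sketch of the kind of random-matrix filtering these findings motivate, assuming the Marchenko-Pastur upper edge for unit-variance returns (the threshold and the toy data are assumptions, not this paper's procedure): eigenvalues of the sample correlation matrix below the edge are treated as noise and flattened to their mean, preserving the trace.

    ```python
    # Random-matrix "noise clipping" of a sample correlation matrix.
    import numpy as np

    def clip_correlation(R, T):
        N = R.shape[0]
        lam_max = (1 + np.sqrt(N / T)) ** 2    # Marchenko-Pastur upper edge
        vals, vecs = np.linalg.eigh(R)
        noise = vals < lam_max
        vals[noise] = vals[noise].mean()       # flatten the noise band (trace kept)
        # Note: the diagonal is no longer exactly one; a full treatment
        # would renormalise the result back to a correlation matrix.
        return vecs @ np.diag(vals) @ vecs.T

    rng = np.random.default_rng(2)
    returns = rng.normal(size=(400, 100))      # T=400 days, N=100 assets (toy)
    R = np.corrcoef(returns, rowvar=False)
    R_clean = clip_correlation(R, T=400)
    ```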

  5. Estimating the power spectrum covariance matrix with fewer mock samples

    NASA Astrophysics Data System (ADS)

    Pearson, David W.; Samushia, Lado

    2016-03-01

    The covariance matrices of power-spectrum (P(k)) measurements from galaxy surveys are difficult to compute theoretically. The current best practice is to estimate covariance matrices by computing a sample covariance of a large number of mock catalogues. The next generation of galaxy surveys will require thousands of large volume mocks to determine the covariance matrices to desired accuracy. The errors in the inverse covariance matrix are larger and scale with the number of P(k) bins, making the problem even more acute. We develop a method of estimating covariance matrices using a theoretically justified, few-parameter model, calibrated with mock catalogues. Using a set of 600 BOSS DR11 mock catalogues, we show that a seven-parameter model is sufficient to fit the covariance matrix of BOSS DR11 P(k) measurements. The covariance computed with this method is better than the sample covariance at any number of mocks; only ~100 mocks are required for it to fully converge, and the inverse covariance matrix converges at the same rate. This method should work equally well for the next generation of galaxy surveys, although a demand for higher accuracy may require adding extra parameters to the fitting function.

  6. Impact of the 235U Covariance Data in Benchmark Calculations

    SciTech Connect

    Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.

  7. Least squares regression methods for clustered ROC data with discrete covariates.

    PubMed

    Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

    2016-07-01

    The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used to estimate the ROC curve from correlated data, their development for clustered data has not been studied, and the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuous property of the true underlying curve. The least squares methods are shown to be more efficient than the existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. PMID:26848938

  8. Variable selection in covariate dependent random partition models: an application to urinary tract infection.

    PubMed

    Barcella, William; Iorio, Maria De; Baio, Gianluca; Malone-Lee, James

    2016-04-15

    Lower urinary tract symptoms can indicate the presence of urinary tract infection (UTI), a condition that, if it becomes chronic, requires expensive and time-consuming care and reduces quality of life. Detecting the presence and gravity of an infection from the earliest symptoms is therefore highly valuable. Typically, white blood cell (WBC) count measured in a sample of urine is used to assess UTI. We consider clinical data from 1341 patients at their first visit in which UTI (i.e. WBC ≥ 1) is diagnosed. In addition, for each patient, a clinical profile of 34 symptoms was recorded. In this paper, we propose a Bayesian nonparametric regression model based on the Dirichlet process prior, aimed at providing clinicians with a meaningful clustering of the patients based on both the WBC (response variable) and possible patterns within the symptom profiles (covariates). This is achieved by assuming a probability model for the symptoms as well as for the response variable. To identify the symptoms most associated with UTI, we specify a spike and slab base measure for the regression coefficients: this induces dependence of symptom selection on cluster assignment. Posterior inference is performed through Markov Chain Monte Carlo methods. PMID:26536840
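
    The spike and slab idea used for the regression coefficients has the generic form below (standard notation, not the authors' exact base measure): a point mass at zero mixed with a diffuse slab, so that each symptom's coefficient can be switched off.

    ```latex
    % Generic spike and slab prior on a regression coefficient (standard
    % notation, not the authors' exact specification): a point mass at zero
    % mixed with a diffuse normal slab, with inclusion probability 1 - pi.
    \[
      \beta_j \;\sim\; \pi\,\delta_0 + (1-\pi)\,\mathrm{N}(0,\tau^2)
    \]
    ```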

  9. A general non-parametric classifier applied to discriminating surface water from terrain shadows

    NASA Technical Reports Server (NTRS)

    Eppler, W. G.

    1975-01-01

    A general non-parametric classifier is described in the context of discriminating surface water from terrain shadows. In addition to using non-parametric statistics, this classifier permits the use of a cost matrix to assign different penalties to various types of misclassifications. The approach also differs from conventional classifiers in that it applies the maximum-likelihood criterion to overall class probabilities as opposed to the standard practice of choosing the most likely individual subclass. The classifier performance is evaluated using two different effectiveness measures for a specific set of ERTS data.
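
    One way such a cost matrix enters a decision rule, sketched under hypothetical costs and class probabilities (this is the generic expected-cost rule, not necessarily the paper's exact criterion): instead of picking the most likely class, pick the class that minimises the expected misclassification cost.

    ```python
    # Cost-sensitive decision between class 0 (water) and class 1 (shadow).
    import numpy as np

    cost = np.array([[0.0, 5.0],    # decide water: free if water, costly if shadow
                     [1.0, 0.0]])   # decide shadow: small cost if truth is water
    p = np.array([0.45, 0.55])      # hypothetical class probabilities for a pixel

    expected = cost @ p             # expected cost of each possible decision
    decision = int(np.argmin(expected))
    print(decision)                 # 0 = water, 1 = shadow
    ```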

  10. Two bootstrapping routines for obtaining imprecision estimates for nonparametric parameter distributions in nonlinear mixed effects models.

    PubMed

    Baverel, Paul G; Savic, Radojka M; Karlsson, Mats O

    2011-02-01

    When parameter estimates are used in predictions or decisions, it is important to consider the magnitude of imprecision associated with the estimation. Such imprecision estimates are, however, presently lacking for nonparametric algorithms intended for nonlinear mixed effects models. The objective of this study was to develop resampling-based methods for estimating imprecision in nonparametric distribution (NPD) estimates obtained in NONMEM. A one-compartment PK model was used to simulate datasets for which the random effect of clearance conformed to (i) normal, (ii) bimodal and (iii) heavy-tailed underlying distributional shapes. Re-estimation was conducted assuming normality under FOCE, and NPDs were estimated subsequent to this step. Imprecision in the NPD was then estimated by means of two different resampling procedures. The first (full) method relies on bootstrap sampling from the raw data and a re-estimation of both the preceding parametric (FOCE) and the nonparametric step. The second (simplified) method relies on bootstrap sampling of individual nonparametric probability distributions. Nonparametric 95% confidence intervals (95% CIs) were obtained and mean errors (MEs) of the 95% CI width were computed. Standard errors (SEs) of nonparametric population estimates were obtained using the simplified method and evaluated through 100 stochastic simulations followed by estimations (SSEs). Both methods were successfully implemented to provide imprecision estimates for NPDs. The imprecision estimates adequately reflected the reference imprecision in all distributional cases and regardless of the numbers of individuals in the original data. Relative MEs of the 95% CI width of the CL marginal density when the original data contained 200 individuals were equal to: (i) -22 and -12%, (ii) -22 and -9%, (iii) -13 and -5% for the full and simplified (n = 100) methods, respectively. SEs derived from the simplified method were consistent with the ones obtained from 100 SSEs. In conclusion
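
    A minimal sketch of the simplified (second) resampling idea, assuming each individual's nonparametric distribution is available as a probability vector over common support points (the data here are synthetic): individuals are resampled with replacement, the population distribution is recomputed, and percentile confidence intervals follow.

    ```python
    # Bootstrap of individual nonparametric probability distributions.
    import numpy as np

    rng = np.random.default_rng(3)
    n_id, n_support = 200, 50
    P = rng.dirichlet(np.ones(n_support), size=n_id)   # individual probability vectors

    boot_pop = []
    for _ in range(1000):
        idx = rng.integers(0, n_id, size=n_id)         # resample individuals
        boot_pop.append(P[idx].mean(axis=0))           # population distribution
    boot_pop = np.array(boot_pop)

    lo, hi = np.percentile(boot_pop, [2.5, 97.5], axis=0)  # pointwise 95% CI
    ```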

  11. Robust nonparametric estimation of monotone regression functions with interval-censored observations.

    PubMed

    Zhang, Ying; Cheng, Gang; Tu, Wanzhu

    2016-09-01

    Nonparametric estimation of monotone regression functions is a classical problem of practical importance. Robust estimation of monotone regression functions in situations involving interval-censored data is a challenging yet unresolved problem. Herein, we propose a nonparametric estimation method based on the principle of isotonic regression. Using empirical process theory, we show that the proposed estimator is asymptotically consistent under a specific metric. We further conduct a simulation study to evaluate the performance of the estimator in finite sample situations. As an illustration, we use the proposed method to estimate the mean body weight functions in a group of adolescents after they reach pubertal growth spurt. PMID:26757488
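
    The building block of the proposed estimator is isotonic regression, sketched below for exact (non-censored) observations with scikit-learn; the interval-censored extension is the paper's contribution and is not reproduced here. Data are synthetic.

    ```python
    # Monotone (isotonic) fit of weight versus age on synthetic data.
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(4)
    age = np.sort(rng.uniform(12, 18, size=120))
    weight = 40 + 2.5 * (age - 12) + rng.normal(scale=3, size=120)

    iso = IsotonicRegression(increasing=True)
    weight_fit = iso.fit_transform(age, weight)   # monotone nondecreasing fit
    ```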

  12. Sparsity-inspired nonparametric probability characterization for radio propagation in body area networks.

    PubMed

    Yang, Xiaodong; Yang, Shuyuan; Abbasi, Qammer Hussain; Zhang, Zhiya; Ren, Aifeng; Zhao, Wei; Alomainy, Akram

    2015-05-01

    Parametric probability models are common references for channel characterization. However, the limited number of samples and uncertainty of the propagation scenario affect the characterization accuracy of parametric models for body area networks. In this paper, we propose a sparse nonparametric probability model for body area wireless channel characterization. The path loss and root-mean-square delay, which are significant wireless channel parameters, can be learned from this nonparametric model. A comparison with available parametric models shows that the proposed model is very feasible for the body area propagation environment and can be seen as a significant supplement to parametric approaches. PMID:25014979

  13. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    PubMed

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. PMID:24354490

  14. Quantification of model error via an interval model with nonparametric error bound

    NASA Technical Reports Server (NTRS)

    Lew, Jiann-Shiun; Keel, Lee H.; Juang, Jer-Nan

    1993-01-01

    The quantification of model uncertainty is becoming increasingly important as robust control becomes a standard tool for control system design and analysis. This paper presents an algorithm that effectively characterizes the model uncertainty in terms of parametric and nonparametric uncertainties. The algorithm utilizes the frequency domain model error, which is estimated from the spectra of the output error and input data. The parametric uncertainty is represented as an interval transfer function, while the nonparametric uncertainty is bounded by a designed error bound transfer function. Both discrete and continuous systems are discussed in this paper. The algorithm is applied to the Mini-Mast example, and a detailed analysis is given.

  15. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…

  16. SLIT ADJUSTMENT CLAMP

    DOEpatents

    McKenzie, K.R.

    1959-07-01

    An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.

  17. CMS Frailty Adjustment Model

    PubMed Central

    Kautter, John; Pope, Gregory C.

    2004-01-01

    The authors document the development of the CMS frailty adjustment model, a Medicare payment approach that adjusts payments to a Medicare managed care organization (MCO) according to the functional impairment of its community-residing enrollees. Beginning in 2004, this approach is being applied to certain organizations, such as Program of All-Inclusive Care for the Elderly (PACE), that specialize in providing care to the community-residing frail elderly. In the future, frailty adjustment could be extended to more Medicare managed care organizations. PMID:25372243

  18. Using a DEA Management Tool through a Nonparametric Approach: An Examination of Urban-Rural Effects on Thai School Efficiency

    ERIC Educational Resources Information Center

    Kantabutra, Sangchan

    2009-01-01

    This paper examines urban-rural effects on public upper-secondary school efficiency in northern Thailand. In the study, efficiency was measured by a nonparametric technique, data envelopment analysis (DEA). Urban-rural effects were examined through a Mann-Whitney nonparametric statistical test. Results indicate that urban schools appear to have…

  19. Software to use the non-parametric k-nearest neighbor approach to estimate soil water retention

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-parametric approaches are being used in various fields to address classification type problems, as well as to estimate continuous variables. One type of non-parametric lazy learning algorithm, the k-Nearest Neighbor (k-NN) algorithm, has been applied as a pedotransfer technique to estimate soil water retention.

  20. Remotely Adjustable Hydraulic Pump

    NASA Technical Reports Server (NTRS)

    Kouns, H. H.; Gardner, L. D.

    1987-01-01

    Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.

  1. Covariates of intravenous paracetamol pharmacokinetics in adults

    PubMed Central

    2014-01-01

    Background Pharmacokinetic estimates for intravenous paracetamol in individual adult cohorts differ to a certain extent, and understanding the covariates of these differences may guide dose individualization. In order to assess covariate effects on intravenous paracetamol disposition in adults, pharmacokinetic data from discrete studies were pooled. Methods This pooled analysis was based on 7 studies, resulting in 2755 time-concentration observations in 189 adults (mean age 46 SD 23 years; weight 73 SD 13 kg) given intravenous paracetamol. The effects of size, age, pregnancy and other clinical settings (intensive care, high dependency, orthopaedic or abdominal surgery) on clearance and volume of distribution were explored using non-linear mixed effects models. Results Paracetamol disposition was best described using normal fat mass (NFM) with allometric scaling as a size descriptor. A three-compartment linear disposition model revealed that the population parameter estimates (between-subject variability, %) were central volume (V1) 24.6 (55.5%) L/70 kg with peripheral volumes of distribution V2 23.1 (49.6%) L/70 kg and V3 30.6 (78.9%) L/70 kg. Clearance (CL) was 16.7 (24.6%) L/h/70 kg and inter-compartment clearances were Q2 67.3 (25.7%) L/h/70 kg and Q3 2.04 (71.3%) L/h/70 kg. Clearance and V2 decreased only slightly with age. Sex differences in clearance were minor and of no significance. Clearance, relative to median values, was increased during pregnancy (FPREG = 1.14) and decreased during abdominal surgery (FABDCL = 0.715). Patients undergoing orthopaedic surgery had a reduced V2 (FORTHOV = 0.649), while those in intensive care had increased V2 (FICV = 1.51). Conclusions Size and age are important covariates for paracetamol pharmacokinetics, explaining approximately 40% of clearance and V2 variability. Dose individualization in adult subpopulations would achieve little benefit in the scenarios explored. PMID:25342929
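
    An illustrative use of the reported population values, assuming the standard allometric exponent of 0.75 for clearance (the exponent is an assumption, not stated in the abstract, and computing NFM itself requires sex and body-composition terms not shown here):

    ```python
    # Illustrative clearance scaling from the reported population estimates.
    def clearance(nfm_kg, pregnant=False, abdominal_surgery=False):
        CL_std = 16.7                            # L/h/70 kg (population value)
        cl = CL_std * (nfm_kg / 70.0) ** 0.75    # allometric scaling (assumed exponent)
        if pregnant:
            cl *= 1.14                           # FPREG from the abstract
        if abdominal_surgery:
            cl *= 0.715                          # FABDCL from the abstract
        return cl

    print(clearance(70.0))                       # 16.7 L/h at standard size
    print(clearance(55.0, pregnant=True))
    ```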

  2. Flux Partitioning by Isotopic Eddy Covariance

    NASA Astrophysics Data System (ADS)

    Wehr, R.; Munger, J. W.; Nelson, D. D.; McManus, J. B.; Zahniser, M. S.; Wofsy, S. C.; Saleska, S. R.

    2011-12-01

    Net ecosystem-atmosphere exchange of CO2 is routinely measured by eddy covariance at sites around the world, but studies of ecosystem processes are more interested in the gross photosynthetic and respiratory fluxes that comprise the net flux. The standard method of partitioning the net flux into these components has been to extrapolate nighttime respiration into daytime based on a relationship between nighttime respiration, temperature, and sometimes moisture. However, such relationships generally account for only a small portion of the variation in nighttime respiration, and the assumption that they can predict respiration throughout the day is dubious. A promising alternative method, known as isotopic flux partitioning, works by identifying the stable isotopic signatures of photosynthesis and respiration in the CO2 flux. We have used this method to partition the net flux at Harvard Forest, MA, based on eddy covariance measurements of the net 12CO2 and 13CO2 fluxes (as well as measurements of the sensible and latent heat fluxes and other meteorological variables). The CO2 isotopologues were measured at 4 Hz by an Aerodyne quantum cascade laser spectrometer with a δ13C precision of 0.4 ‰ in 0.25 sec and 0.02 ‰ in 100 sec. In the absence of such high-frequency, high-precision isotopic measurements, past attempts at isotopic flux partitioning have combined isotopic flask measurements with high-frequency (total) CO2 measurements to estimate the isoflux (the EC/flask approach). Others have used a conditional flask sampling approach called hyperbolic relaxed eddy accumulation (HREA). We 'sampled' our data according to each of these approaches, for comparison, and found disagreement in the calculated fluxes of ~10% for the EC/flask approach, and ~30% for HREA, at midday. To our knowledge, this is the first example of flux partitioning by isotopic eddy covariance. Wider use of this method, enabled by a new generation of laser spectrometers, promises to open a new window
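
    The isotopic mass balance that underlies this partitioning can be stated compactly in delta notation; this is the standard simplified form, not the authors' full treatment. F is the measured net CO2 flux, F_delta the measured isoflux, and delta_P, delta_R the isotopic signatures of photosynthesis and respiration.

    ```latex
    % Isotopic mass balance in delta notation (standard simplification):
    % two measurements (net flux and isoflux) determine the two gross
    % components once the end-member signatures are known.
    \begin{align}
      F &= F_P + F_R, &
      F_\delta &= \delta_P F_P + \delta_R F_R, \\
      F_P &= \frac{F_\delta - \delta_R F}{\delta_P - \delta_R}, &
      F_R &= F - F_P.
    \end{align}
    ```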

  3. Electromagnetic structure of the Delta baryon within the covariant spectator theory

    SciTech Connect

    M. T. Pena, G. Ramalho, Franz Gross

    2010-12-01

    We calculated all the electromagnetic observables for the nucleon and its lowest-lying Delta(1232) excitation within a constituent quark model for those two baryons based on the covariant spectator theory. Once the reactions γN → N and γN → Δ were described, we predicted, without further adjustment of parameters, the four electromagnetic Delta form factors: the electric charge G_{E0}, the magnetic dipole G_{M1}, the electric quadrupole G_{E2} and the magnetic octupole G_{M3}. The results are compatible with the available experimental data and recent lattice QCD data.

  4. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subjected to many simulations and flight tests. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.

  5. Multi-atom entanglement engineering and phase-covariant cloning via adiabatic passage

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao

    2005-05-01

    A scheme is proposed for the generation of entangled states for multiple atoms trapped in an optical cavity via adiabatic passage. In the scheme, both the atomic system and the cavity have no probability of being excited. Thus both the atomic spontaneous emission and the cavity decay are suppressed, which makes our scheme very robust against decoherence. Taking advantage of the adiabatic process, the scheme does not require accurate adjustment of the interaction time. Furthermore, the scheme is scalable in principle. The idea can also be used for realizing phase-covariant cloning.

  6. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer then resumes processing of additional data sets. Other conditions cause warnings to be issued, and processing continues with the current data set.
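
    In matrix form, the weighted least-squares step this program performs is the familiar normal-equation solution: A is the design matrix of the linearized direction, azimuth, and distance observation equations, W the weight matrix, ℓ the misclosure vector, and x̂ the coordinate shifts at the adjustable stations.

    ```latex
    % Weighted least-squares normal-equation solution underlying the
    % variation of coordinates method.
    \[
      \hat{\mathbf{x}} =
        \left(\mathbf{A}^{\mathsf{T}}\mathbf{W}\mathbf{A}\right)^{-1}
        \mathbf{A}^{\mathsf{T}}\mathbf{W}\,\boldsymbol{\ell}
    \]
    ```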

  7. Covariant entropy bound and loop quantum cosmology

    SciTech Connect

    Ashtekar, Abhay; Wilson-Ewing, Edward

    2008-09-15

    We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.

  8. Covariant Lyapunov analysis of chaotic Kolmogorov flows.

    PubMed

    Inubushi, Masanobu; Kobayashi, Miki U; Takehiro, Shin-ichi; Yamada, Michio

    2012-01-01

    Hyperbolicity is an important concept in dynamical systems theory; however, we know little about the hyperbolicity of concrete physical systems, including fluid motions governed by the Navier-Stokes equations. Here, we study numerically the hyperbolicity of the Navier-Stokes equation on a two-dimensional torus (Kolmogorov flows) using the method of covariant Lyapunov vectors developed by Ginelli et al. [Phys. Rev. Lett. 99, 130601 (2007)]. We calculate the angle between the local stable and unstable manifolds along an orbit of a chaotic solution to evaluate the hyperbolicity. We find that the attractor of chaotic Kolmogorov flows is hyperbolic at small Reynolds numbers, but that smaller angles between the local stable and unstable manifolds are observed at larger Reynolds numbers, and the attractor appears to be nonhyperbolic at certain Reynolds numbers. We also observe some relations between these hyperbolic properties and physical properties such as the time correlation of the vorticity and the energy dissipation rate. PMID:22400681

  9. Generation of phase-covariant quantum cloning

    SciTech Connect

    Karimipour, V.; Rezakhani, A.T.

    2002-11-01

    It is known that in phase-covariant quantum cloning, the equatorial states on the Bloch sphere can be cloned with a fidelity higher than the optimal bound established for universal quantum cloning. We generalize this concept to include other states on the Bloch sphere with a definite z component of spin. It is shown that once we know the z component, we can always clone a state with a fidelity higher than the universal value and that of equatorial states. We also make a detailed study of the entanglement properties of the output copies and show that the equatorial states are the only states that give rise to a separable density matrix for the outputs.

  10. EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.

    SciTech Connect

    HERMAN,M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H. CHO, Y.-S.

    2007-04-22

    The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.

  11. Covariant chronogeometry and extreme distances: Elementary particles

    PubMed Central

    Segal, I. E.; Jakobsen, H. P.; Ørsted, B.; Paneitz, S. M.; Speh, B.

    1981-01-01

    We study a variant of elementary particle theory in which Minkowski space, M₀, is replaced by a natural alternative, the unique four-dimensional manifold M̄ with comparable properties of causality and symmetry. Free particles are considered to be associated (i) with positive-energy representations, in bundles of prescribed spin over M̄, of the group of causality-preserving transformations on M̄ (or its mass-conserving subgroup) and (ii) with corresponding wave equations. In this study these bundles, representations, and equations are detailed, and some of their basic features are developed in the cases of spins 0 and ½. Preliminaries to a general study are included; issues of covariance, unitarity, and positivity of the energy are treated; appropriate quantum numbers are indicated; and possible physical applications are discussed. PMID:16593075

  12. Covariant generalization of cosmological perturbation theory

    SciTech Connect

    Enqvist, Kari; Hoegdahl, Janne; Nurmi, Sami; Vernizzi, Filippo

    2007-01-15

    We present an approach to cosmological perturbations based on a covariant perturbative expansion between two worldlines in the real inhomogeneous universe. As an application, at an arbitrary order we define an exact scalar quantity which describes the inhomogeneities in the number of e-folds on uniform density hypersurfaces and which is conserved on all scales for a barotropic ideal fluid. We derive a compact form for its conservation equation at all orders and assign it a simple physical interpretation. To make a comparison with the standard perturbation theory, we develop a method to construct gauge-invariant quantities in a coordinate system at arbitrary order, which we apply to derive the form of the nth order perturbation in the number of e-folds on uniform density hypersurfaces and its exact evolution equation. On large scales, this provides the gauge-invariant expression for the curvature perturbation on uniform density hypersurfaces and its evolution equation at any order.

  13. Conformal Killing tensors and covariant Hamiltonian dynamics

    SciTech Connect

    Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.

  14. Covariant non-commutative space-time

    NASA Astrophysics Data System (ADS)

    Heckman, Jonathan J.; Verlinde, Herman

    2015-05-01

    We introduce a covariant non-commutative deformation of 3 + 1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space-time isometries. The non-commutative algebra is defined on space-times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.

  15. A covariance analysis algorithm for interconnected systems

    NASA Technical Reports Server (NTRS)

    Cheng, Victor H. L.; Curley, Robert D.; Lin, Ching-An

    1987-01-01

    A covariance analysis algorithm for propagation of signal statistics in arbitrarily interconnected nonlinear systems is presented and applied to six-degree-of-freedom systems. The algorithm uses statistical linearization theory to linearize the nonlinear subsystems, and the resulting linearized subsystems are considered in the original interconnection framework for propagation of the signal statistics. Some nonlinearities commonly encountered in six-degree-of-freedom space-vehicle models are discussed to illustrate the limitations of this method, along with problems not encountered in standard deterministic simulation analysis. Moreover, the performance of the algorithm is numerically exhibited by comparing its results to Monte Carlo analysis results, both applied to a simple two-dimensional space-intercept problem.

  16. A covariant treatment of cosmic parallax

    SciTech Connect

    Räsänen, Syksy

    2014-03-01

    The Gaia satellite will soon probe parallax on cosmological distances. Using the covariant formalism and considering the angle between a pair of sources, we find parallax for both spacelike and timelike separation between observation points. Our analysis includes both intrinsic parallax and parallax due to observer motion. We propose a consistency condition that tests the FRW metric using the parallax distance and the angular diameter distance. This test is purely kinematic and relies only on geometrical optics; it is independent of the matter content and its relation to the spacetime geometry. We study perturbations around the FRW model and find that they should be taken into account when analysing observations to determine the parallax distance.

  17. Performance of internal covariance estimators for cosmic shear correlation functions

    SciTech Connect

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
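
    The delete-one jackknife at the heart of this comparison is compact enough to sketch. A minimal numpy version (a generic sketch, not the authors' pipeline), assuming the data have already been split into n sub-samples (e.g. survey sub-volumes) and that the statistic of interest returns a 1-D vector:

        import numpy as np

        def jackknife_covariance(samples, statistic):
            # Delete-one jackknife covariance of a vector-valued statistic.
            n = len(samples)
            reps = np.array([statistic(np.delete(samples, i, axis=0))
                             for i in range(n)])
            diff = reps - reps.mean(axis=0)
            return (n - 1) / n * diff.T @ diff

        # Toy check: jackknife covariance of a sample mean.
        rng = np.random.default_rng(1)
        data = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 2.0]], size=200)
        print(jackknife_covariance(data, lambda s: s.mean(axis=0)))

    For a correlation-function estimator, `statistic` would recompute the two-point function with one sub-volume removed, which is where the bias-variance trade-off studied in the paper enters.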

  19. A Product Partition Model With Regression on Covariates

    PubMed Central

    Müller, Peter; Quintana, Fernando; Rosner, Gary L.

    2011-01-01

    We propose a probability model for random partitions in the presence of covariates. In other words, we develop a model-based clustering algorithm that exploits available covariates. The motivating application is predicting time to progression for patients in a breast cancer trial. We proceed by reporting a weighted average of the responses of clusters of earlier patients. The weights should be determined by the similarity of the new patient’s covariates with the covariates of patients in each cluster. We achieve the desired inference by defining a random partition model that includes a regression on covariates. Patients with similar covariates are a priori more likely to be clustered together. Posterior predictive inference in this model formalizes the desired prediction. We build on product partition models (PPM). We define an extension of the PPM to include a regression on covariates by including in the cohesion function a new factor that increases the probability that experimental units with similar covariates are included in the same cluster. We discuss implementations suitable for any combination of continuous, categorical, count, and ordinal covariates. An implementation of the proposed model as an R package is available for download. PMID:21566678

  20. How much do genetic covariances alter the rate of adaptation?

    PubMed Central

    Agrawal, Aneil F.; Stinchcombe, John R.

    2008-01-01

    Genetically correlated traits do not evolve independently, and the covariances between traits affect the rate at which a population adapts to a specified selection regime. To measure the impact of genetic covariances on the rate of adaptation, we compare the rate at which fitness increases given the observed G matrix to the expected rate if all the covariances in the G matrix are set to zero. Using data from the literature, we estimate the effect of genetic covariances in real populations. We find no net tendency for covariances to constrain the rate of adaptation, though the quality and heterogeneity of the data limit the certainty of this result. There are some examples in which covariances strongly constrain the rate of adaptation, but these are balanced by counterexamples in which covariances facilitate the rate of adaptation; in many cases, covariances have little or no effect. We also discuss how our metric can be used to identify traits or suites of traits whose genetic covariances with other traits have a particularly large impact on the rate of adaptation. PMID:19129097
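
    Under the multivariate breeder's equation, the expected rate of fitness increase is beta' G beta for selection gradient beta, so the comparison the authors describe reduces to evaluating that quadratic form twice. A minimal numpy sketch with illustrative (made-up) numbers:

        import numpy as np

        # Hypothetical G matrix and selection gradient; values are
        # illustrative only, not estimates from the paper.
        G = np.array([[1.0, -0.6],
                      [-0.6, 1.5]])
        beta = np.array([0.4, 0.3])

        rate_full = beta @ G @ beta                      # observed G
        rate_nocov = beta @ np.diag(np.diag(G)) @ beta   # covariances zeroed

        # Ratios below 1 mean the covariances constrain adaptation.
        print(f"rate ratio: {rate_full / rate_nocov:.2f}")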

  1. The Probability of Exceedance as a Nonparametric Person-Fit Statistic for Tests of Moderate Length

    ERIC Educational Resources Information Center

    Tendeiro, Jorge N.; Meijer, Rob R.

    2013-01-01

    To classify an item score pattern as not fitting a nonparametric item response theory (NIRT) model, the probability of exceedance (PE) of an observed response vector x can be determined as the sum of the probabilities of all response vectors that are, at most, as likely as x, conditional on the test's total score. Vector x is to be considered…

  2. Does Private Tutoring Work? The Effectiveness of Private Tutoring: A Nonparametric Bounds Analysis

    ERIC Educational Resources Information Center

    Hof, Stefanie

    2014-01-01

    Private tutoring has become popular throughout the world. However, evidence for the effect of private tutoring on students' academic outcome is inconclusive; therefore, this paper presents an alternative framework: a nonparametric bounds method. The present examination uses, for the first time, a large representative data-set in a European…

  3. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
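
    The estimator admits a compact closed form: with right-angle distances x_i truncated at w, the cosine coefficients are a_k = (2/(n w)) * sum_i cos(k*pi*x_i/w), and the density of detections at the line is f(0) = 1/w + sum_k a_k. A minimal numpy sketch under those assumptions (function name is illustrative):

        import numpy as np

        def fourier_f0(distances, w, m):
            # Cosine-series estimate of the detection density at distance zero.
            x = np.asarray(distances, dtype=float)
            n = len(x)
            k = np.arange(1, m + 1)
            a_k = (2.0 / (n * w)) * np.cos(np.pi * np.outer(k, x) / w).sum(axis=1)
            return 1.0 / w + a_k.sum()

        # With f(0) in hand, the line-transect density estimate is
        # D = n * f(0) / (2 * L) for total transect length L.

    The number of cosine terms m acts as the smoothing parameter of the estimator.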

  4. A Comparison of Robust and Nonparametric Estimators under the Simple Linear Regression Model.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Tam, Hak P.

    This study investigates parameter estimation under the simple linear regression model for situations in which the underlying assumptions of ordinary least squares estimation are untenable. Classical nonparametric estimation methods are directly compared against some robust estimation methods for conditions in which varying degrees of outliers are…

  5. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  6. Performances and Spending Efficiency in Higher Education: A European Comparison through Non-Parametric Approaches

    ERIC Educational Resources Information Center

    Agasisti, Tommaso

    2011-01-01

    The objective of this paper is an efficiency analysis concerning higher education systems in European countries. Data have been extracted from OECD data-sets (Education at a Glance, several years), using a non-parametric technique--data envelopment analysis--to calculate efficiency scores. This paper represents the first attempt to conduct such an…

  7. A Unified Nonparametric IRT Model for "d"-Dimensional Psychological Test Data ("d"-Isop)

    ERIC Educational Resources Information Center

    Scheiblechner, Hartmann

    2007-01-01

    The (univariate) isotonic psychometric (ISOP) model (Scheiblechner, 1995) is a nonparametric IRT model for dichotomous and polytomous (rating scale) psychological test data. A weak subject independence axiom W1 postulates that the subjects are ordered in the same way except for ties (i.e., similarly or isotonically) by all items of a psychological…

  8. MONTE CARLO STUDIES OF THE CLASSIFICATIONS MADE BY NONPARAMETRIC LINEAR DISCRIMINANT FUNCTIONS

    EPA Science Inventory

    Chance factors in pattern recognition studies utilizing nonparametric linear discriminant functions are examined. The relationship between complete linear separation of a data set and the dimensionality of the study is well-known. Also, due to the nature of the inequalities from ...

  9. Separating the Air Quality Impact of a Major Highway and Nearby Sources by Nonparametric Trajectory Analysis

    EPA Science Inventory

    Nonparametric Trajectory Analysis (NTA), a receptor-oriented model, was used to assess the impact of local sources of air pollution at monitoring sites located adjacent to highway I-15 in Las Vegas, NV. Measurements of black carbon, carbon monoxide, nitrogen oxides, and sulfur di...

  10. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  11. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  12. Measuring Youth Development: A Nonparametric Cross-Country "Youth Welfare Index"

    ERIC Educational Resources Information Center

    Chaaban, Jad M.

    2009-01-01

    This paper develops an empirical methodology for the construction of a synthetic multi-dimensional cross-country comparison of the performance of governments around the world in improving the livelihood of their younger population. The devised "Youth Welfare Index" is based on the nonparametric Data Envelopment Analysis (DEA) methodology and…

  13. Nonstationarity in seasonality of extreme precipitation: A nonparametric circular statistical approach and its application

    NASA Astrophysics Data System (ADS)

    Dhakal, Nirajan; Jain, Shaleen; Gray, Alexander; Dandy, Michael; Stancioff, Esperanza

    2015-06-01

    Changes in seasonality of extreme storms have important implications for public safety, storm water infrastructure, and, in general, adaptation strategies in a changing climate. While past research on this topic offers some approaches to characterize seasonality, the methods are somewhat limited in their ability to discern the diversity of distributional types for extreme precipitation dates. Herein, we present a comprehensive approach for assessment of temporal changes in the calendar dates for extreme precipitation within a circular statistics framework which entails: (a) three measures to summarize circular random variables (traditional approach), (b) four nonparametric statistical tests, and (c) a new nonparametric circular density method to provide a robust assessment of the nature of the probability distribution and its changes. Two 30-year blocks (1951-1980 and 1981-2010) of annual maximum daily precipitation from 10 stations across the state of Maine were used for our analysis. Assessment of seasonality based on the nonparametric approach indicated nonstationarity; some stations exhibited shifts in the significant mode toward the spring season for the recent time period, while others exhibited a multimodal seasonal pattern for both time periods. The nonparametric circular density method used in this study allows for an adaptive estimation of seasonal density. Despite the limitation of being sensitive to the smoothing parameter, this method can accurately characterize one or more modes of seasonal peaks, as well as pave the way toward assessment of changes in seasonality over time.
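
    The traditional circular summaries and the circular kernel density step both fit in a few lines. A minimal sketch (assuming event dates expressed as day-of-year; function names are illustrative), using a von Mises kernel whose concentration kappa plays the role of the smoothing parameter mentioned above:

        import numpy as np
        from scipy.special import i0

        def circular_summary(day_of_year, period=365.25):
            # Circular mean date and mean resultant length (0 = uniform, 1 = peaked).
            theta = 2 * np.pi * np.asarray(day_of_year) / period
            C, S = np.cos(theta).mean(), np.sin(theta).mean()
            mean_day = (np.arctan2(S, C) % (2 * np.pi)) * period / (2 * np.pi)
            return mean_day, np.hypot(C, S)

        def vonmises_kde(day_of_year, kappa=20.0, period=365.25, grid=365):
            # Nonparametric circular density of event dates.
            theta = 2 * np.pi * np.asarray(day_of_year) / period
            t = np.linspace(0, 2 * np.pi, grid, endpoint=False)
            dens = np.exp(kappa * np.cos(t[:, None] - theta[None, :])).sum(axis=1)
            return t * period / (2 * np.pi), dens / (len(theta) * 2 * np.pi * i0(kappa))

    Multiple local maxima of the estimated density correspond to the multimodal seasonal patterns reported for some stations.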

  14. Detection of Invalid Test Scores: The Usefulness of Simple Nonparametric Statistics

    ERIC Educational Resources Information Center

    Tendeiro, Jorge N.; Meijer, Rob R.

    2014-01-01

    In recent guidelines for fair educational testing it is advised to check the validity of individual test scores through the use of person-fit statistics. For practitioners it is unclear on the basis of the existing literature which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…

  15. Power Analysis of Selected Parametric and Nonparametric Tests for Heterogeneous Variances in Non-Normal Distributions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…

  16. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.

    PubMed

    Fan, Jianqing; Feng, Yang; Song, Rui

    2011-06-01

    A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening is called NIS, a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. For general nonparametric models, it is shown that, under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, a data-driven thresholding and an iterative nonparametric independence screening (INIS) are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods. PMID:22279246
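
    The screening step itself is simple: fit a marginal nonparametric regression of the response on each predictor separately and keep the predictors whose fits explain the most variance. A minimal sketch (the paper uses B-spline bases; a low-degree polynomial stands in here to keep the example dependency-free):

        import numpy as np

        def nis_ranking(X, y, degree=3):
            # Rank predictors by the fit of a marginal nonparametric regression.
            n, p = X.shape
            scores = np.empty(p)
            for j in range(p):
                coef = np.polyfit(X[:, j], y, degree)
                resid = y - np.polyval(coef, X[:, j])
                scores[j] = 1.0 - resid.var() / y.var()   # marginal R^2
            return np.argsort(scores)[::-1], scores

    After screening, one keeps a number of top-ranked predictors that grows slowly with the sample size and fits a sparse additive model on the survivors; the iterative variant (INIS) repeats the ranking with the already selected effects taken into account.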

  17. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.

  18. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…

  19. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
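
    The nonparametric variant of the comparison can be sketched directly: equate a form-X score to the form-Y scale by quantile matching, then take the standard deviation of that equated score over bootstrap resamples. A minimal sketch (illustrative function names, not the authors' code):

        import numpy as np

        def equipercentile(x_scores, y_scores, x_point):
            # Map a form-X score to the form-Y scale by matching percentile ranks.
            p = np.mean(x_scores <= x_point)
            return np.quantile(y_scores, p)

        def bootstrap_se(x_scores, y_scores, x_point, n_boot=2000, seed=0):
            rng = np.random.default_rng(seed)
            reps = [equipercentile(rng.choice(x_scores, len(x_scores), replace=True),
                                   rng.choice(y_scores, len(y_scores), replace=True),
                                   x_point)
                    for _ in range(n_boot)]
            return np.std(reps, ddof=1)

    The parametric bootstrap differs only in the resampling step: score distributions are first smoothed with a fitted model and replicates are drawn from the fit rather than from the raw scores.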

  20. COMPARISON BETWEEN CONDITIONAL PROBABILITY FUNCTION AND NONPARAMETRIC REGRESSION FOR FINE PARTICLE SOURCE DIRECTIONS. (R831078)

    EPA Science Inventory

    The objective of this study is to examine the use of conditional probability function (CPF) and nonparametric regression (NPR) to identify directions of PM2.5 (particulate matter 2.5 µm in aerodynamic diameter) sources using data collected from multiple monitoring sites across ...

  1. Do Former College Athletes Earn More at Work? A Nonparametric Assessment

    ERIC Educational Resources Information Center

    Henderson, Daniel J.; Olbrecht, Alexandre; Polachek, Solomon W.

    2006-01-01

    This paper investigates how students' collegiate athletic participation affects their subsequent labor market success. By using newly developed techniques in nonparametric regression, it shows that on average former college athletes earn a wage premium. However, the premium is not uniform, but skewed so that more than half the athletes actually…

  2. Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory

    ERIC Educational Resources Information Center

    Wells, Craig S.; Bolt, Daniel M.

    2008-01-01

    Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…

  3. Scale-Free Nonparametric Factor Analysis: A User-Friendly Introduction with Concrete Heuristic Examples.

    ERIC Educational Resources Information Center

    Mittag, Kathleen Cage

    Most researchers using factor analysis extract factors from a matrix of Pearson product-moment correlation coefficients. A method is presented for extracting factors in a non-parametric way, by extracting factors from a matrix of Spearman rho (rank correlation) coefficients. It is possible to factor analyze a matrix of association such that…

  4. High-precision covariant one-boson-exchange potentials for np scattering below 350 MeV

    SciTech Connect

    Franz Gross; Alfred Stadler

    2007-09-10

    Using the Covariant Spectator Theory (CST), we have found One-Boson-Exchange (OBE) potentials that fit the 2006 world np data below 350 MeV with a χ²/N_data very close to 1, for a total of 3788 data points. Our potentials have significantly fewer adjustable parameters than previous high-precision potentials, and they also reproduce the experimental triton binding energy without introducing additional irreducible three-nucleon forces.

  5. Nonparametric Residue Analysis of Dynamic PET Data With Application to Cerebral FDG Studies in Normals.

    PubMed

    O'Sullivan, Finbarr; Muzi, Mark; Spence, Alexander M; Mankoff, David M; O'Sullivan, Janet N; Fitzgerald, Niall; Newman, George C; Krohn, Kenneth A

    2009-06-01

    Kinetic analysis is used to extract metabolic information from dynamic positron emission tomography (PET) uptake data. The theory of indicator dilutions, developed in the seminal work of Meier and Zierler (1954), provides a probabilistic framework for representation of PET tracer uptake data in terms of a convolution between an arterial input function and a tissue residue. The residue is a scaled survival function associated with tracer residence in the tissue. Nonparametric inference for the residue, a deconvolution problem, provides a novel approach to kinetic analysis, critically one that is not reliant on specific compartmental modeling assumptions. A practical computational technique based on regularized cubic B-spline approximation of the residence time distribution is proposed. Nonparametric residue analysis allows formal statistical evaluation of specific parametric models to be considered. This analysis needs to properly account for the increased flexibility of the nonparametric estimator. The methodology is illustrated using data from a series of cerebral studies with PET and fluorodeoxyglucose (FDG) in normal subjects. Comparisons are made between key functionals of the residue, tracer flux, flow, etc., resulting from a parametric analysis (the standard two-compartment model of Phelps et al. 1979) and a nonparametric analysis. Strong statistical evidence against the compartment model is found. Primarily these differences relate to the representation of the early temporal structure of the tracer residence, largely a function of the vascular supply network. There are convincing physiological arguments against the representations implied by the compartmental approach, but this is the first time that a rigorous statistical confirmation using PET data has been reported. The compartmental analysis produces suspect values for flow but, notably, the impact on the metabolic flux, though statistically significant, is limited to deviations on the order of 3%-4%. The general…

  6. Conditional Covariance-Based Subtest Selection for DIMTEST

    ERIC Educational Resources Information Center

    Froelich, Amy G.; Habing, Brian

    2008-01-01

    DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…

  7. Covariant Spectator Theory: Foundations and Applications A Mini-Review of the Covariant Spectator Theory

    SciTech Connect

    Alfred Stadler, Franz Gross

    2010-10-01

    We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a φ⁴-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.

  8. Methods for estimation of covariance matrices and covariance components for the Hanford Waste Vitrification Plant Process

    SciTech Connect

    Bryan, M.F.; Piepel, G.F.; Simpson, D.B.

    1996-03-01

    The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to vitrify transuranic and high-level radioactive waste in borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimation of variance components; those based on the statistical method analysis of variance are discussed, as are the strengths and weaknesses of this approach.

  9. A pure S-wave covariant model for the nucleon

    SciTech Connect

    Franz Gross; G. Ramalho; M.T. Pena

    2008-01-01

    Using the manifestly covariant spectator theory, and modeling the nucleon as a system of three constituent quarks with their own electromagnetic structure, we show that all four nucleon electromagnetic form factors can be very well described by a manifestly covariant nucleon wave function with zero orbital angular momentum.

  10. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  11. Covariation Is a Poor Measure of Molecular Coevolution.

    PubMed

    Talavera, David; Lovell, Simon C; Whelan, Simon

    2015-09-01

    Recent developments in the analysis of amino acid covariation are leading to breakthroughs in protein structure prediction, protein design, and prediction of the interactome. It is assumed that observed patterns of covariation are caused by molecular coevolution, where substitutions at one site affect the evolutionary forces acting at neighboring sites. Our theoretical and empirical results cast doubt on this assumption. We demonstrate that the strongest coevolutionary signal is a decrease in evolutionary rate and that unfeasibly long times are required to produce coordinated substitutions. We find that covarying substitutions are mostly found on different branches of the phylogenetic tree, indicating that they are independent events that may or may not be attributable to coevolution. These observations undermine the hypothesis that molecular coevolution is the primary cause of the covariation signal. In contrast, we find that the pairs of residues with the strongest covariation signal tend to have low evolutionary rates, and that it is this low rate that gives rise to the covariation signal. Slowly evolving residue pairs are disproportionately located in the protein's core, which explains covariation methods' ability to detect pairs of residues that are close in three dimensions. These observations lead us to propose the "coevolution paradox": The strength of coevolution required to cause coordinated changes means the evolutionary rate is so low that such changes are highly unlikely to occur. As modern covariation methods may lead to breakthroughs in structural genomics, it is critical to recognize their biases and limitations. PMID:25944916

  12. Application of covariant analytic mechanics to gravity with Dirac field

    NASA Astrophysics Data System (ADS)

    Nakajima, Satoshi

    2016-03-01

    We applied the covariant analytic mechanics, formulated with differential forms, to the Dirac field and to gravity coupled with the Dirac field. The covariant analytic mechanics treats space and time on an equal footing, regarding the differential forms as the basic variables. A significant feature of the covariant analytic mechanics is that the canonical equations, in addition to the Euler-Lagrange equation, are not only manifestly general coordinate covariant but also gauge covariant. Combining our study with previous works (the scalar field, the abelian and non-abelian gauge fields, and gravity without the Dirac field), the applicability of the covariant analytic mechanics was checked for all fundamental fields. We studied both the first and second order formalisms of the gravitational field coupled with matter including the Dirac field. It was suggested that gravitation theories including higher order curvatures cannot be treated by the second order formalism in the covariant analytic mechanics. In addition, we showed that the covariant analytic mechanics is equivalent to a corrected De Donder-Weyl theory.

  13. CHANGING THE SUPPORT OF A SPATIAL COVARIATE: A SIMULATION STUDY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Researchers are increasingly able to capture spatially referenced data on both a response and a covariate more frequently and in more detail. A combination of geostatisical models and analysis of covariance methods is used to analyze such data. However, basic questions regarding the effects of using...

  14. Performance of internal covariance estimators for cosmic shear correlation functions

    NASA Astrophysics Data System (ADS)

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2016-03-01

    Data re-sampling methods such as delete-one jackknife, bootstrap or the sub-sample covariance are common tools for estimating the covariance of large-scale structure probes. We investigate different implementations of these methods in the context of cosmic shear two-point statistics. Using lognormal simulations of the convergence field and the corresponding shear field we generate mock catalogues of a known and realistic covariance. For a survey of ∼5000 deg² we find that jackknife, if implemented by deleting sub-volumes of galaxies, provides the most reliable covariance estimates. Bootstrap, in the common implementation of drawing sub-volumes of galaxies, strongly overestimates the statistical uncertainties. In a forecast for the complete 5-yr Dark Energy Survey, we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ωm-σ8 plane as measured with internally estimated covariance matrices is on average ≳85 per cent of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ8 ∼ σ8 Ωm^0.5 derived from internally estimated covariances is ∼90 per cent of the true uncertainty.

  15. Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies

    ERIC Educational Resources Information Center

    Chen, Jianshen; Kaplan, David

    2015-01-01

    Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…

  16. The Regression Trunk Approach to Discover Treatment Covariate Interaction

    ERIC Educational Resources Information Center

    Dusseldorp, Elise; Meulman, Jacqueline J.

    2004-01-01

    The regression trunk approach (RTA) is an integration of regression trees and multiple linear regression analysis. In this paper RTA is used to discover treatment covariate interactions, in the regression of one continuous variable on a treatment variable with "multiple" covariates. The performance of RTA is compared to the classical method of…

  17. Conditional Covariance Theory and Detect for Polytomous Items

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2007-01-01

    This paper extends the theory of conditional covariances to polytomous items. It has been proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, given an appropriately chosen composite is positive if, and only if, the two items…

  18. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  19. The Role of Executive Functions Skills and Self-Regulation Behaviors in School Readiness and Adjustment

    ERIC Educational Resources Information Center

    Sasser, Tyler R.; Bierman, Karen L.

    2012-01-01

    The aim of this study was to examine co-variation in the development of self-regulatory skills evident in pre-kindergarten and evaluate the implications of that variation for school adjustment in kindergarten and first grade. Measures of self-regulatory skill development included: direct assessments of EF (executive function) (e.g., Peg Tapping,…

  20. Background error covariance estimation for atmospheric CO2 data assimilation

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhishek; Engelen, Richard J.; Kawa, Stephan R.; Sweeney, Colm; Michalak, Anna M.

    2013-09-01

    In any data assimilation framework, the background error covariance statistics play the critical role of filtering the observed information and determining the quality of the analysis. For atmospheric CO2 data assimilation, however, the background errors cannot be prescribed via traditional forecast or ensemble-based techniques as these fail to account for the uncertainties in the carbon emissions and uptake, or for the errors associated with the CO2 transport model. We propose an approach where the differences between two modeled CO2 concentration fields, based on different but plausible CO2 flux distributions and atmospheric transport models, are used as a proxy for the statistics of the background errors. The resulting error statistics: (1) vary regionally and seasonally to better capture the uncertainty in the background CO2 field, and (2) have a positive impact on the analysis estimates by allowing observations to adjust predictions over large areas. A state-of-the-art four-dimensional variational (4D-VAR) system developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) is used to illustrate the impact of the proposed approach for characterizing background error statistics on atmospheric CO2 concentration estimates. Observations from the Greenhouse gases Observing SATellite "IBUKI" (GOSAT) are assimilated into the ECMWF 4D-VAR system along with meteorological variables, using both the new error statistics and those based on a traditional forecast-based technique. Evaluation of the four-dimensional CO2 fields against independent CO2 observations confirms that the performance of the data assimilation system improves substantially in the summer, when significant variability and uncertainty in the fluxes are present.
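
    The proxy construction described here amounts to taking differences between two plausible model states and using their sample covariance as the background error covariance. A minimal numpy sketch (file names and array shapes are hypothetical placeholders):

        import numpy as np

        # Two modeled CO2 fields of shape (n_time, n_grid), produced with
        # different flux distributions and transport models (hypothetical files).
        field_a = np.load("co2_model_a.npy")
        field_b = np.load("co2_model_b.npy")

        # Differences between the two states serve as a proxy sample for the
        # background error; computing them over regional or seasonal subsets
        # gives statistics that vary in space and time.
        diff = field_a - field_b
        diff -= diff.mean(axis=0)
        B = diff.T @ diff / (diff.shape[0] - 1)   # (n_grid, n_grid) covariance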

  1. UDU^T covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU^T, where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
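
    The factorization itself is short to write down. A minimal numpy sketch (a textbook-style construction, not the flight code), working backward from the last column of a symmetric positive-definite P:

        import numpy as np

        def udu_factorize(P):
            # Factor P = U @ diag(d) @ U.T with U unit upper triangular.
            P = P.copy()
            n = P.shape[0]
            U = np.eye(n)
            d = np.zeros(n)
            for j in range(n - 1, -1, -1):
                d[j] = P[j, j]
                U[:j, j] = P[:j, j] / d[j]
                # Remove this column's contribution from the leading block.
                P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
            return U, d

        P = np.array([[4.0, 2.0, 0.6],
                      [2.0, 5.0, 1.5],
                      [0.6, 1.5, 3.0]])
        U, d = udu_factorize(P)
        assert np.allclose(U @ np.diag(d) @ U.T, P)

    In a U-D filter one never re-factors like this; the time and measurement updates propagate U and d directly, which is where the efficiency and stability benefits come from.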

  2. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
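
    The structure of such an estimator is easy to sketch: a low-rank piece from the leading principal components plus a thresholded residual covariance. A minimal numpy version (a simplified sketch with a universal hard threshold; the paper's estimator thresholds adaptively, entry by entry):

        import numpy as np

        def factor_threshold_cov(X, n_factors, thresh):
            # Low-rank factor part plus sparse (thresholded) residual covariance.
            X = X - X.mean(axis=0)
            S = np.cov(X, rowvar=False)
            vals, vecs = np.linalg.eigh(S)
            top = np.argsort(vals)[::-1][:n_factors]
            low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T
            resid = S - low_rank
            keep = np.abs(resid) >= thresh     # sparsify off-diagonal residuals
            np.fill_diagonal(keep, True)
            return low_rank + resid * keep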

  3. Rural to Urban Adjustment

    ERIC Educational Resources Information Center

    Abramson, Jane A.

    Personal interviews with 100 former farm operators living in Saskatoon, Saskatchewan, were conducted in an attempt to understand the nature of the adjustment process caused by migration from rural to urban surroundings. Requirements for inclusion in the study were that respondents had owned or operated a farm for at least 3 years, had left their…

  4. Self adjusting inclinometer

    DOEpatents

    Hunter, Steven L.

    2002-01-01

    An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.

  5. Self Adjusting Sunglasses

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.

  6. Super-sample covariance in simulations

    NASA Astrophysics Data System (ADS)

    Li, Yin; Hu, Wayne; Takada, Masahiro

    2014-04-01

    Using separate universe simulations, we accurately quantify super-sample covariance (SSC), the typically dominant sampling error for matter power spectrum estimators in a finite volume, which arises from the presence of super survey modes. By quantifying the power spectrum response to a background mode, this approach automatically captures the separate effects of beat coupling in the quasilinear regime, halo sample variance in the nonlinear regime and a new dilation effect which changes scales in the power spectrum coherently across the survey volume, including the baryon acoustic oscillation scale. It models these effects at typically the few percent level or better with a handful of small volume simulations for any survey geometry compared with directly using many thousands of survey volumes in a suite of large-volume simulations. The stochasticity of the response is sufficiently small that in the quasilinear regime, SSC can be alternately included by fitting the mean density in the volume with these fixed templates in parameter estimation. We also test the halo model prescription and find agreement typically at better than the 10% level for the response.

  7. The Hopfield model revisited: covariance and quantization

    NASA Astrophysics Data System (ADS)

    Belgiorno, F.; Cacciatori, S. L.; Dalla Piazza, F.

    2016-01-01

    There are several possible applications of quantum electrodynamics in dielectric media which require a quantum description for the electromagnetic field interacting with matter fields. The associated quantum models can refer to macroscopic electromagnetic fields or, alternatively, to mesoscopic fields (polarization fields) describing an effective interaction between electromagnetic field and matter fields. We adopt the latter approach, and focus on the Hopfield model for the electromagnetic field in a dielectric dispersive medium in a framework in which space-time dependent mesoscopic parameters occur, like susceptibility, matter resonance frequency, and also coupling between electromagnetic field and polarization field. Our most direct goal is to describe in a phenomenological way a space-time varying dielectric perturbation induced by means of the Kerr effect in nonlinear dielectric media. This extension of the model is implemented by means of a Lorentz-invariant Lagrangian which, for constant microscopic parameters, and in the rest frame, coincides with the standard one. Moreover, we deduce a covariant scalar product and provide a canonical quantization scheme which takes into account the constraints implicit in the model. Examples of viable applications are indicated.

  8. Relativistically Covariant Many-Body Perturbation Procedure

    NASA Astrophysics Data System (ADS)

    Lindgren, Ingvar; Salomonson, Sten; Hedendahl, Daniel

    A covariant evolution operator (CEO) can be constructed, representing the time evolution of the relativistic wave function or state vector. Like the nonrelativistic version, it contains (quasi-)singularities. The regular part is referred to as the Green’s operator (GO), which is the operator analogue of the Green’s function (GF). This operator, which is a field-theoretical concept, is closely related to the many-body wave operator and effective Hamiltonian, and it is the basic tool for our unified theory. The GO leads, when the perturbation is carried to all orders, to the Bethe-Salpeter equation (BSE) in the equal-time or effective-potential approximation. When relaxing the equal-time restriction, the procedure is fully compatible with the exact BSE. The calculations are performed in the photonic Fock space, where the number of photons is no longer constant. The procedure has been applied to helium-like ions, and the results agree well with S-matrix results in cases where comparison can be performed. In addition, evaluation of higher-order quantum-electrodynamical (QED) correlational effects has been performed, and the effects are found to be quite significant for light and medium-heavy ions.

  9. Holographic bound in covariant loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Tamaki, Takashi

    2016-07-01

    We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulas which relate physical quantities such as the horizon area to the parameter characterizing the holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulas. These results tell us that the holographic bound is satisfied in the large area limit and that the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we consider Bose-Einstein statistics and show that the above formulas are also useful in this case. By applying the formulas, we can understand intrinsic features of the Bose-Einstein condensate, which corresponds to the case when the horizon area almost entirely consists of punctures in the ground state. When this phenomenon occurs, the area is approximately constant with respect to the parameter characterizing the temperature. When it breaks down, the area shows a rapid increase, which suggests a phase transition from the quantum to the classical area.

  10. Epigenetic Contribution to Covariance Between Relatives

    PubMed Central

    Tal, Omri; Kisdi, Eva; Jablonka, Eva

    2010-01-01

    Recent research has pointed to the ubiquity and abundance of between-generation epigenetic inheritance. This research has implications for assessing disease risk and the responses to ecological stresses and also for understanding evolutionary dynamics. An important step toward a general evaluation of these implications is the identification and estimation of the amount of heritable, epigenetic variation in populations. While methods for modeling the phenotypic heritable variance contributed by culture have already been developed, there are no comparable methods for nonbehavioral epigenetic inheritance systems. By introducing a model that takes epigenetic transmissibility (the probability of transmission of ancestral phenotypes) and environmental induction into account, we provide novel expressions for covariances between relatives. We have combined a classical quantitative genetics approach with information about the number of opportunities for epigenetic reset between generations and assumptions about environmental induction to estimate the heritable epigenetic variance and epigenetic transmissibility for both asexual and sexual populations. This assists us in the identification of phenotypes and populations in which epigenetic transmission occurs and enables a preliminary quantification of their transmissibility, which could then be followed by genomewide association and QTL studies. PMID:20100941

  11. Canonical quantization of Galilean covariant field theories

    NASA Astrophysics Data System (ADS)

    Santos, E. S.; de Montigny, M.; Khanna, F. C.

    2005-11-01

    The Galilean-invariant field theories are quantized by using the canonical method and the five-dimensional Lorentz-like covariant expressions of non-relativistic field equations. This method is motivated by the fact that the extended Galilei group in 3 + 1 dimensions is a subgroup of the inhomogeneous Lorentz group in 4 + 1 dimensions. First, we consider complex scalar fields, where the Schrödinger field follows from a reduction of the Klein-Gordon equation in the extended space. The underlying discrete symmetries are discussed, and we calculate the scattering cross-sections for the Coulomb interaction and for the self-interacting term λΦ⁴. Then, we turn to the Dirac equation, which, upon dimensional reduction, leads to the Lévy-Leblond equations. Like its relativistic analogue, the model allows for the existence of antiparticles. Scattering amplitudes and cross-sections are calculated for the Coulomb interaction, the electron-electron and the electron-positron scattering. These examples show that the so-called 'non-relativistic' approximations, obtained in low-velocity limits, must be treated with great care to be Galilei-invariant. The non-relativistic Proca field is discussed briefly.

  12. Generalized Covariant Gyrokinetic Dynamics of Magnetoplasmas

    SciTech Connect

    Cremaschini, C.; Tessarotto, M.; Nicolini, P.; Beklemishev, A.

    2008-12-31

    A basic prerequisite for the investigation of relativistic astrophysical magnetoplasmas, occurring typically in the vicinity of massive stellar objects (black holes, neutron stars, active galactic nuclei, etc.), is the accurate description of single-particle covariant dynamics, based on gyrokinetic theory (Beklemishev et al., 1999-2005). Provided radiation-reaction effects are negligible, this is usually based on the assumption that both the space-time metric and the EM fields (in particular the magnetic field) are suitably prescribed and are considered independent of single-particle dynamics, while allowing for the possible presence of gravitational/EM perturbations driven by plasma collective interactions which may naturally arise in such systems. The purpose of this work is the formulation of a generalized gyrokinetic theory based on the synchronous variational principle recently pointed out (Tessarotto et al., 2007), which permits the physical realizability condition for the four-velocity to be satisfied exactly. The theory developed here includes the treatment of nonlinear perturbations (gravitational and/or EM) characterized locally, i.e., in the rest frame of a test particle, by short wavelength and high frequency. A basic feature of the approach is to ensure the validity of the theory both for large and vanishing parallel electric field. It is shown that the correct treatment of EM perturbations occurring in the presence of an intense background magnetic field generally implies the appearance of appropriate four-velocity corrections, which are essential for the description of single-particle gyrokinetic dynamics.

  13. Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins

    NASA Astrophysics Data System (ADS)

    Tolwinski-Ward, S. E.; Wang, D.

    2015-12-01

    Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be queried for answers to such questions as:
    - Does the relationship between different categories of TCs differ statistically by basin?
    - Which climatic predictors have significant relationships with TC activity in each basin?
    - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability?
    - How can a portfolio of insured property be optimized across space to minimize risk?
    Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.

  14. Action recognition from video using feature covariance matrices.

    PubMed

    Guo, Kai; Ishwar, Prakash; Konrad, Janusz

    2013-06-01

    We propose a general framework for fast and accurate recognition of actions in video using empirical covariance matrices of features. A dense set of spatio-temporal feature vectors is computed from video to provide a localized description of the action, and subsequently aggregated in an empirical covariance matrix to compactly represent the action. Two supervised learning methods for action recognition are developed using feature covariance matrices. Common to both methods is the transformation of the classification problem in the closed convex cone of covariance matrices into an equivalent problem in the vector space of symmetric matrices via the matrix logarithm. The first method applies nearest-neighbor classification using a suitable Riemannian metric for covariance matrices. The second method approximates the logarithm of a query covariance matrix by a sparse linear combination of the logarithms of training covariance matrices. The action label is then determined from the sparse coefficients. Both methods achieve state-of-the-art classification performance on several datasets, and are robust to action variability, viewpoint changes, and low object resolution. The proposed framework is conceptually simple and has low storage and computational requirements making it attractive for real-time implementation. PMID:23508265
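
    The core trick, mapping covariance descriptors through the matrix logarithm so that ordinary Euclidean distance implements the log-Euclidean metric, fits in a few lines. A minimal sketch (illustrative function names; a small ridge term keeps the covariance safely positive definite):

        import numpy as np
        from scipy.linalg import logm

        def covariance_descriptor(features):
            # Empirical covariance of feature vectors, shape (n_samples, d).
            d = features.shape[1]
            return np.cov(features, rowvar=False) + 1e-6 * np.eye(d)

        def log_embed(C):
            # Matrix log maps SPD matrices to the vector space of symmetric
            # matrices; Euclidean distance there is the log-Euclidean metric.
            return np.real(logm(C)).ravel()

        def nearest_neighbor_label(query_C, train_Cs, train_labels):
            q = log_embed(query_C)
            dists = [np.linalg.norm(q - log_embed(C)) for C in train_Cs]
            return train_labels[int(np.argmin(dists))]

    The paper's second method works in the same log-matrix space but replaces nearest neighbor with a sparse linear coding of the query against the training logs.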

  15. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
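
    A residual bootstrap around a kernel regression is one concrete way to realize such intervals. A minimal sketch (a simplified variant of the procedure, assuming a 1-D input and a Nadaraya-Watson smoother with bandwidth h):

        import numpy as np

        def nw_smooth(x_train, y_train, x_eval, h):
            # Nadaraya-Watson kernel regression with a Gaussian kernel.
            w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
            return (w * y_train).sum(axis=1) / w.sum(axis=1)

        def bootstrap_pi(x, y, x_eval, h, alpha=0.05, n_boot=500, seed=0):
            rng = np.random.default_rng(seed)
            fit = nw_smooth(x, y, x, h)
            resid = y - fit
            preds = np.empty((n_boot, len(x_eval)))
            for b in range(n_boot):
                y_b = fit + rng.choice(resid, size=len(y))          # refit noise
                preds[b] = (nw_smooth(x, y_b, x_eval, h)
                            + rng.choice(resid, size=len(x_eval)))  # new-obs noise
            return np.quantile(preds, [alpha / 2, 1 - alpha / 2], axis=0)

    An observed output falling outside its interval is then flagged as anomalous, conditioned on the input, which is the detection rule the paper applies to aviation data.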

  16. Non-Parametric Bayesian Human Motion Recognition Using a Single MEMS Tri-Axial Accelerometer

    PubMed Central

    Ahmed, M. Ejaz; Song, Ju Bin

    2012-01-01

    In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Mean (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method. PMID:23201992

  18. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. The aim here is to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects, and on the surface it would appear to be reasonably sound. However, the difficulties involved center on the normality problem and on the basic homogeneous-variance assumption made in standard analysis of variance. These difficulties are discussed and guidelines are given for using the methods.
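
    The homogeneous-variance concern raised above can be probed without normality assumptions. A hedged sketch using SciPy: the Brown-Forsythe variant of Levene's test and the rank-based Fligner-Killeen test, both applied to heavy-tailed data where normal-theory analysis of sample variances would be unreliable (all data here are synthetic).

```python
# Robust alternatives to normal-theory tests of variance homogeneity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.standard_t(df=3, size=50) for _ in range(4)]  # heavy tails

print(stats.levene(*groups, center='median'))  # Brown-Forsythe variant
print(stats.fligner(*groups))                  # nonparametric rank-based test
```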

  19. A fresh look at the use of nonparametric analysis in actimetry.

    PubMed

    Gonçalves, B S B; Adamowicz, Taísa; Louzada, Fernando Mazzilli; Moreno, Claudia Roberta; Araujo, John Fontenele

    2015-04-01

    Actimetry has been used to estimate the sleep-wake cycle instead of the rest-activity rhythm. Although algorithms for assessing sleep from actimetry data exist, it is useful to analyze the rest-activity rhythm using nonparametric methods. This would then allow rest-activity rhythm stability, fragmentation and amplitude to be quantified. In addition, sleep and wakefulness efficiency can be quantified separately. These variables have been used in studies analyzing the effect of age, diseases and their respective treatments on human circadian rhythmicity. In this study, we carried out a comprehensive analysis of the main results from published articles and devised a functional model of interaction among the several components involved in generating the sleep-wake cycle. The nonparametric variables render it possible to infer the main characteristics of circadian rhythms, such as synchronization with a zeitgeber, and its amplitude and robustness. PMID:25065908
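
    The stability and fragmentation variables referred to above are conventionally quantified as interdaily stability (IS) and intradaily variability (IV). The sketch below implements the standard formulas, which may differ in detail from the authors' pipeline, and assumes hourly activity counts spanning whole days.

```python
# Nonparametric actimetry variables from hourly activity counts.
import numpy as np

def is_iv(activity, period=24):
    """activity: 1-D array of hourly counts; length must be a multiple of 24."""
    x = np.asarray(activity, dtype=float)
    n = len(x)
    mean = x.mean()
    hourly_means = x.reshape(-1, period).mean(axis=0)  # average daily profile
    # IS: variance of the average 24-h profile relative to total variance
    # (1 = perfectly stable rhythm, 0 = Gaussian noise).
    IS = (n * np.sum((hourly_means - mean) ** 2)) / (period * np.sum((x - mean) ** 2))
    # IV: mean squared successive difference relative to total variance
    # (higher values indicate a more fragmented rhythm).
    IV = (n * np.sum(np.diff(x) ** 2)) / ((n - 1) * np.sum((x - mean) ** 2))
    return IS, IV
```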

  20. Structural covariance networks in the mouse brain.

    PubMed

    Pagani, Marco; Bifone, Angelo; Gozzi, Alessandro

    2016-04-01

    The presence of networks of correlation between regional gray matter volume as measured across subjects in a group of individuals has been consistently described in several human studies, an approach termed structural covariance MRI (scMRI). Complementary to prevalent brain mapping modalities like functional and diffusion-weighted imaging, the approach can provide valuable insights into the mutual influence of trophic and plastic processes in health and pathological states. To investigate whether analogous scMRI networks are present in lower mammalian species amenable to genetic and experimental manipulation, such as the laboratory mouse, we employed high resolution morphoanatomical MRI in a large cohort of genetically-homogeneous wild-type mice (C57BL/6J) and mapped scMRI networks using a seed-based approach. We show that the mouse brain exhibits robust homotopic scMRI networks in both primary and associative cortices, a finding corroborated by independent component analyses of cortical volumes. Subcortical structures also showed highly symmetric inter-hemispheric correlations, with evidence of distributed antero-posterior networks in diencephalic regions of the thalamus and hypothalamus. Hierarchical cluster analysis revealed six identifiable clusters of cortical and sub-cortical regions corresponding to previously described neuroanatomical systems. Our work documents the presence of homotopic cortical and subcortical scMRI networks in the mouse brain, thus supporting the use of this species to investigate the elusive biological and neuroanatomical underpinnings of scMRI network development and its derangement in neuropathological states. The identification of scMRI networks in genetically homogeneous inbred mice is consistent with the emerging view of a key role of environmental factors in shaping these correlational networks. PMID:26802512

  1. Recurrence Analysis of Eddy Covariance Fluxes

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael

    2015-04-01

    The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide network of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at a 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet-standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based on both Recurrence Plots and Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five different questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
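
    The core object of RA is easy to state: a recurrence plot marks the pairs of times at which the embedded trajectory returns to the same neighborhood. A minimal sketch (illustrative parameters, not the paper's configuration) follows.

```python
# Recurrence plot of a time series via time-delay embedding.
import numpy as np

def recurrence_plot(x, dim=3, delay=1, eps=None):
    n = len(x) - (dim - 1) * delay
    # Time-delay embedding: rows are delay vectors.
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = np.percentile(d, 10)  # threshold fixing a ~10% recurrence rate
    return (d <= eps).astype(int)

R = recurrence_plot(np.sin(np.linspace(0, 20 * np.pi, 500)))
recurrence_rate = R.mean()  # simplest RA measure; others count diagonal lines
```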

  2. Covariant hyperbolization of force-free electrodynamics

    NASA Astrophysics Data System (ADS)

    Carrasco, F. L.; Reula, O. A.

    2016-04-01

    Force-free electrodynamics (FFE) is a nonlinear system of equations modeling the evolution of the electromagnetic field in the presence of a magnetically dominated relativistic plasma. This configuration arises in several astrophysical scenarios, which represent exciting laboratories to understand physics in extreme regimes. We show that this system, when restricted to the correct constraint submanifold, is symmetric hyperbolic. In numerical applications, it is not feasible to keep the system in that submanifold, and so it is necessary to analyze its structure first in the tangent space of that submanifold and then in a whole neighborhood of it. As has been shown [1], a direct (or naive) formulation of this system (in the whole tangent space) results in a weakly hyperbolic system of evolution equations for which well-posedness of the initial value formulation does not follow. Using the generalized symmetric hyperbolic formalism of Geroch [2], we introduce here a covariant hyperbolization for the FFE system. In fact, in analogy to the usual Maxwell case, a complete family of hyperbolizers is found, both for the restricted system on the constraint submanifold as well as for a suitably extended system defined in a whole neighborhood of it. A particular symmetrizer among the family is then used to write down the pertaining evolution equations, in a generic (3+1) decomposition on a background spacetime. Interestingly, it turns out that for a particular choice of the lapse and shift functions of the foliation, our symmetrized system reduces to the one found in [1]. Finally, we analyze the characteristic structure of the resulting evolution system.

  3. Inflation in general covariant theory of gravity

    SciTech Connect

    Huang, Yongqing; Wang, Anzhong; Wu, Qiang

    2012-10-01

    In this paper, we study inflation in the framework of the nonrelativistic general covariant theory of Hořava-Lifshitz gravity with the projectability condition and an arbitrary coupling constant λ. We find that the Friedmann-Robertson-Walker (FRW) universe is necessarily flat in such a setup. We work out explicitly the linear perturbations of the flat FRW universe without specifying a particular gauge, and find that the perturbations are different from those obtained in general relativity, because of the presence of the high-order spatial derivative terms. Applying the general formulas to a single scalar field, we show that in the sub-horizon regions, the metric and scalar field are tightly coupled and have the same oscillating frequencies. In the super-horizon regions, the perturbations become adiabatic, and the comoving curvature perturbation is constant. We also calculate the power spectra and indices of both the scalar and tensor perturbations, and express them explicitly in terms of the slow-roll parameters and the coupling constants of the high-order spatial derivative terms. In particular, we find that the perturbations, of both scalar and tensor, are almost scale-invariant and, with some reasonable assumptions on the coupling coefficients, the spectral index of the tensor perturbation is the same as that given in the minimal scenario in general relativity (GR), whereas the index for the scalar perturbation in general depends on λ and is different from the standard GR value. The ratio of the scalar and tensor power spectra depends on the high-order spatial derivative terms, and can differ from that of GR significantly.

  4. Precision adjustable stage

    DOEpatents

    Cutburth, Ronald W.; Silva, Leonard L.

    1988-01-01

    An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.

  5. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  6. The importance of covariance in nuclear data uncertainty propagation studies

    SciTech Connect

    Benstead, J.

    2012-07-01

    A study has been undertaken to investigate what proportion of the uncertainty propagated through plutonium critical assembly calculations is due to the covariances between the fission cross section in different neutron energy groups. The calculated uncertainties on k_eff show that the presence of covariances between the cross section in different neutron energy groups accounts for approximately 27-37% of the propagated uncertainty due to the plutonium fission cross section. This study also confirmed the validity of employing the sandwich equation, with associated sensitivity and covariance data, instead of a Monte Carlo sampling approach, for calculating uncertainties in linearly varying systems. (authors)
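
    The sandwich equation mentioned above propagates a cross-section covariance C through a sensitivity vector S as var(k_eff) = SᵀCS. A toy sketch with invented numbers for three energy groups:

```python
# Sandwich-rule uncertainty propagation (illustrative values only).
import numpy as np

S = np.array([0.02, 0.05, 0.03])      # sensitivities of k_eff per energy group
sigma = np.array([0.04, 0.03, 0.05])  # relative cross-section uncertainties
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.6],
                 [0.2, 0.6, 1.0]])    # inter-group correlations
C = np.outer(sigma, sigma) * corr     # covariance matrix

var_keff = S @ C @ S                  # zeroing corr's off-diagonals here shows
print(np.sqrt(var_keff))              # how much the covariances contribute
```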

  7. Hawking radiation, covariant boundary conditions, and vacuum states

    SciTech Connect

    Banerjee, Rabin; Kulkarni, Shailesh

    2009-04-15

    The basic characteristics of the covariant chiral current and the covariant chiral energy-momentum tensor are obtained from a chiral effective action. These results are used to justify the covariant boundary condition used in recent approaches of computing the Hawking flux from chiral gauge and gravitational anomalies. We also discuss a connection of our results with the conventional calculation of nonchiral currents and stress tensors in different (Unruh, Hartle-Hawking and Boulware) states.

  8. Estimation of the covariance matrix of macroscopic quantum states

    NASA Astrophysics Data System (ADS)

    Ruppert, László; Usenko, Vladyslav C.; Filip, Radim

    2016-05-01

    For systems analogous to a linear harmonic oscillator, the simplest way to characterize the state is by a covariance matrix containing the symmetrically ordered moments of operators analogous to position and momentum. We show that using Stokes-like detectors without direct access to either position or momentum, the estimation of the covariance matrix of a macroscopic signal is still possible using interference with a classical noisy and low-intensity reference. Such a detection technique will allow one to estimate macroscopic quantum states of electromagnetic radiation without a coherent high-intensity local oscillator. It can be directly applied to estimate the covariance matrix of macroscopically bright squeezed states of light.

  9. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in the improvement of hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the class distributions are not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for the classification of hyperspectral images. The proposed method has no free parameters, and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea for determining the local mean. Second, two new weighting functions are used: samples close to class boundaries receive more weight in the formation of the between-class scatter matrix, and samples close to the class mean receive more weight in the formation of the within-class scatter matrix. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than several other nonparametric and parametric feature extraction methods.
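
    Scatter-matrix feature extraction of this general kind reduces to a generalized eigenproblem. The sketch below uses plain (unweighted) between- and within-class scatter, i.e. classical LDA-style extraction; the paper's boundary- and mean-based weighting functions are not reproduced here.

```python
# Scatter-matrix linear feature extraction (unweighted sketch).
import numpy as np
from scipy.linalg import eigh

def scatter_features(X, y, n_features):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class
        Sw += (Xc - mc).T @ (Xc - mc)                    # within-class
    # Solve Sb v = lambda Sw v; keep the leading eigenvectors.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    order = np.argsort(vals)[::-1]
    return X @ vecs[:, order[:n_features]]
```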

  10. Nonparametric simulation-based statistics for detecting linkage in general pedigrees

    SciTech Connect

    Davis, S.; Schroeder, M.; Weeks, D.E.; Goldin, L.R.

    1996-04-01

    We present here four nonparametric statistics for linkage analysis that test whether pairs of affected relatives share marker alleles more often than expected. These statistics are based on simulating the null distribution of a given statistic conditional on the unaffecteds' marker genotypes. Each statistic uses a different measure of marker sharing: the SimAPM statistic uses the simulation-based affected-pedigree-member measure based on identity-by-state (IBS) sharing. The SimKIN (kinship) measure is 1.0 for identity-by-descent (IBD) sharing, 0.0 for no IBD sharing, and the kinship coefficient when the IBD status is ambiguous. The simulation-based IBD (SimIBD) statistic uses a recursive algorithm to determine the probability of two affecteds sharing a specific allele IBD. The SimISO statistic is identical to SimIBD, except that it also measures marker similarity between unaffected pairs. We evaluated our statistics on data simulated under different two-locus disease models, comparing our results to those obtained with several other nonparametric statistics. Use of IBD information produces dramatic increases in power over the SimAPM method, which uses only IBS information. The power of our best statistic in most cases meets or exceeds the power of the other nonparametric statistics. Furthermore, our statistics perform comparisons between all affected relative pairs within general pedigrees and are not restricted to sib pairs or nuclear families. 32 refs., 5 figs., 6 tabs.

  11. Validation of drop plate technique for bacterial enumeration by parametric and nonparametric tests

    PubMed Central

    Naghili, Hossein; Tajik, Hossein; Mardani, Karim; Razavi Rouhani, Seyed Mehdi; Ehsani, Ali; Zare, Payman

    2013-01-01

    The drop plate technique is preferable to the spread plate procedure because it requires less time, less media, less effort, and less incubator space, and is less labor intensive. The objective of this research was to compare the accuracy and fidelity of the drop plate method versus the spread plate method using parametric and nonparametric statistical tests. For bacterial enumeration by the drop and spread plate methods, successive dilutions of a second subculture of Lactobacillus casei and Salmonella Typhimurium were transferred to selective agar. The agreement between the two methods was evaluated statistically. A comparison of mean values (parametric unpaired t-test) at the 95 percent confidence level did not reject the null hypothesis, meaning that equality of the means could not be ruled out. Because the data were only approximately Gaussian, a nonparametric method, the Mann-Whitney test (the nonparametric equivalent of the t-test), was also used; it indicated that the medians obtained from the two methods were equivalent. Spearman's rho correlation coefficients (r) between the two methods for enumeration of S. Typhimurium and L. casei were 0.62 and 0.87, respectively, representing moderately strong and strong relationships. Moreover, there was a significant and strong positive correlation (p < 0.001) between the spread and drop plate procedures. For these reasons, the spread plate method can be replaced by the drop plate method. PMID:25653794
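
    The statistical comparison described above is straightforward to reproduce with SciPy; the counts below are invented stand-ins for log10 CFU values, so only the calls, not the numbers, mirror the study.

```python
# Parametric and nonparametric comparison of two enumeration methods.
import numpy as np
from scipy import stats

drop = np.array([6.1, 6.3, 6.0, 6.2, 6.4, 6.1])     # illustrative log10 CFU/mL
spread = np.array([6.2, 6.2, 6.1, 6.3, 6.3, 6.0])

print(stats.ttest_ind(drop, spread))      # parametric comparison of means
print(stats.mannwhitneyu(drop, spread))   # nonparametric rank comparison
print(stats.spearmanr(drop, spread))      # Spearman's rho between methods
```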

  12. LOCAL INDEPENDENCE FEATURE SCREENING FOR NONPARAMETRIC AND SEMIPARAMETRIC MODELS BY MARGINAL EMPIRICAL LIKELIHOOD

    PubMed Central

    Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao

    2015-01-01

    We consider an independence feature screening technique for identifying explanatory variables that locally contribute to the response variable in high-dimensional regression analysis. Without requiring a specific parametric form of the underlying data model, our approach accommodates a wide spectrum of nonparametric and semiparametric model families. To detect the local contributions of explanatory variables, our approach constructs empirical likelihood locally in conjunction with marginal nonparametric regressions. Since our approach actually requires no estimation, it is advantageous in scenarios such as the single-index models where even specification and identification of a marginal model is an issue. By automatically incorporating the level of variation of the nonparametric regression and directly assessing the strength of data evidence supporting local contribution from each explanatory variable, our approach provides a unique perspective for solving feature screening problems. Theoretical analysis shows that our approach can handle data dimensionality growing exponentially with the sample size. With extensive theoretical illustrations and numerical examples, we show that the local independence screening approach performs promisingly. PMID:27242388

  13. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    NASA Astrophysics Data System (ADS)

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
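
    The quantile-mapping step at the heart of QM-GW matches the empirical CDF of the satellite product to that of the gauges over a calibration period. A minimal sketch follows; the Gaussian-weighted spatial interpolation of the full framework is omitted, and all names are illustrative.

```python
# Empirical quantile mapping for bias adjustment.
import numpy as np

def quantile_map(sat_cal, gauge_cal, sat_new):
    """Map new satellite values through the satellite CDF and the
    inverse gauge CDF, both estimated on the calibration period."""
    sat_sorted = np.sort(sat_cal)
    gauge_sorted = np.sort(gauge_cal)
    # Non-exceedance probability of each new value under the satellite CDF...
    p = np.clip(np.searchsorted(sat_sorted, sat_new) / len(sat_sorted), 0.0, 1.0)
    # ...pushed through the inverse empirical gauge CDF.
    return np.quantile(gauge_sorted, p)
```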

  14. Randomization, matching, and propensity scores in the design and analysis of experimental studies with measured baseline covariates.

    PubMed

    Loux, Travis M

    2015-02-20

    In many experimental situations, researchers have information on a number of covariates prior to randomization. This information can be used to balance treatment assignment with respect to these covariates as well as in the analysis of the outcome data. In this paper, we investigate the use of propensity scores in both of these roles. We also introduce a randomization procedure in which the balance of all measured covariates is approximately indexed by the variance of the empirical propensity scores and randomization is restricted to those permutations with the least variable propensity scores. This procedure is compared with recently proposed methods in terms of resulting covariate balance and estimation efficiency. Properties of the estimators resulting from each procedure are compared with estimates which incorporate the propensity score in the analysis stage. Simulation results show that analytical adjustment for the propensity score yields results on par with those obtained through restricted randomization procedures and can be used in conjunction with such procedures to further improve inferential efficiency. PMID:25384851
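
    A hedged sketch of the restricted-randomization idea described above: generate many candidate assignments, score each by the variance of its estimated propensity scores, and randomize among the least variable ones. The acceptance fraction and all names are illustrative; an even number of units is assumed.

```python
# Restricted randomization indexed by propensity-score variance.
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_variable_assignment(X, n_candidates=1000, keep=0.05, seed=0):
    """X: (n, p) baseline covariates, n even; returns one 0/1 assignment."""
    rng = np.random.default_rng(seed)
    n = len(X)
    scored = []
    for _ in range(n_candidates):
        t = rng.permutation(np.repeat([0, 1], n // 2))
        ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
        scored.append((np.var(ps), t))       # low variance ~ good balance
    scored.sort(key=lambda pair: pair[0])
    accepted = scored[: max(1, int(keep * n_candidates))]
    return accepted[rng.integers(len(accepted))][1]
```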

  15. Progress of Covariance Evaluation at the China Nuclear Data Center

    SciTech Connect

    Xu, R.; Zhang, Q.; Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B.; Tang, G.

    2015-01-15

    Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known least-squares approach, a method based on the analysis of the sources of experimental uncertainties is introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and an example of n + ⁹⁰Zr is given to illustrate the whole procedure. It is proven that the accuracy of measurements can be properly incorporated into the covariance and that the long-standing small-uncertainty problem can be avoided.

  16. Covariance Matrix Evaluations for Independent Mass Fission Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.; Sumini, M.

    2015-01-01

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of the ²³⁵U(nth, f) and ²³⁹Pu(nth, f) reactions.

  18. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultraviolet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived; these constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced-order model is not the correct covariance of the EKF estimation error, so a true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC-generated estimates. The results of such an analysis are presented, indicating both the performance and the sensitivity of the OBC EKF.

  19. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.

  20. Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices

    PubMed Central

    Cai, Tony; Ma, Zongming; Wu, Yihong

    2014-01-01

    This paper considers a sparse spiked covariance matrix model in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace as well as the minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those for estimating other structured covariance matrices such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves a gap in a recent paper by Berthet and Rigollet [2], where the special case of rank one is considered. PMID:26257453

  1. Nonlinear effects in the correlation of tracks and covariance propagation

    NASA Astrophysics Data System (ADS)

    Sabol, C.; Hill, K.; Alfriend, K.; Sukut, T.

    2013-03-01

    Even though methods exist for the nonlinear propagation of the covariance, the propagation of the covariance in current operational programs is based on the state transition matrix of the first variational equations, and is thus a linear propagation. If the measurement errors are zero-mean Gaussian, the orbit errors, statistically represented by the covariance, are Gaussian. When the orbit errors become too large they are no longer Gaussian and are no longer represented by the covariance. One use of the covariance is the association of uncorrelated tracks (UCTs). A UCT is an object tracked by a space surveillance system that does not correlate to another object in the space object data base. For an object to be entered into the data base, three or more tracks must be correlated. Associating UCTs is a major challenge for a space surveillance system, since every object entered into the space object catalog begins as a UCT. It has been proved that if the orbit errors are Gaussian, the error ellipsoid represented by the covariance is the optimum association volume. When the time between tracks becomes large, hours or even days, the orbit errors can become large and are no longer Gaussian, and this has a negative effect on the association of UCTs. This paper further investigates the nonlinear effects on the accuracy of the covariance for use in correlation. The use of the best coordinate system and of the unscented Kalman filter (UKF) for providing a more accurate covariance is investigated, along with an assessment of how these approaches would improve the ability to correlate tracks that are further separated in time.
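
    The linear propagation discussed above is compactly P₁ = ΦP₀Φᵀ + Q, with Φ the state transition matrix of the first variational equations; it is exact only while the dynamics remain effectively linear over the gap between tracks. A toy sketch:

```python
# Linear covariance propagation through a state transition matrix.
import numpy as np

dt = 60.0                         # propagation step, seconds (illustrative)
Phi = np.array([[1.0, dt],        # toy 1-D position/velocity STM
                [0.0, 1.0]])
P0 = np.diag([100.0, 0.01])       # initial covariance (m^2, m^2/s^2)
Q = np.diag([0.0, 1e-6])          # process noise, illustrative

P1 = Phi @ P0 @ Phi.T + Q         # covariance after one step
```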

  2. Are the invariance principles really truly Lorentz covariant?

    SciTech Connect

    Arunasalam, V.

    1994-02-01

    It is shown that some sections of the invariance (or symmetry) principles such as the space reversal symmetry (or parity P) and time reversal symmetry T (of elementary particle and condensed matter physics, etc.) are not really truly Lorentz covariant. Indeed, I find that the Dirac-Wigner sense of Lorentz invariance is not in full compliance with the Einstein-Minkowski requirements of the Lorentz covariance of all physical laws (i.e., the world space Mach principle).

  3. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance matrix with a conditional sparsity structure and fast-diverging eigenvalues. By assuming a sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
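
    A compact sketch of the POET construction described above: strip out the top-K principal components, threshold the residual (the principal orthogonal complement), and add the low-rank part back. A hard, universal threshold is used for brevity; the paper allows adaptive thresholding.

```python
# POET-style covariance estimation (simplified sketch).
import numpy as np

def poet(X, K, tau):
    """X: (n, p) data; K: number of factors; tau: hard threshold."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)                    # ascending eigenvalues
    low_rank = (vecs[:, -K:] * vals[-K:]) @ vecs[:, -K:].T
    resid = S - low_rank                              # orthogonal complement
    # Threshold off-diagonal residual entries; keep the diagonal intact.
    thresholded = np.where(np.abs(resid) >= tau, resid, 0.0)
    np.fill_diagonal(thresholded, np.diag(resid))
    return low_rank + thresholded
```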

  4. Covariance fitting of highly-correlated data in lattice QCD

    NASA Astrophysics Data System (ADS)

    Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong

    2013-07-01

    We address a frequently-asked question on the covariance fitting of highly-correlated data such as our B_K data based on the SU(2) staggered chiral perturbation theory. The essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the market, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting, but the rest work well. In our case of the B_K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense. However, interpreting the meaning of χ² is easier, in a theoretical sense, for the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool to check the systematic error caused by the covariance fitting procedure.
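
    The quantity at stake in such fits is χ² = rᵀC⁻¹r, where r is the data-minus-model residual. Decomposed over the eigenmodes of C, small eigenvalues visibly amplify tiny residuals, which is the instability the prescriptions above try to tame. A hedged sketch of that diagnostic (not any of the paper's specific prescriptions):

```python
# Eigenmode decomposition of a correlated chi-square.
import numpy as np

def correlated_chi2(data, model, C):
    """Return total chi^2 and its per-eigenmode contributions."""
    r = data - model
    vals, vecs = np.linalg.eigh(C)
    proj = vecs.T @ r
    contributions = proj**2 / vals   # small eigenvalues dominate chi^2
    return contributions.sum(), contributions
```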

  5. Gaussian covariance matrices for anisotropic galaxy clustering measurements

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio

    2016-04-01

    Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.

  6. New capabilities for processing covariance data in resonance region

    SciTech Connect

    Wiarda, D.; Dunn, M. E.; Greene, N. M.; Larson, N. M.; Leal, L. C.

    2006-07-01

    The AMPX [1] code system is a modular system of FORTRAN computer programs that relate to nuclear analysis, with a primary emphasis on tasks associated with the production and use of multigroup and continuous-energy cross sections. The module PUFF-III within this code system handles the creation of multigroup covariance data from ENDF information. The resulting covariances are saved in the COVERX format [2]. We recently expanded the capabilities of PUFF-III to include full handling of covariance data in the resonance region (resolved as well as unresolved). The new program handles all resonance covariance formats in File 32, except for the long-range covariance subsections. The new program has been named PUFF-IV. To our knowledge, PUFF-IV is the first processing code that can address both the new ENDF format for resolved resonance parameters and the new ENDF 'compact' covariance format. The existing code base was rewritten in Fortran 90 to allow for a more modular design. Results are identical between the new and old versions within rounding errors, where applicable. Automatic test cases have been added to ensure that consistent results are generated across computer systems. (authors)

  7. Summary of the Workshop on Neutron Cross Section Covariances

    SciTech Connect

    Smith, Donald L.

    2008-12-15

    A Workshop on Neutron Cross Section Covariances was held from June 24-27, 2008, in Port Jefferson, New York. This Workshop was organized by the National Nuclear Data Center, Brookhaven National Laboratory, to provide a forum for reporting on the status of the growing field of neutron cross section covariances for applications and for discussing future directions of the work in this field. The Workshop focused on the following four major topical areas: covariance methodology, recent covariance evaluations, covariance applications, and user perspectives. Attention was given to the entire spectrum of neutron cross section covariance concerns ranging from light nuclei to the actinides, and from the thermal energy region to 20 MeV. The papers presented at this conference explored topics ranging from fundamental nuclear physics concerns to very specific applications in advanced reactor design and nuclear criticality safety. This paper provides a summary of this workshop. Brief comments on the highlights of each Workshop contribution are provided. In addition, a perspective on the achievements and shortcomings of the Workshop as well as on the future direction of research in this field is offered.

  8. The Performance Analysis Based on SAR Sample Covariance Matrix

    PubMed Central

    Erten, Esra

    2012-01-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present, in general, zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. In practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of SAR images is presented in a form simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
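
    A hedged sketch of the detection statistic discussed above: form the sample covariance of a resolution cell from L looks and take its largest eigenvalue; the data below are synthetic circular Gaussian samples, and the threshold step is left out.

```python
# Maximum eigenvalue of a multi-channel SAR sample covariance.
import numpy as np

def max_eigenvalue_statistic(looks):
    """looks: (L, p) complex samples of one cell; returns largest eigenvalue."""
    L = looks.shape[0]
    C_hat = looks.T @ looks.conj() / L    # sample covariance (Wishart-distributed)
    return np.linalg.eigvalsh(C_hat)[-1]  # eigvalsh returns real, ascending

rng = np.random.default_rng(0)
z = (rng.standard_normal((9, 3)) + 1j * rng.standard_normal((9, 3))) / np.sqrt(2)
stat = max_eigenvalue_statistic(z)        # compare against a detection threshold
```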

  9. Continuously adjustable Pulfrich spectacles

    NASA Astrophysics Data System (ADS)

    Jacobs, Ken; Karpf, Ron

    2011-03-01

    A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie, which is most scenes, to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion to any moving scene in any 2-D movie. Without the glasses, the movie will appear as a normal 2-D image. CAPS works on any viewing device and with any distribution medium. CAPS is appropriate for viewing Internet-streamed movies in 3-D.

  10. Subsea adjustable choke valves

    SciTech Connect

    Cyvas, M.K.

    1989-08-01

    With growing emphasis on deepwater wells and marginal offshore fields, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remotely-operated-vehicle (ROV) interfaces. These five facets are reviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to the efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. The major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.

  11. Co-variables in first trimester maternal serum screening.

    PubMed

    de Graaf, I M; Cuckle, H S; Pajkrt, E; Leschot, N J; Bleker, O P; van Lith, J M

    2000-03-01

    The objective of this study was to determine the influence of maternal weight, maternal smoking habits, gravidity, parity and fetal gender on the levels of the maternal serum markers used in first trimester screening for Down syndrome. A total of 2449 singleton unaffected pregnancies from two centres were studied. Maternal serum free beta-human chorionic gonadotrophin (hCG) and alpha-fetoprotein (AFP) concentrations had been measured in all pregnancies, and pregnancy-associated plasma protein (PAPP)-A levels had been measured in 924. All results were expressed as multiples of the gestation-specific median (MoM) values after regression, using each centre's own medians. Information on maternal weight was available in 2259 pregnancies, on self-reported current cigarette smoking in 1364 (of whom 117 (8.6%) were smokers), on gravidity in 1371, on parity in 1303 and on fetal gender in 253. All three markers showed a statistically significant negative association with maternal weight (p<0.0005), and in the subsequent analyses MoM values were weight-adjusted using standard methods. The median PAPP-A level in smokers was 0.81 MoM, a significant reduction (p<0.005); free beta-hCG was also reduced (median 0.89 MoM) but not significantly (p=0.17), and AFP was unaltered. The median AFP level in primigravidas was highly significantly greater than that in multigravid women (p<0.0005). For PAPP-A the reverse effect was seen, but it did not reach statistical significance (p=0.15), and there was no effect for free beta-hCG. Results of a similar magnitude and direction were found for parity. The median level of free beta-hCG was higher (p=0.0005), and the median AFP lower, in female pregnancies. Maternal weight and, for PAPP-A, maternal smoking are important first trimester screening co-variables. Gravidity, parity and fetal gender also seem to influence one or more first trimester markers. PMID:10719318

  12. Individual and Contextual Effects of School Adjustment on Adolescent Alcohol Use

    PubMed Central

    Stanley, Linda R.; Edwards, Ruth W.; Harkabus, Lindsey C.; Chapin, Laurie A.

    2010-01-01

    This paper examines the effect of a student's own school adjustment as well as the contextual level of school adjustment (the normative level of school adjustment among students in a school) on students' self-reported use of alcohol. Using a dataset of 43,465 male and female 8th grade students from 349 schools across the contiguous United States who participated in a national study of substance use in rural communities between 1996 and 2000, multilevel latent covariate models were used to disentangle the individual-level and contextual effects of three school adjustment variables (i.e., school bonding, behavior at school, and friend's school bonding) on alcohol use. All three school adjustment factors were significant predictors of alcohol use both within and between schools. Furthermore, this study demonstrated a strong contextual effect: students who attended schools where the overall level of school adjustment was higher reported lower levels of alcohol use even after taking their own school adjustment into account. The results demonstrate the importance of both a student's own level of school adjustment and the normative level of school adjustment among students in the school for an adolescent's use of alcohol. Differences in school adjustment across schools were quite strongly related to an adolescent's own alcohol use, indicating that school adjustment is an important aspect of school climate. Initiatives aimed at improving school climate may have beneficial effects on students' alcohol use. PMID:19242802

  13. 77 FR 40387 - Price Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... Price Adjustment AGENCY: Postal Regulatory Commission. ACTION: Notice. SUMMARY: The Commission is noticing a recently filed Postal Service request to adjust prices for several market dominant products... announcing its intent to adjust prices for several market dominant products within First-Class Mail...

  14. A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates

    PubMed Central

    Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne

    2013-01-01

    The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implemented in an R package, SpatioTemporal, available on CRAN. The model is used by the EPA funded Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) to produce estimates of ambient air pollution; MESA Air uses the estimates to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. In this paper we use the model to predict long-term average concentrations of NOx in the Los Angeles area during a ten year period. Predictions are based on measurements from the EPA Air Quality System, MESA Air specific monitoring, and output from a source dispersion model for traffic related air pollution (Caline3QHCR). Accuracy in predicting long-term average concentrations is evaluated using an elaborate cross-validation setup that accounts for a sparse spatio-temporal sampling pattern in the data, and adjusts for temporal effects. The predictive ability of the model is good with cross-validated R2 of approximately 0.7 at subject sites. Replacing four geographic covariate indicators of traffic density with the Caline3QHCR dispersion model output resulted in very similar prediction accuracy from a more parsimonious and more interpretable model. Adding traffic-related geographic covariates to the model that included Caline3QHCR did not further improve the prediction accuracy. PMID:25264424

  16. Differences in structural covariance brain networks between behavioral variant frontotemporal dementia and Alzheimer's disease.

    PubMed

    Hafkemeijer, Anne; Möller, Christiane; Dopper, Elise G P; Jiskoot, Lize C; van den Berg-Huysmans, Annette A; van Swieten, John C; van der Flier, Wiesje M; Vrenken, Hugo; Pijnenburg, Yolande A L; Barkhof, Frederik; Scheltens, Philip; van der Grond, Jeroen; Rombouts, Serge A R B

    2016-03-01

    Disease-specific patterns of gray matter atrophy in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) overlap with distinct structural covariance networks (SCNs) in cognitively healthy controls. This suggests that both types of dementia target specific structural networks. Here, we study SCNs in AD and bvFTD. We used structural magnetic resonance imaging data of 31 AD patients, 24 bvFTD patients, and 30 controls from two centers specialized in dementia. Ten SCNs were defined based on structural covariance of gray matter density using independent component analysis. We studied group differences in SCNs using F-tests, with Bonferroni corrected t-tests, adjusted for age, gender, and study center. Associations with cognitive performance were studied using linear regression analyses. Cross-sectional group differences were found in three SCNs (all P < 0.0025). In bvFTD, we observed decreased anterior cingulate network integrity compared with AD and controls. Patients with AD showed decreased precuneal network integrity compared with bvFTD and controls, and decreased hippocampal network and anterior cingulate network integrity compared with controls. In AD, we found an association between precuneal network integrity and global cognitive performance (P = 0.0043). Our findings show that AD and bvFTD target different SCNs. The comparison of both types of dementia showed decreased precuneal (i.e., default mode) network integrity in AD and decreased anterior cingulate (i.e., salience) network integrity in bvFTD. This confirms the hypothesis that AD and bvFTD have distinct anatomical networks of degeneration and shows that structural covariance gives valuable insights in the understanding of network pathology in dementia. PMID:26660857

  17. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. PMID:23379851

  18. Covariance matching based adaptive unscented Kalman filter for direct filtering in INS/GNSS integration

    NASA Astrophysics Data System (ADS)

    Meng, Yang; Gao, Shesheng; Zhong, Yongmin; Hu, Gaoge; Subic, Aleksandar

    2016-03-01

    The use of the direct filtering approach for INS/GNSS integrated navigation introduces nonlinearity into the system state equation. As the unscented Kalman filter (UKF) is a promising method for nonlinear problems, an obvious solution is to incorporate the UKF concept in the direct filtering approach to address the nonlinearity involved in INS/GNSS integrated navigation. However, the performance of the standard UKF is dependent on the accurate statistical characterizations of system noise. If the noise distributions of inertial instruments and GNSS receivers are not appropriately described, the standard UKF will produce deteriorated or even divergent navigation solutions. This paper presents an adaptive UKF with noise statistic estimator to overcome the limitation of the standard UKF. According to the covariance matching technique, the innovation and residual sequences are used to determine the covariance matrices of the process and measurement noises. The proposed algorithm can estimate and adjust the system noise statistics online, and thus enhance the adaptive capability of the standard UKF. Simulation and experimental results demonstrate that the performance of the proposed algorithm is significantly superior to that of the standard UKF and adaptive-robust UKF under the condition without accurate knowledge on system noise, leading to improved navigation precision.
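
    As a rough illustration of covariance matching, here is a scalar linear Kalman filter (a deliberate simplification of the nonlinear UKF setting above) that re-estimates the measurement-noise variance from a sliding window of innovations; all values are synthetic.

    ```python
    import numpy as np

    # Innovation-based covariance matching: since E[v^2] ≈ H*P_pred*H + R,
    # the window-averaged squared innovation yields an online estimate of R.
    rng = np.random.default_rng(1)
    F, H, Q = 1.0, 1.0, 0.01          # state transition, measurement map, process noise
    R_true = 4.0                      # true (unknown) measurement noise variance
    x_true, x_est, P, R_est = 0.0, 0.0, 1.0, 1.0
    window = []

    for k in range(500):
        x_true = F * x_true + rng.normal(scale=np.sqrt(Q))
        z = H * x_true + rng.normal(scale=np.sqrt(R_true))

        # Predict
        x_pred = F * x_est
        P_pred = F * P * F + Q

        # Innovation and adaptive noise estimate
        v = z - H * x_pred
        window.append(v * v)
        if len(window) > 50:
            window.pop(0)
            R_est = max(np.mean(window) - H * P_pred * H, 1e-6)

        # Update
        S = H * P_pred * H + R_est
        K = P_pred * H / S
        x_est = x_pred + K * v
        P = (1.0 - K * H) * P_pred

    print("estimated R:", R_est, "true R:", R_true)
    ```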

  19. SimSeq: a nonparametric approach to simulation of RNA-sequence datasets

    PubMed Central

    Benidt, Sam; Nettleton, Dan

    2015-01-01

    Motivation: RNA sequencing analysis methods are often derived by relying on hypothetical parametric models for read counts that are not likely to be precisely satisfied in practice. Methods are often tested by analyzing data that have been simulated according to the assumed model. This testing strategy can result in an overly optimistic view of the performance of an RNA-seq analysis method. Results: We develop a data-based simulation algorithm for RNA-seq data. The vector of read counts simulated for a given experimental unit has a joint distribution that closely matches the distribution of a source RNA-seq dataset provided by the user. We conduct simulation experiments based on the negative binomial distribution and our proposed nonparametric simulation algorithm. We compare performance between the two simulation experiments over a small subset of statistical methods for RNA-seq analysis available in the literature. We use as a benchmark the ability of a method to control the false discovery rate. Not surprisingly, methods based on parametric modeling assumptions seem to perform better with respect to false discovery rate control when data are simulated from parametric models rather than using our more realistic nonparametric simulation strategy. Availability and implementation: The nonparametric simulation algorithm developed in this article is implemented in the R package SimSeq, which is freely available under the GNU General Public License (version 2 or later) from the Comprehensive R Archive Network (http://cran.r-project.org/). Contact: sgbenidt@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25725090
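
    A minimal sketch of data-based count simulation in the same spirit (this is not the SimSeq algorithm itself; the resampling scheme, function name, and Poisson placeholder source data are assumptions for illustration):

    ```python
    import numpy as np

    def resample_counts(source, n_per_group, n_de, rng):
        """Draw simulated two-group RNA-seq samples by resampling columns of
        real (here: placeholder) count matrices, so simulated counts retain
        the source's empirical joint distribution. Only the first `n_de`
        genes take their treated-group counts from group 2; the rest stay
        null. A simplified stand-in, not the SimSeq algorithm."""
        g1, g2 = source["group1"], source["group2"]   # genes x samples matrices
        ctrl = g1[:, rng.integers(0, g1.shape[1], n_per_group)]
        trt = g1[:, rng.integers(0, g1.shape[1], n_per_group)].copy()
        de_cols = rng.integers(0, g2.shape[1], n_per_group)
        trt[:n_de, :] = g2[:n_de][:, de_cols]         # DE genes inherit group-2 counts
        return ctrl, trt

    rng = np.random.default_rng(2)
    source = {"group1": rng.poisson(20, (1000, 10)),  # placeholder source dataset;
              "group2": rng.poisson(35, (1000, 10))}  # in practice, user-supplied counts
    ctrl, trt = resample_counts(source, n_per_group=5, n_de=100, rng=rng)
    print(ctrl.shape, trt.shape)                      # (1000, 5) (1000, 5)
    ```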

  20. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment

    PubMed Central

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
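
    As a sketch of the non-parametric side of such a comparison, the following fragment flags tremor-like segments by the fraction of Welch-spectrum power inside a tremor band; the band, window length, and threshold here are illustrative choices, not the paper's optimized parameters.

    ```python
    import numpy as np
    from scipy.signal import welch

    def tremor_segments(acc, fs, band=(4.0, 8.0), win_s=3.0, ratio_thresh=0.6):
        """Flag windows whose Welch spectrum concentrates power in a tremor band."""
        n = int(win_s * fs)
        flags = []
        for start in range(0, len(acc) - n + 1, n):
            f, pxx = welch(acc[start:start + n], fs=fs, nperseg=min(256, n))
            in_band = (f >= band[0]) & (f <= band[1])
            flags.append(pxx[in_band].sum() / pxx.sum() > ratio_thresh)
        return np.array(flags)

    rng = np.random.default_rng(9)
    fs = 100.0
    t = np.arange(0, 30, 1 / fs)
    acc = 0.2 * rng.standard_normal(t.size)
    acc[1000:2000] += np.sin(2 * np.pi * 5.0 * t[1000:2000])  # synthetic 5 Hz tremor burst
    print(tremor_segments(acc, fs))  # True for windows covering the burst
    ```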

  1. Non-Parametric Change-Point Method for Differential Gene Expression Detection

    PubMed Central

    Wang, Yao; Wu, Chunguo; Ji, Zhaohua; Wang, Binghong; Liang, Yanchun

    2011-01-01

    Background: We propose a non-parametric method, named Non-Parametric Change Point Statistic (NPCPS for short), using a single equation for detecting differential gene expression (DGE) in microarray data. NPCPS is based on change point theory, which provides effective DGE detection. Methodology: NPCPS uses the data distribution of the normal samples as input and detects DGE in the cancer samples by locating the change point of the gene expression profile. An estimate of the change point position generated by NPCPS enables the identification of the samples containing DGE. Monte Carlo simulation and an ROC study were applied to examine the detection accuracy of NPCPS, and an experiment on real microarray data of breast cancer was carried out to compare NPCPS with other methods. Conclusions: The simulation study indicated that NPCPS was more effective for detecting DGE in the cancer subset than five parametric methods and one non-parametric method. When there were more than 8 cancer samples containing DGE, the type I error of NPCPS was below 0.01. Experimental results showed both good accuracy and reliability of NPCPS. Of the 30 top genes ranked by NPCPS, 16 were reported as relevant to cancer. Correlations between the detection results of NPCPS and the compared methods were less than 0.05, while between the other methods the values ranged from 0.20 to 0.84. This indicates that NPCPS works on different features and thus identifies DGE from a distinct perspective compared with the other mean- or median-based methods. PMID:21655325
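
    NPCPS's single-equation statistic is not reproduced in the abstract, so the following is a generic rank-based change-point locator in the same spirit (a CUSUM of centered ranks; names and data are synthetic, and this is not the NPCPS statistic):

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def rank_cusum_changepoint(x):
        """Locate the split that maximizes a CUSUM of centered ranks,
        a generic nonparametric change-point estimate."""
        n = len(x)
        r = rankdata(x) - (n + 1) / 2.0        # centered ranks
        s = np.abs(np.cumsum(r)[:-1])          # candidate splits 1 .. n-1
        k = int(np.argmax(s)) + 1
        return k, s[k - 1]

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
    print(rank_cusum_changepoint(x))  # change point estimated near index 60
    ```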

  2. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate.
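
    A simplified numerical reading of this idea (not the paper's full derivation): scale the traditional weighted-least-squares covariance by the average weighted residual variance, so that actual residuals, rather than only assumed noise, enter the state error covariance.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 200, 3
    A = rng.normal(size=(m, n))
    x_true = np.array([1.0, -2.0, 0.5])
    sigma_assumed, sigma_actual = 0.1, 0.3            # mismodeled noise level
    y = A @ x_true + rng.normal(scale=sigma_actual, size=m)

    W = np.eye(m) / sigma_assumed**2                  # weights from the *assumed* noise
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    r = y - A @ x_hat                                 # actual measurement residuals

    P_traditional = np.linalg.inv(N)                  # reflects assumed noise only
    J_avg = (r @ W @ r) / m                           # average weighted residual variance
    P_empirical = J_avg * P_traditional               # residual-informed covariance

    print(np.sqrt(np.diag(P_traditional)))            # too optimistic here
    print(np.sqrt(np.diag(P_empirical)))              # closer to the actual uncertainty
    ```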

  3. Exploring defocus matting: nonparametric acceleration, super-resolution, and off-center matting.

    PubMed

    Joshi, Neel; Matusik, Wojciech; Avidan, Shai; Pfister, Hanspeter; Freeman, William T

    2007-01-01

    Defocus matting is a fully automatic and passive method for pulling mattes from video captured with coaxial cameras that have different depths of field and planes of focus. Nonparametric sampling can accelerate the video-matting process from minutes to seconds per frame. In addition, a super-resolution technique efficiently bridges the gap between mattes from high-resolution video cameras and those from low-resolution cameras. Off-center matting pulls mattes for an external high-resolution camera that doesn't share the same center of projection as the low-resolution cameras used to capture the defocus matting data. PMID:17388202

  4. Non-parametric trend analysis of water quality data of rivers in Kansas

    NASA Astrophysics Data System (ADS)

    Yu, Yun-Sheng; Zou, Shimin; Whittemore, Donald

    1993-09-01

    Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that specific conductance and concentrations of total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide and basin-wide trends are non-homogeneous.
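
    The four methods are not named in the abstract; the Mann-Kendall test below is one common non-parametric trend test of the kind used in such studies (simplified: no tie or serial-correlation correction, synthetic data):

    ```python
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        """Mann-Kendall trend test; returns S, Z, and a two-sided p-value."""
        x = np.asarray(x)
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S under H0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return s, z, 2 * (1 - norm.cdf(abs(z)))

    rng = np.random.default_rng(5)
    chloride = 50 - 0.4 * np.arange(40) + rng.normal(0, 3, 40)  # synthetic downward trend
    print(mann_kendall(chloride))  # strongly negative S, small p-value
    ```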

  5. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  6. PROFILE CONTROL CHARTS BASED ON NONPARAMETRIC L-1 REGRESSION METHODS

    PubMed Central

    Wei, Ying; Zhao, Zhibiao; Lin, Dennis K. J.

    2012-01-01

    Classical statistical process control often relies on univariate characteristics. In many contemporary applications, however, the quality of products must be characterized by some functional relation between a response variable and its explanatory variables. Monitoring such functional profiles has been a rapidly growing field due to increasing demands. This paper develops a novel nonparametric L-1 location-scale model to screen the shapes of profiles. The model is built on three basic elements: location shifts, local shape distortions, and overall shape deviations, which are quantified by three individual metrics. The proposed approach is applied to the previously analyzed vertical density profile data, leading to some interesting insights. PMID:23539524
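
    The paper's three metrics are model-based; the following is a loose L1-flavored sketch of the same three-part decomposition (location shift, local distortion, overall deviation) against a reference profile, with all data and names invented.

    ```python
    import numpy as np

    def profile_metrics(profile, reference):
        """Three illustrative L1-flavored screening metrics for a functional
        profile against a reference curve (a sketch, not the paper's model)."""
        shift = np.median(profile - reference)     # location shift
        resid = profile - reference - shift        # shift-removed residual
        local = np.max(np.abs(resid))              # worst local shape distortion
        overall = np.mean(np.abs(resid))           # overall shape deviation
        return shift, local, overall

    x = np.linspace(0, 1, 200)
    reference = np.sin(2 * np.pi * x)
    profile = reference + 0.1 + 0.3 * np.exp(-((x - 0.5) / 0.05) ** 2)  # shifted, locally distorted
    print(profile_metrics(profile, reference))
    ```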

  7. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in a reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
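
    A bare-bones kernel PLS fit (a NIPALS-style iteration on a kernel matrix) conveys the base method being extended; this sketch omits the locally-based weighting that is the paper's contribution, as well as kernel centering details, and all data are synthetic.

    ```python
    import numpy as np

    def kernel_pls_fit(K, y, n_components):
        """Minimal kernel PLS returning fitted values on the training inputs:
        extract orthonormal score vectors from the kernel matrix, deflating
        kernel and response after each component."""
        Kd = K.copy()
        yd = y.astype(float).copy()
        T = []
        for _ in range(n_components):
            t = Kd @ yd                         # score direction from current residuals
            t /= np.linalg.norm(t)
            T.append(t)
            P = np.eye(len(t)) - np.outer(t, t)
            Kd = P @ Kd @ P                     # deflate kernel
            yd = P @ yd                         # deflate response
        T = np.column_stack(T)
        return T @ (T.T @ y)                    # fitted values (scores are orthonormal)

    rng = np.random.default_rng(10)
    x = np.linspace(0, 1, 200)
    y = np.sin(4 * np.pi * x) + 0.2 * rng.standard_normal(x.size)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))  # RBF kernel
    y_smooth = kernel_pls_fit(K, y, n_components=8)
    print(np.mean((y_smooth - np.sin(4 * np.pi * x)) ** 2))        # small residual MSE
    ```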

  8. Factors associated with malnutrition among tribal children in India: a non-parametric approach.

    PubMed

    Debnath, Avijit; Bhattacharjee, Nairita

    2014-06-01

    The purpose of this study is to identify the determinants of malnutrition among the tribal children in India. The investigation is based on secondary data compiled from the National Family Health Survey-3. We used a classification and regression tree model, a non-parametric approach, to address the objective. Our analysis shows that breastfeeding practice, economic status, antenatal care of mother and women's decision-making autonomy are negatively associated with malnutrition among tribal children. We identify maternal malnutrition and urban concentration of household as the two risk factors for child malnutrition. The identified associated factors may be used for designing and targeting preventive programmes for malnourished tribal children. PMID:24415743
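
    A classification-and-regression-tree analysis of this kind can be sketched as follows; the covariates, data, and thresholds are hypothetical stand-ins, not NFHS-3 variables.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(6)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 2, n),        # breastfed (0/1)
        rng.integers(1, 6, n),        # wealth quintile
        rng.integers(0, 10, n),       # antenatal visits
        rng.normal(20, 3, n),         # maternal BMI
    ])
    # Synthetic risk: protective effects for each covariate, as in the abstract
    risk = 0.6 - 0.15 * X[:, 0] - 0.05 * X[:, 1] - 0.02 * X[:, 2] - 0.01 * (X[:, 3] - 20)
    y = (rng.uniform(size=n) < np.clip(risk, 0.05, 0.95)).astype(int)  # 1 = malnourished

    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
    print(export_text(tree, feature_names=["breastfed", "wealth", "anc_visits", "mat_bmi"]))
    ```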

  9. Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data

    NASA Astrophysics Data System (ADS)

    Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.

    2011-09-01

    Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economic alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate drift date back to the early 1990s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition without need of periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
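
    A toy version of the optimization step, assuming the third-party `cma` package (`pip install cma`); the drift model and objective are invented stand-ins for the electronic-nose pipeline, and only the CMA-ES usage is the point.

    ```python
    import numpy as np
    import cma  # third-party package, assumed installed: pip install cma

    rng = np.random.default_rng(7)
    clean = rng.normal(size=(100, 4))            # stored calibration responses (toy)
    drift = np.array([0.3, -0.2, 0.5, 0.1])      # simulated sensor drift
    drifted = clean + drift

    def objective(correction):
        # Misfit between corrected drifted responses and the stored calibration
        return float(np.mean((drifted + correction - clean) ** 2))

    # CMA-ES search for the additive correction; minimum is at -drift
    res = cma.fmin(objective, 4 * [0.0], 0.5, options={"verbose": -9})
    print("recovered correction:", np.round(res[0], 2))  # approximately -drift
    ```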

  10. Within-person Covariation of Agentic and Communal Perceptions: Implications for Interpersonal Theory and Assessment

    PubMed Central

    Roche, Michael J.; Pincus, Aaron L.; Hyde, Amanda L.; Conroy, David E.; Ram, Nilam

    2013-01-01

    Interpersonal theory identifies agency and communion as uncorrelated (orthogonal) dimensions, largely evidenced by research examining between-person analyses of single-occasion measures. However, longitudinal studies of interpersonal behavior have demonstrated that the within-person association of agency and communion is not orthogonal for many individuals, and that between-person differences in these associations relate to adjustment. We applied a similar approach to investigate the association of interpersonal perceptions. A total of 184 university students completed a 7-day event-contingent study of their interpersonal experiences. Using multilevel regression models, we demonstrated that agentic and communal perceptions were positively associated, and the strength of this within-person association was moderated by between-person scores of dependency and narcissism. We discuss the benefits of incorporating within-person interpersonal associations (termed interpersonal covariation) into interpersonal theory and assessment. PMID:24072945

  11. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of the quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515
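
    The uniform adaptive LASSO itself is the paper's contribution; as context, plain L1-penalized quantile regression fit separately at several quantile levels (scikit-learn's QuantileRegressor) already exhibits the varying-effect phenomenon on a classic heteroscedastic example:

    ```python
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(8)
    n = 2000
    x1 = rng.normal(size=n)
    x2 = rng.uniform(0, 2, size=n)
    eps = rng.normal(size=n)
    # x2 scales the noise, so its quantile coefficient is 0.5 * z_tau:
    # negative at low quantiles, ~0 at the median, positive at high quantiles
    y = 1.0 + 0.5 * x1 + (1.0 + 0.5 * x2) * eps
    X = np.column_stack([x1, x2])

    for tau in (0.25, 0.5, 0.75, 0.9):
        fit = QuantileRegressor(quantile=tau, alpha=0.01, solver="highs").fit(X, y)
        print(f"tau={tau}: coefs={np.round(fit.coef_, 2)}")  # x2 effect varies with tau
    ```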

  12. Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Xu, M.; Paul, M. R.

    2016-06-01

    We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20 ≲ D_λ ≲ 50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. Our results show that the entire spectrum of covariant Lyapunov vectors we have computed is tangled, as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization.

  14. Manifestly covariant Jüttner distribution and equipartition theorem

    NASA Astrophysics Data System (ADS)

    Chacón-Acosta, Guillermo; Dagdug, Leonardo; Morales-Técotl, Hugo A.

    2010-02-01

    The relativistic equilibrium velocity distribution plays a key role in describing several high-energy and astrophysical effects. Recently, computer simulations favored Jüttner's as the relativistic generalization of Maxwell's distribution for d=1,2,3 spatial dimensions and pointed to an invariant temperature. In this work, we argue that an invariant temperature naturally follows from manifest covariance. We present a derivation of the manifestly covariant Jüttner distribution and equipartition theorem. The standard procedure to get the equilibrium distribution as a solution of the relativistic Boltzmann equation, which holds for dilute gases, is adopted here. However, contrary to previous analyses, we use Cartesian coordinates in d+1 momentum space, with d spatial components. The multiplication theorem of Bessel functions turns out to be crucial to regain the known invariant form of Jüttner's distribution. Since equilibrium kinetic-theory results should agree with thermodynamics in the frame comoving with the gas, the covariant pseudonorm of a vector entering the distribution can be identified with the reciprocal of the temperature in that comoving frame. Then, by combining the covariant statistical moments of Jüttner's distribution, a form of the equipartition theorem is advanced which also accommodates the invariant comoving temperature and contains, as a particular case, an earlier form that is not manifestly covariant.
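
    For context, the standard (not manifestly covariant) d=3 Maxwell-Jüttner density in terms of the Lorentz factor can be checked numerically: with theta = kT/mc^2 it is f(gamma) = gamma^2 beta(gamma) exp(-gamma/theta) / (theta K_2(1/theta)) and should integrate to one over gamma in [1, infinity).

    ```python
    import numpy as np
    from scipy.special import kv
    from scipy.integrate import quad

    def juttner(gamma, theta):
        """Standard 3D Maxwell-Juttner density in the Lorentz factor gamma."""
        beta = np.sqrt(1.0 - gamma**-2)
        return gamma**2 * beta * np.exp(-gamma / theta) / (theta * kv(2, 1.0 / theta))

    for theta in (0.1, 1.0, 10.0):
        total, _ = quad(juttner, 1.0, np.inf, args=(theta,))
        print(f"theta={theta}: integral = {total:.6f}")  # ~1.0 in each case
    ```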

  15. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; Forget, Benoit

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Integral data can then be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
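
    Schematically, a GLLS/Bayesian update of the kind TSURFER and SAMMY share looks as follows (a generic sketch with toy numbers, not either code's implementation):

    ```python
    import numpy as np

    def glls_update(x, P, G, y, y_calc, V):
        """One GLLS step: adjust parameters x (prior covariance P) toward
        integral responses y, given sensitivities G and data covariance V."""
        S = G @ P @ G.T + V                   # covariance of the data-model mismatch
        K = P @ G.T @ np.linalg.inv(S)        # gain: weights data against the prior
        x_new = x + K @ (y - y_calc)          # adjusted parameters
        P_new = P - K @ G @ P                 # reduced parameter covariance
        return x_new, P_new

    x = np.array([1.0, 2.0])                  # prior resonance parameters (toy)
    P = np.diag([0.04, 0.09])                 # prior parameter covariance (toy)
    G = np.array([[0.5, 1.2]])                # sensitivity of a benchmark response (toy)
    y, y_calc = np.array([1.000]), np.array([0.990])
    V = np.array([[1e-4]])                    # benchmark measurement covariance (toy)
    print(glls_update(x, P, G, y, y_calc, V))
    ```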

  16. Delay Adjusted Incidence Infographic

    Cancer.gov

    This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.

  17. Experience in using the covariances of some ENDF/B-V dosimetry cross sections: proposed improvements and addition of cross-reaction covariances

    SciTech Connect

    Fu, C.Y.; Hetrick, D.M.

    1982-01-01

    Recent ratio data, with carefully evaluated covariances, were combined with eleven of the ENDF/B-V dosimetry cross sections using the generalized least-squares method. The purpose was to improve these evaluated cross sections and covariances, as well as to generate values for the cross-reaction covariances. The results represent improved cross sections as well as realistic and usable covariances. The latter are necessary for meaningful integral-differential comparisons and for spectrum unfolding.

  18. High-precision covariant one-boson-exchange potentials for np scattering below 350 MeV

    SciTech Connect

    Franz Gross; Alfred Stadler

    2007-12-01

    All realistic potential models for the two-nucleon interaction are to some extent based on boson exchange. However, in order to achieve an essentially perfect fit to the scattering data, characterized by a chi2/Ndata ~ 1, previous potentials have abandoned a pure one-boson-exchange (OBE) mechanism. Using a covariant theory, we have found a true OBE potential that fits the 2006 world np data below 350 MeV with a chi2/Ndata = 1.00 for 3612 data. Our potential has fewer adjustable parameters than previous high-precision potentials, and also reproduces the experimental triton binding energy without introducing additional irreducible three-nucleon forces.

  19. Addressing spectroscopic quality of covariant density functional theory

    NASA Astrophysics Data System (ADS)

    Afanasjev, A. V.

    2015-03-01

    The spectroscopic quality of covariant density functional theory has been assessed by analyzing the accuracy and theoretical uncertainties in the description of spectroscopic observables. Such an analysis is first presented for the energies of the single-particle states in spherical and deformed nuclei. It is also shown that the inclusion of particle-vibration coupling improves the description of the energies of predominantly single-particle states in medium- and heavy-mass spherical nuclei. However, the remaining differences between theory and experiment clearly indicate missing physics and missing terms in covariant energy density functionals. The uncertainties in the predictions of the position of the two-neutron drip line sensitively depend on the uncertainties in the prediction of the energies of the single-particle states. On the other hand, many spectroscopic observables in well-deformed nuclei at the ground state and at finite spin depend only weakly on the choice of covariant energy density functional.

  20. Effects of Nicotine Deprivation on Craving Response Covariation in Smokers

    PubMed Central

    Sayette, Michael A.; Martin, Christopher S.; Hull, Jay G.; Wertz, Joan M.; Perrott, Michael A.

    2009-01-01

    Most models of craving propose that when cravings are strong, diverse responses—thought to index an underlying craving state—covary. Previous studies provided weak support for this hypothesis. The authors tested whether nicotine deprivation affects the degree of covariation across multiple measures related to craving. Heavy and light smokers (N = 127) were exposed to smoking cues while either nicotine deprived or nondeprived. Measures included urge ratings, affective valence, a behavioral choice task assessing the perceived reinforcement value of smoking, and smoking-related judgment tasks. Results indicated higher correlations in the nicotine-deprived than in the nondeprived group. The measures principally responsible for this effect loaded onto a single common Craving factor for nicotine-deprived but not nondeprived smokers. These findings suggest that, under certain conditions, measures of craving-related processes covary. PMID:12653419