Science.gov

Sample records for missing covariate information

  1. Frailty models with missing covariates.

    PubMed

    Herring, Amy H; Ibrahim, Joseph G; Lipsitz, Stuart R

    2002-03-01

    We present a method for estimating the parameters in random effects models for survival data when covariates are subject to missingness. Our method is more general than the usual frailty model as it accommodates a wide range of distributions for the random effects, which are included as an offset in the linear predictor in a manner analogous to that used in generalized linear mixed models. We propose using a Monte Carlo EM algorithm along with the Gibbs sampler to obtain parameter estimates. This method is useful in reducing the bias that may be incurred using complete-case methods in this setting. The methodology is applied to data from Eastern Cooperative Oncology Group melanoma clinical trials in which observations were believed to be clustered and several tumor characteristics were not always observed.

  2. Doubly robust estimates for binary longitudinal data analysis with missing response and missing covariates.

    PubMed

    Chen, Baojiang; Zhou, Xiao-Hua

    2011-09-01

    Longitudinal studies often feature incomplete response and covariate data. Likelihood-based methods such as the expectation-maximization algorithm give consistent estimators for model parameters when data are missing at random (MAR), provided that the response model and the missing covariate model are correctly specified; the missing data mechanism itself need not be specified. An alternative method is the weighted estimating equation, which gives consistent estimators if the missing data and response models are correctly specified; here the distribution of the covariates that have missing values need not be specified. In this article, we develop a doubly robust estimation method for longitudinal data with missing responses and missing covariates when data are MAR. This method is appealing in that it provides consistent estimators if either the missing data model or the missing covariate model is correctly specified. Simulation studies demonstrate that the method performs well in a variety of situations.
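
    The estimator in this record targets longitudinal binary data, but the doubly robust principle it relies on is easy to demonstrate in a simpler setting. Below is a minimal sketch (Python; the data, model choices, and variable names are illustrative, not from the paper) of an augmented inverse-probability-weighted (AIPW) estimate of an outcome mean under missing-at-random missingness: it remains consistent if either the missingness model or the outcome model is correctly specified.

      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(0)
      n = 5000
      x = rng.normal(size=(n, 1))                     # fully observed covariate
      y = 2.0 + 1.5 * x[:, 0] + rng.normal(size=n)    # outcome; true mean is 2.0
      p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x[:, 0])))  # MAR: observation depends on x only
      r = rng.random(n) < p_obs                       # indicator that y is observed
      y_obs = np.where(r, y, np.nan)                  # what the analyst actually sees

      # Model 1: missingness probabilities pi(x), via logistic regression.
      pi_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
      # Model 2: outcome regression m(x), fit on complete cases only.
      m_hat = LinearRegression().fit(x[r], y_obs[r]).predict(x)

      # AIPW combination: consistent if either pi_hat or m_hat is right.
      ri = r.astype(float)
      y0 = np.nan_to_num(y_obs)                       # zeros are masked by ri below
      mu_aipw = np.mean(ri * y0 / pi_hat - (ri - pi_hat) / pi_hat * m_hat)
      print(f"AIPW estimate of E[Y]: {mu_aipw:.3f} (truth: 2.0)")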

  3. On analyzing ordinal data when responses and covariates are both missing at random.

    PubMed

    Rana, Subrata; Roy, Surupa; Das, Kalyan

    2016-08-01

    On many occasions, particularly in biomedical studies, data are unavailable for some responses and covariates. This leads to biased inference when a substantial proportion of responses, a covariate, or both are missing. With few exceptions, methods for missing data have previously been developed either for missing responses or for missing covariates; comparatively little attention has been paid to accounting for both at once, which is partly attributable to the complexity of the modeling and computation involved. The issue is important because the precise impact of substantial missing data also depends on the association between the two missing data processes. The real difficulty arises when the responses are ordinal by nature. We develop a joint model that simultaneously accounts for the association between the ordinal response variable and the covariates and for that between the missing data indicators. This complex model is analyzed here using both the Markov chain Monte Carlo approach and the Monte Carlo relative likelihood approach, and their finite-sample performance in estimating the model parameters is examined. We illustrate the application of the two methods using data from an orthodontic study. Analysis of such data provides some interesting information on human habits.

  4. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a candidate marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error, whereas naive procedures that ignore these complexities may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  5. Semiparametric approach for non-monotone missing covariates in a parametric regression model.

    PubMed

    Sinha, Samiran; Saha, Krishna K; Wang, Suojin

    2014-06-01

    Missing covariate data often arise in biomedical studies, and analyses that simply ignore subjects with incomplete information may be inefficient and possibly biased. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism, where missingness is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the non-identifiability problems that result from a non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived, and finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration, we analyze an endometrial cancer dataset and a hip fracture dataset.

  6. Handling Missing Covariates in Conditional Mixture Models Under Missing at Random Assumptions.

    PubMed

    Sterba, Sonya K

    2014-01-01

    Mixture modeling is a popular method that accounts for unobserved population heterogeneity using multiple latent classes that differ in response patterns. Psychologists use conditional mixture models to incorporate covariates into between-class and/or within-class regressions. Although psychologists often have missing covariate data, conditional mixtures are currently fit with a conditional likelihood, treating covariates as fixed and fully observed. Under this exogenous-x approach, missing covariates are handled primarily via listwise deletion. This sacrifices efficiency and does not allow missingness to depend on observed outcomes. Here we describe a modified joint likelihood approach that (a) allows inference about parameters of the exogenous-x conditional mixture even with nonnormal covariates, unlike a conventional multivariate mixture; (b) retains all cases under missing at random assumptions; (c) yields lower bias and higher efficiency than the exogenous-x approach under a variety of conditions with missing covariates; and (d) is straightforward to implement in available commercial software. The proposed approach is illustrated with an empirical analysis predicting membership in latent classes of conduct problems. Recommendations for practice are discussed.

  7. Multiple imputation of missing covariates in NONMEM and evaluation of the method's sensitivity to η-shrinkage.

    PubMed

    Johansson, Åsa M; Karlsson, Mats O

    2013-10-01

    Multiple imputation (MI) is an approach widely used in statistical analysis of incomplete data. However, its application to missing data problems in nonlinear mixed-effects modelling is limited. The objective was to implement a four-step MI method for handling missing covariate data in NONMEM and to evaluate the method's sensitivity to η-shrinkage. Four steps were needed: (1) estimation of empirical Bayes estimates (EBEs) using a base model without the partly missing covariate, (2) a regression model for the covariate values given the EBEs from subjects with covariate information, (3) imputation of covariates using the regression model, and (4) estimation of the population model. Steps (3) and (4) were repeated several times. The procedure was automated in PsN and is now available as the mimp functionality (http://psn.sourceforge.net/). The method's sensitivity to shrinkage in the EBEs was evaluated in a simulation study where the covariate was missing according to a missing at random mechanism. The η-shrinkage was increased in steps from 4.5% to 54%. Two hundred datasets were simulated and analysed for each scenario. When shrinkage was low, the MI method gave unbiased and precise estimates of all population parameters. With increased shrinkage the estimates became less precise but remained unbiased.
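
    Outside NONMEM, the four-step loop can be sketched generically. The following Python fragment is a rough analogue under strong simplifications: the EBEs of step (1) are replaced by simulated subject-level random effects, ordinary least squares stands in for the nonlinear mixed-effects fits, and a fully proper MI would also redraw the regression parameters each round. It is meant only to show the shape of the procedure, not the PsN mimp implementation.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 400
      cov = rng.normal(70, 10, n)                      # covariate (e.g. weight), partly missing
      eta = 0.02 * (cov - 70) + rng.normal(0, 0.1, n)  # stand-in for the step (1) EBEs
      miss = rng.random(n) < 0.3                       # 30% of covariate values missing
      obs = ~miss

      # Step (2): regress the covariate on the EBEs among subjects with data.
      reg = sm.OLS(cov[obs], sm.add_constant(eta[obs])).fit()
      sigma = np.sqrt(reg.scale)                       # residual SD for the draws

      estimates = []
      for _ in range(20):                              # steps (3)-(4), repeated
          cov_i = cov.copy()
          mean_mis = sm.add_constant(eta[miss]) @ reg.params
          cov_i[miss] = mean_mis + rng.normal(0, sigma, miss.sum())  # step (3)
          fit = sm.OLS(eta, sm.add_constant(cov_i)).fit()            # step (4)
          estimates.append(fit.params[1])
      print(f"pooled covariate effect: {np.mean(estimates):.4f}")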

  8. Multiple imputation for IPD meta-analysis: allowing for heterogeneity and studies with missing covariates.

    PubMed

    Quartagno, M; Carpenter, J R

    2016-07-30

    Recently, multiple imputation has been proposed as a tool for individual patient data meta-analysis with sporadically missing observations, and it has been suggested that within-study imputation is usually preferable. However, such within-study imputation cannot handle variables that are completely missing within studies. Further, if some of the contributing studies are relatively small, it may be appropriate to share information across studies when imputing. In this paper, we develop and evaluate a joint modelling approach to multiple imputation of individual patient data in meta-analysis, with an across-study probability distribution for the study-specific covariance matrices. This retains the flexibility to allow for between-study heterogeneity when imputing, while also allowing (i) information on the covariance matrix to be shared across studies when this is appropriate and (ii) imputation of variables that are wholly missing from studies. Simulation results show both equivalent performance to the within-study imputation approach where this is valid, and good results in more general, practically relevant scenarios with studies of very different sizes, non-negligible between-study heterogeneity, and wholly missing variables. We illustrate our approach using data from an individual patient data meta-analysis of hypertension trials.

  9. Comparison of Two Approaches for Handling Missing Covariates in Logistic Regression

    ERIC Educational Resources Information Center

    Peng, Chao-Ying Joanne; Zhu, Jin

    2008-01-01

    For the past 25 years, methodological advances have been made in missing data treatment. Most published work has focused on missing data in dependent variables under various conditions. The present study seeks to fill the void by comparing two approaches for handling missing data in categorical covariates in logistic regression: the…

  10. A New Approach to Handle Missing Covariate Data in Twin Research: With an Application to Educational Achievement Data.

    PubMed

    Schwabe, Inga; Boomsma, Dorret I; Zeeuw, Eveline L de; Berg, Stéphanie M van den

    2016-07-01

    The often-used ACE model, which decomposes phenotypic variance into additive genetic (A), common-environmental (C), and unique-environmental (E) parts, can be extended to include covariates. Collection of these variables, however, often leads to a large amount of missing data, for example when self-reports (e.g. questionnaires) are not fully completed. The usual approach to handling missing covariate data in twin research results in reduced power to detect statistical effects, as only the phenotypic and covariate data of individual twins with complete data can be used. Here we present a full information approach to handling missing covariate data that makes it possible to use all available data. A simulation study shows that, independent of the missingness scenario, number of covariates, or amount of missingness, the full information approach is more powerful than the usual approach. To illustrate the new method, we applied it to scores on a Dutch national school achievement test (Eindtoets Basisonderwijs) taken in the final grade of primary school by 990 twin pairs. The effects of school-aggregated measures (e.g. school denomination, pedagogical philosophy, school size) and of the sex of a twin on these test scores were tested. None of the covariates had a significant effect on individual differences in test scores.

  11. Bias and efficiency of multiple imputation compared with complete-case analysis for missing covariate values.

    PubMed

    White, Ian R; Carlin, John B

    2010-12-10

    When missing data occur in one or more covariates in a regression model, multiple imputation (MI) is widely advocated as an improvement over complete-case analysis (CC). We use theoretical arguments and simulation studies to compare these methods with MI implemented under a missing at random assumption. When data are missing completely at random, both methods have negligible bias, and MI is more efficient than CC across a wide range of scenarios. For other missing data mechanisms, bias arises in one or both methods. In our simulation setting, CC is biased towards the null when data are missing at random. However, when missingness is independent of the outcome given the covariates, CC has negligible bias and MI is biased away from the null. With more general missing data mechanisms, bias tends to be smaller for MI than for CC. Since MI is not always better than CC for missing covariate problems, the choice of method should take into account what is known about the missing data mechanism in a particular substantive application. Importantly, the choice of method should not be based on comparison of standard errors. We propose new ways to understand empirical differences between MI and CC, which may provide insights into the appropriateness of the assumptions underlying each method, and we propose a new index for assessing the likely gain in precision from MI: the fraction of incomplete cases among the observed values of a covariate (FICO).
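
    The FICO index proposed here is simple to compute. A minimal sketch (Python/pandas, toy data; column names are illustrative): for a covariate x, FICO is the fraction of incomplete cases, i.e. cases that a complete-case analysis would drop, among the cases in which x itself is observed.

      import numpy as np
      import pandas as pd

      df = pd.DataFrame({
          "x": [1.2, np.nan, 0.7, 2.1, np.nan, 1.8],
          "z": [np.nan, 3.0, 2.5, np.nan, 1.0, 2.2],
          "y": [0, 1, 1, 0, 1, 0],
      })

      def fico(data: pd.DataFrame, covariate: str) -> float:
          """Fraction of incomplete cases among observed values of `covariate`."""
          observed = data[covariate].notna()
          incomplete = data.isna().any(axis=1)
          return (observed & incomplete).sum() / observed.sum()

      # Here 2 of the 4 cases with x observed are incomplete, so FICO = 0.5.
      print(f"FICO for x: {fico(df, 'x'):.2f}")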

  12. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.

    PubMed

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.

  13. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-05-13

    Attrition is a common occurrence in cluster randomised trials and leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis, and linear mixed model analysis under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error, and coverage probability. Complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and the baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted and baseline covariate adjusted cluster-level analyses give unbiased estimates of the intervention effect only if the two intervention groups have the same missingness mechanism and there is no interaction between baseline covariate and intervention group. The linear mixed model and multiple imputation give unbiased estimates under all four scenarios considered, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials; we show that it gives unbiased estimates only when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage when the number of clusters in each intervention group is small.

  14. ML Estimation of Mean and Covariance Structures with Missing Data Using Complete Data Routines.

    ERIC Educational Resources Information Center

    Jamshidian, Mortaza; Bentler, Peter M.

    1999-01-01

    Describes the maximum likelihood (ML) estimation of mean and covariance structure models when data are missing. Describes expectation maximization (EM), generalized expectation maximization, Fletcher-Powell, and Fisher-scoring algorithms for parameter estimation and shows how software can be used to implement each algorithm. (Author/SLD)
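
    As a concrete illustration of the EM algorithm named in this record, here is a compact sketch (Python; a textbook implementation, not the authors' software) of ML estimation of a mean vector and covariance matrix from incomplete multivariate normal data. The E-step sweeps in the conditional means of the missing entries and accumulates the conditional covariances; the M-step is the usual complete-data update.

      import numpy as np

      def em_mvnorm(X, n_iter=100):
          """ML mean/covariance from data with NaNs via EM (fixed iteration count)."""
          n, p = X.shape
          mu = np.nanmean(X, axis=0)
          sigma = np.diag(np.nanvar(X, axis=0))
          for _ in range(n_iter):
              Xhat = np.array(X)
              extra = np.zeros((p, p))           # accumulated conditional covariances
              for i in range(n):
                  m = np.isnan(X[i])
                  if not m.any():
                      continue
                  o = ~m
                  if not o.any():                # nothing observed in this row
                      Xhat[i] = mu
                      extra += sigma
                      continue
                  Soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
                  Smo = sigma[np.ix_(m, o)]
                  Xhat[i, m] = mu[m] + Smo @ Soo_inv @ (X[i, o] - mu[o])
                  extra[np.ix_(m, m)] += sigma[np.ix_(m, m)] - Smo @ Soo_inv @ Smo.T
              mu = Xhat.mean(axis=0)
              diff = Xhat - mu
              sigma = (diff.T @ diff + extra) / n
          return mu, sigma

      rng = np.random.default_rng(2)
      S = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
      X = rng.multivariate_normal([0.0, 1.0, 2.0], S, size=500)
      X[rng.random(X.shape) < 0.2] = np.nan      # 20% of values missing at random
      mu_hat, sigma_hat = em_mvnorm(X)
      print(np.round(mu_hat, 2))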

  15. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  16. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    PubMed

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.

  17. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION

    PubMed Central

    Allen, Genevera I.; Tibshirani, Robert

    2015-01-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823

  18. Simultaneous inference and bias analysis for longitudinal data with covariate measurement error and missing responses.

    PubMed

    Yi, G Y; Liu, W; Wu, Lang

    2011-03-01

    Longitudinal data arise frequently in medical studies, and it is common practice to analyze such data with generalized linear mixed models. Such models enable us to account for various types of heterogeneity, including between-subject and within-subject variation. Inferential procedures become dramatically more complicated when missing observations or measurement error arise. In the literature, there has been considerable interest in accommodating either incompleteness or covariate measurement error under random effects models, but there is relatively little work concerning both features simultaneously. There is a need to fill this gap, as longitudinal data often have both characteristics. In this article, our objectives are to study the simultaneous impact of missingness and covariate measurement error on inferential procedures and to develop a method that is both computationally feasible and theoretically valid. Simulation studies are conducted to assess the performance of the proposed method, and a real example is analyzed with it.

  19. Imputation of missing covariate values in epigenome-wide analysis of DNA methylation data

    PubMed Central

    Wu, Chong; Demerath, Ellen W.; Pankow, James S.; Bressler, Jan; Fornage, Myriam; Grove, Megan L.; Chen, Wei; Guan, Weihua

    2016-01-01

    DNA methylation is a widely studied epigenetic mechanism and alterations in methylation patterns may be involved in the development of common diseases. Unlike inherited changes in genetic sequence, variation in site-specific methylation varies by tissue, developmental stage, and disease status, and may be impacted by aging and exposure to environmental factors, such as diet or smoking. These non-genetic factors are typically included in epigenome-wide association studies (EWAS) because they may be confounding factors to the association between methylation and disease. However, missing values in these variables can lead to reduced sample size and decrease the statistical power of EWAS. We propose a site selection and multiple imputation (MI) method to impute missing covariate values and to perform association tests in EWAS. Then, we compare this method to an alternative projection-based method. Through simulations, we show that the MI-based method is slightly conservative, but provides consistent estimates for effect size. We also illustrate these methods with data from the Atherosclerosis Risk in Communities (ARIC) study to carry out an EWAS between methylation levels and smoking status, in which missing cell type compositions and white blood cell counts are imputed. PMID:26890800

  20. Imputation of missing covariate values in epigenome-wide analysis of DNA methylation data.

    PubMed

    Wu, Chong; Demerath, Ellen W; Pankow, James S; Bressler, Jan; Fornage, Myriam; Grove, Megan L; Chen, Wei; Guan, Weihua

    2016-01-01

    DNA methylation is a widely studied epigenetic mechanism and alterations in methylation patterns may be involved in the development of common diseases. Unlike inherited changes in genetic sequence, variation in site-specific methylation varies by tissue, developmental stage, and disease status, and may be impacted by aging and exposure to environmental factors, such as diet or smoking. These non-genetic factors are typically included in epigenome-wide association studies (EWAS) because they may be confounding factors to the association between methylation and disease. However, missing values in these variables can lead to reduced sample size and decrease the statistical power of EWAS. We propose a site selection and multiple imputation (MI) method to impute missing covariate values and to perform association tests in EWAS. Then, we compare this method to an alternative projection-based method. Through simulations, we show that the MI-based method is slightly conservative, but provides consistent estimates for effect size. We also illustrate these methods with data from the Atherosclerosis Risk in Communities (ARIC) study to carry out an EWAS between methylation levels and smoking status, in which missing cell type compositions and white blood cell counts are imputed.
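
    The record describes the workflow at a high level; the generic impute-then-pool pattern behind it can be sketched as follows (Python with scikit-learn and statsmodels; the data, the single methylation site, and the choice of imputer are illustrative assumptions, not the authors' pipeline). Each imputed dataset is analyzed separately and the m results are combined with Rubin's rules.

      import numpy as np
      import statsmodels.api as sm
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(3)
      n = 300
      covars = rng.normal(size=(n, 3))                 # e.g. cell-type proportions, WBC count
      covars[rng.random((n, 3)) < 0.15] = np.nan       # some covariate values missing
      smoking = rng.integers(0, 2, n).astype(float)
      methyl = 0.3 * smoking + rng.normal(0, 1, n)     # methylation level at one site

      m, betas, variances = 10, [], []
      for k in range(m):
          imputer = IterativeImputer(sample_posterior=True, random_state=k)
          covars_k = imputer.fit_transform(covars)
          X = sm.add_constant(np.column_stack([smoking, covars_k]))
          fit = sm.OLS(methyl, X).fit()
          betas.append(fit.params[1])                  # smoking coefficient
          variances.append(fit.bse[1] ** 2)

      # Rubin's rules: total variance = within + (1 + 1/m) * between.
      beta_bar = np.mean(betas)
      total_var = np.mean(variances) + (1 + 1 / m) * np.var(betas, ddof=1)
      print(f"smoking effect: {beta_bar:.3f} (SE {np.sqrt(total_var):.3f})")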

  1. A comparison of hospital performance with non-ignorable missing covariates: an application to trauma care data.

    PubMed

    Kirkham, Jamie J

    2008-11-29

    Trauma is a term used in medicine for describing physical injury. The prospective evaluation of the care of injured patients aims to improve the management of a trauma system and acts as an ongoing audit of trauma care. One of the principal techniques used to evaluate the effectiveness of trauma care at different hospitals is a comparative outcome analysis, in which a national 'league table' can be compiled to determine which hospitals are better at managing trauma care. One of the problems with the conventional analysis is that key covariates for measuring physiological injury can often be missing, and it is hypothesized that this missingness is not missing at random (NMAR). We describe the methods used to assess the performance of hospitals in a trauma setting and, using a Monte Carlo EM algorithm, implement the method of weights for generalized linear models to account for missing covariate data when we suspect the missing data mechanism is NMAR. Through simulation work and application to the trauma data, we demonstrate the effect that missing covariate data can have on the assessed performance of hospitals and how the conclusions drawn from the analysis can differ. We highlight the differences in hospital performance and in the ranking of hospitals.

  2. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which an environmental covariate is measured with error and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure is that the analysis is based on a pseudo-likelihood function, and therefore conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.

  3. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data

    PubMed Central

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J.

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which an environmental covariate is measured with error and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure is that the analysis is based on a pseudo-likelihood function, and therefore conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development. PMID:21949562

  4. Bayesian semiparametric nonlinear mixed-effects joint models for data with skewness, missing responses, and measurement errors in covariates.

    PubMed

    Huang, Yangxin; Dagne, Getachew

    2012-09-01

    It is common practice to analyze complex longitudinal data using semiparametric nonlinear mixed-effects (SNLME) models with a normal distribution. The normality assumption for model errors, however, may unrealistically obscure important features of subject variation. To partially explain between- and within-subject variation, covariates are usually introduced in such models, but some covariates may be measured with substantial error. Moreover, the responses may be missing and the missingness may be nonignorable. Inferential procedures become dramatically more complicated when skewness, missing values, and measurement error are all present in the data. In the literature, there has been considerable interest in accommodating skewness, incompleteness, or covariate measurement error individually in such models, but relatively little study of all three features simultaneously. In this article, our objective is to address the simultaneous impact of skewness, missingness, and covariate measurement error by jointly modeling the response and covariate processes with a flexible Bayesian SNLME model. The method is illustrated using a real AIDS data set to compare potential models under various scenarios and different distribution specifications.

  5. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  6. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  7. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  8. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  9. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  10. Experimental Uncertainty and Covariance Information in EXFOR Library

    NASA Astrophysics Data System (ADS)

    Otuka, N.; Capote, R.; Kopecky, S.; Plompen, A. J. M.; Pronyaev, V. G.; Schillebeeckx, P.; Smith, D. L.

    2012-05-01

    Compilation of experimental uncertainty and covariance information in the EXFOR Library is discussed. Following the presentation of a brief history of information provided in the EXFOR Library, the current EXFOR Formats and their limitations are reviewed. Proposed extensions for neutron-induced reaction cross sections in the fast neutron region and resonance region are also presented.

  11. Dealing with missing covariates in epidemiologic studies: a comparison between multiple imputation and a full Bayesian approach.

    PubMed

    Erler, Nicole S; Rizopoulos, Dimitris; Rosmalen, Joost van; Jaddoe, Vincent W V; Franco, Oscar H; Lesaffre, Emmanuel M E H

    2016-07-30

    Incomplete data are generally a challenge to the analysis of most large studies. The current gold standard to account for missing data is multiple imputation, and more specifically multiple imputation with chained equations (MICE). Numerous studies have been conducted to illustrate the performance of MICE for missing covariate data. The results show that the method works well in various situations. However, less is known about its performance in more complex models, specifically when the outcome is multivariate as in longitudinal studies. In current practice, the multivariate nature of the longitudinal outcome is often neglected in the imputation procedure, or only the baseline outcome is used to impute missing covariates. In this work, we evaluate the performance of MICE using different strategies to include a longitudinal outcome into the imputation models and compare it with a fully Bayesian approach that jointly imputes missing values and estimates the parameters of the longitudinal model. Results from simulation and a real data example show that MICE requires the analyst to correctly specify which components of the longitudinal process need to be included in the imputation models in order to obtain unbiased results. The full Bayesian approach, on the other hand, does not require the analyst to explicitly specify how the longitudinal outcome enters the imputation models. It performed well under different scenarios.
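
    For the MICE side of the comparison, a minimal sketch with the statsmodels implementation (toy data; variable names and the missingness pattern are illustrative). Note that MICEData includes all other variables, including the outcome, in each conditional imputation model by default, which is exactly the specification issue the authors highlight for longitudinal outcomes.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.imputation import mice

      rng = np.random.default_rng(4)
      n = 500
      df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
      df["y"] = 1.0 + 0.5 * df["x1"] - 0.3 * df["x2"] + rng.normal(size=n)
      df.loc[rng.random(n) < 0.25, "x2"] = np.nan      # covariate partly missing

      imp = mice.MICEData(df)                          # chained-equations imputer
      model = mice.MICE("y ~ x1 + x2", sm.OLS, imp)
      results = model.fit(10, 20)                      # 10 burn-in cycles, 20 imputations
      print(results.summary())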

  12. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002 ; Graham, 2009 ; Enders, 2010 ). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.
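
    The contrast the authors draw is easy to state concretely. A small sketch (Python/pandas, hypothetical 4-item scale): proration rescales the mean of whichever items happen to be present, while the proposed approach instead leaves the scale score missing and lets the observed items inform a FIML analysis as auxiliary variables (only the score construction is shown here).

      import numpy as np
      import pandas as pd

      items = pd.DataFrame({
          "i1": [4.0, 5.0, np.nan, 2.0],
          "i2": [3.0, np.nan, 4.0, 2.0],
          "i3": [4.0, 4.0, np.nan, 1.0],
          "i4": [5.0, 4.0, 3.0, np.nan],
      })

      # Proration: average the available items, scaled to the full item count.
      prorated = items.mean(axis=1, skipna=True) * items.shape[1]

      # Proposed alternative: the scale score is missing whenever any item is,
      # and the observed items enter the FIML model as auxiliary variables.
      score = items.sum(axis=1, skipna=False)          # NaN if any item is missing
      print(pd.DataFrame({"prorated": prorated, "score_for_fiml": score}))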

  13. Variable Selection and Inference Procedures for Marginal Analysis of Longitudinal Data with Missing Observations and Covariate Measurement Error

    PubMed Central

    Yi, Grace Y.; Tan, Xianming; Li, Runze

    2015-01-01

    In contrast to extensive attention on model selection for univariate data, research on model selection for longitudinal data remains largely unexplored. This is particularly the case when data are subject to missingness and measurement error. To address this important problem, we propose marginal methods that simultaneously carry out model selection and estimation for longitudinal data with missing responses and error-prone covariates. Our methods have several appealing features: the applicability is broad because the methods are developed for a unified framework with marginal generalized linear models; model assumptions are minimal in that no full distribution is required for the response process and the distribution of the mismeasured covariates is left unspecified; and the implementation is straightforward. To justify the proposed methods, we provide both theoretical properties and numerical assessments. PMID:26877582

  14. MISSE in the Materials and Processes Technical Information System (MAPTIS )

    NASA Technical Reports Server (NTRS)

    Burns, DeWitt; Finckenor, Miria; Henrie, Ben

    2013-01-01

    Materials International Space Station Experiment (MISSE) data is now being collected and distributed through the Materials and Processes Technical Information System (MAPTIS) at Marshall Space Flight Center in Huntsville, Alabama. MISSE data has been instrumental in many programs and continues to be an important source of data for the space community. To facilitate greater access to the MISSE data, the International Space Station (ISS) program office and MAPTIS are working to gather this data into a central location. The MISSE database contains information about materials, samples, and flights, along with pictures, PDFs, Excel files, Word documents, and other file types. Major capabilities of the system are access control, browsing, searching, reports, and record comparison. The search capability searches within any searchable files, so data can still be retrieved even if the desired metadata has not been associated. Other functionality will continue to be added to the MISSE database as the Athena Platform is expanded.

  15. Information Gaps: The Missing Links to Learning.

    ERIC Educational Resources Information Center

    Adams, Carl R.

    Communication takes place when a speaker conveys new information to the listener. In second language teaching, information gaps motivate students to use and learn the target language in order to obtain information. The resulting interactive language use may develop affective bonds among the students. A variety of classroom techniques are available…

  16. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  17. Missing Data Imputation versus Full Information Maximum Likelihood with Second-Level Dependencies

    ERIC Educational Resources Information Center

    Larsen, Ross

    2011-01-01

    Missing data in the presence of upper level dependencies in multilevel models have never been thoroughly examined. Whereas first-level subjects are independent over time, the second-level subjects might exhibit nonzero covariances over time. This study compares 2 missing data techniques in the presence of a second-level dependency: multiple…

  18. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 1.705 Restrictions on use of missing children information. Missing children pictures...

  19. Informed conditioning on clinical covariates increases power in case-control association studies.

    PubMed

    Zaitlen, Noah; Lindström, Sara; Pasaniuc, Bogdan; Cornelis, Marilyn; Genovese, Giulio; Pollack, Samuela; Barton, Anne; Bickeböller, Heike; Bowden, Donald W; Eyre, Steve; Freedman, Barry I; Friedman, David J; Field, John K; Groop, Leif; Haugen, Aage; Heinrich, Joachim; Henderson, Brian E; Hicks, Pamela J; Hocking, Lynne J; Kolonel, Laurence N; Landi, Maria Teresa; Langefeld, Carl D; Le Marchand, Loic; Meister, Michael; Morgan, Ann W; Raji, Olaide Y; Risch, Angela; Rosenberger, Albert; Scherf, David; Steer, Sophia; Walshaw, Martin; Waters, Kevin M; Wilson, Anthony G; Wordsworth, Paul; Zienolddiny, Shanbeh; Tchetgen, Eric Tchetgen; Haiman, Christopher; Hunter, David J; Plenge, Robert M; Worthington, Jane; Christiani, David C; Schaumberg, Debra A; Chasman, Daniel I; Altshuler, David; Voight, Benjamin; Kraft, Peter; Patterson, Nick; Price, Alkes L

    2012-01-01

    Genetic case-control association studies often include data on clinical covariates, such as body mass index (BMI), smoking status, or age, that may modify the underlying genetic risk of case or control samples. For example, in type 2 diabetes, odds ratios for established variants estimated from low-BMI cases are larger than those estimated from high-BMI cases. An unanswered question is how to use this information to maximize statistical power in case-control studies that ascertain individuals on the basis of phenotype (case-control ascertainment) or phenotype and clinical covariates (case-control-covariate ascertainment). While current approaches improve power in studies with random ascertainment, they often lose power under case-control ascertainment and fail to capture available power increases under case-control-covariate ascertainment. We show that an informed conditioning approach, based on the liability threshold model with parameters informed by external epidemiological information, fully accounts for disease prevalence and non-random ascertainment of phenotype as well as covariates and provides a substantial increase in power while maintaining a properly controlled false-positive rate. Our method outperforms standard case-control association tests with or without covariates, tests of gene x covariate interaction, and previously proposed tests for dealing with covariates in ascertained data, with especially large improvements in the case of case-control-covariate ascertainment. We investigate empirical case-control studies of type 2 diabetes, prostate cancer, lung cancer, breast cancer, rheumatoid arthritis, age-related macular degeneration, and end-stage kidney disease over a total of 89,726 samples. In these datasets, informed conditioning outperforms logistic regression for 115 of the 157 known associated variants investigated (P-value = 1 × 10−9). The improvement varied across diseases with a 16% median increase in χ2 test statistics and a

  20. An ICU Preanesthesia Evaluation Form Reduces Missing Preoperative Key Information

    PubMed Central

    Chuy, Katherine; Yan, Zhe; Fleisher, Lee; Liu, Renyu

    2013-01-01

    Background: A comprehensive preoperative evaluation is critical for providing anesthetic care for patients from the intensive care unit (ICU). There has been no preoperative evaluation form specific for ICU patients that allows for a rapid and focused evaluation by anesthesia providers, including junior residents. In this study, a specific preoperative form was designed for ICU patients and evaluated to allow residents to perform the most relevant and important preoperative evaluations efficiently. Methods: The following steps were utilized for developing the preoperative evaluation form: 1) designed a new preoperative form specific for ICU patients; 2) had the form reviewed by attending physicians and residents, followed by multiple revisions; 3) conducted test releases and revisions; 4) released the final version and conducted a survey; 5) compared data collection from the new ICU form with that from a previously used generic form. Each piece of information on the forms was assigned a score, and the score for the total missing information was determined. The score for each form was presented as mean ± standard deviation (SD) and compared by unpaired t test. A P value < 0.05 was considered statistically significant. Results: Of 52 anesthesiologists (19 attending physicians, 33 residents) responding to the survey, 90% preferred the final new form, and 56% thought the new form would reduce perioperative risk for ICU patients. Forty percent were unsure whether the form would reduce perioperative risk. Over a three-month period, we randomly collected 32 generic forms and 25 new forms. The average score for missing data was 23 ± 10 for the generic form and 8 ± 4 for the new form (P = 2.58E-11). Conclusions: A preoperative evaluation form designed specifically for ICU patients is well accepted by anesthesia providers and helped to reduce missing key preoperative information. Such an approach is important for perioperative patient safety. PMID:23853741

  1. Evaluation of covariance and information performance measures for dynamic object tracking

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Blasch, Erik; Douville, Phil; Kaplan, Lance; Qiu, Di

    2010-04-01

    In surveillance and reconnaissance applications, dynamic objects are followed by tracking filters operating on sequential measurements. There are two popular implementations of tracking filters: the covariance (Kalman) filter and the information filter. Evaluation of tracking filters is important in performance optimization, not only for tracking filter design but also for resource management. Typically, the information matrix is the inverse of the covariance matrix. Covariance filter-based approaches attempt to minimize scalar indexes of the covariance matrix, whereas information filter-based methods aim to maximize scalar indexes of the information matrix. Such scalar performance measures include the trace, determinant, norms (1-norm, 2-norm, infinity-norm, and Frobenius norm), and eigenstructure of the covariance matrix or the information matrix, and their variants. One natural question is whether the scalar performance measures applied to the covariance matrix are equivalent to those applied to the information matrix. In this paper we show that most of the scalar performance indexes are equivalent, yet some are not; as a result, an improperly used index can yield a solution that is 'optimized' in the wrong sense relative to track accuracy. The simulation indicated that all seven indexes were successful when applied to the covariance matrix. However, the indexes that failed for the information filter were the trace and the four norms (as defined in MATLAB) of the information matrix. Nevertheless, the determinant and a properly selected eigenvalue of the information matrix successfully selected the optimal sensor update configuration. This evaluation analysis of track measures can serve as a guideline for determining the suitability of performance measures in tracking filter design and resource management.
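
    The claimed equivalences and failures are easy to probe numerically. A short sketch (Python/NumPy, arbitrary positive-definite matrix): because the eigenvalues of the information matrix are the reciprocals of those of the covariance matrix, the determinant and extreme eigenvalues carry the same ordering information for both, whereas the trace and matrix norms of the inverse are not simple functions of those of the original.

      import numpy as np

      rng = np.random.default_rng(5)
      A = rng.normal(size=(3, 3))
      P = A @ A.T + 3.0 * np.eye(3)      # a positive-definite covariance matrix
      J = np.linalg.inv(P)               # the corresponding information matrix

      indexes = [
          ("trace", np.trace),
          ("determinant", np.linalg.det),
          ("2-norm", lambda M: np.linalg.norm(M, 2)),
          ("Frobenius norm", lambda M: np.linalg.norm(M, "fro")),
          ("min eigenvalue", lambda M: np.linalg.eigvalsh(M).min()),
      ]
      for name, f in indexes:
          print(f"{name:15s} cov: {f(P):9.4f}   info: {f(J):9.4f}")
      # det(J) = 1/det(P) and eig(J) = 1/eig(P) exactly, so those indexes are
      # interchangeable between the two filters; trace(J) and the norms of J
      # are not, matching the failures reported in the abstract.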

  2. Estimating Missing Features to Improve Multimedia Information Retrieval

    SciTech Connect

    Bagherjeiran, A; Love, N S; Kamath, C

    2006-09-28

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.

  3. Training Neural Networks to See Beyond Missing Information

    NASA Astrophysics Data System (ADS)

    Howard, M. E.; Schradin, L. J.; Cizewski, J. A.

    2012-10-01

    While the human eye may easily see a distorted image and imagine the original image, a rigorous mathematical treatment of the reconstruction may turn out to be a programming nightmare. We present a case study of nuclear physics data for which a significant population of events from a microchannel plate (MCP) detector are missing information for one of four MCP corners. Using events with good data for all four MCP corners to train a neural network, events with only three good corners are treated on equal footing in the analysis of position measurements, recovering much needed statistics. As this neural network is available within the framework of standard physics analysis packages such as ROOT and PAW, implementation is quite straightforward. We conclude with a discussion of the obvious advantages and limitations of this method as compared with an analytic approach. Work supported in part by the National Science Foundation and the Department of Energy.
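
    A minimal sketch of the idea (Python/scikit-learn; the synthetic charge-division data is a stand-in, since real MCP signals are only approximately constrained): train on events with all four corners good, then predict the fourth corner for three-corner events.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)
      n = 5000
      q = rng.uniform(0.2, 1.0, size=(n, 4))       # four MCP corner signals
      q /= q.sum(axis=1, keepdims=True)            # correlated, as in charge division

      train = q[: n // 2]                          # events with all four corners good
      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
      net.fit(train[:, :3], train[:, 3])           # learn corner 4 from corners 1-3

      test = q[n // 2:]                            # events treated as missing corner 4
      q4_pred = net.predict(test[:, :3])
      rms = np.sqrt(np.mean((q4_pred - test[:, 3]) ** 2))
      print(f"RMS residual on held-out events: {rms:.4f}")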

  4. Electronic pharmacopoeia: a missed opportunity for safe opioid prescribing information?

    PubMed

    Lapoint, Jeff; Perrone, Jeanmarie; Nelson, Lewis S

    2014-03-01

    Errors in the prescribing of dangerous medications, such as extended-release or long-acting (ER/LA) opioid formulations, remain an important cause of patient harm. Prescribing errors often relate to failure to note warnings regarding contraindications and drug interactions. Many prescribers utilize electronic pharmacopoeia (EP) to improve medication ordering. The purpose of this study is to assess the ability of commonly used apps to provide accurate safety information about the boxed warning for ER/LA opioids. We evaluated a convenience sample of six popular EP apps available for the iPhone and an online reference for the presence of relevant safety warnings. We accessed the dosing information for each of six ER/LA medications and assessed for the presence of an easily identifiable indication that a boxed warning was present, even if the warning itself was not provided. The prominence of the precautionary drug information presented to the user was assessed for each app. Provided information was classified based on whether the warning appeared in the ordering pathway, was located separately but within the prescriber's view, or was available on a separate, non-highlighted screen of the drug information. Each program provided a consistent level of warning information across the six ER/LA medications. Only 2/7 programs placed a warning in line with dosing information (level 1); 3/7 programs offered a level 2 warning and 1/7 offered a level 3 warning. One program made no mention of a boxed warning. Most EP apps isolate important safety warnings, and this represents a missed opportunity to improve prescribing practices.

  5. Informed Conditioning on Clinical Covariates Increases Power in Case-Control Association Studies

    PubMed Central

    Zaitlen, Noah; Lindström, Sara; Pasaniuc, Bogdan; Cornelis, Marilyn; Genovese, Giulio; Pollack, Samuela; Barton, Anne; Bickeböller, Heike; Bowden, Donald W.; Eyre, Steve; Freedman, Barry I.; Friedman, David J.; Field, John K.; Groop, Leif; Haugen, Aage; Heinrich, Joachim; Henderson, Brian E.; Hicks, Pamela J.; Hocking, Lynne J.; Kolonel, Laurence N.; Landi, Maria Teresa; Langefeld, Carl D.; Le Marchand, Loic; Meister, Michael; Morgan, Ann W.; Raji, Olaide Y.; Risch, Angela; Rosenberger, Albert; Scherf, David; Steer, Sophia; Walshaw, Martin; Waters, Kevin M.; Wilson, Anthony G.; Wordsworth, Paul; Zienolddiny, Shanbeh; Tchetgen, Eric Tchetgen; Haiman, Christopher; Hunter, David J.; Plenge, Robert M.; Worthington, Jane; Christiani, David C.; Schaumberg, Debra A.; Chasman, Daniel I.; Altshuler, David; Voight, Benjamin; Kraft, Peter; Patterson, Nick; Price, Alkes L.

    2012-01-01

    Genetic case-control association studies often include data on clinical covariates, such as body mass index (BMI), smoking status, or age, that may modify the underlying genetic risk of case or control samples. For example, in type 2 diabetes, odds ratios for established variants estimated from low–BMI cases are larger than those estimated from high–BMI cases. An unanswered question is how to use this information to maximize statistical power in case-control studies that ascertain individuals on the basis of phenotype (case-control ascertainment) or phenotype and clinical covariates (case-control-covariate ascertainment). While current approaches improve power in studies with random ascertainment, they often lose power under case-control ascertainment and fail to capture available power increases under case-control-covariate ascertainment. We show that an informed conditioning approach, based on the liability threshold model with parameters informed by external epidemiological information, fully accounts for disease prevalence and non-random ascertainment of phenotype as well as covariates and provides a substantial increase in power while maintaining a properly controlled false-positive rate. Our method outperforms standard case-control association tests with or without covariates, tests of gene x covariate interaction, and previously proposed tests for dealing with covariates in ascertained data, with especially large improvements in the case of case-control-covariate ascertainment. We investigate empirical case-control studies of type 2 diabetes, prostate cancer, lung cancer, breast cancer, rheumatoid arthritis, age-related macular degeneration, and end-stage kidney disease over a total of 89,726 samples. In these datasets, informed conditioning outperforms logistic regression for 115 of the 157 known associated variants investigated (P-value = 1×10−9). The improvement varied across diseases with a 16% median increase in χ2 test statistics and a

  6. Quantifying lost information due to covariance matrix estimation in parameter inference

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heavens, Alan F.

    2017-02-01

    Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precisions. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey with about 10 nuisance parameters, we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to lose only 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters and running as many fast simulations as possible.
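
    The Fisher-matrix debiasing the abstract refers to is commonly the Hartlap correction; stated here for reference (a standard result, not specific to this paper):

```latex
% Debiased estimate of the precision matrix when \hat{C} is estimated
% from n simulations of a p-dimensional data vector:
\widehat{C^{-1}} = \frac{n - p - 2}{n - 1}\, \hat{C}^{-1}, \qquad n > p + 2
```

    The paper's point is that an unbiased precision matrix does not make non-linear functions of it (credibility contours, figures of merit) unbiased, since in general E[f(X)] ≠ f(E[X]).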

  7. The Role of Mechanism and Covariation Information in Causal Belief Updating

    ERIC Educational Resources Information Center

    Perales, Jose C.; Catena, Andres; Maldonado, Antonio; Candido, Antonio

    2007-01-01

    The present study is aimed at identifying how prior causal beliefs and covariation information contribute to belief updating when evidence, either compatible or contradictory with those beliefs, is provided. Participants were presented with a cover story with which it was intended to activate or generate a causal belief. Variables related to the…

  8. Covariance Manipulation for Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
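
    The 'PcMax' idea can be illustrated numerically: scale an assumed covariance and keep the largest probability of collision that results. A minimal sketch follows; the encounter geometry, numerical values, and the simple grid integrator are all illustrative assumptions, not the study's method.

```python
import numpy as np

def pc_2d(miss, cov, hbr, n=400):
    """2-D probability of collision: integrate a bivariate normal
    centered at the relative miss vector over the hard-body circle."""
    xs = np.linspace(-hbr, hbr, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= hbr**2
    inv = np.linalg.inv(cov)
    d = np.stack([X - miss[0], Y - miss[1]], axis=-1)
    md2 = np.einsum('...i,ij,...j', d, inv, d)   # squared Mahalanobis distance
    pdf = np.exp(-0.5 * md2) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    cell = (xs[1] - xs[0]) ** 2
    return float((pdf * inside).sum() * cell)

# Illustrative encounter-plane geometry (all values hypothetical).
miss = np.array([200.0, 50.0])                        # relative miss vector, m
cov = np.array([[8000.0, 1000.0], [1000.0, 3000.0]])  # combined covariance, m^2
hbr = 20.0                                            # hard-body radius, m

# 'PcMax': scale the covariance and keep the largest resulting Pc.
scales = np.logspace(-1, 2, 60)
pcs = [pc_2d(miss, k * cov, hbr) for k in scales]
print(f"nominal Pc = {pc_2d(miss, cov, hbr):.2e}, max Pc = {max(pcs):.2e}")
```

    The scan over the scale factor shows why a covariance fabricated to maximize Pc yields a conservative bound rather than a realistic risk estimate, which is part of the utility question the study raises.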

  9. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  10. Operation reliability assessment for cutting tools by applying a proportional covariate model to condition monitoring information.

    PubMed

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-09-25

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools.
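
    The proportional covariate model's central assumption, that the monitored feature is proportional to the failure rate, can be sketched in a few lines; everything below (the Weibull baseline, the synthetic sensor feature, the constants) is hypothetical and stands in for the paper's wavelet-based features.

```python
import numpy as np

def h_base(t, beta=2.5, eta=120.0):
    """Hypothetical baseline failure rate (Weibull form) from historical data."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Proportional covariate model assumption: feature z(t) = c * h(t).
t_hist = np.linspace(10, 100, 10)          # hours, historical run
z_hist = 0.04 * h_base(t_hist)             # synthetic sensor feature
c = np.mean(z_hist / h_base(t_hist))       # estimate proportionality constant

# New tool being assessed: its feature trajectory implies its own hazard.
t_new = np.linspace(10, 100, 10)
z_new = 0.04 * h_base(t_new) * 1.5         # degrading 50% faster (synthetic)
h_new = z_new / c                          # operation-specific failure rate
dt = t_new[1] - t_new[0]
R_new = np.exp(-h_new.sum() * dt)          # reliability over the window
print(f"estimated reliability over [10, 100] h: {R_new:.3f}")
```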

  11. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Use of Official Mail in the Location and Recovery of Missing...

  12. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Use of Official Mail in the Location and Recovery of Missing...

  13. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Use of Official Mail in the Location and Recovery of Missing...

  14. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Use of Official Mail in the Location and Recovery of Missing...

  15. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
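
    The casewise likelihood that FIML maximizes is easy to state outside the MSEM setting; below is a minimal numpy sketch under a plain multivariate normal model, an illustration of the principle rather than the article's model.

```python
import numpy as np

def fiml_loglik(Y, mu, Sigma):
    """Full-information ML log-likelihood of data matrix Y (NaN = missing)
    under a multivariate normal with mean mu and covariance Sigma: each
    row contributes a density over its observed coordinates only."""
    total = 0.0
    for row in Y:
        obs = ~np.isnan(row)
        if not obs.any():
            continue
        d = row[obs] - mu[obs]
        S = Sigma[np.ix_(obs, obs)]
        _, logdet = np.linalg.slogdet(S)
        k = obs.sum()
        total += -0.5 * (k * np.log(2 * np.pi) + logdet
                         + d @ np.linalg.solve(S, d))
    return total

# Toy usage with an arbitrary incomplete data matrix.
Y = np.array([[1.0, 2.0, np.nan],
              [0.5, np.nan, 1.2],
              [np.nan, 1.8, 0.9]])
print(fiml_loglik(Y, mu=np.zeros(3), Sigma=np.eye(3)))
```

    An optimizer over (mu, Sigma) then uses every observed value without discarding any case, which is exactly what distinguishes FIML from complete case analysis.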

  16. Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information.

    PubMed

    Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor

    2013-01-01

    In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missings are typically lowered with respect to inbreeding coefficients estimated by discarding the missings. Accounting for missings by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
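
    For orientation, the quantity being tested: a minimal chi-square test of Hardy-Weinberg proportions on complete genotype counts. The article's multiple imputation step would replace the naive discarding of missing genotypes before such a test; counts below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Chi-square test of Hardy-Weinberg proportions for a biallelic marker."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)           # allele frequency of A
    expected = np.array([n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2])
    observed = np.array([n_AA, n_Aa, n_aa])
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)          # 1 df for a biallelic marker

print(hwe_chi2(298, 489, 213))                # hypothetical genotype counts
```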

  17. 78 FR 55123 - Submission for Review: We Need Information About Your Missing Payment, RI 38-31

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... MANAGEMENT Submission for Review: We Need Information About Your Missing Payment, RI 38-31 AGENCY: U.S... (ICR) 3206-0187, We Need Information About Your Missing Payment, RI 38-31. As required by the Paperwork... Services, Office of Personnel Management. Title: We Need Information About Your Missing Payment. OMB:...

  18. Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random.

    PubMed

    Blozis, Shelley A; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N; Shaw, Daniel S; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David

    2013-12-31

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software.

  19. Information Literacy: The Missing Link in Early Childhood Education

    ERIC Educational Resources Information Center

    Heider, Kelly L.

    2009-01-01

    The rapid growth of information over the last 30 or 40 years has made it impossible for educators to prepare students for the future without teaching them how to be effective information managers. The American Library Association refers to those students who manage information effectively as "information literate." Information literacy instruction…

  20. Responsiveness-informed multiple imputation and inverse probability-weighting in cohort studies with missing data that are non-monotone or not missing at random.

    PubMed

    Doidge, James C

    2016-03-16

    Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.
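
    The basic inverse probability-weighting construction the paper extends can be sketched for a single, monotone missingness pattern; the non-monotone extensions and the use of responsiveness as an auxiliary variable are the paper's contribution and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                       # fully observed covariate
y = 2.0 + 1.5 * x + rng.normal(size=n)       # outcome, to be made missing
p_obs = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))   # response depends on x (MAR)
r = rng.random(n) < p_obs                    # r = True if y observed

# Model the probability of being observed, then weight complete cases.
ps = LogisticRegression().fit(x[:, None], r).predict_proba(x[:, None])[:, 1]
w = 1 / ps[r]
naive = y[r].mean()                          # biased under MAR
ipw = np.average(y[r], weights=w)            # approximately unbiased
print(f"truth ~ 2.0, complete-case mean = {naive:.3f}, IPW mean = {ipw:.3f}")
```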

  1. Open Informational Ecosystems: The Missing Link for Sharing Educational Resources

    ERIC Educational Resources Information Center

    Kerres, Michael; Heinen, Richard

    2015-01-01

    Open educational resources are not available "as such". Their provision relies on a technological infrastructure of related services that can be described as an informational ecosystem. A closed informational ecosystem keeps educational resources within its boundary. An open informational ecosystem relies on the concurrence of…

  2. Exchanging Missing Information in Tasks: Old and New Interpretations

    ERIC Educational Resources Information Center

    Jenks, Christopher Joseph

    2009-01-01

    Information gap tasks have played a key role in applied linguistics (Pica, 2005). For example, extensive research has been conducted using information gap tasks to elicit second language data. Yet, despite their prominent role in research and pedagogy, there is still much to be investigated with regard to what information gap tasks offer research…

  3. Fraction of Missing Information (γ) at Different Missing Data Fractions in the 2012 NAMCS Physician Workflow Mail Survey*

    PubMed Central

    Pan, Qiyuan; Wei, Rong

    2016-01-01

    In his 1987 classic book on multiple imputation (MI), Rubin used the fraction of missing information, γ, to define the relative efficiency (RE) of MI as RE = (1 + γ/m)^(−1/2), where m is the number of imputations, leading to the conclusion that a small m (≤5) would be sufficient for MI. However, evidence has been accumulating that many more imputations are needed. Why would the apparently sufficient m deduced from the RE be actually too small? The answer may lie with γ. In this research, γ was determined at fractions of missing data (δ) of 4%, 10%, 20%, and 29% using the 2012 Physician Workflow Mail Survey of the National Ambulatory Medical Care Survey (NAMCS). The γ values were strikingly small, ranging on the order of 10^(−6) to 0.01. As δ increased, γ usually increased but sometimes decreased. How the data were analysed had the dominating effect on γ, overshadowing the effect of δ. The results suggest that it is impossible to predict γ using δ and that it may not be appropriate to use the γ-based RE to determine a sufficient m. PMID:27398259
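
    For reference, the standard pooling quantities behind γ (Rubin, 1987), with within-imputation variance Ū, between-imputation variance B, and m imputations:

```latex
r = \frac{(1 + m^{-1})\,B}{\bar{U}}, \qquad
\hat{\gamma} = \frac{r + 2/(\nu + 3)}{r + 1}, \qquad
\nu = (m - 1)\left(1 + \frac{1}{r}\right)^{2}
```

    The abstract's RE = (1 + γ/m)^(−1/2) then expresses the efficiency on the standard error scale.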

  4. The Missing Link: Evolving Accessibility To Formulary-Related Information

    PubMed Central

    Van Rossum, Alison; Holsopple, Megan; Karpinski, Julie; Dow, Jordan

    2016-01-01

    Background: Formulary management is a key component to ensuring the safe, effective, and fiscally responsible use of medications for health systems. One challenge in the formulary management process is making the most relevant formulary information easily accessible to practitioners involved in medication therapy decisions at the point of care. In September 2014, Froedtert and the Medical College of Wisconsin (F&MCW) implemented a commercial formulary management tool (CFMT) to improve accessibility to the recently aligned health-system formulary. The CFMT replaced an internally developed formulary management tool. Objectives: The primary objective was to determine pharmacist end-user satisfaction with accessibility to system formulary and formulary-related information through the new CFMT compared with the historical formulary management tool (HFMT). The secondary objective was to measure the use of formulary-related information in the CFMT and HFMT. Methods: The primary objective was measured through pharmacist end-user satisfaction surveys before and after integration of formulary-related information into the CFMT. The secondary objective was measured by comparing monthly usage reports for the CFMT with monthly usage reports for the HFMT. Results: Survey respondents reported being satisfied (52.5%) or very satisfied (18.8%) more frequently with the CFMT compared with the HFMT (31.7% satisfied and 2.5% very satisfied). Between October 2014 and January 2015 the frequency of access to formulary-related information increased from 92 to 104 requests per day through the CFMT and decreased from 47 to 33 requests per day through the HFMT. Conclusions: Initial data suggest incorporating system formulary-related information and related resources into a single platform increases pharmacist end-user satisfaction and overall use of formulary-related information. PMID:27904302

  5. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
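
    The core operation can be stated generically: permuting the columns of an upper-triangular factor breaks triangularity, and an orthogonal retriangularization restores it while preserving the implied information matrix. In the numpy sketch below, np.linalg.qr stands in for the paper's square-root-free fast Givens rotations.

```python
import numpy as np

def permute_sri(R, perm):
    """Reorder the state vector of a square-root information array R
    (upper triangular): permute columns, then retriangularize. The
    information matrix R.T @ R is preserved up to the same reordering."""
    Rp = R[:, perm]                # column permutation breaks triangularity
    _, R_new = np.linalg.qr(Rp)    # orthogonal retriangularization
    return R_new

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
R = np.linalg.qr(A)[1]             # some upper-triangular SRIF array
perm = [1, 2, 3, 4, 0]             # cyclic permutation of the states
R2 = permute_sri(R, perm)
# Check: the implied information matrix is the permuted original.
P = np.eye(5)[:, perm]
print(np.allclose(R2.T @ R2, P.T @ (R.T @ R) @ P))   # True
```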

  6. Questions Left Unanswered: How the Brain Responds to Missing Information

    PubMed Central

    Hoeks, John C. J.; Stowe, Laurie A.; Hendriks, Petra; Brouwer, Harm

    2013-01-01

    It sometimes happens that when someone asks a question, the addressee does not give an adequate answer, for instance by leaving out part of the required information. The person who posed the question may wonder why the information was omitted, and engage in extensive processing to find out what the partial answer actually means. The present study looks at the neural correlates of the pragmatic processes invoked by partial answers to questions. Two experiments are presented in which participants read mini-dialogues while their Event-Related brain Potentials (ERPs) are being measured. In both experiments, violating the dependency between questions and answers was found to lead to an increase in the amplitude of the P600 component. We interpret these P600-effects as reflecting the increased effort in creating a coherent representation of what is communicated. This effortful processing might include the computation of what the dialogue participant meant to communicate by withholding information. Our study is one of few investigating language processing in conversation, be it that our participants were ‘eavesdroppers’ instead of real interactants. Our results contribute to the as of yet small range of pragmatic phenomena that modulate the processes underlying the P600 component, and suggest that people immediately attempt to regain cohesion if a question-answer dependency is violated in an ongoing conversation. PMID:24098327

  7. Analyzing semi-competing risks data with missing cause of informative terminal event.

    PubMed

    Zhou, Renke; Zhu, Hong; Bondy, Melissa; Ning, Jing

    2017-02-28

    Cancer studies frequently yield multiple event times that correspond to landmarks in disease progression, including non-terminal events (i.e., cancer recurrence) and an informative terminal event (i.e., cancer-related death). Hence, we often observe semi-competing risks data. Work on such data has focused on scenarios in which the cause of the terminal event is known. However, in some circumstances, the information on cause for patients who experience the terminal event is missing; consequently, we are not able to differentiate an informative terminal event from a non-informative terminal event. In this article, we propose a method to handle missing data regarding the cause of an informative terminal event when analyzing the semi-competing risks data. We first consider the nonparametric estimation of the survival function for the terminal event time given missing cause-of-failure data via the expectation-maximization algorithm. We then develop an estimation method for semi-competing risks data with missing cause of the terminal event, under a pre-specified semiparametric copula model. We conduct simulation studies to investigate the performance of the proposed method. We illustrate our methodology using data from a study of early-stage breast cancer. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Modeling Achievement Trajectories when Attrition Is Informative

    ERIC Educational Resources Information Center

    Feldman, Betsy J.; Rabe-Hesketh, Sophia

    2012-01-01

    In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…

  9. Relying on Your Own Best Judgment: Imputing Values to Missing Information in Decision Making.

    ERIC Educational Resources Information Center

    Johnson, Richard D.; And Others

    Processes involved in making estimates of the value of missing information that could help in a decision making process were studied. Hypothetical purchases of ground beef were selected for the study as such purchases have the desirable property of quantifying both the price and quality. A total of 150 students at the University of Iowa rated the…

  10. Individual Information-Centered Approach for Handling Physical Activity Missing Data

    ERIC Educational Resources Information Center

    Kang, Minsoo; Rowe, David A.; Barreira, Tiago V.; Robinson, Terrance S.; Mahar, Matthew T.

    2009-01-01

    The purpose of this study was to validate individual information (II)-centered methods for handling missing data, using data samples of 118 middle-aged adults and 91 older adults equipped with Yamax SW-200 pedometers and Actigraph accelerometers for 7 days. We used a semisimulation approach to create six data sets: three physical activity outcome…

  11. The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.

    ERIC Educational Resources Information Center

    Enders, Craig K.; Bandalos, Deborah L.

    2001-01-01

    Used Monte Carlo simulation to examine the performance of four missing data methods in structural equation models: (1) full information maximum likelihood (FIML); (2) listwise deletion; (3) pairwise deletion; and (4) similar response pattern imputation. Results show that FIML estimation is superior across all conditions of the design. (SLD)

  12. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  13. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  14. Analyzing disease recurrence with missing at risk information.

    PubMed

    Štupnik, Tomaž; Pohar Perme, Maja

    2016-03-30

    When analyzing time to disease recurrence, we sometimes need to work with data where all the recurrences are recorded, but no information is available on the possible deaths. This may occur when studying diseases of benign nature where patients are only seen at disease recurrences or in poorly-designed registries of benign diseases or medical device implantations without sufficient patient identifiers to obtain their dead/alive status at a later date. When the average time to disease recurrence is long enough in comparison with the expected survival of the patients, statistical analysis of such data can be significantly biased. Under the assumption that the expected survival of an individual is not influenced by the disease itself, general population mortality tables may be used to remove this bias. We show why the intuitive solution of simply imputing the patient's expected survival time does not give unbiased estimates of the usual quantities of interest in survival analysis and further explain that cumulative incidence function analysis does not require additional assumptions on general population mortality. We provide an alternative framework that allows unbiased estimation and introduce two new approaches: an iterative imputation method and a mortality adjusted at risk function. Their properties are carefully studied, with the results supported by simulations and illustrated on a real-world example.

  15. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  16. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  17. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
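
    A toy version of the FAST construction, building an 'ensemble' from a moving window along a single trajectory and forming a sample covariance from it; the dimensions and window length below are arbitrary choices for illustration.

```python
import numpy as np

def fast_covariance(trajectory, t, window):
    """FAST-style covariance: sample an ensemble from a moving window of
    model states centered on time t along a single trajectory, remove the
    ensemble mean, and form the sample covariance."""
    lo = max(0, t - window // 2)
    hi = min(len(trajectory), t + window // 2 + 1)
    ens = trajectory[lo:hi]                     # (members, state_dim)
    anom = ens - ens.mean(axis=0)
    return anom.T @ anom / (len(ens) - 1)

# Synthetic single-model trajectory: 500 steps of a 3-variable state.
rng = np.random.default_rng(0)
steps = np.cumsum(rng.normal(size=(500, 3)), axis=0)
B = fast_covariance(steps, t=250, window=40)
print(B.shape)          # (3, 3) flow-dependent background covariance
```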

  18. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  19. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    NASA Astrophysics Data System (ADS)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: ^10B(capt), ^238U(n,γ), ^235U(ν̄), ^235U(n,γ), ^238U(el), ^natC(el), ^235U(fiss)-^235U(n,γ), ^235U(fiss).
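
    The uncertainty propagation involved is the standard 'sandwich rule' connecting a cross-section covariance matrix to the variance of an integral response such as k∞; stated in our notation:

```latex
% Variance of response R (e.g., k-infinity) from the cross-section
% covariance matrix C and the sensitivity vector S with components
% S_i = \partial R / \partial \sigma_i:
\operatorname{var}(R) = S^{\mathsf{T}} C\, S
```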

  20. Covariance Manipulation for Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Use of the probability of collision (Pc) has brought sophistication to CA, made possible by the JSpOC precision catalogue because it provides covariance; Pc has essentially replaced miss distance as the basic CA parameter. The embrace of Pc has elevated methods to 'manipulate' covariance in order to enable or improve CA calculations. Two such methods are examined here: compensation for absent or unreliable covariances through 'Maximum Pc' calculation constructs, and projection (not propagation) of epoch covariances forward in time to try to enable better risk assessments. Two questions are answered about each: the situations to which such approaches are properly applicable, and the amount of utility that such methods offer.

  1. FW: An R Package for Finlay-Wilkinson Regression that Incorporates Genomic/Pedigree Information and Covariance Structures Between Environments.

    PubMed

    Lian, Lian; de Los Campos, Gustavo

    2015-12-29

    The Finlay-Wilkinson regression (FW) is a popular method among plant breeders to describe genotype by environment interaction. The standard implementation is a two-step procedure that uses environment (sample) means as covariates in a within-line ordinary least squares (OLS) regression. This procedure can be suboptimal for at least four reasons: (1) in the first step environmental means are typically estimated without considering genetic-by-environment interactions, (2) in the second step uncertainty about the environmental means is ignored, (3) estimation is performed regarding lines and environment as fixed effects, and (4) the procedure does not incorporate genetic (either pedigree-derived or marker-derived) relationships. Su et al. proposed to address these problems using a Bayesian method that allows simultaneous estimation of environmental and genotype parameters, and allows incorporation of pedigree information. In this article we: (1) extend the model presented by Su et al. to allow integration of genomic information [e.g., single nucleotide polymorphism (SNP)] and covariance between environments, (2) present an R package (FW) that implements these methods, and (3) illustrate the use of the package using examples based on real data. The FW R package implements both the two-step OLS method and a full Bayesian approach for Finlay-Wilkinson regression with a very simple interface. Using a real wheat data set we demonstrate that the prediction accuracy of the Bayesian approach is consistently higher than the one achieved by the two-step OLS method.
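
    The two-step OLS baseline that the package improves upon is compact enough to sketch; the data below are synthetic, the sketch deliberately ignores the four shortcomings the abstract lists, and it is written in Python rather than R, so none of the FW package's interface is shown.

```python
import numpy as np

rng = np.random.default_rng(2)
n_lines, n_envs = 30, 8
g = rng.normal(0, 1, n_lines)          # genotype main effects
b = rng.normal(0, 0.3, n_lines)        # genotype-specific sensitivities
h = rng.normal(0, 2, n_envs)           # true environment effects
y = (g[:, None] + (1 + b)[:, None] * h[None, :]
     + rng.normal(0, 0.5, (n_lines, n_envs)))

# Step 1: estimate each environment effect by its (centered) sample mean.
h_hat = y.mean(axis=0) - y.mean()
# Step 2: per-line OLS of phenotype on estimated environment means;
# the slope estimates 1 + b_i (the Finlay-Wilkinson sensitivity).
slopes = np.array([np.polyfit(h_hat, y[i], 1)[0] for i in range(n_lines)])
print(np.corrcoef(slopes, 1 + b)[0, 1])   # should be close to 1
```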

  2. A Cautionary Note on the Use of Information Fit Indexes in Covariance Structure Modeling with Means

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases in which models without mean restrictions (i.e.,…
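
    For reference, the indexes in question, for a model with maximized log-likelihood ln L, q free parameters, and sample size N:

```latex
\mathrm{AIC} = -2\ln L + 2q, \qquad
\mathrm{BIC} = -2\ln L + q\ln N, \qquad
\mathrm{CAIC} = -2\ln L + q(\ln N + 1)
```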

  3. Miss Heroin.

    ERIC Educational Resources Information Center

    Riley, Bernice

    This script, with music, lyrics and dialog, was written especially for youngsters to inform them of the potential dangers of various drugs. The author, who teaches in an elementary school in Harlem, New York, offers Miss Heroin as her answer to the expressed opinion that most drug and alcohol information available is either too simplified and…

  4. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems involving exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For some applications, directly involving the ratio between precipitation and some other quantity, lack of complete information can result in poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors given by the lack of data in specific time windows. Our analyses included both a classical chronologically-pairing approach between rainfall stations and a probability-based approach, which accounted for the probability of exceedance of rain depths measured at two or multiple stations. Our analyses showed that it is not clear a priori which method performs best; rather, this selection should be made considering the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is to discuss the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which have an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were parametric Log-Normal, Generalized Pareto, Generalized Extreme and Gamma distributions. The methods were
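
    The kind of tail comparison described can be sketched with scipy's built-in fitters; which family wins is dataset-specific, which is the abstract's point. The rainfall depths below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rain = rng.gamma(shape=0.8, scale=12.0, size=2000)   # synthetic wet-day depths, mm

candidates = {
    "lognormal": stats.lognorm,
    "generalized Pareto": stats.genpareto,
    "generalized extreme value": stats.genextreme,
    "gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(rain)                  # MLE fit (loc left free here)
    ll = dist.logpdf(rain, *params).sum()
    aic = 2 * len(params) - 2 * ll           # compare families by AIC
    print(f"{name:>26s}: AIC = {aic:.1f}")
```

    In practice one would often fix the location parameter at zero for depth data and inspect quantile-quantile plots of the upper tail rather than relying on a single global score.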

  5. [The Hospital Information System of the Brazilian Unified National Health System: a performance evaluation for auditing maternal near miss].

    PubMed

    Nakamura-Pereira, Marcos; Mendes-Silva, Wallace; Dias, Marcos Augusto Bastos; Reichenheim, Michael E; Lobato, Gustavo

    2013-07-01

    This study aimed to investigate the performance of the Hospital Information System of the Brazilian Unified National Health System (SIH-SUS) in identifying cases of maternal near miss in a hospital in Rio de Janeiro, Brazil, in 2008. Cases were identified by reviewing medical records of pregnant and postpartum women admitted to the hospital. The search for potential near miss events in the SIH-SUS database relied on a list of procedures and codes from the International Classification of Diseases, 10th revision (ICD-10) that were consistent with this diagnosis. The patient chart review identified 27 cases, while 70 potential occurrences of near miss were detected in the SIH-SUS database. However, only 5 of 70 were "true cases" of near miss according to the chart review, which corresponds to a sensitivity of 18.5% (95%CI: 6.3-38.1), specificity of 94.3% (95%CI: 92.8-95.6), area under the ROC of 0.56 (95%CI: 0.48-0.63), and positive predictive value of 10.1% (95%CI: 4.7-20.3). These findings suggest that the SIH-SUS is not appropriate for monitoring maternal near miss.

  6. Using Incidence Sampling to Estimate Covariances.

    ERIC Educational Resources Information Center

    Knapp, Thomas R.

    1979-01-01

    This paper presents the generalized symmetric means approach to the estimation of population covariances, complete with derivations and examples. Particular attention is paid to the problem of missing data, which is handled very naturally in the incidence sampling framework. (CTM)

  7. Missed bleeding events after ticagrelor in PEGASUS trial: Massive non-compliance, information censoring, or both?

    PubMed

    Serebruany, Victor; Tomek, Ales

    2016-07-15

    The PEGASUS trial reported a reduction of the composite primary endpoint after conventional 180 mg/daily ticagrelor (CT) and lower-dose 120 mg/daily ticagrelor (LT), at the expense of extra bleeding. Following approval of CT and LT for the long-term secondary prevention indication, a recent FDA review verified some bleeding outcomes in PEGASUS. The aim was to compare the risks after CT and LT against placebo by seven TIMI scale variables and nine bleeding categories considered as serious adverse events (SAE), in light of PEGASUS drug discontinuation rates (DDR). The DDR in all PEGASUS arms was high, reaching an astronomical 32% for CT. The distribution of some outcomes (TIMI major, trauma, epistaxis, iron deficiency, hemoptysis, and anemia) was reasonable. However, the TIMI minor events were heavily underreported when compared to similar trials. Other bleedings (intracranial, spontaneous, hematuria, and gastrointestinal) appear sporadic, lacking the expected dose-dependent impact of CT and LT. A few SAE outcomes (fatal, ecchymosis, hematoma, bruises, bleeding) paradoxically reported more bleeding after LT than after CT. Many bleeding outcomes were probably missed in PEGASUS, potentially due to massive non-compliance, information censoring, or both. The FDA must improve reporting of trial outcomes, especially in the sponsor-controlled environment when DDR and incomplete follow-up rates are high.

  8. A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems.

    PubMed

    Thoemmes, Felix; Rose, Norman

    2014-01-01

    The treatment of missing data in the social sciences has changed tremendously during the last decade. Modern missing data techniques such as multiple imputation and full-information maximum likelihood are used much more frequently. These methods assume that data are missing at random. One very common approach to increase the plausibility of the missing at random assumption consists of including many covariates as so-called auxiliary variables. These variables are either included based on data considerations or in an inclusive fashion; that is, taking all available auxiliary variables. In this article, we point out that there are some instances in which auxiliary variables exhibit the surprising property of increasing bias in missing data problems. In a series of focused simulation studies, we highlight some situations in which this type of biasing behavior can occur. We briefly discuss possible ways to avoid selecting bias-inducing covariates as auxiliary variables.

  9. Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed.

    PubMed

    Ehinger, Krista A; Allen, Kala; Wolfe, Jeremy M

    2016-05-01

    Previous work has shown that human observers discount or neglect cast shadows in natural and artificial scenes across a range of visual tasks. This is a reasonable strategy for a visual system designed to recognize objects under a range of lighting conditions, since cast shadows are not intrinsic properties of the scene-they look different (or disappear entirely) under different lighting conditions. However, cast shadows can convey useful information about the three-dimensional shapes of objects and their spatial relations. In this study, we investigated how well people detect changes to cast shadows, presented in natural scenes in a change blindness paradigm, and whether shadow changes that imply the movement or disappearance of an object are more easily noticed than shadow changes that imply a change in lighting. In Experiment 1, a critical object's shadow was removed, rotated to another direction, or shifted down to suggest that the object was floating. All of these shadow changes were noticed less often than changes to physical objects or surfaces in the scene, and there was no difference in the detection rates for the three types of changes. In Experiment 2, the shadows of visible or occluded objects were removed from the scenes. Although removing the cast shadow of an occluded object could be seen as an object deletion, both types of shadow changes were noticed less often than deletions of the visible, physical objects in the scene. These results show that even informative shadow changes are missed, suggesting that cast shadows are discounted fairly early in the processing of natural scenes.

  10. Predicting New Hampshire Indoor Radon Concentrations from geologic information and other covariates

    SciTech Connect

    Apte, M.G.; Price, P.N.; Nero, A.V.; Revzan, K.L.

    1998-05-01

    Generalized geologic province information and data on house construction were used to predict indoor radon concentrations in New Hampshire (NH). A mixed-effects regression model was used to predict the geometric mean (GM) short-term radon concentrations in 259 NH towns. Bayesian methods were used to avoid over-fitting and to minimize the effects of small sample variation within towns. Data from a random survey of short-term radon measurements, individual residence building characteristics, along with geologic unit information, and average surface radium concentration by town, were variables used in the model. Predicted town GM short-term indoor radon concentrations for detached houses with usable basements range from 34 Bq/m³ (1 pCi/l) to 558 Bq/m³ (15 pCi/l), with uncertainties of about 30%. A geologic province consisting of glacial deposits and marine sediments was associated with significantly elevated radon levels, after adjustment for radium concentration and building type. Validation and interpretation of results are discussed.

  11. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  12. Missing data exploration: highlighting graphical presentation of missing pattern.

    PubMed

    Zhang, Zhongheng

    2015-12-01

    Functions shipped with base R can fulfill many tasks of missing data handling. However, because the data volume of electronic medical record (EMR) systems is typically very large, more sophisticated methods may be helpful in data management. This article focuses on missing data handling using advanced techniques. There are three types of missing data, that is, missing completely at random (MCAR), missing at random (MAR) and not missing at random (NMAR). This classification system depends on how missing values are generated. Two packages, Multivariate Imputation by Chained Equations (MICE) and Visualization and Imputation of Missing Values (VIM), provide sophisticated functions to explore the missing data pattern. In particular, the VIM package is especially helpful for visual inspection of missing data. Finally, correlation analysis provides information on the dependence of missing data on other variables. Such information is useful in subsequent imputations.
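
    For readers working in Python rather than R, an equivalent first look at the missing-data pattern takes a few lines of pandas; the EMR-style variables below are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "age": rng.normal(60, 10, 500),
    "lactate": rng.normal(2, 0.5, 500),
    "albumin": rng.normal(35, 4, 500),
})
df.loc[rng.random(500) < 0.2, "lactate"] = np.nan    # MCAR-style gaps
df.loc[df["age"] > 70, "albumin"] = np.nan           # MAR: depends on age

print(df.isna().mean())                              # per-variable missing rate
# Missing-data pattern: each distinct indicator row with its frequency.
print(df.isna().value_counts())
# Dependence of missingness on another variable (cf. correlation analysis).
print(df.groupby(df["albumin"].isna())["age"].mean())
```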

  13. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  14. An Upper Bound on High Speed Satellite Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, the two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects is available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.

  15. Accuracy of growth model parameters: effects of frequency and duration of data collection, and missing information.

    PubMed

    Aggrey, Samuel E

    2008-01-01

    This study was done to compare the accuracy of prediction of growth parameters using the Gompertz model when (1) data was collected infrequently, (2) data collection was truncated, and (3) data was missing. Initial growth rate and rate of decay were reduced by half when the model was fitted to data collected biweekly compared to data collected weekly. This reduction led to an increase in age of maximum growth and subsequently over-predicted the asymptotic body weight. When only part of the growth duration was used for prediction, both the initial growth rate and rate of decay were reduced. The degree of data truncation also affected sexual dimorphism of the parameters estimated. Using pre-asymptotic data for growth parameter prediction does not allow the intrinsic efficiency of growth to be determined accurately. However, using growth data with body weights missing at different phases of the growth curve does not seem to significantly affect the predicted growth parameters. Speculative or diagnostic conclusions on intrinsic growth should be done with data collected at short intervals to avoid potential inaccuracies in the prediction of initial growth rate, exponential decay rate, age of maximum growth and asymptotic weight.
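
    A minimal Gompertz fit of the sort described, using scipy on synthetic weekly weights; the parameterization below (asymptote A, with age of maximum growth at ln b / c) is one common choice and not necessarily the author's.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, c):
    """Gompertz growth: asymptote A; inflection (max growth) at t = ln(b)/c."""
    return A * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(5)
t_weekly = np.arange(1, 21)                        # weeks
true = gompertz(t_weekly, A=4000, b=4.5, c=0.25)   # grams, synthetic bird
y = true * (1 + rng.normal(0, 0.03, t_weekly.size))

# Compare fits from weekly data and from every-other-week (biweekly) data.
popt, _ = curve_fit(gompertz, t_weekly, y, p0=[3500, 4, 0.2])
popt_bw, _ = curve_fit(gompertz, t_weekly[::2], y[::2], p0=[3500, 4, 0.2])
print("weekly fit   :", np.round(popt, 3))
print("biweekly fit :", np.round(popt_bw, 3))
```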

  16. A fully covariant information-theoretic ultraviolet cutoff for scalar fields in expanding Friedmann Robertson Walker spacetimes

    NASA Astrophysics Data System (ADS)

    Kempf, A.; Chatwin-Davies, A.; Martin, R. T. W.

    2013-02-01

    While a natural ultraviolet cutoff, presumably at the Planck length, is widely assumed to exist in nature, it is nontrivial to implement a minimum length scale covariantly. This is because the presence of a fixed minimum length needs to be reconciled with the ability of Lorentz transformations to contract lengths. In this paper, we implement a fully covariant Planck scale cutoff by cutting off the spectrum of the d'Alembertian. In this scenario, consistent with Lorentz contractions, wavelengths that are arbitrarily smaller than the Planck length continue to exist. However, the dynamics of modes of wavelengths that are significantly smaller than the Planck length possess a very small bandwidth. This has the effect of freezing the dynamics of such modes. While both wavelengths and bandwidths are frame dependent, Lorentz contraction and time dilation conspire to make the freezing of modes of trans-Planckian wavelengths covariant. In particular, we show that this ultraviolet cutoff can be implemented covariantly also in curved spacetimes. We focus on Friedmann Robertson Walker spacetimes and their much-discussed trans-Planckian question: The physical wavelength of each comoving mode was smaller than the Planck scale at sufficiently early times. What was the mode's dynamics then? Here, we show that in the presence of the covariant UV cutoff, the dynamical bandwidth of a comoving mode is essentially zero up until its physical wavelength starts exceeding the Planck length. In particular, we show that under general assumptions, the number of dynamical degrees of freedom of each comoving mode all the way up to some arbitrary finite time is actually finite. Our results also open the way to calculating the impact of this natural UV cutoff on inflationary predictions for the cosmic microwave background.

  17. Predicting top-L missing links with node and link clustering information in large-scale networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhihao; Lin, Youfang; Wan, Huaiyu; Jamil, Waleed

    2016-08-01

    Networks are mathematical structures that are universally used to describe a large variety of complex systems, such as social, biological, and technological systems. The prediction of missing links in incomplete complex networks aims to estimate the likelihood of the existence of a link between a pair of nodes. Various topological features of networks have been applied to develop link prediction methods. However, the exploration of features of links is still limited. In this paper, we demonstrate the power of node and link clustering information in predicting top-L missing links. In the existing literature, link prediction algorithms have only been tested on small-scale and middle-scale networks. The network scale factor has not attracted the same level of attention. In our experiments, we test the proposed method on three groups of networks. For small-scale networks, since the structures are not very complex, advanced methods cannot perform significantly better than classical methods. For middle-scale networks, the proposed index, combining both node and link clustering information, starts to demonstrate its advantages. In many networks, combining both node and link clustering information can improve the link prediction accuracy a great deal. Large-scale networks with more than 100 000 links have rarely been tested previously. Our experiments on three large-scale networks show that local clustering information based methods outperform other methods, and link clustering information can further improve the accuracy of node clustering information based methods, in particular for networks with a broad distribution of the link clustering coefficient.
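
    The flavor of clustering-informed scoring is easy to demonstrate with networkx; the index below (common neighbours weighted by their local clustering coefficient) is an illustrative stand-in, not the index proposed in the paper.

```python
import networkx as nx

def clustering_weighted_cn(G, u, v):
    """Score a candidate link (u, v) by its common neighbours, each
    weighted by its local clustering coefficient (illustrative index)."""
    common = set(G[u]) & set(G[v])
    clust = nx.clustering(G, common)      # dict: node -> clustering coefficient
    return sum(clust.values())

G = nx.karate_club_graph()
candidates = list(nx.non_edges(G))
ranked = sorted(candidates, key=lambda e: clustering_weighted_cn(G, *e),
                reverse=True)
print(ranked[:5])    # top-5 predicted missing links
```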

  18. Principled Missing Data Treatments.

    PubMed

    Lang, Kyle M; Little, Todd D

    2016-04-04

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
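
    A minimal sketch of one of the two principled treatments discussed, multiple imputation with Rubin's rules for pooling; the data, models, and m = 20 imputations below are illustrative assumptions, not the article's example:

```python
# Multiple imputation sketch: impute m completed data sets, fit the analysis
# model to each, pool with Rubin's rules.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x_aux = rng.normal(size=n)                        # auxiliary variable
x = 0.8 * x_aux + rng.normal(size=n)              # predictor with missingness
y = 1.0 + 0.5 * x + rng.normal(size=n)
x_obs = np.where(rng.random(n) < 0.3, np.nan, x)  # ~30% missing
data = np.column_stack([y, x_obs, x_aux])

m = 20
betas, variances = [], []
for seed in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = imp.fit_transform(data)
    fit = sm.OLS(completed[:, 0], sm.add_constant(completed[:, 1:])).fit()
    betas.append(fit.params[1])                   # slope on the imputed x
    variances.append(fit.bse[1] ** 2)

qbar = np.mean(betas)                             # pooled point estimate
ubar, b = np.mean(variances), np.var(betas, ddof=1)
total_var = ubar + (1 + 1 / m) * b                # Rubin's total variance
print(qbar, np.sqrt(total_var))
```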

  19. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY... 1240—Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) A. If State-submitted seat belt use rate information is unavailable or inadequate for both...

  20. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY... 1240—Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) A. If State-submitted seat belt use rate information is unavailable or inadequate for both...

  1. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY... 1240—Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) A. If State-submitted seat belt use rate information is unavailable or inadequate for both...

  2. Spatio-temporal rectification of tower-based eddy-covariance flux measurements for consistently informing process-based models

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Xu, K.; Desai, A. R.; Taylor, J. R.; Kljun, N.; Schneider, D.; Kampe, T. U.; Fox, A. M.

    2013-12-01

    Process-based models, such as land surface models (LSMs), allow insight into the spatio-temporal distribution of stocks and the exchange of nutrients, trace gases, etc. among environmental compartments. More recently, LSMs have also become capable of assimilating time series of in-situ reference observations. This enables calibrating the underlying functional relationships to site-specific characteristics, or constraining the model results after each time step in an attempt to minimize drift. The spatial resolution of LSMs is typically on the order of 10^2-10^4 km2, which is suitable for linking regional to continental scales and beyond. However, continuous in-situ observations of relevant stock and exchange variables, such as tower-based eddy-covariance (EC) fluxes, represent spatial scales that are orders of magnitude smaller (10^-6-10^1 km2). During data assimilation, this significant gap in spatial representativeness is typically either neglected or side-stepped using simple tiling approaches. Moreover, at 'coarse' resolutions, a single LSM evaluation per time step implies linearity among the underlying functional relationships as well as among the sub-grid land cover fractions. This, however, is not warranted for land-atmosphere exchange processes over more complex terrain. Hence, it is desirable to explicitly consider spatial variability at LSM sub-grid scales. Here we present a procedure that determines, from a single EC tower, the spatially integrated probability density function (PDF) of the surface-atmosphere exchange for individual land covers. These PDFs allow quantifying the expected value, as well as the spatial variability over a target domain; they can be assimilated in tiling-capable LSMs and mitigate linearity assumptions at 'coarse' resolutions. The procedure is based on the extraction and extrapolation of environmental response functions (ERFs), for which a technically oriented companion poster is submitted. In short, the subsequent steps are: (i) Time

  3. Slide Presentations as Speech Suppressors: When and Why Learners Miss Oral Information

    ERIC Educational Resources Information Center

    Wecker, Christof

    2012-01-01

    The objective of this study was to test whether information presented on slides during presentations is retained at the expense of information presented only orally, and to investigate part of the conditions under which this effect occurs, and how it can be avoided. Such an effect could be expected and explained either as a kind of redundancy…

  4. Missing genetic information in case-control family data with general semi-parametric shared frailty model.

    PubMed

    Graber-Naidich, Anna; Gorfine, Malka; Malone, Kathleen E; Hsu, Li

    2011-04-01

    Case-control family data are now widely used to examine the role of gene-environment interactions in the etiology of complex diseases. In these types of studies, exposure levels are obtained retrospectively and, frequently, information on most risk factors of interest is available on the probands but not on their relatives. In this work we consider correlated failure time data arising from population-based case-control family studies with missing genotypes of relatives. We present a new method for estimating the age-dependent marginalized hazard function. The proposed technique has two major advantages: (1) it is based on the pseudo full likelihood function rather than a pseudo composite likelihood function, which usually suffers from substantial efficiency loss; (2) the cumulative baseline hazard function is estimated using a two-stage estimator instead of an iterative process. We assess the performance of the proposed methodology with simulation studies, and illustrate its utility on a real data example.

  5. Media Education and Information Literacy: Are We Missing Most of the Real Lessons?

    ERIC Educational Resources Information Center

    Duncan, Barry

    1997-01-01

    Discusses cultural issues and implications of media education and information literacy. Presents examples of the social impact of new technologies. Outlines insights from audience research on the effects of media. Lists Les Brown's "Seven Deadly Sins of the Digital Age." (AEF)

  6. The Impact of Information and Communication Technology on Education: The Missing Discourse between Three Different Paradigms

    ERIC Educational Resources Information Center

    Aviram, Aharon; Talmi, Deborah

    2005-01-01

    Using a new methodological tool, the authors analyzed a large number of texts on information and communication technology (ICT) and education, and identified three clusters of views that guide educationists "in the field" and in more academic contexts. The clusters reflect different fundamental assumptions on ICT and education. The authors argue…

  7. Working with Missing Values

    ERIC Educational Resources Information Center

    Acock, Alan C.

    2005-01-01

    Less than optimum strategies for missing values can produce biased estimates, distorted statistical power, and invalid conclusions. After reviewing traditional approaches (listwise, pairwise, and mean substitution), selected alternatives are covered including single imputation, multiple imputation, and full information maximum likelihood…

  8. What's missing? Discussing stem cell translational research in educational information on stem cell "tourism".

    PubMed

    Master, Zubin; Zarzeczny, Amy; Rachul, Christen; Caulfield, Timothy

    2013-01-01

    Stem cell tourism is a growing industry in which patients pursue unproven stem cell therapies for a wide variety of illnesses and conditions. It is a challenging market to regulate due to a number of factors including its international, online, direct-to-consumer approach. Calls to provide education and information to patients, their families, physicians, and the general public about the risks associated with stem cell tourism are mounting. Initial studies examining the perceptions of patients who have pursued stem cell tourism indicate many are highly critical of the research and regulatory systems in their home countries and believe them to be stagnant and unresponsive to patient needs. We suggest that educational material should include an explanation of the translational research process, in addition to other aspects of stem cell tourism, as one means to help promote greater understanding and, ideally, curb patient demand for unproven stem cell interventions. The material provided must stress that strong scientific research is required in order for therapies to be safe and have a greater chance at being effective. Through an analysis of educational material on stem cell tourism and translational stem cell research from patient groups and scientific societies, we describe essential elements that should be conveyed in educational material provided to patients. Although we support the broad dissemination of educational material on stem cell translational research, we also acknowledge that education may simply not be enough to engender patient and public trust in domestic research and regulatory systems. However, promoting patient autonomy by providing good quality information to patients so they can make better informed decisions is valuable in itself, irrespective of whether it serves as an effective deterrent of stem cell tourism.

  9. Case reports describing treatments in the emergency medicine literature: missing and misleading information

    PubMed Central

    Richason, Tiffany P; Paulson, Stephen M; Lowenstein, Steven R; Heard, Kennon J

    2009-01-01

    Background Although randomized trials and systematic reviews provide the "best evidence" for guiding medical practice, many emergency medicine journals still publish case reports (CRs). The quality of the reporting in these publications has not been assessed. Objectives In this study we sought to determine the proportion of treatment-related case reports that adequately reported information about the patient, disease, interventions, co-interventions, outcomes and other critical information. Methods We identified CRs published in 4 emergency medicine journals in 2000–2005 and categorized them according to their purpose (disease description, overdose or adverse drug reaction, diagnostic test or treatment effect). Treatment-related CRs were reviewed for the presence or absence of 11 reporting elements. Results All told, 1,316 CRs were identified; of these, 85 (6.5%; 95CI = 66, 84) were about medical or surgical treatments. Most contained adequate descriptions of the patient (99%; 95CI = 95, 100), the stage and severity of the patient's disease (88%; 95CI = 79, 93), the intervention (80%; 95CI = 70, 87) and the outcomes of treatment (90%; 95CI = 82, 95). Fewer CRs reported the patient's co-morbidities (45%; 95CI = 35, 56), concurrent medications (30%; 95CI = 21, 40) or co-interventions (57%; 95CI = 46, 67), or mentioned any possible treatment side-effects (33%; 95CI = 24, 44). Only 37% (95CI = 19, 38) discussed alternative explanations for favorable outcomes. Generalizability of treatment effects to other patients was mentioned in only 29% (95CI = 20, 39). Just 2 CRs (2.3%; 95CI = 1, 8) reported a "denominator" (the number of patients subjected to the same intervention, whether or not successful). Conclusion Treatment-related CRs in emergency medicine journals often omit critical details about treatments, co-interventions, outcomes, generalizability, causality and denominators. As a result, the information may be misleading to providers, and the clinical applications may

  10. A Note on the Use of Missing Auxiliary Variables in Full Information Maximum Likelihood-Based Structural Equation Models

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2008-01-01

    Recent missing data studies have argued in favor of an "inclusive analytic strategy" that incorporates auxiliary variables into the estimation routine, and Graham (2003) outlined methods for incorporating auxiliary variables into structural equation analyses. In practice, the auxiliary variables often have missing values, so it is reasonable to…

  11. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both.

  12. Accounting for interactions and complex inter-subject dependency in estimating treatment effect in cluster randomized trials with missing outcomes

    PubMed Central

    Prague, Melanie; Wang, Rui; Stephens, Alisa; Tchetgen Tchetgen, Eric; DeGruttola, Victor

    2016-01-01

    Semi-parametric methods are often used for the estimation of intervention effects on correlated outcomes in cluster-randomized trials (CRTs). When outcomes are missing at random (MAR), inverse probability weighted (IPW) methods incorporating baseline covariates can be used to deal with informative missingness. Augmented generalized estimating equations (AUG) correct for imbalance in baseline covariates but need to be extended for MAR outcomes. However, in the presence of interactions between treatment and baseline covariates, neither method alone produces consistent estimates for the marginal treatment effect if the model for the interaction is not correctly specified. We propose an AUG-IPW estimator that weights by the inverse of the probability of being a complete case and allows different outcome models in each intervention arm. This estimator is doubly robust (DR): it gives consistent estimates if either the missing-data process or the outcome model is correctly specified. We consider the problem of covariate interference, which arises when the outcome of an individual may depend on covariates of other individuals. When interfering covariates are not modeled, the DR property prevents bias as long as covariate interference is not present simultaneously for the outcome and the missingness. An R package implementing the proposed method has been developed. An extensive simulation study and an application to a CRT of an HIV risk-reduction intervention in South Africa illustrate the method. PMID:27060877
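
    The doubly robust idea can be illustrated with a simple AIPW-type estimate of an outcome mean under MAR (a generic sketch, not the paper's AUG-IPW estimator for CRTs; all data and models below are assumptions):

```python
# Augmented IPW (doubly robust) estimate of E[Y] with MAR missingness:
# consistent if EITHER the missingness model pi(x) OR the outcome model m(x)
# is correctly specified.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 1))
y = 2.0 + 1.5 * x[:, 0] + rng.normal(size=n)
p_obs = 1 / (1 + np.exp(-(0.5 + x[:, 0])))   # missingness depends on x (MAR)
r = rng.random(n) < p_obs                    # r = True -> outcome observed

pi = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]  # missingness model
m = LinearRegression().fit(x[r], y[r]).predict(x)           # outcome model

# AIPW: the r/pi factor zeroes out rows where y is unobserved, so y is only
# ever used where r = True.
mu_dr = np.mean(r / pi * (y - m) + m)
print(mu_dr)   # close to the true mean E[Y] = 2.0
```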

  13. Covariance Models for Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2014-05-01

    a new class of generalized Gibbs random fields, IEEE Transactions on Information Theory, 53(12), 4667-4679. [2] D. T. Hristopulos and M. Zukovic, 2011. Relationships between correlation lengths and integral scales for covariance models with more than two parameters, Stochastic Environmental Research and Risk Assessment, 25(1), 11-19. [3] D. T. Hristopulos, 2014. Radial Covariance Functions Motivated by Spatial Random Field Models with Local Interactions, arXiv:1401.2823 [math.ST].

  14. Missing Mechanism Information

    ERIC Educational Resources Information Center

    Tryon, Warren W.

    2009-01-01

    The first recommendation Kazdin made for advancing the psychotherapy research knowledge base, improving patient care, and reducing the gulf between research and practice was to study the mechanisms of therapeutic change. He noted, "The study of mechanisms of change has received the least attention even though understanding mechanisms may well be…

  15. A class of covariate-dependent spatiotemporal covariance functions.

    PubMed

    Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M

    2011-12-01

    In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects on the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way to allow the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and describe methods to assess their dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States.
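
    One generic way to let a correlation structure depend on local covariates (illustrative only, not necessarily the authors' construction) is a Gibbs/Paciorek-type kernel whose local length-scale is driven by a covariate such as altitude:

```python
# Covariate-dependent covariance sketch (1-D): each site i gets a local
# length-scale ell_i from a covariate, and the Gibbs form
#   C(s_i, s_j) = sigma^2 * sqrt(2 ell_i ell_j / (ell_i^2 + ell_j^2))
#                 * exp(-d_ij^2 / (ell_i^2 + ell_j^2))
# guarantees positive definiteness.
import numpy as np

def covariate_dependent_cov(sites, altitude, sigma2=1.0, a=0.5, b=0.3):
    ell = a + b * (altitude - altitude.min()) / np.ptp(altitude)  # local scales
    d2 = (sites[:, None] - sites[None, :]) ** 2
    s2 = ell[:, None] ** 2 + ell[None, :] ** 2
    pref = np.sqrt(2 * np.outer(ell, ell) / s2)
    return sigma2 * pref * np.exp(-d2 / s2)

sites = np.linspace(0.0, 10.0, 50)
altitude = np.abs(np.sin(sites))                 # stand-in covariate field
K = covariate_dependent_cov(sites, altitude)
print(np.all(np.linalg.eigvalsh(K) > -1e-10))    # numerically PSD
```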

  16. Model Selection Criteria for Missing-Data Problems Using the EM Algorithm.

    PubMed

    Ibrahim, Joseph G; Zhu, Hongtu; Tang, Niansheng

    2008-12-01

    We consider novel methods for the computation of model selection criteria in missing-data problems based on the output of the EM algorithm. The methodology is very general and can be applied to numerous situations involving incomplete data within an EM framework, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Toward this goal, we develop a class of information criteria for missing-data problems, called IC_{H,Q}, which yields the Akaike information criterion and the Bayesian information criterion as special cases. The computation of IC_{H,Q} requires an analytic approximation to a complicated function, called the H-function, along with output from the EM algorithm used in obtaining maximum likelihood estimates. The approximation to the H-function leads to a large class of information criteria, called IC_{H̃(k),Q}. Theoretical properties of IC_{H̃(k),Q}, including consistency, are investigated in detail. To eliminate the analytic approximation to the H-function, a computationally simpler approximation to IC_{H,Q}, called IC_Q, is proposed, the computation of which depends solely on the Q-function of the EM algorithm. Advantages and disadvantages of IC_{H̃(k),Q} and IC_Q are discussed and examined in detail in the context of missing-data problems. Extensive simulations are given to demonstrate the methodology and to examine the small-sample and large-sample performance of IC_{H̃(k),Q} and IC_Q in missing-data problems. An AIDS data set is also presented to illustrate the proposed methodology.

  17. Propensity score analysis with missing data.

    PubMed

    Cham, Heining; West, Stephen G

    2016-09-01

    Propensity score analysis is a method that equates treatment and control groups on a comprehensive set of measured confounders in observational (nonrandomized) studies. A successful propensity score analysis reduces bias in the estimate of the average treatment effect in a nonrandomized study, making the estimate more comparable with that obtained from a randomized experiment. This article reviews and discusses an important practical issue in propensity analysis, in which the baseline covariates (potential confounders) and the outcome have missing values (incompletely observed). We review the statistical theory of propensity score analysis and estimation methods for propensity scores with incompletely observed covariates. Traditional logistic regression and modern machine learning methods (e.g., random forests, generalized boosted modeling) as estimation methods for incompletely observed covariates are reviewed. Balance diagnostics and equating methods for incompletely observed covariates are briefly described. Using an empirical example, the propensity score estimation methods for incompletely observed covariates are illustrated and compared.
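
    A minimal sketch of the workflow the abstract compares, imputing the incompletely observed covariates and then estimating propensity scores with a machine learning method (all data and model choices below are assumptions, not the article's example):

```python
# Impute missing covariates, then estimate propensity scores and
# inverse-probability-of-treatment weights.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
t = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)  # treatment
X_miss = X.copy()
X_miss[rng.random((n, 3)) < 0.2] = np.nan                     # 20% missing

X_imp = IterativeImputer(random_state=0).fit_transform(X_miss)
ps = GradientBoostingClassifier(random_state=0).fit(X_imp, t).predict_proba(X_imp)[:, 1]
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))  # IPT weights for effect estimation
print(ps[:5], w[:5])
```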

  18. Help for Finding Missing Children.

    ERIC Educational Resources Information Center

    McCormick, Kathleen

    1984-01-01

    Efforts to locate missing children have expanded from a federal law allowing for entry of information into an F.B.I. computer system to companion bills before Congress for establishing a national missing child clearinghouse and a Justice Department center to help in conducting searches. Private organizations are also involved. (KS)

  19. 'Miss Frances', 'Miss Gail' and 'Miss Sandra' Crapemyrtles

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Agricultural Research Service, United States Department of Agriculture, announces the release to nurserymen of three new crapemyrtle cultivars named 'Miss Gail', 'Miss Frances', and 'Miss Sandra'. 'Miss Gail' resulted from a cross-pollination between 'Catawba' as the female parent and 'Arapaho' ...

  20. Galilean covariant harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Horzela, Andrzej; Kapuscik, Edward

    1993-01-01

    A Galilean covariant approach to the classical mechanics of a single particle is described. Within the proposed formalism, all non-covariant force laws are rejected; the acting forces are instead defined covariantly by differential equations. Such an approach leads beyond standard classical mechanics and gives an example of non-Newtonian mechanics. It is shown that the exactly solvable linear system of differential equations defining forces contains the Galilean covariant description of the harmonic oscillator as a particular case. Additionally, it is demonstrated that in Galilean covariant classical mechanics the validity of Newton's second law of dynamics implies Hooke's law, and vice versa. It is shown that the kinetic and total energies transform differently under Galilean transformations.

  1. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on the high-speed satellite collision probability, P_c, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object; the two matrices are combined into a single relative position error covariance matrix, and components of the combined error covariance are then varied to obtain a maximum P_c. If error covariance information was available for only one of the two objects, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful upper bound on P_c. There are various avenues along which an upper bound on the high-speed satellite collision probability has been pursued. Typically, for the collision-plane representation of the high-speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of the ellipse), the size (scaling of the standard deviations) or the orientation (rotation of the ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P_c. Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P_c. But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The various usual methods of finding a maximum P_c do
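
    For context, the underlying collision-plane computation being bounded is a 2-D Gaussian integral over a circular hard-body region; a minimal sketch with invented numbers follows (the maximization over the missing covariance described above would wrap around a routine like this):

```python
# Standard collision-plane Pc: integrate a 2-D Gaussian (combined position
# error covariance C, mean = predicted miss vector) over a circular hard-body
# region of radius R.  All values are made up for illustration.
import numpy as np
from scipy.stats import multivariate_normal

def collision_probability(miss, C, R, n=400):
    g = np.linspace(-R, R, n)
    xx, yy = np.meshgrid(g, g)
    inside = xx**2 + yy**2 <= R**2                    # hard-body disk mask
    pdf = multivariate_normal(mean=miss, cov=C).pdf(np.dstack([xx, yy]))
    cell = (g[1] - g[0]) ** 2                         # grid-cell area
    return float((pdf * inside).sum() * cell)

miss = np.array([50.0, 20.0])                         # metres, collision plane
C = np.array([[900.0, 200.0], [200.0, 400.0]])        # combined covariance
print(collision_probability(miss, C, R=10.0))
```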

  2. One‐stage individual participant data meta‐analysis models: estimation of treatment‐covariate interactions must avoid ecological bias by separating out within‐trial and across‐trial information

    PubMed Central

    Hua, Hairui; Burke, Danielle L.; Crowther, Michael J.; Ensor, Joie; Tudur Smith, Catrin

    2016-01-01

    Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is −0.011 (95% CI: −0.019 to −0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is −0.007 (95% CI: −0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias.
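
    The recommended centring can be sketched in a few lines; the column names and toy data below are assumptions for illustration, but the model follows the within-/across-trial separation described above:

```python
# Within-trial centring for a one-stage IPD interaction model: replace each
# patient covariate x_ij by (x_ij - xbar_j) and add the trial mean xbar_j as a
# separate term, so the treatment-covariate interaction uses only within-trial
# information.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "trial": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "treat": [0, 1, 0, 1, 0, 1, 0, 1, 0],
    "age":   [40, 55, 60, 30, 45, 50, 65, 70, 35],
    "y":     [1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.8, 1.0, 1.3],
})
df["age_mean"] = df.groupby("trial")["age"].transform("mean")  # across-trial part
df["age_c"] = df["age"] - df["age_mean"]                       # within-trial part
fit = smf.ols("y ~ treat * age_c + treat:age_mean + C(trial)", data=df).fit()
print(fit.params["treat:age_c"])   # within-trial interaction estimate
```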

  3. On the joys of missing data.

    PubMed

    Little, Todd D; Jorgensen, Terrence D; Lang, Kyle M; Moore, E Whitney G

    2014-03-01

    We provide conceptual introductions to missingness mechanisms--missing completely at random, missing at random, and missing not at random--and state-of-the-art methods of handling missing data--full-information maximum likelihood and multiple imputation--followed by a discussion of planned missing designs: Multiform questionnaire protocols, 2-method measurement models, and wave-missing longitudinal designs. We reviewed 80 articles of empirical studies published in the 2012 issues of the Journal of Pediatric Psychology to present a picture of how adequately missing data are currently handled in this field. To illustrate the benefits of using multiple imputation or full-information maximum likelihood and incorporating planned missingness into study designs, we provide example analyses of empirical data gathered using a 3-form planned missing design.

  4. Covariant mutually unbiased bases

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro

    2016-06-01

    The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which are the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2^r covariance can still be achieved by restricting to proper subgroups of the symplectic group, that constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.

  5. Covariant Noncommutative Field Theory

    SciTech Connect

    Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.

    2008-07-02

    The covariant approach to noncommutative field and gauge theories is revisited. In the process, the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The Lagrangian and Hamiltonian formalism is consistently introduced.

  6. A hierarchical nest survival model integrating incomplete temporally varying covariates

    PubMed Central

    Converse, Sarah J; Royle, J Andrew; Adler, Peter H; Urbanek, Richard P; Barzen, Jeb A

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the

  7. A hierarchical nest survival model integrating incomplete temporally varying covariates.

    PubMed

    Converse, Sarah J; Royle, J Andrew; Adler, Peter H; Urbanek, Richard P; Barzen, Jeb A

    2013-11-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the

  8. A hierarchical nest survival model integrating incomplete temporally varying covariates

    USGS Publications Warehouse

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
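
    The marginalization over days with missing covariate values can be sketched with a simple Monte Carlo stand-in for the hierarchical model described in the preceding records (coefficients, the covariate model, and the data below are all invented for illustration):

```python
# Daily nest survival with an incompletely observed, temporally varying
# covariate: draw unobserved insect-index values from a fitted covariate model
# and average the survival likelihood over the draws.
import numpy as np

rng = np.random.default_rng(1)
T = 30
beta0, beta1 = 4.0, -0.9                  # logistic daily-survival coefficients
z = rng.normal(1.0, 0.5, T)               # true daily insect index
observed = rng.random(T) < 0.5            # trapping occurred on ~half the days

def survival_loglik(z_series):
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * z_series)))  # daily survival prob
    return np.log(p).sum()                # nest survives all T days

mu, sd = z[observed].mean(), z[observed].std(ddof=1)  # fitted covariate model
draws = np.array([
    survival_loglik(np.where(observed, z, rng.normal(mu, sd, T)))
    for _ in range(500)
])
# Monte Carlo estimate of the likelihood marginalized over missing days
print(np.log(np.exp(draws).mean()))
```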

  9. Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data.

    PubMed

    Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M

    2006-11-07

    The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult, mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation of the covariance or its inverse. To overcome these difficulties, noise covariance models consisting of one pair, or a sum of multiple pairs, of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., when the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds in duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model, which consists of a diagonal spatial noise covariance and a Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization or data processing is required. Thus, it can be used as an alternative when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models, such as conventional diagonal covariance and one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
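
    The proposed structure, a diagonal spatial covariance combined with a Toeplitz temporal covariance, is easy to construct explicitly; the toy sizes and the AR-like first column below are assumptions for illustration:

```python
# Spatiotemporal noise covariance sketch: Sigma = D (diagonal spatial)
# Kronecker T (Toeplitz temporal).
import numpy as np
from scipy.linalg import toeplitz

n_sensors, n_times = 5, 8
sensor_var = np.array([1.0, 1.5, 0.8, 1.2, 2.0])
D = np.diag(sensor_var)                  # diagonal spatial covariance

rho = 0.6
T = toeplitz(rho ** np.arange(n_times))  # Toeplitz temporal covariance (AR-like)

Sigma = np.kron(D, T)                    # full spatiotemporal covariance
print(Sigma.shape)                       # (40, 40)
```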

  10. Covariant Bardeen perturbation formalism

    NASA Astrophysics Data System (ADS)

    Vitenti, S. D. P.; Falciano, F. T.; Pinto-Neto, N.

    2014-05-01

    In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describe cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. We then stress that in the referred covariant approach, one necessarily introduces an additional hypersurface choice to the problem. Using our mixed and pure tensors approach, we are able to construct a one-to-one map relating the usual gauge dependence of the Bardeen formalism with the hypersurface dependence inherent to the covariant approach. Finally, through the use of this map, we define full nonlinear tensors that at first order correspond to the three known gauge invariant variables Φ, Ψ and Ξ, which are simultaneously foliation and gauge invariant. We then stress that the use of the proposed mixed tensors allows one to construct simultaneously gauge and hypersurface invariant variables at any order.

  11. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
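
    The simple covariance map mentioned first is just the shot-to-shot covariance of a spectrum with itself; a toy simulation with one injected correlated fragment pair (all numbers invented):

```python
# Minimal covariance map: for shot-resolved spectra X (one row per laser
# shot), C(i, j) = <X_i X_j> - <X_i><X_j> reveals channels whose counts
# fluctuate together across shots.
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_bins = 5000, 64
base = rng.poisson(1.0, size=(n_shots, n_bins)).astype(float)
# inject a correlated pair of channels (e.g., two fragments of one explosion)
events = rng.poisson(0.5, size=n_shots)
base[:, 10] += events
base[:, 40] += events

cov_map = np.cov(base, rowvar=False)   # n_bins x n_bins covariance map
print(cov_map[10, 40])                 # stands out against the background
```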

  12. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
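
    Simulation-extrapolation itself is compact enough to sketch: refit with successively more added measurement error, then extrapolate the fitted slope back to zero error (the data, error variance, and quadratic extrapolant below are invented assumptions):

```python
# Minimal SIMEX sketch for a slope attenuated by covariate measurement error:
# add noise of variance lambda * sigma_u^2, average refits, extrapolate the
# slope to lambda = -1 (the no-measurement-error limit).
import numpy as np

rng = np.random.default_rng(0)
n, sigma_u = 2000, 0.8
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
w = x + rng.normal(scale=sigma_u, size=n)     # mismeasured covariate

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    sims = [np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n),
                       y, 1)[0]
            for _ in range(50)]
    slopes.append(np.mean(sims))              # mean attenuated slope at lambda

coef = np.polyfit(lams, slopes, 2)            # quadratic in lambda
print(np.polyval(coef, -1.0))                 # SIMEX estimate, near the true 2.0
```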

  13. A simulation-based marginal method for longitudinal data with dropout and mismeasured covariates.

    PubMed

    Yi, Grace Y

    2008-07-01

    Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods to adjust for the bias induced by missing observations. There is relatively little work on investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. In this paper, we establish the asymptotic properties for the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method.

  14. Covariance Applications with Kiwi

    NASA Astrophysics Data System (ADS)

    Mattoon, C. M.; Brown, D.; Elliott, J. B.

    2012-05-01

    The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named 'Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
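
    The core operation described, generating multiple variations of nuclear data from a covariance matrix, amounts to correlated Gaussian sampling via a Cholesky factor; the 3-group numbers below are invented, and this is not Kiwi's actual code:

```python
# Draw correlated variations of a nuclear-data vector from its covariance.
import numpy as np

rng = np.random.default_rng(0)
xs = np.array([10.0, 5.0, 1.0])                    # nominal cross sections
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])               # (invented) covariance
Lch = np.linalg.cholesky(cov)
variations = xs + rng.standard_normal((1000, 3)) @ Lch.T  # 1000 sampled libraries
print(variations.mean(axis=0))                     # ~ nominal values
print(np.cov(variations, rowvar=False))            # ~ input covariance
```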

  15. The intraclass covariance matrix.

    PubMed

    Carey, Gregory

    2005-09-01

    Introduced by C.R. Rao in 1945, the intraclass covariance matrix has seen little use in behavioral genetic research, despite the fact that it was developed to deal with family data. Here, I reintroduce this matrix, and outline its estimation and basic properties for data sets on pairs of relatives. The intraclass covariance matrix is appropriate whenever the research design or mathematical model treats the ordering of the members of a pair as random. Because the matrix has only one estimate of a population variance and covariance, both the observed matrix and the residual matrix from a fitted model are easy to inspect visually; there is no need to mentally average homologous statistics. Fitting a model to the intraclass matrix also gives the same log likelihood, likelihood-ratio (LR) chi2, and parameter estimates as fitting that model to the raw data. A major advantage of the intraclass matrix is that only two factors influence the LR chi2--the sampling error in estimating population parameters and the discrepancy between the model and the observed statistics. The more frequently used interclass covariance matrix adds a third factor to the chi2--sampling error of homologous statistics. Because of this, the degrees of freedom for fitting models to an intraclass matrix differ from fitting that model to an interclass matrix. Future research is needed to establish differences in power, if any, between the interclass and the intraclass matrix.

  16. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  17. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies.

  18. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits RAILROAD RETIREMENT BOARD INTERNAL ADMINISTRATION, POLICY AND PROCEDURES USE OF PENALTY MAIL TO ASSIST IN... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about...

  19. Missing semantic annotation in databases. The root cause for data integration and migration problems in information systems.

    PubMed

    Dugas, M

    2014-01-01

    Data integration is a well-known grand challenge in information systems. It is highly relevant in medicine because of the multitude of patient data sources. Semantic annotations of data items regarding concept and value domain, based on comprehensive terminologies can facilitate data integration and migration. Therefore it should be implemented in databases from the very beginning.

  20. Missing the Target: We Need to Focus on Informal Care Rather than Preschool. Evidence Speaks Reports, Vol 1, #19

    ERIC Educational Resources Information Center

    Loeb, Susanna

    2016-01-01

    Despite the widely-recognized benefits of early childhood experiences in formal settings that enrich the social and cognitive environments of children, many children--particularly infants and toddlers--spend their days in unregulated (or very lightly regulated) "informal" childcare settings. Over half of all one- and two-year-olds are…

  1. Missing persons-missing data: the need to collect antemortem dental records of missing persons.

    PubMed

    Blau, Soren; Hill, Anthony; Briggs, Christopher A; Cordner, Stephen M

    2006-03-01

    incorporated into the National Coroners Information System (NCIS) managed, on behalf of Australia's Coroners, by the Victorian Institute of Forensic Medicine. The existence of the NCIS would ensure operational collaboration in the implementation of the system and cost savings to Australian policing agencies involved in missing person inquiries. The implementation of such a database would facilitate timely and efficient reconciliation of clinical and postmortem dental records and have subsequent social and financial benefits.

  2. Missing Funds

    ERIC Educational Resources Information Center

    Hassenpflug, Ann

    2012-01-01

    A high school drama coach informs assistant principal Laura Madison that the money students earned through fund-raising activities seems to have vanished and that the male assistant principal may be involved in the disappearance of the funds. Laura has to determine how to address this situation. She considers her past experiences with problematic…

  3. Covariance based outlier detection with feature selection.

    PubMed

    Zwilling, Chris E; Wang, Michelle Y

    2016-08-01

    The present covariance-based outlier detection algorithm selects, from a candidate set of feature vectors, those that are best at identifying outliers. Features extracted from biomedical and health informatics data can be more informative in disease assessment, and there are no restrictions on the nature and number of features that can be tested. An important challenge for an algorithm operating on a set of features, however, is to winnow the effective features from the ineffective ones. The algorithm described in this paper leverages covariance information from the time series data to identify the features with the highest sensitivity for outlier identification. Empirical results demonstrate the efficacy of the method.
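
    One way to realize covariance-based scoring with feature selection (an illustrative wrapper; the paper's selection criterion may differ) is to compare robust Mahalanobis distances across candidate feature subsets:

```python
# Fit a robust covariance on each candidate feature subset and keep the subset
# whose Mahalanobis distances best separate known outliers from the rest.
import numpy as np
from itertools import combinations
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:10] += np.array([4.0, 4.0, 0.0, 0.0])   # outliers live in features 0 and 1
labels = np.zeros(200, dtype=bool)
labels[:10] = True

best = None
for subset in combinations(range(4), 2):
    d = MinCovDet(random_state=0).fit(X[:, subset]).mahalanobis(X[:, subset])
    gap = d[labels].mean() - d[~labels].mean()   # separation achieved
    if best is None or gap > best[0]:
        best = (gap, subset)
print(best)   # expected to pick features (0, 1)
```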

  4. Using Analysis of Covariance (ANCOVA) with Fallible Covariates

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Aguinis, Herman

    2011-01-01

    Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…

  5. Missing the target: including perspectives of women with overweight and obesity to inform stigma‐reduction strategies

    PubMed Central

    Himmelstein, M. S.; Gorin, A. A.; Suh, Y. J.

    2017-01-01

    Summary Objective Pervasive weight stigma and discrimination have led to ongoing calls for efforts to reduce this bias. Despite increasing research on stigma‐reduction strategies, perspectives of individuals who have experienced weight stigma have rarely been included to inform this research. The present study conducted a systematic examination of women with high body weight to assess their perspectives about a broad range of strategies to reduce weight‐based stigma. Methods Women with overweight or obesity (N = 461) completed an online survey in which they evaluated the importance, feasibility and potential impact of 35 stigma‐reduction strategies in diverse settings. Participants (91.5% who reported experiencing weight stigma) also completed self‐report measures assessing experienced and internalized weight stigma. Results Most participants assigned high importance to all stigma‐reduction strategies, with school‐based and healthcare approaches accruing the highest ratings. Adding weight stigma to existing anti‐harassment workplace training was rated as the most impactful and feasible strategy. The family environment was viewed as an important intervention target, regardless of participants' experienced or internalized stigma. Conclusion These findings underscore the importance of including people with stigmatized identities in stigma‐reduction research; their insights provide a necessary and valuable contribution that can inform ways to reduce weight‐based inequities and prioritize such efforts. PMID:28392929

  6. Covariant deformed oscillator algebras

    NASA Technical Reports Server (NTRS)

    Quesne, Christiane

    1995-01-01

    The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU(sub q)(n) x SU(sub q)(m)-covariant q-bosonic algebra is discussed in some detail.

  7. The Bayesian Covariance Lasso.

    PubMed

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly as frequentist methods for non-full rank data.

  9. Impact of the 235U Covariance Data in Benchmark Calculations

    SciTech Connect

    Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.

  10. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
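
    The test the abstract describes can be sketched compactly: under a well-sized covariance, each squared Mahalanobis distance of the propagated-state error should be a draw from a 3-DoF chi-squared distribution, and an ECDF-based goodness-of-fit test checks this. The sketch below is a minimal illustration of that idea, using a Kolmogorov-Smirnov test as one concrete ECDF-based choice; the function name and inputs are hypothetical, not taken from the paper.

    ```python
    import numpy as np
    from scipy import stats

    def covariance_realism_gof(errors, covariances, alpha=0.05):
        """ECDF goodness-of-fit check for covariance realism.

        errors:      (n, 3) position errors (propagated minus definitive states).
        covariances: (n, 3, 3) propagated position covariances.
        If the covariances are realistically sized, the squared Mahalanobis
        distances should follow a 3-DoF chi-squared distribution.
        """
        d2 = np.array([e @ np.linalg.solve(P, e)
                       for e, P in zip(errors, covariances)])
        # Kolmogorov-Smirnov as one concrete ECDF-based GOF test.
        stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
        return stat, p_value, p_value > alpha
    ```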

  11. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  12. What Is Missing in Counseling Research? Reporting Missing Data

    ERIC Educational Resources Information Center

    Sterner, William R.

    2011-01-01

    Missing data have long been problematic in quantitative research. Despite the statistical and methodological advances made over the past 3 decades, counseling researchers fail to provide adequate information on this phenomenon. Interpreting the complex statistical procedures and esoteric language seems to be a contributing factor. An overview of…

  13. Covariance Analysis of Gamma Ray Spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-01

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.
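
    As a rough illustration of the idea, assuming many short repeated acquisitions of the same source, the channel-by-channel covariance can be formed directly from the count fluctuations: averaging the acquisitions gives the ordinary spectrum, while the off-diagonal covariance retains the correlation information that averaging discards. The function below is a hypothetical sketch, not the authors' code.

    ```python
    import numpy as np

    def spectral_covariance(spectra):
        """Covariance analysis of repeated gamma-ray acquisitions.

        spectra: (n_acquisitions, n_channels) array of counts from many short
        acquisitions of the same source.
        """
        mean = spectra.mean(axis=0)          # the ordinary averaged spectrum
        fluct = spectra - mean               # fluctuations about the mean
        cov = fluct.T @ fluct / (len(spectra) - 1)
        return mean, cov

    # An off-diagonal entry cov[i, j] well above the Poisson-only expectation
    # flags channels whose counts fluctuate together, e.g. a full-energy peak
    # and its escape peak, or two gamma rays emitted in coincidence.
    ```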

  15. Covariant magnetic connection hypersurfaces

    NASA Astrophysics Data System (ADS)

    Pegoraro, F.

    2016-04-01

    In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved 'magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when E · B = 0.

  16. A note on MAR, identifying restrictions, model comparison, and sensitivity analysis in pattern mixture models with and without covariates for incomplete data.

    PubMed

    Wang, Chenguang; Daniels, Michael J

    2011-09-01

    Pattern mixture modeling is a popular approach for handling incomplete longitudinal data. Such models are not identifiable by construction. Identifying restrictions is one approach to mixture model identification (Little, 1995, Journal of the American Statistical Association 90, 1112-1121; Little and Wang, 1996, Biometrics 52, 98-111; Thijs et al., 2002, Biostatistics 3, 245-265; Kenward, Molenberghs, and Thijs, 2003, Biometrika 90, 53-71; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis) and is a natural starting point for missing not at random sensitivity analysis (Thijs et al., 2002, Biostatistics 3, 245-265; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis). However, when the pattern specific models are multivariate normal, identifying restrictions corresponding to missing at random (MAR) may not exist. Furthermore, identification strategies can be problematic in models with covariates (e.g., baseline covariates with time-invariant coefficients). In this article, we explore conditions necessary for identifying restrictions that result in MAR to exist under a multivariate normality assumption and strategies for identifying sensitivity parameters for sensitivity analysis or for a fully Bayesian analysis with informative priors. In addition, we propose alternative modeling and sensitivity analysis strategies under a less restrictive assumption for the distribution of the observed response data. We adopt the deviance information criterion for model comparison and perform a simulation study to evaluate the performances of the different modeling approaches. We also apply the methods to a longitudinal clinical trial. Problems caused by baseline covariates with time-invariant coefficients are investigated and an alternative identifying restriction based on residuals is proposed as a solution.

  17. A Nonparametric Prior for Simultaneous Covariance Estimation.

    PubMed

    Gaskins, Jeremy T; Daniels, Michael J

    2013-01-01

    In the modeling of longitudinal data from several groups, appropriate handling of the dependence structure is of central importance. Standard methods include specifying a single covariance matrix for all groups or independently estimating the covariance matrix for each group without regard to the others, but when these model assumptions are incorrect, these techniques can lead to biased mean effects or loss of efficiency, respectively. Thus, it is desirable to develop methods to simultaneously estimate the covariance matrix for each group that will borrow strength across groups in a way that is ultimately informed by the data. In addition, for several groups with covariance matrices of even medium dimension, it is difficult to manually select a single best parametric model among the huge number of possibilities given by incorporating structural zeros and/or commonality of individual parameters across groups. In this paper we develop a family of nonparametric priors using the matrix stick-breaking process of Dunson et al. (2008) that seeks to accomplish this task by parameterizing the covariance matrices in terms of the parameters of their modified Cholesky decomposition (Pourahmadi, 1999). We establish some theoretic properties of these priors, examine their effectiveness via a simulation study, and illustrate the priors using data from a longitudinal clinical trial.
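
    The reparameterization underlying these priors, the modified Cholesky decomposition of Pourahmadi (1999) cited in the abstract, can be sketched in a few lines: T Σ T' = D with T unit lower triangular and D diagonal, so the below-diagonal entries of T (generalized autoregressive coefficients) and the diagonal of D (innovation variances) are unconstrained. The implementation below is an illustrative sketch, not the authors' code.

    ```python
    import numpy as np

    def modified_cholesky(Sigma):
        """Modified Cholesky decomposition: T @ Sigma @ T.T = D, with T unit
        lower triangular and D diagonal."""
        L = np.linalg.cholesky(Sigma)   # Sigma = L @ L.T
        d = np.diag(L)
        C = L / d                       # unit lower triangular factor
        T = np.linalg.inv(C)
        D = np.diag(d ** 2)
        return T, D

    Sigma = np.array([[2.0, 0.5],
                      [0.5, 1.0]])
    T, D = modified_cholesky(Sigma)
    assert np.allclose(T @ Sigma @ T.T, D)
    ```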

  18. The Use of Covariation as a Principle of Causal Analysis

    ERIC Educational Resources Information Center

    Shultz, Thomas R.; Mendelson, Rosyln

    1975-01-01

    This study investigated the use of covariation as a principle of causal analysis in children 3-4, 6-7, and 9-11 years of age. The results indicated that children as young as 3 years were capable of using covariation information in their attributions of simple physical effects. (Author/CS)

  19. Hospital variation in missed nursing care.

    PubMed

    Kalisch, Beatrice J; Tschannen, Dana; Lee, Hyunhwa; Friese, Christopher R

    2011-01-01

    Quality of nursing care across hospitals is variable, and this variation can result in poor patient outcomes. One aspect of quality nursing care is the amount of necessary care that is omitted. This article reports on the extent and type of nursing care missed and the reasons for missed care. The MISSCARE Survey was administered to nursing staff (n = 4086) who provide direct patient care in 10 acute care hospitals. Missed nursing care patterns as well as reasons for missing care (labor resources, material resources, and communication) were common across all hospitals. Job title (ie, registered nurse vs nursing assistant), shift worked, absenteeism, perceived staffing adequacy, and patient work loads were significantly associated with missed care. The data from this study can inform quality improvement efforts to reduce missed nursing care and promote favorable patient outcomes.

  20. OD Covariance in Conjunction Assessment: Introduction and Issues

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.; Duncan, M.

    2015-01-01

    Primary and secondary covariances are combined and projected into the conjunction plane (the plane perpendicular to the relative velocity vector at TCA). The primary is placed on the x-axis at (miss distance, 0) and is represented by a circle of radius equal to the sum of both spacecraft circumscribing radii. The z-axis is perpendicular to the x-axis in the conjunction plane. Pc is the portion of the combined error ellipsoid that falls within the hard-body radius circle.
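
    A minimal sketch of the resulting two-dimensional Pc computation, assuming the combined covariance has already been projected into the conjunction plane as a 2x2 matrix; function and argument names are illustrative, and a simple midpoint quadrature stands in for the more careful integration schemes used in practice.

    ```python
    import numpy as np

    def probability_of_collision(miss_distance, cov2d, hard_body_radius, n=400):
        """2-D Pc: integrate the zero-mean Gaussian with the combined projected
        covariance over the hard-body circle centered at (miss_distance, 0)."""
        Pinv = np.linalg.inv(cov2d)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov2d)))
        xs = np.linspace(miss_distance - hard_body_radius,
                         miss_distance + hard_body_radius, n)
        zs = np.linspace(-hard_body_radius, hard_body_radius, n)
        X, Z = np.meshgrid(xs, zs)
        inside = (X - miss_distance) ** 2 + Z ** 2 <= hard_body_radius ** 2
        pts = np.stack([X, Z], axis=-1)
        dens = norm * np.exp(-0.5 * np.einsum('...i,ij,...j', pts, Pinv, pts))
        dA = (xs[1] - xs[0]) * (zs[1] - zs[0])
        return float((dens * inside).sum() * dA)  # midpoint quadrature
    ```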

  1. Deriving covariant holographic entanglement

    NASA Astrophysics Data System (ADS)

    Dong, Xi; Lewkowycz, Aitor; Rangamani, Mukund

    2016-11-01

    We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.

  2. Stardust Navigation Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Menon, Premkumar R.

    2000-01-01

    The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by JPL. This paper will describe the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and its trajectory will be provided first, followed by a discussion of the covariance procedure and models. Predicted accuracies will be examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter Stardust will return to Earth when the Sample Return Capsule (SRC) will separate and land at the Utah Test Site (UTTR) on January 15, 2006. The spacecraft will however be deflected off into a heliocentric orbit. The mission is divided into three phases for the covariance analysis: 1) Launch to EGA, 2) EGA to Wild-2 encounter and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers

  3. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    PubMed

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure in which we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.

  5. CERAMIC: Case-Control Association Testing in Samples with Related Individuals, Based on Retrospective Mixed Model Analysis with Adjustment for Covariates

    PubMed Central

    Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan. PMID:27695091

  6. Missing data and multiple imputation in clinical epidemiological research.

    PubMed

    Pedersen, Alma B; Mikkelsen, Ellen M; Cronin-Fenton, Deirdre; Kristensen, Nickolaj R; Pham, Tra My; Pedersen, Lars; Petersen, Irene

    2017-01-01

    Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data.
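
    As a concrete illustration of multiple imputation under MAR, the sketch below uses scikit-learn's IterativeImputer (a chained-equations style imputer) to create several completed data sets and pools the resulting regression slopes. A full analysis would also combine within- and between-imputation variances via Rubin's rules; the simulated data and all settings here are illustrative only.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    y = 2.0 + 1.5 * x + rng.normal(size=n)
    data = np.column_stack([x, y])
    # Make x missing at random: missingness depends on the observed y.
    missing = rng.random(n) < 1.0 / (1.0 + np.exp(-(y - 2.0)))
    data[missing, 0] = np.nan

    slopes = []
    for seed in range(20):                      # m = 20 imputations
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        completed = imputer.fit_transform(data)
        X = np.column_stack([np.ones(n), completed[:, 0]])
        beta, *_ = np.linalg.lstsq(X, completed[:, 1], rcond=None)
        slopes.append(beta[1])

    print(np.mean(slopes))  # pooled point estimate of the slope
    ```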

  8. Tests of Homoscedasticity, Normality, and Missing Completely at Random for Incomplete Multivariate Data

    ERIC Educational Resources Information Center

    Jamshidian, Mortaza; Jalal, Siavash

    2010-01-01

    Test of homogeneity of covariances (or homoscedasticity) among several groups has many applications in statistical analysis. In the context of incomplete data analysis, tests of homoscedasticity among groups of cases with identical missing data patterns have been proposed to test whether data are missing completely at random (MCAR). These tests of…

  9. Principled missing data methods for researchers.

    PubMed

    Dong, Yiran; Peng, Chao-Ying Joanne

    2013-12-01

    The impact of missing data on quantitative research can be serious, leading to biased estimates of parameters, loss of information, decreased statistical power, increased standard errors, and weakened generalizability of findings. In this paper, we discussed and demonstrated three principled missing data methods: multiple imputation, full information maximum likelihood, and expectation-maximization algorithm, applied to a real-world data set. Results were contrasted with those obtained from the complete data set and from the listwise deletion method. The relative merits of each method are noted, along with common features they share. The paper concludes with an emphasis on the importance of statistical assumptions, and recommendations for researchers. Quality of research will be enhanced if (a) researchers explicitly acknowledge missing data problems and the conditions under which they occurred, (b) principled methods are employed to handle missing data, and (c) the appropriate treatment of missing data is incorporated into review standards of manuscripts submitted for publication.
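
    Of the three principled methods discussed, the expectation-maximization algorithm is the most self-contained to sketch. The following is a minimal, illustrative EM implementation (not the authors') for the mean and covariance of a multivariate normal with values missing at random; `em_mvn` is a hypothetical name.

    ```python
    import numpy as np

    def em_mvn(X, n_iter=200):
        """EM estimates of the mean and covariance of a multivariate normal
        from data with missing values (np.nan marks missingness)."""
        n, p = X.shape
        mu = np.nanmean(X, axis=0)
        Sigma = np.diag(np.nanvar(X, axis=0)) + 1e-6 * np.eye(p)
        for _ in range(n_iter):
            S = np.zeros((p, p))
            M = np.zeros(p)
            for i in range(n):
                obs = ~np.isnan(X[i])
                mis = ~obs
                x = np.where(obs, X[i], 0.0)
                C = np.zeros((p, p))  # conditional covariance of missing part
                if mis.any():
                    Soo = Sigma[np.ix_(obs, obs)]
                    Smo = Sigma[np.ix_(mis, obs)]
                    K = Smo @ np.linalg.inv(Soo)
                    # E-step: fill missing entries with conditional means.
                    x[mis] = mu[mis] + K @ (X[i][obs] - mu[obs])
                    C[np.ix_(mis, mis)] = Sigma[np.ix_(mis, mis)] - K @ Smo.T
                M += x
                S += np.outer(x, x) + C
            # M-step: update the parameters from the expected statistics.
            mu = M / n
            Sigma = S / n - np.outer(mu, mu)
        return mu, Sigma
    ```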

  10. Covariant harmonic oscillators: 1973 revisited

    NASA Technical Reports Server (NTRS)

    Noz, M. E.

    1993-01-01

    Using the relativistic harmonic oscillator, a physical basis is given to the phenomenological wave function of Yukawa which is covariant and normalizable. It is shown that this wave function can be interpreted in terms of the unitary irreducible representations of the Poincare group. The transformation properties of these covariant wave functions are also demonstrated.

  11. Covariance hypotheses for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Decell, H. P.; Peters, C.

    1983-01-01

    Two covariance hypotheses are considered for LANDSAT data acquired by sampling fields, one an autoregressive covariance structure and the other the hypothesis of exchangeability. A minimum entropy approximation of the first structure by the second is derived and shown to have desirable properties for incorporation into a mixture density estimation procedure. Results of a rough test of the exchangeability hypothesis are presented.

  12. National Center for Missing and Exploited Children

    MedlinePlus

    Team HOPE provides peer and emotional support to families.

  13. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
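
    The kind of estimator studied here can be sketched as follows: form entrywise covariances from pairwise-complete observations (consistent under MCAR), then regularize, for example by soft-thresholding when the target is sparse. This is a schematic illustration, not the paper's exact estimator; note that thresholding alone does not guarantee positive definiteness.

    ```python
    import numpy as np

    def pairwise_covariance(X):
        """Entrywise covariance from pairwise-complete observations (np.nan
        marks a missing value); validity relies on the MCAR assumption."""
        obs = ~np.isnan(X)
        means = np.nanmean(X, axis=0)
        D = np.where(obs, X - means, 0.0)
        n_pair = obs.T.astype(float) @ obs.astype(float)  # complete pairs
        return (D.T @ D) / n_pair

    def soft_threshold(S, lam):
        """Off-diagonal soft-thresholding, one standard route to a sparse
        estimate (an eigenvalue fix-up may be needed for positive
        definiteness)."""
        T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
        np.fill_diagonal(T, np.diag(S))
        return T
    ```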

  14. Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials

    DOE Data Explorer

    The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOE's Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity/uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release). [Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10^-5 eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for {sup 7}Li, {sup 232}Th, {sup 233,235,238}U and {sup 239}Pu as well as {sup 223,224,225,226}Ra, while natural Zn is replaced by {sup 64,66,67,68,70}Zn."

  15. Estimation methods for marginal and association parameters for longitudinal binary data with nonignorable missing observations.

    PubMed

    Li, Haocheng; Yi, Grace Y

    2013-02-28

    In longitudinal studies, missing observations occur commonly. It has been well known that biased results could be produced if missingness is not properly handled in the analysis. Authors have developed many methods with the focus on either incomplete response or missing covariate observations, but rarely on both. The complexity of modeling and computational difficulty would be the major challenges in handling missingness in both response and covariate variables. In this paper, we develop methods using the pairwise likelihood formulation to handle longitudinal binary data with missing observations present in both response and covariate variables. We propose a unified framework to accommodate various types of missing data patterns. We evaluate the performance of the methods empirically under a variety of circumstances. In particular, we investigate issues on efficiency and robustness. We analyze longitudinal data from the National Population Health Study with the use of our methods.

  16. Likelihood methods for regression models with expensive variables missing by design.

    PubMed

    Zhao, Yang; Lawless, Jerald F; McLeish, Donald L

    2009-02-01

    In some applications involving regression the values of certain variables are missing by design for some individuals. For example, in two-stage studies (Zhao and Lipsitz, 1992), data on "cheaper" variables are collected on a random sample of individuals in stage I, and then "expensive" variables are measured for a subsample of these in stage II. So the "expensive" variables are missing by design at stage I. Both estimating function and likelihood methods have been proposed for cases where either covariates or responses are missing. We extend the semiparametric maximum likelihood (SPML) method for missing covariate problems (e.g. Chen, 2004; Ibrahim et al., 2005; Zhang and Rockette, 2005, 2007) to deal with more general cases where covariates and/or responses are missing by design, and show that profile likelihood ratio tests and interval estimation are easily implemented. Simulation studies are provided to examine the performance of the likelihood methods and to compare their efficiencies with estimating function methods for problems involving (a) a missing covariate and (b) a missing response variable. We illustrate the ease of implementation of SPML and demonstrate its high efficiency.

  17. Hawking radiation and covariant anomalies

    SciTech Connect

    Banerjee, Rabin; Kulkarni, Shailesh

    2008-01-15

    Generalizing the method of Wilczek and collaborators we provide a derivation of Hawking radiation from charged black holes using only covariant gauge and gravitational anomalies. The reliability and universality of the anomaly cancellation approach to Hawking radiation is also discussed.

  18. A New Approach for Nuclear Data Covariance and Sensitivity Generation

    SciTech Connect

    Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.

    2005-05-24

    Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes.

  19. Development of covariance capabilities in EMPIRE code

    SciTech Connect

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  20. RNA sequence analysis using covariance models.

    PubMed Central

    Eddy, S R; Durbin, R

    1994-01-01

    We describe a general approach to several RNA sequence analysis problems using probabilistic models that flexibly describe the secondary structure and primary sequence consensus of an RNA sequence family. We call these models 'covariance models'. A covariance model of tRNA sequences is an extremely sensitive and discriminative tool for searching for additional tRNAs and tRNA-related sequences in sequence databases. A model can be built automatically from an existing sequence alignment. We also describe an algorithm for learning a model and hence a consensus secondary structure from initially unaligned example sequences and no prior structural information. Models trained on unaligned tRNA examples correctly predict tRNA secondary structure and produce high-quality multiple alignments. The approach may be applied to any family of small RNA sequences. PMID:8029015

  1. Covariance Matrix Evaluations for Independent Mass Fission Yields

    SciTech Connect

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of {sup 235}U(n{sub th}, f) and {sup 239}Pu(n{sub th}, f) reactions.

  2. Inverse covariance simplification for efficient uncertainty management

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Gutiérrez, J. A.

    2007-11-01

    When it comes to manipulating uncertain knowledge such as noisy observations of physical quantities, one may ask how to do it in a simple way. Processing corrupted signals or images always propagates the uncertainties from the data to the final results, whether these errors are explicitly computed or not. When such error estimates are provided, it is crucial to handle them in such a way that their interpretation, or their use in subsequent processing steps, remain user-friendly and computationally tractable. A few authors follow a Bayesian approach and provide uncertainties as an inverse covariance matrix. Despite its apparent sparsity, this matrix contains many small terms that carry little information. Methods have been developed to select the most significant entries, through the use of information-theoretic tools for instance. One has to find a Gaussian pdf that is close enough to the posterior pdf, and with a small number of non-zero coefficients in the inverse covariance matrix. We propose to restrict the search space to Markovian models (where only neighbors can interact), well-suited to signals or images. The originality of our approach is in conserving the covariances between neighbors while setting to zero the entries of the inverse covariance matrix for all other variables. This fully constrains the solution, and the computation is performed via a fast, alternate minimization scheme involving quadratic forms. The Markovian structure advantageously reduces the complexity of Bayesian updating (where the simplified pdf is used as a prior). Moreover, uncertainties exhibit the same temporal or spatial structure as the data.
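
    For one special case, a chain (nearest-neighbour) graph, the Gaussian Markov approximation that preserves the variances and neighbour covariances has a classical closed form: assemble the precision matrix from the inverted 2x2 marginals of adjacent pairs, correcting for the doubly counted interior sites. The sketch below illustrates only this special case; it is not the authors' alternate minimization scheme.

    ```python
    import numpy as np

    def chain_markov_precision(Sigma):
        """Closed-form Gaussian Markov (chain graph) approximation: the
        maximum-entropy Gaussian matching all single-site variances and
        nearest-neighbour covariances of Sigma."""
        p = Sigma.shape[0]
        K = np.zeros((p, p))
        for i in range(p - 1):
            idx = [i, i + 1]
            K[np.ix_(idx, idx)] += np.linalg.inv(Sigma[np.ix_(idx, idx)])
        for i in range(1, p - 1):
            K[i, i] -= 1.0 / Sigma[i, i]   # each interior site counted twice
        return K

    Sigma = np.array([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.6],
                      [0.3, 0.6, 1.0]])
    K = chain_markov_precision(Sigma)
    Sigma_tilde = np.linalg.inv(K)
    # Variances and neighbour covariances are preserved exactly; only the
    # non-neighbour entry Sigma[0, 2] is changed by the approximation.
    assert np.allclose(np.diag(Sigma_tilde), np.diag(Sigma))
    assert np.isclose(Sigma_tilde[0, 1], Sigma[0, 1])
    ```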

  3. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  4. Sensitivity of missing values in classification tree for large sample

    NASA Astrophysics Data System (ADS)

    Hasan, Norsida; Adam, Mohd Bakri; Mustapha, Norwati; Abu Bakar, Mohd Rizam

    2012-05-01

    Missing values in either predictor or response variables are a very common problem in statistics and data mining. Cases with missing values are often ignored, which results in loss of information and possible bias. The objective of our research was to investigate the sensitivity of missing data in a classification tree model for large samples. Data were obtained from one of the high-level educational institutions in Malaysia. Students' background data were randomly eliminated and a classification tree was used to predict students' degree classification. The results showed that for a large sample, the structure of the classification tree was sensitive to missing values, especially for samples containing more than ten percent missing values.

  5. Covariate-free and Covariate-dependent Reliability.

    PubMed

    Bentler, Peter M

    2016-12-01

    Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.

  6. Levy Matrices and Financial Covariances

    NASA Astrophysics Data System (ADS)

    Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail

    2003-10-01

    In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures behavior of the SP500 covariances. It may be of importance for assets diversification.

  7. A Simulation Study of Missing Data with Multiple Missing X's

    ERIC Educational Resources Information Center

    Rubright, Jonathan D.; Nandakumar, Ratna; Glutting, Joseph J.

    2014-01-01

    When exploring missing data techniques in a realistic scenario, the current literature is limited: most studies only consider consequences with data missing on a single variable. This simulation study compares the relative bias of two commonly used missing data techniques when data are missing on more than one variable. Factors varied include type…

  8. Restoration of HST images with missing data

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Missing data are a fairly common problem when restoring Hubble Space Telescope observations of extended sources. On Wide Field and Planetary Camera images, cosmic ray hits and CCD hot spots are the prevalent causes of data losses, whereas on Faint Object Camera images data are lost due to reseaux marks, blemishes, areas of saturation and the omnipresent frame edges. This contribution discusses a technique for 'filling in' missing data by statistical inference using information from the surrounding pixels. The major gain consists in minimizing adverse spill-over effects to the restoration in areas neighboring those where data are missing. When the mask delineating the support of 'missing data' is made dynamic, cosmic ray hits, etc. can be detected on the fly during restoration.
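
    A toy version of such neighbourhood-based infilling is normalized convolution: each flagged pixel is replaced by a Gaussian-weighted average of the surrounding good pixels, with the weights renormalized to ignore missing data. This sketch assumes a boolean mask of good pixels and is not the paper's method, which couples the inference with the restoration itself.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fill_missing(image, good_mask, sigma=2.0):
        """Replace flagged pixels (cosmic-ray hits, reseaux marks, blemishes)
        with a weighted average of surrounding good pixels via normalized
        convolution: each neighbour is weighted by a Gaussian kernel and by
        whether it holds valid data."""
        weights = gaussian_filter(good_mask.astype(float), sigma)
        values = gaussian_filter(np.where(good_mask, image, 0.0), sigma)
        estimate = values / np.maximum(weights, 1e-12)
        return np.where(good_mask, image, estimate)
    ```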

  9. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    SciTech Connect

    Pichara, Karim; Protopapas, Pavlos

    2013-11-10

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  10. Methods for Addressing Missing Data in Psychiatric and Developmental Research

    ERIC Educational Resources Information Center

    Croy, Calvin D.; Novins, Douglas K.

    2005-01-01

    Objective: First, to provide information about best practices in handling missing data so that readers can judge the quality of research studies. Second, to provide more detailed information about missing data analysis techniques and software on the Journal's Web site at www.jaacap.com. Method: We focus our review of techniques on those that are…

  11. Covariation Neglect among Novice Investors

    ERIC Educational Resources Information Center

    Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy

    2006-01-01

    In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…

  12. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
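
    The structure of the proposed estimator lends itself to a short sketch: keep the sample eigenvectors and clip the sample eigenvalues to an interval [tau, kappa_max * tau], choosing tau by maximizing the Gaussian likelihood, so the resulting condition number is at most kappa_max. The code below is an illustrative implementation of that structure, not the authors' software.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def condreg_covariance(S, kappa_max):
        """Condition-number-regularized covariance estimate (a sketch):
        clip the eigenvalues of the sample covariance S to [tau, kappa_max*tau],
        with tau chosen by maximizing the Gaussian likelihood."""
        lam, V = np.linalg.eigh(S)
        lam = np.maximum(lam, 1e-12)

        def negloglik(log_tau):
            tau = np.exp(log_tau)
            sig = np.clip(lam, tau, kappa_max * tau)
            return np.sum(np.log(sig) + lam / sig)

        res = minimize_scalar(negloglik,
                              bounds=(np.log(lam.min()), np.log(lam.max())),
                              method='bounded')
        tau = np.exp(res.x)
        sig = np.clip(lam, tau, kappa_max * tau)
        return (V * sig) @ V.T   # V @ diag(sig) @ V.T
    ```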

  13. Robust parametric indirect estimates of the expected cost of a hospital stay with covariates and censored data.

    PubMed

    Locatelli, Isabella; Marazzi, Alfio

    2013-06-30

    We consider the problem of estimating the mean hospital cost of stays of a class of patients (e.g., a diagnosis-related group) as a function of patient characteristics. The statistical analysis is complicated by the asymmetry of the cost distribution, the possibility of censoring on the cost variable, and the occurrence of outliers. These problems have often been treated separately in the literature, and a method offering a joint solution to all of them is still missing. Indirect procedures have been proposed, combining an estimate of the duration distribution with an estimate of the conditional cost for a given duration. We propose a parametric version of this approach, allowing for asymmetry and censoring in the cost distribution and providing a mean cost estimator that is robust in the presence of extreme values. In addition, the new method takes covariate information into account.

  14. Rats (Rattus norvegicus) flexibly retrieve objects' non-spatial and spatial information from their visuospatial working memory: effects of integrated and separate processing of these features in a missing-object recognition task.

    PubMed

    Keshen, Corrine; Cohen, Jerome

    2016-01-01

    After being trained to find a previously missing object within an array of four different objects, rats received occasional probe trials with such test arrays rotated from that of their respective three-object study arrays. Only animals exposed to each object's non-spatial features consistently paired with both its spatial features (feeder's relative orientation and direction) in the first experiment or with only feeder's relative orientation in the second experiment (Fixed Configuration groups) were adversely affected by probe trial test array rotations. This effect, however, was less persistent for this group in the second experiment but re-emerged when objects' non-spatial features were later rendered uninformative. Animals that had both types of each object's features randomly paired over trials but not between a trial's study and test array (Varied Configuration groups) were not adversely affected on probe trials but improved their missing-object recognition in the first experiment. These findings suggest that the Fixed Configuration groups had integrated each object's non-spatial features with both (in Experiment 1) or one (in Experiment 2) of its spatial features to construct a single representation that they could not easily compare to any object in a rotated probe test array. The Varied Configuration groups, in contrast, had to maintain separate representations of each object's features to solve this task. This prevented them from exhibiting such adverse effects on rotated probe trial test arrays but enhanced the rats' missing-object recognition in the first experiment. We discuss how rats' flexible use (retrieval) of encoded information from their visuospatial working memory corresponds to that of humans' visuospatial memory in object change detection and complex object recognition tasks. We also discuss how foraging-specific factors may have influenced each group's performance in this task.

  15. Best practices for missing data management in counseling psychology.

    PubMed

    Schlomer, Gabriel L; Bauman, Sheri; Card, Noel A

    2010-01-01

    This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common strategies for dealing with them are described. The authors provide an illustration in which data were simulated and evaluate 3 methods of handling missing data: mean substitution, multiple imputation, and full information maximum likelihood. Results suggest that mean substitution is a poor method for handling missing data, whereas both multiple imputation and full information maximum likelihood are recommended alternatives to this approach. The authors suggest that researchers fully consider and report the amount and pattern of missing data and the strategy for handling those data in counseling psychology research and that editors advise researchers of this expectation.
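
    As a quick illustration of why the recommended strategies differ, here is a minimal sketch in Python, assuming scikit-learn is available; IterativeImputer stands in for a multiple-imputation procedure, and FIML, which requires SEM software, is not shown.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import SimpleImputer, IterativeImputer
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(42)
      n = 500
      x = rng.normal(size=n)
      y = 0.5 * x + rng.normal(scale=0.5, size=n)     # true slope is 0.5
      data = np.column_stack([x, y])
      # make x missing at random: more likely missing when y is large
      miss = rng.random(n) < 1 / (1 + np.exp(-(y - 0.5)))
      data[miss, 0] = np.nan

      # mean substitution: attenuates the x-y association
      mean_filled = SimpleImputer(strategy="mean").fit_transform(data)
      print("mean substitution slope:",
            LinearRegression().fit(mean_filled[:, [0]], mean_filled[:, 1]).coef_[0])

      # multiple imputation: draw several completed datasets and pool
      slopes = []
      for m in range(10):
          imp = IterativeImputer(sample_posterior=True, random_state=m)
          filled = imp.fit_transform(data)
          slopes.append(LinearRegression()
                        .fit(filled[:, [0]], filled[:, 1]).coef_[0])
      print("pooled MI slope:", np.mean(slopes))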

  16. New capabilities for processing covariance data in resonance region

    SciTech Connect

    Wiarda, D.; Dunn, M. E.; Greene, N. M.; Larson, N. M.; Leal, L. C.

    2006-07-01

    The AMPX [1] code system is a modular system of FORTRAN computer programs that relate to nuclear analysis, with a primary emphasis on tasks associated with the production and use of multigroup and continuous energy cross sections. The module PUFF-III within this code system handles the creation of multigroup covariance data from ENDF information. The resulting covariances are saved in COVERX format [2]. We recently expanded the capabilities of PUFF-III to include full handling of covariance data in the resonance region (resolved as well as unresolved). The new program handles all resonance covariance formats in File 32 except for the long-range covariance subsections. The new program has been named PUFF-IV. To our knowledge, PUFF-IV is the first processing code that can address both the new ENDF format for resolved resonance parameters and the new ENDF 'compact' covariance format. The existing code base was rewritten in Fortran 90 to allow for a more modular design. Results are identical between the new and old versions within rounding errors, where applicable. Automatic test cases have been added to ensure that consistent results are generated across computer systems. (authors)

  17. [Clinical research XIX. From clinical judgment to analysis of covariance].

    PubMed

    Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Moreno, Jorge; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2014-01-01

    The analysis of covariance (ANCOVA) is based on the general linear model. This technique involves a regression model, often multiple, in which the outcome is a continuous variable, the independent variables are qualitative or are introduced into the model as dummy (dichotomous) variables, and the factors for which adjustment is required (covariates) can be at any measurement level (i.e., nominal, ordinal or continuous). The maneuvers can be entered into the model as 1) fixed effects or 2) random effects. The difference between fixed and random effects depends on the type of information we want from the analysis of the effects. ANCOVA separates the effect of the independent variables from that of the covariates, i.e., it corrects the dependent variable by eliminating the influence of the covariates, given that these variables change in conjunction with the maneuvers or treatments and affect the outcome variable. ANCOVA should be done only if three assumptions are met: 1) the relationship between the covariate and the outcome is linear, 2) there is homogeneity of slopes, and 3) the covariate and the independent variable are independent of each other.
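
    A minimal sketch of such an analysis in Python, assuming statsmodels and pandas (variable names are hypothetical):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 90
      df = pd.DataFrame({
          "treatment": np.repeat(["A", "B", "C"], n // 3),  # fixed-effect maneuver
          "age": rng.uniform(20, 60, n),                    # continuous covariate
      })
      effect = df["treatment"].map({"A": 0.0, "B": 1.0, "C": 2.0})
      df["outcome"] = 5 + effect + 0.1 * df["age"] + rng.normal(0, 1, n)

      # ANCOVA: treatment enters as dummy variables, age as covariate
      model = smf.ols("outcome ~ C(treatment) + age", data=df).fit()
      print(model.summary())

      # assumption 2 (homogeneity of slopes): the interaction should be null
      check = smf.ols("outcome ~ C(treatment) * age", data=df).fit()
      print(check.pvalues.filter(like=":age"))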

  18. Gaussian covariance matrices for anisotropic galaxy clustering measurements

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio

    2016-04-01

    Measurements of redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations, which we validate with synthetic catalogues. Considering the Legendre moments ('multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter ('clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As the main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
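
    For orientation, the textbook Gaussian variance of an isotropic power spectrum measurement is easy to write down; the sketch below (Python with NumPy, toy survey numbers, and mode-counting conventions that vary in the literature) is a much-simplified special case of the multipole/wedge formulae derived in the paper.

      import numpy as np

      def gaussian_pk_variance(k, dk, pk, nbar, volume):
          """Diagonal Gaussian covariance of a measured power spectrum:
          sigma_P^2(k) = 2 * (P(k) + 1/nbar)^2 / N_modes(k), where
          N_modes counts independent Fourier modes in the shell [k, k+dk]
          (the factor 1/2 reflects the reality of the density field)."""
          n_modes = volume * 4.0 * np.pi * k**2 * dk / (2.0 * (2.0 * np.pi) ** 3)
          return 2.0 * (pk + 1.0 / nbar) ** 2 / n_modes

      # toy survey: V = 1e9 (Mpc/h)^3, nbar = 3e-4 (h/Mpc)^3, power-law P(k)
      k = np.linspace(0.05, 0.25, 21)            # h/Mpc
      pk = 2.0e4 * (k / 0.1) ** -1.5             # (Mpc/h)^3
      var = gaussian_pk_variance(k, dk=0.01, pk=pk, nbar=3e-4, volume=1.0e9)
      print(np.sqrt(var) / pk)                   # fractional error per bin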

  19. Realization of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2005-12-15

    In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable 'fidelity'. We present a general scheme which realizes the 1→3 phase-covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization encoded qubits, the first ever realized with photons, is reported.

  20. Missing proofs found.

    SciTech Connect

    Fitelson, B.; Wos, L.; Mathematics and Computer Science; Univ. of Wisconsin

    2001-01-01

    For close to a century, despite the efforts of fine minds that include Hilbert and Ackermann, Tarski and Bernays, Lukasiewicz, and Rose and Rosser, various proofs of a number of significant theorems have remained missing -- at least not reported in the literature -- amply demonstrating the depth of the corresponding problems. The types of such missing proofs are indeed diverse. For one example, a result may be guaranteed provable because of being valid, and yet no proof has been found. For a second example, a theorem may have been proved via metaargument, but the desired axiomatic proof based solely on the use of a given inference rule may have eluded the experts. For a third example, a theorem may have been announced by a master, but no proof was supplied. The finding of missing proofs of the cited types, as well as of other types, is the focus of this article. The means to finding such proofs rests with heavy use of McCune's automated reasoning program OTTER, reliance on a variety of powerful strategies this program offers, and employment of diverse methodologies. Here we present some of our successes and, because it may prove useful for circuit design and program synthesis as well as in the context of mathematics and logic, detail our approach to finding missing proofs. Well-defined and unmet challenges are included.

  1. Missed Diagnosis of Syrinx

    PubMed Central

    Oh, Chang Hyun; Kim, Chan Gyu; Lee, Jae-Hwan; Park, Hyeong-Chun; Park, Chong Oon

    2012-01-01

    Study Design Prospective, randomized, controlled human study. Purpose We checked the proportion of missed syrinx diagnoses among the examinees of the Korean military conscription. Overview of Literature A syrinx is a fluid-filled cavity within the spinal cord or brain stem that causes various neurological symptoms. A syrinx can easily be diagnosed by magnetic resonance imaging (MRI), yet the diagnosis is sometimes missed. Methods In this study, we reviewed 103 cases using cervical images, cervical MRI, or whole spine sagittal MRI; a syrinx was observed in 18 of these cases. A review of medical certificates or interviews was conducted, and the proportion of syrinx diagnoses was calculated. Results The proportion of syrinx diagnoses was about 66.7% (12 cases among 18). Missed diagnoses were not the result of the length of the syrinx, but of the type of image used for the initial diagnosis. Conclusions The missed diagnosis proportion of the syrinx is relatively high; therefore, a more careful imaging review is recommended. PMID:22439081

  2. Missing School Matters

    ERIC Educational Resources Information Center

    Balfanz, Robert

    2016-01-01

    Results of a survey conducted by the Office for Civil Rights show that 6 million public school students (13%) are not attending school regularly. Chronic absenteeism--defined as missing more than 10% of school for any reason--has been negatively linked to many key academic outcomes. Evidence shows that students who exit chronic absentee status can…

  3. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
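
    A minimal sketch of the Monte Carlo component of such a risk assessment, in Python with NumPy (toy numbers; operational tools propagate full states over the encounter rather than using a single closest-approach snapshot):

      import numpy as np

      def mc_collision_probability(rel_pos, cov_a, cov_b, hard_body_radius,
                                   n_samples=1_000_000, seed=0):
          """Monte Carlo collision probability at closest approach.
          rel_pos : nominal relative position (3,) between the objects
          cov_a/b : 3x3 position covariance of each object; errors are
                    assumed independent, so the relative covariance is
                    their sum
          """
          rng = np.random.default_rng(seed)
          cov = cov_a + cov_b
          samples = rng.multivariate_normal(rel_pos, cov, size=n_samples)
          dist = np.linalg.norm(samples, axis=1)
          return np.mean(dist < hard_body_radius)

      # toy conjunction: ~175 m nominal miss distance, 20 m combined radius
      p = mc_collision_probability(
          rel_pos=np.array([120.0, 100.0, 80.0]),          # metres
          cov_a=np.diag([50.0**2, 30.0**2, 10.0**2]),
          cov_b=np.diag([40.0**2, 25.0**2, 10.0**2]),
          hard_body_radius=20.0)
      print(p)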

  5. Understanding covariate shift in model performance

    PubMed Central

    McGaughey, Georgia; Walters, W. Patrick; Goldman, Brian

    2016-01-01

    Three different methods (logistic regression, covariate shift and k-NN) were applied to five internal datasets and one external, publicly available dataset where covariate shift existed. In all cases, k-NN's performance was inferior to either logistic regression or covariate shift. Surprisingly, there was no obvious advantage to using covariate shift to reweight the training data in the examined datasets. PMID:27803797
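
    The reweighting idea itself is easy to sketch: train a classifier to separate training from test inputs and convert its probabilities into importance weights. The following Python sketch assumes scikit-learn and shows the generic density-ratio trick, not necessarily the exact procedure used in the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def covariate_shift_weights(X_train, X_test):
          """Estimate importance weights w(x) ~ p_test(x) / p_train(x) by
          training a classifier to separate training from test inputs and
          converting its probabilities to odds."""
          X = np.vstack([X_train, X_test])
          z = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
          clf = LogisticRegression(max_iter=1000).fit(X, z)
          p = clf.predict_proba(X_train)[:, 1]
          # correct for unequal sample sizes
          return (p / (1 - p)) * (len(X_train) / len(X_test))

      rng = np.random.default_rng(0)
      X_train = rng.normal(0.0, 1.0, size=(2000, 2))   # training population
      X_test = rng.normal(0.7, 1.0, size=(500, 2))     # shifted deployment data
      w = covariate_shift_weights(X_train, X_test)
      print(w.mean(), w.min(), w.max())
      # w can be passed as sample_weight when refitting the model of interest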

  6. What are the best covariates for developing non-stationary rainfall Intensity-Duration-Frequency relationship?

    NASA Astrophysics Data System (ADS)

    Agilan, V.; Umamahesh, N. V.

    2017-03-01

    Present infrastructure design is primarily based on rainfall Intensity-Duration-Frequency (IDF) curves under the so-called stationarity assumption. However, in recent years extreme precipitation events have been increasing due to global climate change, creating non-stationarity in the series. Based on recent theoretical developments in Extreme Value Theory (EVT), recent studies proposed a methodology for developing non-stationary rainfall IDF curves by incorporating a trend in the parameters of the Generalized Extreme Value (GEV) distribution using Time as covariate. But Time may not be the best covariate, and it is important to analyze all possible covariates and find the best one to model the non-stationarity. In this study, five physical processes, namely urbanization, local temperature changes, global warming, the El Niño-Southern Oscillation (ENSO) cycle and the Indian Ocean Dipole (IOD), are used as covariates. Based on these five covariates and their possible combinations, sixty-two non-stationary GEV models are constructed. In addition, two non-stationary GEV models based on the Time covariate and one stationary GEV model are also constructed. The best model for each duration's rainfall series is chosen based on the corrected Akaike Information Criterion (AICc). From the findings of this study, it is observed that local processes (i.e., urbanization and local temperature changes) are the best covariates for short-duration rainfall and global processes (i.e., global warming, the ENSO cycle and the IOD) are the best covariates for long-duration rainfall of Hyderabad city, India. Furthermore, the covariate Time never qualifies as the best covariate. In addition, the identified best covariates are further used to develop non-stationary rainfall IDF curves for Hyderabad city. The proposed methodology can be applied in other situations to develop non-stationary IDF curves based on the best covariate.
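
    A minimal sketch of the model-comparison step in Python with SciPy (synthetic data, with only the location parameter made linear in a single covariate; the paper's sixty-two models extend the same idea to more parameters and covariate combinations):

      import numpy as np
      from scipy.stats import genextreme
      from scipy.optimize import minimize

      def fit_gev(x, covariate=None):
          """Fit a GEV by maximum likelihood, with location
          mu = mu0 + mu1 * covariate in the non-stationary case;
          returns (AICc, OptimizeResult). Uses scipy's shape convention."""
          def nll(theta):
              mu = theta[0] if covariate is None else theta[0] + theta[3] * covariate
              sigma, shape = np.exp(theta[1]), theta[2]
              val = -genextreme.logpdf(x, shape, loc=mu, scale=sigma).sum()
              return val if np.isfinite(val) else 1e10

          k = 3 if covariate is None else 4
          theta0 = np.zeros(k)
          theta0[0], theta0[1] = x.mean(), np.log(x.std())
          fit = minimize(nll, theta0, method="Nelder-Mead")
          n = len(x)
          aicc = 2 * fit.fun + 2 * k + 2 * k * (k + 1) / (n - k - 1)
          return aicc, fit

      # toy annual-maximum series whose location drifts with a covariate
      rng = np.random.default_rng(3)
      covar = 0.02 * np.arange(60) + rng.normal(0, 0.1, 60)  # e.g. local warming
      x = genextreme.rvs(-0.1, loc=30 + 8 * covar, scale=5, random_state=4)

      print("stationary AICc:    ", fit_gev(x)[0])
      print("non-stationary AICc:", fit_gev(x, covariate=covar)[0])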

  7. Multiple imputation for an incomplete covariate that is a ratio.

    PubMed

    Morris, Tim P; White, Ian R; Royston, Patrick; Seaman, Shaun R; Wood, Angela M

    2014-01-15

    We are concerned with multiple imputation of the ratio of two variables, which is to be used as a covariate in a regression analysis. If the numerator and denominator are not missing simultaneously, it seems sensible to make use of the observed variable in the imputation model. One such strategy is to impute missing values for the numerator and denominator, or the log-transformed numerator and denominator, and then calculate the ratio of interest; we call this 'passive' imputation. Alternatively, missing ratio values might be imputed directly, with or without the numerator and/or the denominator in the imputation model; we call this 'active' imputation. In two motivating datasets, one involving body mass index as a covariate and the other involving the ratio of total to high-density lipoprotein cholesterol, we assess the sensitivity of results to the choice of imputation model and, as an alternative, explore fully Bayesian joint models for the outcome and incomplete ratio. Fully Bayesian approaches using WinBUGS were unusable in both datasets because of computational problems. In our first dataset, multiple imputation results are similar regardless of the imputation model; in the second, results are sensitive to the choice of imputation model. Sensitivity depends strongly on the coefficient of variation of the ratio's denominator. A simulation study demonstrates that passive imputation without transformation is risky because it can lead to downward bias when the coefficient of variation of the ratio's denominator is larger than about 0.1. Active imputation or passive imputation after log-transformation is preferable.
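
    The passive/active distinction is easy to demonstrate. A rough sketch in Python, with scikit-learn's IterativeImputer standing in for a proper multiple-imputation procedure (single imputation shown for brevity):

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(7)
      n = 1000
      num = rng.lognormal(0.0, 0.3, n)                 # e.g. total cholesterol
      den = rng.lognormal(-1.0, 0.3, n)                # e.g. HDL cholesterol
      ratio = num / den
      den_obs = den.copy()
      den_obs[rng.random(n) < 0.3] = np.nan            # denominator 30% missing

      # passive imputation on the log scale: impute components, then divide
      logs = np.column_stack([np.log(num), np.log(den_obs)])
      logs_imp = IterativeImputer(random_state=0).fit_transform(logs)
      ratio_passive_log = np.exp(logs_imp[:, 0] - logs_imp[:, 1])

      # active imputation: impute the ratio directly, components as predictors
      arr = np.column_stack([ratio, num, den_obs])
      arr[np.isnan(den_obs), 0] = np.nan               # ratio missing with denom
      ratio_active = IterativeImputer(random_state=0).fit_transform(arr)[:, 0]

      print("truth:       ", ratio.mean())
      print("passive(log):", ratio_passive_log.mean())
      print("active:      ", ratio_active.mean())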

  8. Are Maxwell's equations Lorentz-covariant?

    NASA Astrophysics Data System (ADS)

    Redžić, D. V.

    2017-01-01

    It is stated in many textbooks that Maxwell's equations are manifestly covariant when written down in tensorial form. We recall that the tensorial form of Maxwell's equations does not by itself secure their tensorial content; they become covariant only by postulating certain transformation properties of the field functions. That fact should be stressed when teaching about the covariance of Maxwell's equations.

  9. Lorentz-covariant dissipative Lagrangian systems

    NASA Technical Reports Server (NTRS)

    Kaufman, A. N.

    1985-01-01

    The concept of dissipative Hamiltonian system is converted to Lorentz-covariant form, with evolution generated jointly by two scalar functionals, the Lagrangian action and the global entropy. A bracket formulation yields the local covariant laws of energy-momentum conservation and of entropy production. The formalism is illustrated by a derivation of the covariant Landau kinetic equation.

  10. Analysis of cross-over studies with missing data.

    PubMed

    Rosenkranz, Gerd K

    2015-08-01

    This paper addresses some aspects of the analysis of cross-over trials with missing or incomplete data. A literature review on the topic reveals that many proposals provide correct results under the missing completely at random assumption, while only some consider the more general missing at random situation. It is argued that mixed-effects models have a role in this context to recover some of the missing intra-subject information from the inter-subject information, in particular when missingness is ignorable. Eventually, sensitivity analyses to deal with more general missingness mechanisms are presented.

  11. Mardia's Multivariate Kurtosis with Missing Data

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Lambert, Paul L.; Fouladi, Rachel T.

    2004-01-01

    Mardia's measure of multivariate kurtosis has been implemented in many statistical packages commonly used by social scientists. It provides important information on whether a commonly used multivariate procedure is appropriate for inference. Many statistical packages also have options for missing data. However, there is no procedure for applying…

  12. What's Missing? Anti-Racist Sex Education!

    ERIC Educational Resources Information Center

    Whitten, Amanda; Sethna, Christabelle

    2014-01-01

    Contemporary sexual health curricula in Canada include information about sexual diversity and queer identities, but what remains missing is any explicit discussion of anti-racist sex education. Although there exists federal and provincial support for multiculturalism and anti-racism in schools, contemporary Canadian sex education omits crucial…

  13. Missing Data and Institutional Research

    ERIC Educational Resources Information Center

    Croninger, Robert G.; Douglas, Karen M.

    2005-01-01

    Many do not consider the effect that missing data have on their survey results nor do they know how to handle missing data. This chapter offers strategies for handling item-missing data and provides a practical example of how these strategies may affect results. The chapter concludes with recommendations for preventing and dealing with missing…

  14. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  15. Recurrence Analysis of Eddy Covariance Fluxes

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael

    2015-04-01

    The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five different questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
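
    The core object of RA is simple to construct; a minimal sketch in Python with NumPy (toy series; the threshold, embedding dimension and lag are analysis choices):

      import numpy as np

      def recurrence_matrix(x, eps, dim=3, lag=1):
          """Binary recurrence matrix of a time series after time-delay
          embedding: R[i, j] = 1 when embedded states i and j are closer
          than eps in the Euclidean norm."""
          n = len(x) - (dim - 1) * lag
          emb = np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])
          d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          return (d < eps).astype(int)

      # toy "flux" series: periodic signal plus noise
      rng = np.random.default_rng(0)
      t = np.arange(500)
      flux = np.sin(2 * np.pi * t / 48) + 0.3 * rng.standard_normal(500)
      R = recurrence_matrix(flux, eps=0.5)
      # recurrence rate, a basic RA measure; diagonal-line statistics such
      # as determinism are computed from the same matrix
      print("recurrence rate:", R.mean())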

  16. Covariance Structure Models for Gene Expression Microarray Data

    ERIC Educational Resources Information Center

    Xie, Jun; Bentler, Peter M.

    2003-01-01

    Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…

  17. Neutrality tests for sequences with missing data.

    PubMed

    Ferretti, Luca; Raineri, Emanuele; Ramos-Onsins, Sebastian

    2012-08-01

    Missing data are common in DNA sequences obtained through high-throughput sequencing. Furthermore, samples of low quality or problems in the experimental protocol often cause a loss of data even with traditional sequencing technologies. Here we propose modified estimators of variability and neutrality tests that can be naturally applied to sequences with missing data, without the need to remove bases or individuals from the analysis. Modified statistics include the Watterson estimator θW, Tajima's D, Fay and Wu's H, and HKA. We develop a general framework to take missing data into account in frequency spectrum-based neutrality tests and we derive the exact expression for the variance of these statistics under the neutral model. The neutrality tests proposed here can also be used as summary statistics to describe the information contained in other classes of data like DNA microarrays.
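
    A rough Python sketch of the idea for the Watterson estimator, in which each site keeps its own sample size after removing missing calls; this illustrates the general principle rather than the exact estimators derived in the paper.

      import numpy as np

      def harmonic(n):
          """Watterson normalizer a_n = sum_{i=1}^{n-1} 1/i."""
          return float(np.sum(1.0 / np.arange(1, n))) if n > 1 else 0.0

      def watterson_with_missing(alignment):
          """Watterson-type estimate of the per-site theta when sites have
          different effective sample sizes n_l due to missing calls (None).
          Since E[S_l] is proportional to a(n_l), one natural modification
          is theta = S / sum_l a(n_l)."""
          S, denom = 0, 0.0
          n_sites = len(alignment[0])
          for l in range(n_sites):
              bases = [seq[l] for seq in alignment if seq[l] is not None]
              if len(bases) < 2:
                  continue                      # site carries no information
              denom += harmonic(len(bases))
              S += int(len(set(bases)) > 1)     # is the site segregating?
          return S / denom

      # 5 sequences x 8 sites with scattered missing calls (None)
      aln = [
          ["A", "C", "G", None, "T", "A", "C", "G"],
          ["A", "C", "G", "G", "T", "A", "C", "G"],
          ["A", "T", "G", "G", None, "A", "C", "C"],
          ["A", "C", None, "G", "T", "G", "C", "C"],
          ["A", "C", "G", "G", "T", "A", None, "C"],
      ]
      print(watterson_with_missing(aln))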

  18. Missing people, migrants, identification and human rights.

    PubMed

    Nuzzolese, E

    2012-11-30

    The increasing volume and complexity of migratory flows has led to a range of problems concerning human rights, public health, disease and border control, and also the regulatory processes. As a result of war or internal conflicts, missing person cases and their management have to be regarded as a worldwide issue. Even in peacetime, the issue of missing persons remains relevant. In 2007 the Italian Ministry of the Interior appointed an extraordinary commissioner to analyse and assess the total number of unidentified recovered bodies and verify the extent of the phenomenon of missing persons, reported as 24,912 people in Italy (updated 31 December 2011). Of these, 15,632 persons are of foreign nationality and are still missing. The census of unidentified bodies revealed a total of 832 cases recovered in Italy since 1974. These bodies/human remains received a regular autopsy and were buried as 'corpses without a name'. In Italy judicial autopsy is performed to establish cause of death and identity, but odontology and dental radiology are rarely employed in identification cases. Nevertheless, odontologists can substantiate an identification through the 'biological profile', providing further information that can narrow the search to a smaller number of missing individuals even when no ante mortem dental data are available. The forensic dental community should put greater emphasis on the role of forensic odontology as a tool for the humanitarian identification of unidentified individuals and on best practice in human identification.

  19. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... GRANTS FOR USE OF SEAT BELTS-ALLOCATIONS BASED ON SEAT BELT USE RATES (Pt. 1240, App. B). If State-submitted seat belt use rate information is unavailable or inadequate for both calendar years 1996 and 1997, State seat belt use rates for calendar years 1996 and 1997 will be estimated...

  20. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  1. Bayesian modeling of air pollution health effects with missing exposure data.

    PubMed

    Molitor, John; Molitor, Nuoo-Ting; Jerrett, Michael; McConnell, Rob; Gauderman, Jim; Berhane, Kiros; Thomas, Duncan

    2006-07-01

    The authors propose a new statistical procedure that utilizes measurement error models to estimate missing exposure data in health effects assessment. The method detailed in this paper follows a Bayesian framework that allows estimation of various parameters of the model in the presence of missing covariates in an informative way. The authors apply this methodology to study the effect of household-level long-term air pollution exposures on lung function for subjects from the Southern California Children's Health Study pilot project, conducted in the year 2000. Specifically, they propose techniques to examine the long-term effects of nitrogen dioxide (NO2) exposure on children's lung function for persons living in 11 southern California communities. The effect of nitrogen dioxide exposure on various measures of lung function was examined, but, similar to many air pollution studies, no completely accurate measure of household-level long-term nitrogen dioxide exposure was available. Rather, community-level nitrogen dioxide was measured continuously over many years, but household-level nitrogen dioxide exposure was measured only during two 2-week periods, one period in the summer and one period in the winter. From these incomplete measures, long-term nitrogen dioxide exposure and its effect on health must be inferred. Results show that the method improves estimates when compared with standard frequentist approaches.

  2. Data Covariances from R-Matrix Analyses of Light Nuclei

    SciTech Connect

    Hale, G.M.; Paris, M.W.

    2015-01-15

    After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (⁵He) and with many resonances (¹³C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
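
    First-order error propagation itself is a one-line matrix identity; a small NumPy illustration with generic toy numbers (not EDA output):

      import numpy as np

      # First-order ("sandwich") propagation of parameter covariances to
      # cross-section covariances: if sigma_i depends on parameters p with
      # sensitivity S[i, j] = d sigma_i / d p_j, then C_sigma = S @ C_p @ S.T
      rng = np.random.default_rng(0)
      n_par, n_xs = 4, 3
      C_p = np.diag([1e-4, 4e-4, 9e-4, 1e-4])        # parameter covariance
      S = rng.normal(size=(n_xs, n_par))             # sensitivity derivatives
      C_sigma = S @ C_p @ S.T

      sd = np.sqrt(np.diag(C_sigma))
      corr = C_sigma / np.outer(sd, sd)              # correlation matrix
      print(sd)
      print(corr)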

  3. Missing data? Plan on it!

    PubMed

    Palmer, Raymond F; Royall, Donald R

    2010-10-01

    Longitudinal study designs are indispensable for investigating age-related functional change. There now are well-established methods for addressing missing data in longitudinal studies. Modern missing data methods not only minimize most problems associated with missing data (e.g., loss of power and biased parameter estimates), but also have valuable new applications such as research designs that use modern missing data methods to plan missing data purposefully. This article describes two state-of-the-art statistical methodologies for addressing missing data in longitudinal research: growth curve analysis and statistical measurement models. How the purposeful planning of missing data in research designs can reduce subject burden, improve data quality and statistical power, and manage costs is then described.

  4. A Two-Stage Approach to Missing Data: Theory and Application to Auxiliary Variables

    ERIC Educational Resources Information Center

    Savalei, Victoria; Bentler, Peter M.

    2009-01-01

    A well-known ad-hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a…

  5. Effectiveness of Four Methods of Handling Missing Data Using Samples from a National Database.

    ERIC Educational Resources Information Center

    Witta, E. Lea

    The effectiveness of four methods of handling missing data in reproducing the target sample covariance matrix and mean vector was tested using three levels of incomplete cases: 30%, 50%, and 70%. Data were selected from the National Education Longitudinal Study (NELS) database. Three levels of sample sizes (500, 1000, and 2000) were used. The…

  6. Comparison of Modern Methods for Analyzing Repeated Measures Data with Missing Values

    ERIC Educational Resources Information Center

    Vallejo, G.; Fernandez, M. P.; Livacic-Rojas, P. E.; Tuero-Herrero, E.

    2011-01-01

    Missing data are a pervasive problem in many psychological applications in the real world. In this article we study the impact of dropout on the operational characteristics of several approaches that can be easily implemented with commercially available software. These approaches include the covariance pattern model based on an unstructured…

  7. Determining Predictors of True HIV Status Using an Errors-in-Variables Model with Missing Data

    ERIC Educational Resources Information Center

    Rindskopf, David; Strauss, Shiela

    2004-01-01

    We demonstrate a model for categorical data that parallels the MIMIC model for continuous data. The model is equivalent to a latent class model with observed covariates; further, it includes simple handling of missing data. The model is used on data from a large-scale study of HIV that had both biological measures of infection and self-report…

  8. The Impact of Missing Data on Sample Reliability Estimates: Implications for Reliability Reporting Practices

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2004-01-01

    A method for incorporating maximum likelihood (ML) estimation into reliability analyses with item-level missing data is outlined. An ML estimate of the covariance matrix is first obtained using the expectation maximization (EM) algorithm, and coefficient alpha is subsequently computed using standard formulae. A simulation study demonstrated that…
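
    A compact sketch of the two-step procedure in Python with NumPy: a basic EM algorithm for the normal-theory ML covariance, followed by the standard alpha formula. This is simplified relative to production implementations.

      import numpy as np

      def em_covariance(X, n_iter=200):
          """ML mean and covariance of multivariate normal data with
          missing values (NaN) via a basic EM algorithm."""
          X = np.asarray(X, dtype=float)
          n, p = X.shape
          mu = np.nanmean(X, axis=0)
          sigma = np.diag(np.nanvar(X, axis=0))
          for _ in range(n_iter):
              xhat = np.empty_like(X)
              extra = np.zeros((p, p))          # summed conditional covariances
              for i in range(n):
                  m = np.isnan(X[i])
                  o = ~m
                  if not o.any():               # row entirely missing
                      xhat[i] = mu
                      extra += sigma
                      continue
                  xhat[i, o] = X[i, o]
                  if m.any():
                      soo = sigma[np.ix_(o, o)]
                      smo = sigma[np.ix_(m, o)]
                      xhat[i, m] = mu[m] + smo @ np.linalg.solve(soo, X[i, o] - mu[o])
                      extra[np.ix_(m, m)] += (sigma[np.ix_(m, m)]
                                              - smo @ np.linalg.solve(soo, smo.T))
              mu = xhat.mean(axis=0)
              dev = xhat - mu
              sigma = (dev.T @ dev + extra) / n
          return mu, sigma

      def coefficient_alpha(cov):
          k = cov.shape[0]
          return k / (k - 1) * (1.0 - np.trace(cov) / cov.sum())

      # toy 6-item scale with ~20% item-level missingness
      rng = np.random.default_rng(0)
      items = rng.normal(size=(300, 1)) + rng.normal(scale=1.0, size=(300, 6))
      items[rng.random(items.shape) < 0.2] = np.nan
      _, sigma = em_covariance(items)
      print("coefficient alpha:", round(coefficient_alpha(sigma), 3))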

  9. Posttraumatic stress disorder: the missed diagnosis.

    PubMed

    Grasso, Damion; Boonsiri, Joseph; Lipschitz, Deborah; Guyer, Amanda; Houshyar, Shadi; Douglas-Palumberi, Heather; Massey, Johari; Kaufman, Joan

    2009-01-01

    Posttraumatic stress disorder (PTSD) is frequently underdiagnosed in maltreated samples. Protective services information is critical for obtaining complete trauma histories and determining whether to survey PTSD symptoms in maltreated children. In the current study, without protective services information to supplement parent and child report, the PTSD diagnosis was missed in a significant proportion of cases. Collaboration between mental health professionals and protective service workers is critical in determining the psychiatric diagnoses and treatment needs of children involved with the child welfare system.

  10. Relativistic covariance of Ohm's law

    NASA Astrophysics Data System (ADS)

    Starke, R.; Schober, G. A. H.

    2016-04-01

    The derivation of Lorentz-covariant generalizations of Ohm's law has been a long-term issue in theoretical physics with deep implications for the study of relativistic effects in optical and atomic physics. In this article, we propose an alternative route to this problem, which is motivated by the tremendous progress in first-principles materials physics in general and ab initio electronic structure theory in particular. We start from the most general, Lorentz-covariant first-order response law, which is written in terms of the fundamental response tensor χμν relating induced four-currents to external four-potentials. By showing the equivalence of this description to Ohm's law, we prove the validity of Ohm's law in every inertial frame. We further use the universal relation between χμν and the microscopic conductivity tensor σkℓ to derive a fully relativistic transformation law for the latter, which includes all effects of anisotropy and relativistic retardation. In the special case of a constant, scalar conductivity, this transformation law can be used to rederive a standard textbook generalization of Ohm's law.

  11. Addressing missing participant outcome data in dental clinical trials.

    PubMed

    Spineli, Loukia M; Fleming, Padhraig S; Pandis, Nikolaos

    2015-06-01

    Missing outcome data are common in clinical trials; despite a well-designed study protocol, some randomized participants may leave the trial early, providing only some or none of the required data, or may be excluded after randomization. Premature discontinuation causes loss of information, potentially resulting in attrition bias and leading to problems in the interpretation of trial findings. The causes of information loss in a trial, known as mechanisms of missingness, may influence the credibility of the trial results. Analysis of trials with missing outcome data should ideally be handled with intention to treat (ITT) rather than per protocol (PP) analysis. However, true ITT analysis requires appropriate assumptions and imputation of missing data. Using a worked example from a published dental study, we highlight the key issues associated with missing outcome data in clinical trials, describe the most recognized approaches to handling missing outcome data, and explain the principles of ITT and PP analysis.

  12. Patient Portals as a Means of Information and Communication Technology Support to Patient-Centric Care Coordination – the Missing Evidence and the Challenges of Evaluation

    PubMed Central

    Georgiou, Andrew; Hyppönen, Hannele; Ammenwerth, Elske; de Keizer, Nicolette; Magrabi, Farah; Scott, Philip

    2015-01-01

    Objectives To review the potential contribution of Information and Communication Technology (ICT) to enable patient-centric and coordinated care, and in particular to explore the role of patient portals as a developing ICT tool, to assess the available evidence, and to describe the evaluation challenges. Methods Reviews of IMIA, EFMI, and other initiatives, together with literature reviews. Results We present the progression from care coordination to care integration, and from patient-centric to person-centric approaches. We describe the different roles of ICT as an enabler of the effective presentation of information as and when needed. We focus on the patient's role as a co-producer of health as well as the focus and purpose of care. We discuss the need for changing organisational processes as well as the current mixed evidence regarding patient portals as a logical tool, and the reasons for this dichotomy, together with the evaluation principles supported by theoretical frameworks so as to yield robust evidence. Conclusions There is expressed commitment to coordinated care and to putting the patient in the centre. However, to achieve this, new interactive patient portals will be needed to enable peer communication by all stakeholders including patients and professionals. Few portals capable of this exist to date. The evaluation of these portals as enablers of system change, rather than as simple windows into electronic records, is at an early stage and novel evaluation approaches are needed. PMID:26123909

  13. Identifying Heat Waves in Florida: Considerations of Missing Weather Data

    PubMed Central

    Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.

    2015-01-01

    Background Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198

  14. Computation of transform domain covariance matrices

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1975-01-01

    It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
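
    A small NumPy illustration of the basic identity, using an explicit unitary DFT matrix on a Toeplitz covariance; the paper's point is that the recursive structure of fast transforms allows computing the result without forming T explicitly.

      import numpy as np

      # Covariance of a process after a unitary transform T: if y = T x,
      # then C_y = T C_x T^H. The unitary DFT nearly diagonalizes a
      # stationary AR(1)-type Toeplitz covariance.
      n = 64
      idx = np.arange(n)
      C_x = 0.9 ** np.abs(idx[:, None] - idx[None, :])   # Toeplitz covariance

      T = np.fft.fft(np.eye(n), norm="ortho")            # unitary DFT matrix
      C_y = T @ C_x @ T.conj().T

      ratio = np.linalg.norm(np.diag(C_y)) / np.linalg.norm(C_y)
      print("fraction of covariance energy on the diagonal:",
            round(float(ratio), 3))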

  15. Shrinkage approach for EEG covariance matrix estimation.

    PubMed

    Beltrachini, Leandro; von Ellenrieder, Nicolas; Muravchik, Carlos H

    2010-01-01

    We present a shrinkage estimator for the EEG spatial covariance matrix of the background activity. We show that such an estimator has some advantages over the maximum likelihood and sample covariance estimators when few data are available to carry out the estimation. We find sufficient conditions for the consistency of the shrinkage estimators and results concerning their numerical stability. We compare several shrinkage schemes and show how to improve the estimator by incorporating known structure of the covariance matrix.
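
    For illustration, one standard shrinkage scheme is available off the shelf; a minimal sketch in Python with scikit-learn (synthetic "channels", not EEG data, and not necessarily the scheme preferred in the paper):

      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(0)
      p, n = 64, 40                        # e.g. 64 channels, only 40 samples
      idx = np.arange(p)
      true_cov = 0.5 ** np.abs(idx[:, None] - idx[None, :])
      X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

      sample = np.cov(X, rowvar=False)     # rank-deficient when n < p
      lw = LedoitWolf().fit(X)             # shrinkage toward a scaled identity

      print("sample covariance condition number:", np.linalg.cond(sample))
      print("shrunk covariance condition number:", np.linalg.cond(lw.covariance_))
      print("estimated shrinkage intensity:", lw.shrinkage_)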

  16. Expected estimating equations for missing data, measurement error, and misclassification, with application to longitudinal nonignorable missing data.

    PubMed

    Wang, C Y; Huang, Yijian; Chao, Edward C; Jeffcoat, Marjorie K

    2008-03-01

    Missing data, measurement error, and misclassification are three important problems in many research fields, such as epidemiological studies. It is well known that missing data and measurement error in covariates may lead to biased estimation. Misclassification may be considered a special type of measurement error for categorical data. Nevertheless, we treat misclassification as a problem distinct from measurement error because the statistical models for the two are different. Indeed, in the literature, methods for these three problems have generally been proposed separately, given that the statistical modeling for them is very different. The problem is more challenging in a longitudinal study with nonignorable missing data. In this article, we consider estimation in generalized linear models under these three incomplete data models. We propose a general approach based on expected estimating equations (EEEs) to solve these three incomplete data problems in a unified fashion. This EEE approach can be easily implemented and its asymptotic covariance can be obtained by sandwich estimation. Intensive simulation studies are performed under various incomplete data settings. The proposed method is applied to a longitudinal study of oral bone density in relation to body bone density.

  17. Adding local components to global functions for continuous covariates in multivariable regression modeling.

    PubMed

    Binder, H; Sauerbrei, W

    2010-03-30

    When global techniques, based on fractional polynomials (FPs), are employed for modeling potentially nonlinear effects of several continuous covariates on a response, accessible model equations are obtained. However, local features might be missed. Therefore, a procedure is introduced, which systematically checks model fits, obtained by the multivariable fractional polynomial (MFP) approach, for overlooked local features. Statistically significant local polynomials are then parsimoniously added. This approach, called MFP + L, is seen to result in an effective control of the Type I error with respect to the addition of local components in a small simulation study with univariate and multivariable settings. Prediction performance is compared with that of a penalized regression spline technique. In a setting unfavorable for FPs, the latter outperforms the MFP approach, if there is much information in the data. However, the addition of local features reduces this performance difference. There is only a small detrimental effect in settings where the MFP approach performs better. In an application example with children's respiratory health data, fits from the spline-based approach indicate many local features, but MFP + L adds only few significant features, which seem to have good support in the data. The proposed approach may be expected to be superior in settings with local features, but retains the good properties of the MFP approach in a large number of settings where global functions are sufficient.

  18. Identifying sources of uncertainty using covariance analysis

    NASA Astrophysics Data System (ADS)

    Hyslop, N. P.; White, W. H.

    2010-12-01

    Atmospheric aerosol monitoring often includes performing multiple analyses on a collected sample. Some common analyses resolve suites of elements or compounds (e.g., spectrometry, chromatography). Concentrations are determined through multi-step processes involving sample collection, physical or chemical analysis, and data reduction. Uncertainties in the individual steps propagate into uncertainty in the calculated concentration. The assumption in most treatments of measurement uncertainty is that errors in the various species concentrations measured in a sample are random and therefore independent of each other. This assumption is often not valid in speciated aerosol data because some errors can be common to multiple species. For example, an error in the sample volume will introduce a common error into all species concentrations determined in the sample, and these errors will correlate with each other. Measurement programs often use paired (collocated) measurements to characterize the random uncertainty in their measurements. Suites of paired measurements provide an opportunity to go beyond the characterization of measurement uncertainties in individual species to examine correlations amongst the measurement uncertainties in multiple species. This additional information can be exploited to distinguish sources of uncertainty that affect all species from those that only affect certain subsets or individual species. Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) program are used to illustrate these ideas. Nine analytes commonly detected in the IMPROVE network were selected for this analysis. The errors in these analytes can be reasonably modeled as multiplicative, and the natural log of the ratio of concentrations measured on the two samplers provides an approximation of the error. Figure 1 shows the covariation of these log ratios among the different analytes for one site. Covariance is strongest amongst the dust elements (Fe, Ca, and
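
    A toy version of the collocated log-ratio analysis in Python with NumPy; the data are synthetic, and a shared "volume-like" error produces exactly the cross-analyte covariance the approach exploits.

      import numpy as np

      # Toy collocated-sampler analysis: multiplicative errors, one shared
      # volume-like component per sampler plus analyte-specific noise.
      rng = np.random.default_rng(0)
      n = 400
      true = rng.lognormal(mean=1.0, sigma=0.8, size=(n, 3))  # 3 toy analytes
      vol_a = rng.normal(0, 0.05, size=(n, 1))   # common to all analytes (A)
      vol_b = rng.normal(0, 0.05, size=(n, 1))   # common to all analytes (B)
      spec_a = rng.normal(0, 0.03, size=(n, 3))  # analyte-specific (A)
      spec_b = rng.normal(0, 0.03, size=(n, 3))  # analyte-specific (B)
      conc_a = true * np.exp(vol_a + spec_a)
      conc_b = true * np.exp(vol_b + spec_b)

      log_ratio = np.log(conc_a / conc_b)        # approximates the error difference
      cov = np.cov(log_ratio, rowvar=False)
      sd = np.sqrt(np.diag(cov))
      print(np.round(cov / np.outer(sd, sd), 2)) # strong off-diagonal correlation
                                                 # flags the shared error source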

  19. Lorentz covariant κ-Minkowski spacetime

    SciTech Connect

    Dąbrowski, Ludwik; Godlinski, Michal; Piacitelli, Gherardo

    2010-06-15

    In recent years, different views on the interpretation of Lorentz covariance of noncommuting coordinates have been discussed. By a general procedure, we construct the minimal canonical central covariantization of the κ-Minkowski spacetime. Here, undeformed Lorentz covariance is implemented by unitary operators, in the presence of two dimensionful parameters. We then show that, though the usual κ-Minkowski spacetime is covariant under deformed (or twisted) Lorentz action, the resulting framework is equivalent to taking a noncovariant restriction of the covariantized model. We conclude with some general comments on the approach of deformed covariance.

  20. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breaking continuous covariates into subcategories often changes their nature and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which preserves the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group sizes well balanced. It is also shown that Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can generally be avoided through the implementation of the new method.
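
    A stripped-down sketch of the idea in Python with SciPy: two arms, one covariate, and a biased coin favouring the allocation that keeps the two arms' kernel density estimates closest. The actual Kernel-Minimization procedure is more elaborate.

      import numpy as np
      from scipy.stats import gaussian_kde

      def kde_distance(x, y, grid):
          """Approximate L1 distance between two Gaussian KDEs on a grid."""
          step = grid[1] - grid[0]
          return step * np.abs(gaussian_kde(x)(grid) - gaussian_kde(y)(grid)).sum()

      rng = np.random.default_rng(0)
      grid = np.linspace(-4, 4, 161)
      arms = {"A": [rng.normal() for _ in range(5)],   # burn-in allocations
              "B": [rng.normal() for _ in range(5)]}

      for _ in range(200):                             # sequential arrivals
          x = rng.normal()                             # patient's covariate value
          dist = {}
          for arm in ("A", "B"):
              trial_a = arms["A"] + ([x] if arm == "A" else [])
              trial_b = arms["B"] + ([x] if arm == "B" else [])
              dist[arm] = kde_distance(np.array(trial_a), np.array(trial_b), grid)
          best = min(dist, key=dist.get)
          other = "B" if best == "A" else "A"
          # biased coin favouring the allocation that keeps the KDEs closest
          arms[best if rng.random() < 0.8 else other].append(x)

      print({k: len(v) for k, v in arms.items()})
      print("final KDE distance:",
            round(kde_distance(np.array(arms["A"]), np.array(arms["B"]), grid), 4))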

  1. Generalized indirect covariance NMR formalism for establishment of multidimensional spin correlations.

    PubMed

    Snyder, David A; Brüschweiler, Rafael

    2009-11-19

    Multidimensional nuclear magnetic resonance (NMR) experiments measure spin-spin correlations, which provide important information about bond connectivities and molecular structure. However, direct observation of certain kinds of correlations can be very time-consuming due to limitations in sensitivity and resolution. Covariance NMR derives correlations between spins via the calculation of a (symmetric) covariance matrix, from which a matrix square root produces a spectrum with enhanced resolution. Recently, the covariance concept has been adapted to the reconstruction of nonsymmetric spectra from pairs of 2D spectra that have a frequency dimension in common. Since the unsymmetric covariance NMR procedure lacks the matrix square root step, it does not suppress relay effects and thereby may generate false positive signals due to chemical shift degeneracy. A generalized covariance formalism is presented here that embeds unsymmetric covariance processing within the context of the regular covariance transform. It permits the construction of unsymmetric covariance NMR spectra subjected to arbitrary matrix functions, such as the square root, with improved spectral properties. This formalism extends the domain of covariance NMR to include the reconstruction of nonsymmetric NMR spectra at resolutions or sensitivities that are superior to the ones achievable by direct measurements.
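
    The matrix operations involved are compact; a toy Python/SciPy sketch of the symmetric covariance transform on synthetic "spectra" (not real NMR data):

      import numpy as np
      from scipy.linalg import sqrtm

      # Symmetric covariance NMR in matrix form: for a data matrix F whose
      # rows are 1D traces, C = (F.T @ F)^(1/2); the matrix square root
      # sharpens the spectrum and suppresses relayed correlations.
      # Unsymmetric covariance of two spectra A, B sharing one dimension
      # is A.T @ B, with no square-root step (hence possible relay artefacts).
      rng = np.random.default_rng(0)
      n_t1, n_f2 = 128, 256
      peaks = np.zeros((3, n_f2))
      for i, pos in enumerate([40, 120, 200]):         # three toy resonances
          peaks[i, pos - 2:pos + 3] = 1.0
      mix = rng.standard_normal((n_t1, 3))
      mix[:, 1] = mix[:, 0] + 0.1 * rng.standard_normal(n_t1)  # correlated pair
      F = mix @ peaks + 0.01 * rng.standard_normal((n_t1, n_f2))

      C = np.real(sqrtm(F.T @ F))
      print("correlated pair:  ", round(C[40, 120], 2))
      print("uncorrelated pair:", round(C[40, 200], 2))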

  2. Covariate pharmacokinetic model building in oncology and its potential clinical relevance.

    PubMed

    Joerger, Markus

    2012-03-01

    When modeling pharmacokinetic (PK) data, identifying covariates is important in explaining interindividual variability and thus increasing the predictive value of the model. Nonlinear mixed-effects modeling with stepwise covariate modeling is frequently used to build structural covariate models, and the most commonly used software, NONMEM, provides estimates of the fixed-effect parameters (e.g., drug clearance) and of interindividual and residual unidentified random effects. The aim of covariate modeling is not only to find covariates that significantly influence the population PK parameters, but also to provide dosing recommendations for a certain drug under different conditions, e.g., organ dysfunction or combination chemotherapy. A true covariate is usually seen as one that carries unique information on a structural model parameter. Covariate models have improved our understanding of the pharmacology of many anticancer drugs, including busulfan and melphalan, which are part of high-dose pretransplant treatments; the antifolate methotrexate, whose elimination is strongly dependent on GFR and comedication; and the taxanes and tyrosine kinase inhibitors, the latter being subject to cytochrome P450 3A4 (CYP3A4)-associated metabolism. The purpose of this review article is to provide a tool to help understand population covariate analyses and their potential implications for the clinic. Accordingly, several population covariate models are listed, and their clinical relevance is discussed. The target audience of this article is clinical oncologists with a special interest in clinical and mathematical pharmacology.

  3. Efficient retrieval of landscape Hessian: Forced optimal covariance adaptive learning

    NASA Astrophysics Data System (ADS)

    Shir, Ofer M.; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel

    2014-06-01

    Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30) with large Hessian condition numbers (≳10^4). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes.

  4. Efficient retrieval of landscape Hessian: forced optimal covariance adaptive learning.

    PubMed

    Shir, Ofer M; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel

    2014-06-01

    Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30) with large Hessian condition numbers (≳10^4). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes.

  5. The Impact of Nonignorable Missing Data on the Inference of Regression Coefficients.

    ERIC Educational Resources Information Center

    Min, Kyung-Seok; Frank, Kenneth A.

    Various statistical methods have been available to deal with missing data problems, but the difficulty is that they are based on somewhat restrictive assumptions that missing patterns are known or can be modeled with auxiliary information. This paper treats the presence of missing cases from the viewpoint that generalization as a sample does not…

  6. 26 CFR 601.901 - Missing children shown on penalty mail.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and biographical information on hundreds of missing children. (b) Procedures for obtaining and disseminating data. (1) The IRS shall publish pictures and biographical data related to missing children in... photographic and biographical materials solely from the National Center for Missing and Exploited...

  7. 26 CFR 601.901 - Missing children shown on penalty mail.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and biographical information on hundreds of missing children. (b) Procedures for obtaining and disseminating data. (1) The IRS shall publish pictures and biographical data related to missing children in... photographic and biographical materials solely from the National Center for Missing and Exploited...

  8. Covariant constraints on hole-ography

    NASA Astrophysics Data System (ADS)

    Engelhardt, Netta; Fischetti, Sebastian

    2015-10-01

    Hole-ography is a prescription relating the areas of surfaces in an AdS bulk to the differential entropy of a family of intervals in the dual CFT. In (2+1) bulk dimensions, or in higher dimensions when the bulk features a sufficient degree of symmetry, we prove that there are surfaces in the bulk that cannot be completely reconstructed using known hole-ographic approaches, even if extremal surfaces reach them. Such surfaces lie in easily identifiable regions: the interiors of holographic screens. These screens admit a holographic interpretation in terms of the Bousso bound. We speculate that this incompleteness of the reconstruction is a form of coarse-graining, with the missing information associated to the holographic screen. We comment on perturbative quantum extensions of our classical results.

  9. CMB lens sample covariance and consistency relations

    NASA Astrophysics Data System (ADS)

    Motloch, Pavel; Hu, Wayne; Benoit-Lévy, Aurélien

    2017-02-01

    Gravitational lensing information from the two and higher point statistics of the cosmic microwave background (CMB) temperature and polarization fields is intrinsically correlated because the fields are lensed by the same realization of structure between last scattering and observation. Using an analytic model for lens sample covariance, we show that there is one mode, separately measurable in the lensed CMB power spectra and lensing reconstruction, that carries most of this correlation. Once these measurements become lens sample variance dominated, this mode should provide a useful consistency check between the observables that is largely free of sampling and cosmological parameter errors. Violations of consistency could indicate systematic errors in the data and lens reconstruction or new physics at last scattering, any of which could bias cosmological inferences and delensing for gravitational waves. A second mode provides a weaker consistency check for a spatially flat universe. Our analysis isolates the additional information supplied by lensing in a model-independent manner but is also useful for understanding and forecasting CMB cosmological parameter errors in the extended Λ cold dark matter parameter space of dark energy, curvature, and massive neutrinos. We introduce and test a simple but accurate forecasting technique for this purpose that neither double counts lensing information nor neglects lensing in the observables.

  10. Dealing with deficient and missing data.

    PubMed

    Dohoo, Ian R

    2015-11-01

    Disease control decisions require two types of data: data describing the disease frequency (incidence and prevalence) along with characteristics of the population and environment in which the disease occurs (hereafter called "descriptive data"); and data for analytical studies (hereafter called "analytical data") documenting the effects of risk factors for the disease. Both may be either deficient or missing. Descriptive data may be completely missing if the disease is a new and unknown entity with no diagnostic procedures or if there has been no surveillance activity in the population of interest. Methods for dealing with this complete absence of data are limited, but the possible use of surrogate measures of disease will be discussed. More often, data are deficient because of limitations in diagnostic capabilities (imperfect sensitivity and specificity). Developments in methods for dealing with this form of information bias make this a more tractable problem. Deficiencies in analytical data leading to biased estimates of effects of risk factors are a common problem, and one which is increasingly being recognized, but options for correction of known or suspected biases are still limited. Data about risk factors may be completely missing if studies of risk factors have not been carried out. Alternatively, data for evaluation of risk factors may be available but have "item missingness" where some (or many) observations have some pieces of information missing. There has been tremendous development in the methods to deal with this problem of "item missingness" over the past decade, with multiple imputation being the most prominent method. The use of multiple imputation to deal with the problem of item missing data will be compared to the use of complete-case analysis, and limitations to the applicability of imputation will be presented.
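
    As a concrete illustration of the comparison this abstract describes, here is a minimal sketch of complete-case analysis versus multiple imputation, using statsmodels' chained-equations (MICE) implementation; the data, variable names, and model formula are invented for illustration:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.imputation import mice

      rng = np.random.default_rng(0)
      n = 500
      x1, x2 = rng.normal(size=n), rng.normal(size=n)
      y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=0.5, size=n)
      df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
      df.loc[rng.random(n) < 0.3, "x1"] = np.nan     # 30% item missingness in x1

      # complete-case analysis: drops every row with a missing item
      cc = sm.OLS.from_formula("y ~ x1 + x2", df.dropna()).fit()

      # multiple imputation with chained equations
      imp = mice.MICEData(df)
      mi = mice.MICE("y ~ x1 + x2", sm.OLS, imp).fit(n_burnin=10, n_imputations=20)
      print(cc.params)
      print(mi.summary())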

  11. What Darwin missed

    NASA Astrophysics Data System (ADS)

    Campbell, A. K.

    2003-07-01

    Throughout his life, Fred Hoyle had a keen interest in evolution. He argued that natural selection by small, random change, as conceived by Charles Darwin and Alfred Russel Wallace, could not explain either the origin of life or the origin of a new protein. The idea of natural selection, Hoyle told us, wasn't even Darwin's original idea in the first place. Here, in honour of Hoyle's analysis, I propose a solution to Hoyle's dilemma. His solution was life from space - panspermia. But the real key to understanding natural selection is `molecular biodiversity'. This explains the things Darwin missed - the origin of species and the origin of extinction. It is also a beautiful example of the mystery disease that afflicted Darwin for over 40 years, for which we now have an answer.

  12. The Concept of Missing Incidents in Persons with Dementia

    PubMed Central

    Rowe, Meredeth; Houston, Amy; Molinari, Victor; Bulat, Tatjana; Bowen, Mary Elizabeth; Spring, Heather; Mutolo, Sandra; McKenzie, Barbara

    2015-01-01

    Behavioral symptoms of dementia often present the greatest challenge for informal caregivers. One behavior that is a constant concern for caregivers is the person with dementia leaving a designated area such that his or her whereabouts become unknown to the caregiver, termed a missing incident. Based on an extensive literature review and published findings of their own research, members of the International Consortium on Wandering and Missing Incidents constructed a preliminary missing-incidents model. Examining the evidence base, specific factors within each category of the model were further described, reviewed, and modified until consensus was reached regarding the final model. The model begins to explain, in particular, the variety of antecedents that are related to missing incidents. The model presented in this paper is designed to be heuristic and may be used to stimulate discussion and the development of effective preventative and response strategies for missing incidents among persons with dementia. PMID:27417817

  13. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions:
    - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)?
    - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit?
    - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit?
    - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit?
    The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1x10^-6. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for
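
    A minimal Monte Carlo sketch of the probability-of-collision quantity Pc that these relationships are built around, assuming a Gaussian relative-position error in the encounter plane (the covariance, miss distances, and hard-body radius below are invented numbers, not values from the paper):

      import numpy as np

      rng = np.random.default_rng(1)

      def collision_probability(miss, cov, hard_body_radius, n=200_000):
          """Monte Carlo Pc: probability that the encounter-plane relative
          position, distributed N(miss, cov), falls inside the combined
          hard-body circle centered at the origin."""
          pts = rng.multivariate_normal(miss, cov, size=n)
          return float(np.mean(np.linalg.norm(pts, axis=1) < hard_body_radius))

      cov = np.diag([200.0**2, 100.0**2])          # assumed covariance, m^2
      for d in (0.0, 300.0, 1000.0):               # assumed miss distances, m
          print(d, collision_probability(np.array([d, 0.0]), cov, hard_body_radius=10.0))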

  14. Covariance Structure Analysis of Ordinal Ipsative Data.

    ERIC Educational Resources Information Center

    Chan, Wai; Bentler, Peter M.

    1998-01-01

    Proposes a two-stage estimation method for the analysis of covariance structure models with ordinal ipsative data (OID). A goodness-of-fit statistic is given for testing the hypothesized covariance structure matrix, and simulation results show that the method works well with a large sample. (SLD)

  15. Quality Quantification of Evaluated Cross Section Covariances

    SciTech Connect

    Varet, S.; Dossantos-Uzarralde, P.

    2015-01-15

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the ⁸⁵Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
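
    For zero-mean Gaussians, the Kullback-Leibler distance between an estimated covariance S1 and a reference S2 has the closed form KL = 0.5 [tr(S2^-1 S1) - k + ln(det S2 / det S1)]. A minimal sketch of this criterion (the matrices are invented; the paper's bootstrap estimation of the criterion is not reproduced here):

      import numpy as np

      def kl_gaussian(S1, S2):
          """KL divergence between N(0, S1) and N(0, S2)."""
          k = S1.shape[0]
          S2_inv = np.linalg.inv(S2)
          return 0.5 * (np.trace(S2_inv @ S1) - k
                        + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

      S_est = np.array([[1.0, 0.3], [0.3, 0.8]])   # assumed estimated covariance
      S_ref = np.array([[1.1, 0.2], [0.2, 0.9]])   # assumed reference covariance
      print(kl_gaussian(S_est, S_ref))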

  16. Group Theory of Covariant Harmonic Oscillators

    ERIC Educational Resources Information Center

    Kim, Y. S.; Noz, Marilyn E.

    1978-01-01

    A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…

  17. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  18. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.

  19. Covariance matrices for use in criticality safety predictability studies

    SciTech Connect

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for ²³⁵U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the ²³⁵U resonance parameters.
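
    Propagating parameter covariances to group-averaged cross-section covariances follows the standard sandwich rule, Cov_g = S Cov_p S^T, where S holds the sensitivities of the group cross sections to the resonance parameters. A minimal sketch with invented dimensions (not the SAMMY interface itself):

      import numpy as np

      n_par, n_grp = 6, 3
      rng = np.random.default_rng(2)
      S = rng.normal(size=(n_grp, n_par))      # assumed sensitivities d(sigma_g)/d(p_j)
      A = rng.normal(size=(n_par, n_par))
      cov_p = A @ A.T                          # parameter covariance (positive definite)
      cov_g = S @ cov_p @ S.T                  # group cross-section covariance
      print(cov_g)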

  20. Treatment decisions based on scalar and functional baseline covariates.

    PubMed

    Ciarleglio, Adam; Petkova, Eva; Ogden, R Todd; Tarpey, Thaddeus

    2015-12-01

    The amount and complexity of patient-level data being collected in randomized controlled trials offer both opportunities and challenges for developing personalized rules for assigning treatment for a given disease or ailment. For example, trials examining treatments for major depressive disorder are not only collecting typical baseline data such as age, gender, or scores on various tests, but also data that measure the structure and function of the brain such as images from magnetic resonance imaging (MRI), functional MRI (fMRI), or electroencephalography (EEG). These latter types of data have an inherent structure and may be considered as functional data. We propose an approach that uses baseline covariates, both scalars and functions, to aid in the selection of an optimal treatment. In addition to providing information on which treatment should be selected for a new patient, the estimated regime has the potential to provide insight into the relationship between treatment response and the set of baseline covariates. Our approach can be viewed as an extension of "advantage learning" to include both scalar and functional covariates. We describe our method and how to implement it using existing software. Empirical performance of our method is evaluated with simulated data in a variety of settings and also applied to data arising from a study of patients with major depressive disorder from whom baseline scalar covariates as well as functional data from EEG are available.

  1. Empirical Likelihood for Estimating Equations with Nonignorably Missing Data.

    PubMed

    Tang, Niansheng; Zhao, Puying; Zhu, Hongtu

    2014-04-01

    We develop an empirical likelihood (EL) inference on parameters in generalized estimating equations with nonignorably missing response data. We consider an exponential tilting model for the nonignorable missing-data mechanism, and propose modified estimating equations by imputing missing data through a kernel regression method. We establish some asymptotic properties of the EL estimators of the unknown parameters under different scenarios. With the use of auxiliary information, the EL estimators are statistically more efficient. Simulation studies are used to assess the finite sample performance of our proposed EL estimators. We apply our EL estimators to investigate a data set on earnings obtained from the New York Social Indicators Survey.

  2. Accounting for missing data in end-of-life research.

    PubMed

    Diehr, Paula; Johnson, Laura Lee

    2005-01-01

    End-of-life studies are likely to have missing data because sicker persons are less likely to provide information and because measurements cannot be made after death. Ignoring missing data may result in data that are too favorable, because the sickest persons are effectively dropped from the analysis. In a comparison of two groups, the group with the most deaths and missing data will tend to have the most favorable data, which is not desirable. Results based on only the available data may not be generalizable to the original study population. If most of the missing data are absent because of death, methods that account for the deaths may remove much of the bias. Imputation methods can then be used for the data that are missing for other reasons. An example is presented from a randomized trial involving frail veterans. In that dataset, only two thirds of the subjects had complete data, but 60% of the "missing" data were missing because of death. The available data alone suggested that health improved significantly over time. However, after accounting for the deaths, there was a significant decline in health over time, as had been expected. Imputation of the remaining missing data did not change the results very much. With and without the imputed data, there was never a significant difference between the treatment and control groups, but in two nonrandomized comparisons the method of handling the missing data made a substantive difference. These sensitivity analyses suggest that the main results were not sensitive to the death and missing data, but that some secondary analyses were sensitive to these problems. Similar approaches should be considered in other end-of-life studies.

  3. Sparse estimation of a covariance matrix.

    PubMed

    Bien, Jacob; Tibshirani, Robert J

    2011-12-01

    We suggest a method for estimating a covariance matrix on the basis of a sample of vectors drawn from a multivariate normal distribution. In particular, we penalize the likelihood with a lasso penalty on the entries of the covariance matrix. This penalty plays two important roles: it reduces the effective number of parameters, which is important even when the dimension of the vectors is smaller than the sample size since the number of parameters grows quadratically in the number of variables, and it produces an estimate which is sparse. In contrast to sparse inverse covariance estimation, our method's close relative, the sparsity attained here is in the covariance matrix itself rather than in the inverse matrix. Zeros in the covariance matrix correspond to marginal independencies; thus, our method performs model selection while providing a positive definite estimate of the covariance. The proposed penalized maximum likelihood problem is not convex, so we use a majorize-minimize approach in which we iteratively solve convex approximations to the original nonconvex problem. We discuss tuning parameter selection and demonstrate on a flow-cytometry dataset how our method produces an interpretable graphical display of the relationship between variables. We perform simulations that suggest that simple elementwise thresholding of the empirical covariance matrix is competitive with our method for identifying the sparsity structure. Additionally, we show how our method can be used to solve a previously studied special case in which a desired sparsity pattern is prespecified.
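
    The elementwise-thresholding competitor mentioned at the end of the abstract can be sketched in a few lines (the threshold and data are invented; note that, unlike the authors' penalized-likelihood estimator, simple thresholding does not guarantee a positive definite result):

      import numpy as np

      def threshold_covariance(X, t):
          """Soft-threshold the off-diagonal entries of the sample covariance."""
          S = np.cov(X, rowvar=False)
          out = np.sign(S) * np.maximum(np.abs(S) - t, 0.0)
          np.fill_diagonal(out, np.diag(S))    # keep the variances untouched
          return out

      true_cov = np.array([[1.0, 0.4, 0.0],
                           [0.4, 1.0, 0.0],
                           [0.0, 0.0, 1.0]])
      X = np.random.default_rng(3).multivariate_normal(np.zeros(3), true_cov, size=200)
      print(threshold_covariance(X, t=0.15))   # small spurious entries are zeroed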

  4. Concordance between criteria for covariate model building.

    PubMed

    Hennig, Stefanie; Karlsson, Mats O

    2014-04-01

    When performing a population pharmacokinetic modelling analysis, covariates are often added to the model. Such additions are often justified by improved goodness of fit and/or a decrease in unexplained (random) parameter variability. Increased goodness of fit is most commonly measured by the decrease in the objective function value. Parameter variability can be defined as the sum of unexplained (random) and explained (predictable) variability. An increase in the magnitude of explained parameter variability could be another possible criterion for judging improvement in the model. We explored the agreement between these three criteria in diagnosing covariate-parameter relationships of different strengths and nature, using stochastic simulations and estimations as well as covariate-parameter relationships in four previously published real data examples. Total estimated parameter variability was found to vary with the number of covariates introduced on the parameter. In the simulated examples and two real examples, the parameter variability increased with increasing number of included covariates. For the other real examples, parameter variability decreased or did not change systematically with the addition of covariates. The three criteria were highly correlated, with the decrease in unexplained variability being more closely associated with changes in objective function values than increases in explained parameter variability were. The often-used assumption that inclusion of covariates in models only shifts unexplained parameter variability to explained parameter variability appears not to be true, which may have implications for modelling decisions.

  5. Subsample ignorable likelihood for accelerated failure time models with missing predictors.

    PubMed

    Zhang, Nanhua; Little, Roderick J

    2015-07-01

    Missing values in predictors are a common problem in survival analysis. In this paper, we review estimation methods for accelerated failure time models with missing predictors, and apply a new method called subsample ignorable likelihood (IL; Little and Zhang, J R Stat Soc 60:591-605, 2011) to this class of models. The approach applies a likelihood-based method to a subsample of observations that are complete on a subset of the covariates, chosen based on assumptions about the missing data mechanism. We give conditions on the missing data mechanism under which the subsample IL method is consistent, while both complete-case analysis and ignorable maximum likelihood are inconsistent. We illustrate the properties of the proposed method by simulation and apply the method to a real dataset.

  6. Computation of the factorized error covariance of the difference between correlated estimators

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Mohan, Srinivas N.; Stienon, Francis M.; Bierman, Gerald J.

    1990-01-01

    A state estimation problem where some of the measurements may be common to two or more data sets is considered. Two approaches for computing the error covariance of the difference between filtered estimates (for each data set) are discussed. The first algorithm is based on postprocessing of the Kalman gain profiles of two correlated estimators. It uses UD factors of the covariance of the relative error. The second algorithm uses a square root information filter applied to relative error analysis. In the absence of process noise, the square root information filter is computationally more efficient and more flexible than the Kalman gain (covariance update) method. Both the algorithms (covariance and information matrix based) are applied to a Venus orbiter simulation, and their performances are compared.

  7. Central subspace dimensionality reduction using covariance operators.

    PubMed

    Kim, Minyoung; Pavlovic, Vladimir

    2011-04-01

    We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.

  8. Covariance Spectroscopy for Fissile Material Detection

    SciTech Connect

    Rusty Trainham, Jim Tinsley, Paul Hurley, Ray Keegan

    2009-06-02

    Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of the radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations could also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
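
    Covariance mapping in this sense reduces to the shot-to-shot covariance between detector channels, cov(x, y) = <xy> - <x><y>: causally connected radiations produce off-diagonal peaks, while uncorrelated background averages away. A toy sketch with synthetic counts (channel numbers and rates are invented):

      import numpy as np

      rng = np.random.default_rng(4)
      n_shots, n_ch = 5000, 16
      counts = rng.poisson(2.0, size=(n_shots, n_ch)).astype(float)  # uncorrelated background
      bursts = rng.poisson(0.3, size=n_shots)                        # fission-like events
      counts[:, 3] += bursts                                         # channels 3 and 11 see
      counts[:, 11] += bursts                                        # the same correlated bursts

      cov_map = np.cov(counts, rowvar=False)
      print(cov_map[3, 11])   # large: correlated channels stand out
      print(cov_map[3, 5])    # near zero: uncorrelated pair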

  9. Covariation bias in panic-prone individuals.

    PubMed

    Pauli, P; Montoya, P; Martz, G E

    1996-11-01

    Covariation estimates between fear-relevant (FR; emergency situations) or fear-irrelevant (FI; mushrooms and nudes) stimuli and an aversive outcome (electrical shock) were examined in 10 high-fear (panic-prone) and 10 low-fear respondents. When the relation between slide category and outcome was random (illusory correlation), only high-fear participants markedly overestimated the contingency between FR slides and shocks. However, when there was a high contingency of shocks following FR stimuli (83%) and a low contingency of shocks following FI stimuli (17%), the group difference vanished. Reversal of contingencies back to random induced a covariation bias for FR slides in high- and low-fear respondents. Results indicate that panic-prone respondents show a covariation bias for FR stimuli and that the experience of a high contingency between FR slides and aversive outcomes may foster such a covariation bias even in low-fear respondents.

  10. Conformally covariant parametrizations for relativistic initial data

    NASA Astrophysics Data System (ADS)

    Delay, Erwann

    2017-01-01

    We revisit the Lichnerowicz-York method, and an alternative method of York, in order to obtain some conformally covariant systems. This type of parametrization is certainly more natural for non-constant mean curvature initial data.

  11. Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency

    PubMed Central

    2013-01-01

    Background: The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by ‘semi-landmarks’ alongside well-defined landmarks into the analyses is still missing. Also, current approaches do not integrate a statistical treatment of measurement error. Results: We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self-consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias, induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte-Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure (‘ghost points’) can then be used in any further downstream statistical analysis. Conclusions: Our approach provides a consistent way of including different forms of landmarks into an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data, and in particular, for the inclusion of information from surfaces represented by multiple landmark points. PMID:23548043
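
    Folding a per-point measurement covariance into linear modelling is, at its core, generalized least squares, beta = (X^T W X)^-1 X^T W y with W the inverse measurement covariance, which down-weights poorly measured points. A minimal sketch with an invented design (not the paper's full ghost-point procedure):

      import numpy as np

      rng = np.random.default_rng(5)
      n = 50
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      var = rng.uniform(0.1, 2.0, size=n)       # assumed per-point measurement variances
      W = np.diag(1.0 / var)                    # inverse (diagonal) measurement covariance
      y = X @ np.array([1.0, 2.0]) + rng.normal(scale=np.sqrt(var))

      # generalized least squares: noisy points contribute less to the fit
      beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
      print(beta)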

  12. Combining biomarkers for classification with covariate adjustment.

    PubMed

    Kim, Soyoung; Huang, Ying

    2017-03-09

    Combining multiple markers can improve classification accuracy compared with using a single marker. In practice, covariates associated with markers or disease outcome can affect the performance of a biomarker or biomarker combination in the population. The covariate-adjusted receiver operating characteristic (ROC) curve has been proposed as a tool to tease out the covariate effect in the evaluation of a single marker; this curve characterizes the classification accuracy solely because of the marker of interest. However, research on the effect of covariates on the performance of marker combinations and on how to adjust for the covariate effect when combining markers is still lacking. In this article, we examine the effect of covariates on classification performance of linear marker combinations and propose to adjust for covariates in combining markers by maximizing the nonparametric estimate of the area under the covariate-adjusted ROC curve. The proposed method provides a way to estimate the best linear biomarker combination that is robust to risk model assumptions underlying alternative regression-model-based methods. The proposed estimator is shown to be consistent and asymptotically normally distributed. We conduct simulations to evaluate the performance of our estimator in cohort and case/control designs and compare several different weighting strategies during estimation with respect to efficiency. Our estimator is also compared with alternative regression-model-based estimators or estimators that maximize the empirical area under the ROC curve, with respect to bias and efficiency. We apply the proposed method to a biomarker study from a human immunodeficiency virus vaccine trial. Copyright © 2017 John Wiley & Sons, Ltd.
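
    Maximizing the empirical area under the ROC curve over linear combinations of two markers can be sketched with a simple search over the combination angle (toy data; this omits the covariate adjustment that is the paper's actual contribution):

      import numpy as np

      rng = np.random.default_rng(6)
      cases = rng.normal([1.0, 0.5], 1.0, size=(100, 2))      # invented marker pairs
      controls = rng.normal([0.0, 0.0], 1.0, size=(100, 2))

      def empirical_auc(w):
          """Fraction of case/control pairs ranked correctly by score w.x."""
          s_case, s_ctrl = cases @ w, controls @ w
          return np.mean(s_case[:, None] > s_ctrl[None, :])

      angles = np.linspace(0.0, np.pi, 360)
      aucs = [empirical_auc(np.array([np.cos(a), np.sin(a)])) for a in angles]
      best = angles[int(np.argmax(aucs))]
      print(np.cos(best), np.sin(best), max(aucs))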

  13. Breeding curvature from extended gauge covariance

    NASA Astrophysics Data System (ADS)

    Aldrovandi, R.

    1991-05-01

    Independence between spacetime and “internal” space in gauge theories is related to the adjoint-covariant behaviour of the gauge potential. The usual gauge scheme is modified to allow a coupling between both spaces. Gauging spacetime translations produces field equations similar to Einstein equations. A curvature-like quantity of mixed differential-algebraic character emerges. Enlarged conservation laws are present, pointing to the presence of an extended covariance.

  14. Covariate analysis of bivariate survival data

    SciTech Connect

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  15. Noncommutative Gauge Theory with Covariant Star Product

    SciTech Connect

    Zet, G.

    2010-08-04

    We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.

  16. Covariant action for type IIB supergravity

    NASA Astrophysics Data System (ADS)

    Sen, Ashoke

    2016-07-01

    Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.

  17. Phase-covariant quantum cloning of qudits

    SciTech Connect

    Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin

    2003-02-01

    We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between input state and single qudit reduced density operator of output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.

  18. Some thoughts on positive definiteness in the consideration of nuclear data covariance matrices

    SciTech Connect

    Geraldo, L.P.; Smith, D.L.

    1988-01-01

    Some basic mathematical features of covariance matrices are reviewed, particularly as they relate to the property of positive definiteness. Physical implications of positive definiteness are also discussed. Consideration is given to an examination of the origins of non-positive definite matrices, to procedures which encourage the generation of positive definite matrices and to the testing of covariance matrices for positive definiteness. Attention is also given to certain problems associated with the construction of covariance matrices using information which is obtained from evaluated data files recorded in the ENDF format. Examples are provided to illustrate key points pertaining to each of the topic areas covered.
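
    Testing a covariance matrix for positive definiteness, and one common way of repairing a non-positive-definite one, can be sketched with a Cholesky check plus eigenvalue clipping (the clipping floor is an arbitrary choice, and eigenvalue clipping is a generic repair rather than a procedure from this paper):

      import numpy as np

      def is_positive_definite(C):
          try:
              np.linalg.cholesky(C)
              return True
          except np.linalg.LinAlgError:
              return False

      def repair(C, floor=1e-10):
          """Symmetrize, then clip eigenvalues below the floor."""
          w, V = np.linalg.eigh((C + C.T) / 2.0)
          return V @ np.diag(np.maximum(w, floor)) @ V.T

      # correlations that are mutually inconsistent yield a non-PD matrix
      C = np.array([[1.0, 0.99, 0.0],
                    [0.99, 1.0, 0.99],
                    [0.0, 0.99, 1.0]])
      print(is_positive_definite(C))            # False
      print(is_positive_definite(repair(C)))    # True after clipping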

  19. Lorentz covariance of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Speziale, Simone

    2011-05-01

    The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the projected spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction on SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This clarifies how the SL(2,C) spinfoam formalism yields an SU(2) theory on the boundary. These structures define a tidy Lorentz-covariant formalism for loop gravity.

  20. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
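
    A low-rank representation keeps the leading eigenpairs of the error covariance; how adequate it is depends on how much of the total variance those eigenvalues carry. A minimal sketch on a synthetic covariance (the matrix and rank are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(7)
      A = rng.normal(size=(40, 40))
      P = A @ A.T                                    # synthetic error covariance
      w, V = np.linalg.eigh(P)
      w, V = w[::-1], V[:, ::-1]                     # eigenvalues in descending order

      k = 5
      P_k = V[:, :k] @ np.diag(w[:k]) @ V[:, :k].T   # rank-k approximation
      explained = w[:k].sum() / w.sum()
      print(f"rank-{k} representation captures {explained:.1%} of total variance")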

  1. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates, namely temperature, sea level pressure, and relative humidity, that are thought to affect the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.

  2. Kettlewell's Missing Evidence.

    ERIC Educational Resources Information Center

    Allchin, Douglas Kellogg

    2002-01-01

    The standard textbook account of Kettlewell and the peppered moths omits significant information. Suggests that this case can be used to reflect on the role of simplification in science teaching. (Author/MM)

  3. Modeling missing data in knowledge space theory.

    PubMed

    de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio

    2015-12-01

    Missing data are a well known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provide adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data.

  4. Cross-Section Covariance Data Processing with the AMPX Module PUFF-IV

    SciTech Connect

    Wiarda, Dorothea; Leal, Luiz C; Dunn, Michael E

    2011-01-01

    The ENDF community is endeavoring to release an updated version of the ENDF/B-VII library (ENDF/B-VII.1). In the new release several new evaluations containing covariance information have been added, as the community strives to add covariance information for use in programs like the TSUNAMI (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation) sequence of SCALE (Ref 1). The ENDF/B formatted files are processed into libraries to be used in transport calculations using the AMPX code system (Ref 2) or the NJOY code system (Ref 3). Both codes contain modules to process covariance matrices: PUFF-IV for AMPX and ERRORR in the case of NJOY. While the cross section processing capability between the two code systems has been widely compared, the same is not true for the covariance processing. This paper compares the results for the two codes using the pre-release version of ENDF/B-VII.1.

  5. Should multiple imputation be the method of choice for handling missing data in randomized trials?

    PubMed

    Sullivan, Thomas R; White, Ian R; Salter, Amy B; Ryan, Philip; Lee, Katherine J

    2016-01-01

    The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group.
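
    The recommendation to impute separately by randomized group, so that a group-by-covariate interaction cannot be flattened by a shared imputation model, can be sketched as follows (column names are invented, and a single mean imputation stands in for a full multiple-imputation procedure):

      import numpy as np
      import pandas as pd
      from sklearn.impute import SimpleImputer

      rng = np.random.default_rng(8)
      n = 200
      df = pd.DataFrame({
          "arm": rng.integers(0, 2, n),
          "baseline": rng.normal(size=n),
      })
      df["outcome"] = 0.5 * df["baseline"] + 1.0 * df["arm"] + rng.normal(size=n)
      df.loc[rng.random(n) < 0.2, "outcome"] = np.nan   # missing outcomes

      # impute within each randomized arm separately
      parts = []
      for arm, grp in df.groupby("arm"):
          grp = grp.copy()
          imp = SimpleImputer(strategy="mean")          # stand-in for proper MI
          grp[["outcome"]] = imp.fit_transform(grp[["outcome"]])
          parts.append(grp)
      imputed = pd.concat(parts).sort_index()
      print(imputed.groupby("arm")["outcome"].mean())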

  6. Covariance Modifications to Subspace Bases

    SciTech Connect

    Harris, D B

    2008-11-19

    Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVD). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also describes the SVD updates for a particular scenario of interest in seismic array signal processing. The particular application involves updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d ≥ 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors or a sensor network. The template design process entails constructing a data matrix whose columns contain the

  7. Missing gene identification using functional coherence scores

    PubMed Central

    Chitale, Meghana; Khan, Ishita K.; Kihara, Daisuke

    2016-01-01

    Reconstructing metabolic and signaling pathways is an effective way of interpreting a genome sequence. A challenge in a pathway reconstruction is that often genes in a pathway cannot be easily found, reflecting current imperfect information of the target organism. In this work, we developed a new method for finding missing genes, which integrates multiple features, including gene expression, phylogenetic profile, and function association scores. In particular, to capture function associations between candidate genes and the proteins neighboring the target missing gene in the network, we used the Co-occurrence Association Score (CAS) and the PubMed Association Score (PAS), which are designed for capturing functional coherence of proteins. We showed that adding CAS and PAS substantially improves the accuracy of identifying missing genes in the yeast enzyme-enzyme network compared to the cases when only the conventional features, gene expression and phylogenetic profile, were used. Finally, it was demonstrated that the accuracy improves by considering indirect neighbors to the target enzyme position in the network using a proper network-topology-based weighting scheme. PMID:27552989

  8. A Comet's Missing Light

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-05-01

    On 28 November 2013, comet C/2012 S1, better known as comet ISON, should have passed within two solar radii of the Sun's surface as it reached perihelion in its orbit. But instead of shining in extreme ultraviolet (EUV) wavelengths as it grazed the solar surface, the comet was never detected by EUV instruments. What happened to comet ISON? Missing Emission: When a sungrazing comet passes through the solar corona, it leaves behind a trail of molecules evaporated from its surface. Some of these molecules emit EUV light, which can be detected by instruments on telescopes like the space-based Solar Dynamics Observatory (SDO). Comet ISON, a comet that arrived from deep space and was predicted to graze the Sun's corona in November 2013, was expected to cause EUV emission during its close passage. But analysis of the data from multiple telescopes that tracked ISON in EUV, including SDO, reveals no sign of it at perihelion. In a recent study, Paul Bryans and Dean Pesnell, scientists from NCAR's High Altitude Observatory and NASA Goddard Space Flight Center, try to determine why ISON didn't display this expected emission. Comparing ISON and Lovejoy: In December 2011, another comet dipped into the Sun's corona: comet Lovejoy. [Figure: a composite of SDO images showing the pre- and post-perihelion phases of the orbit Lovejoy took around the Sun; the dashed part of the curve represents where Lovejoy passed out of view behind the Sun. Bryans & Pesnell 2016] This is not the first time we've watched a sungrazing comet with EUV-detecting telescopes: comet Lovejoy passed similarly close to the Sun in December 2011. But when Lovejoy grazed the solar corona, it emitted brightly in EUV. So why didn't ISON? Bryans and Pesnell argue that there are two possibilities: the coronal conditions experienced by the two comets were not similar, or the two comets themselves were not similar. To establish which factor is the most relevant, the authors first demonstrate that both

  9. Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Anderson, D. J.; Amen, J. L.

    2007-12-01

    received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on theory of the method (including state of methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non- traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.

  10. Recent Advances with the AMPX Covariance Processing Capabilities in PUFF-IV

    SciTech Connect

    Wiarda, D. Arbanas, G.; Leal, L.; Dunn, M.E.

    2008-12-15

    The program PUFF-IV is used to process resonance parameter covariance information given in ENDF/B File 32 and point-wise covariance matrices given in ENDF/B File 33 into group-averaged covariance matrices on a user-supplied group structure. For large resonance covariance matrices, found for example in ²³⁵U, the execution time of PUFF-IV can be quite long. Recently the code was modified to take advantage of Basic Linear Algebra Subprograms (BLAS) routines for the most time-consuming matrix multiplications. This led to a substantial decrease in execution time. This faster processing capability allowed us to investigate the conversion of File 32 data into File 33 data using a larger number of user-defined groups. While conversion substantially reduces the ENDF/B file size requirements for evaluations with a large number of resonances, a trade-off is made between the number of groups used to represent the resonance parameter covariance as a point-wise covariance matrix and the file size. We are also investigating a hybrid version of the conversion, in which the low-energy part of the File 32 resonance parameter covariance matrix is retained and the correlations with higher energies as well as the high energy part are given in File 33.

  11. Recent Advances with the AMPX Covariance Processing Capabilities in PUFF-IV

    SciTech Connect

    Wiarda, Dorothea; Arbanas, Goran; Leal, Luiz C; Dunn, Michael E

    2008-01-01

    The program PUFF-IV is used to process resonance parameter covariance information given in ENDF/B File 32 and point-wise covariance matrices given in ENDF/B File 33 into group-averaged covariance matrices on a user-supplied group structure. For large resonance covariance matrices, found for example in 235U, the execution time of PUFF-IV can be quite long. Recently the code was modified to take advantage of Basic Linear Algebra Subprograms (BLAS) routines for the most time-consuming matrix multiplications. This led to a substantial decrease in execution time. This faster processing capability allowed us to investigate the conversion of File 32 data into File 33 data using a larger number of user-defined groups. While conversion substantially reduces the ENDF/B file size requirements for evaluations with a large number of resonances, a trade-off is made between the number of groups used to represent the resonance parameter covariance as a point-wise covariance matrix and the file size. We are also investigating a hybrid version of the conversion, in which the low-energy part of the File 32 resonance parameter covariance matrix is retained and the correlations with higher energies as well as the high-energy part are given in File 33.

  12. Modeling zero-inflated count data using a covariate-dependent random effect model.

    PubMed

    Wong, Kin-Yau; Lam, K F

    2013-04-15

    In many medical studies, count data contain excessive zeros, which make the standard Poisson regression model inadequate. We proposed a covariate-dependent random effect model to accommodate the excess zeros and the heterogeneity in the population simultaneously. This work is motivated by a data set from a survey on the dental health status of Hong Kong preschool children where the response variable is the number of decayed, missing, or filled teeth. The random effect has a sound biological interpretation as the overall oral health status or other personal qualities of an individual child that are unobserved and cannot be quantified easily. The overall measure of oral health status, responsible for accommodating the excessive zeros and also the heterogeneity among the children, is covariate dependent. This covariate-dependent random effect model allows one to distinguish whether a potential covariate has an effect on the conceived overall oral health condition of the children, that is, the random effect, or has a direct effect on the magnitude of the counts, or both. We proposed a multiple imputation approach for estimation of the parameters and discussed the choice of the imputation size. We evaluated the performance of the proposed estimation method through simulation studies, and we applied the model and method to the dental data.
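
    For readers unfamiliar with zero inflation, a plain zero-inflated Poisson (ZIP) likelihood is the natural starting point that the abstract's covariate-dependent random effect model extends. The sketch below fits a ZIP model with covariates in both the count and the excess-zero parts by direct optimization; it omits the random effect entirely and uses synthetic data, so it illustrates the model class, not the authors' method.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit, gammaln

      def zip_nll(params, X, y):
          """Negative log-likelihood of a ZIP model with covariates X in both parts."""
          k = X.shape[1]
          beta, gamma = params[:k], params[k:]
          lam = np.exp(X @ beta)          # Poisson mean
          pi = expit(X @ gamma)           # excess-zero probability
          log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
          ll = np.where(y == 0,
                        np.log(pi + (1 - pi) * np.exp(-lam)),
                        np.log(1 - pi) + log_pois)
          return -ll.sum()

      rng = np.random.default_rng(1)
      X = np.column_stack([np.ones(500), rng.normal(size=500)])
      y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1])) * (rng.random(500) > 0.3)
      fit = minimize(zip_nll, np.zeros(4), args=(X, y), method="BFGS")
      print(fit.x)   # estimated (beta, gamma)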

  13. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
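
    To make the idea of banding concrete, the sketch below applies a fixed-bandwidth banding operator to a sample covariance matrix: entries farther than a chosen bandwidth from the diagonal are zeroed. The paper's estimator is instead the solution of a convex program with a data-adaptive taper, so this fragment conveys only the underlying notion of exploiting a known variable ordering; the bandwidth here is arbitrary.

      import numpy as np

      def band_covariance(S, bandwidth):
          """Keep entries of S within `bandwidth` of the diagonal; zero the rest."""
          p = S.shape[0]
          i, j = np.indices((p, p))
          return np.where(np.abs(i - j) <= bandwidth, S, 0.0)

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 8))          # 200 observations of 8 ordered variables
      S = np.cov(X, rowvar=False)
      print(band_covariance(S, bandwidth=2))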

  14. A sparse Ising model with covariates.

    PubMed

    Cheng, Jie; Levina, Elizaveta; Wang, Pei; Zhu, Ji

    2014-12-01

    There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use ℓ1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail.

  15. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  16. Defining habitat covariates in camera-trap based occupancy studies.

    PubMed

    Niedballa, Jürgen; Sollmann, Rahel; bin Mohamed, Azlan; Bender, Johannes; Wilting, Andreas

    2015-11-24

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10-500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations.

  17. Upper and lower covariance bounds for perturbed linear systems

    NASA Technical Reports Server (NTRS)

    Xu, J.-H.; Skelton, R. E.; Zhu, G.

    1990-01-01

    Both upper and lower bounds are established for state covariance matrices under parameter perturbations of the plant. The motivation for this study lies in the fact that many robustness properties of linear systems are given explicitly in terms of the state covariance matrix. Moreover, there exists a theory for control by covariance assignment. The results provide robustness properties of these covariance controllers.

  18. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis.

    PubMed

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given sample size, also provides more accurate results than those based on standard asymptotics. But the procedure needs a matrix to play the role of the population covariance matrix. The closer the matrix is to the true population covariance matrix, the more valid the bootstrap inference is. The current paper proposes a class of covariance matrices by combining theory and data, so that a proper matrix from this class is closer to the true population covariance matrix than those constructed by any existing methods. Each of the covariance matrices is easy to generate and also satisfies several desired properties. An example with nine cognitive variables and a confirmatory factor model illustrates the details for creating population covariance matrices with different misspecifications. When evaluating the substantive model, bootstrap or simulation procedures based on these matrices will lead to more accurate conclusions than those based on artificial covariance matrices.

  19. Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library

    SciTech Connect

    Oblozinsky,P.; Oblozinsky,P.; Mattoon,C.M.; Herman,M.; Mughabghab,S.F.; Pigni,M.T.; Talou,P.; Hale,G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G

    2009-09-28

    Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. Improved covariances were organized into AFCI-1.2 covariance library in 33-energy groups, from 10{sup -5} eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: {sup 23}Na and {sup 55}Mn where more detailed evaluation was done; improvements in major structural materials {sup 52}Cr, {sup 56}Fe and {sup 58}Ni; improved estimates for remaining structural materials and fission products; improved covariances for 14 minor actinides, and estimates of mubar covariances for {sup 23}Na and {sup 56}Fe. LANL contributed improved covariance data for {sup 235}U and {sup 239}Pu including prompt neutron fission spectra and completely new evaluation for {sup 240}Pu. New R-matrix evaluation for {sup 16}O including mubar covariances is under completion. BNL assembled the library and performed basic testing using improved procedures including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.

  20. Mathematics Teachers' Covariational Reasoning Levels and Predictions about Students' Covariational Reasoning Abilities

    ERIC Educational Resources Information Center

    Zeytun, Aysel Sen; Cetinkaya, Bulent; Erbas, Ayhan Kursat

    2010-01-01

    Various studies suggest that covariational reasoning plays an important role on understanding the fundamental ideas of calculus and modeling dynamic functional events. The purpose of this study was to investigate a group of mathematics teachers' covariational reasoning abilities and predictions about their students. Data were collected through…

  1. Structural damage detection based on covariance of covariance matrix with general white noise excitation

    NASA Astrophysics Data System (ADS)

    Hui, Yi; Law, Siu Seong; Ku, Chiu Jen

    2017-02-01

    A method based on the covariance of the auto/cross-covariance matrix is studied for the damage identification of a structure, with illustrations of its advantages and limitations. The original method is extended for structures under direct white noise excitations. The auto/cross-covariance function of the measured acceleration and its corresponding derivatives are formulated analytically, and the method is modified in two new strategies to enable successful identification with far fewer sensors. Numerical examples are adopted to illustrate the improved method, and the effects of sampling frequency and sampling duration are discussed. Results show that the covariance of covariance calculated from responses of higher-order modes of a structure plays an important role in the accurate identification of local damage in a structure.

  2. Incorporating covariates in skewed functional data models.

    PubMed

    Li, Meng; Staicu, Ana-Maria; Bondell, Howard D

    2015-07-01

    We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and for the functional dependence, which is assumed covariate invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and are applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.

  3. FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.

    SciTech Connect

    HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.

    2006-06-05

    We describe implementation of the KALMAN code in the EMPIRE system and present first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 Gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from the model-generated covariances that tend to show strong positive long-range correlations and those determined solely from experimental data that result in nearly diagonal matrices. We have studied shapes of correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).

  4. Estimated Environmental Exposures for MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Normand, Eugene; Finckenor, Miria

    2008-01-01

    Both modeling techniques and a variety of measurements and observations were used to characterize the environmental conditions experienced by the specimens flown on the MISSE-3 (Materials International Space Station Experiment) and MISSE-4 space flight experiments. On August 3, 2006, astronauts Jeff Williams and Thomas Reiter attached MISSE-3 and -4 to the Quest airlock on ISS, where these experiments were exposed to atomic oxygen (AO), ultraviolet (UV) radiation, particulate radiation, thermal cycling, meteoroid/space debris impact, and the induced environment of an active space station. They had been flown to ISS during the July 2006 STS-121 mission. The two suitcases were oriented so that one side faced the ram direction and one side remained shielded from the atomic oxygen. On August 18, 2007, astronauts Clay Anderson and Dave Williams retrieved MISSE-3 and -4 and returned them to Earth at the end of the STS-118 mission. Quantitative values are provided when possible for selected environmental factors. A meteoroid/debris impact survey was performed prior to de-integration at Langley Research Center. AO fluences were calculated based on mass loss and thickness loss of thin polymeric films of known AO reactivity. Radiation was measured with thermoluminescent detectors. Visual inspections under ambient and "black-light" at NASA LaRC, together with optical measurements on selected specimens, were the basis for the initial contamination level assessment.

  5. Filling the missing cone in protein electron crystallography.

    PubMed

    Dorset, D L

    1999-07-15

    The hyper-resolution property of the Sayre equation is explored for extrapolating amplitudes and phases into the missing cone of data left after tilting a representative protein (rubredoxin) to restricted limits in the electron microscope. At 0.6 nm resolution, a reasonable prediction of crystallographic phases can be made to reconstruct the lost information. Best results are obtained if the goniometer tilt value is greater than approximately +/-60 degrees, but some missing information can be restored if the tilt is restricted to +/-45 degrees.

  6. Sparse Covariance Matrix Estimation With Eigenvalue Constraints.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2014-04-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online.
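
    The two ingredients named in the abstract, generalized thresholding and an explicit eigenvalue constraint, can be sketched directly. The fragment below performs one pass of off-diagonal soft-thresholding followed by an eigenvalue projection; the actual estimator alternates such steps inside an ADMM loop, and the tuning values used here (lam, eps) are arbitrary.

      import numpy as np

      def soft_threshold_offdiag(S, lam):
          """Entrywise soft-thresholding of S, leaving the diagonal unpenalized."""
          T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
          np.fill_diagonal(T, np.diag(S))
          return T

      def project_eigenvalues(S, eps):
          """Floor the eigenvalues of symmetric S at eps to enforce positive definiteness."""
          vals, vecs = np.linalg.eigh(S)
          return (vecs * np.maximum(vals, eps)) @ vecs.T

      rng = np.random.default_rng(3)
      X = rng.normal(size=(100, 40))
      S = np.cov(X, rowvar=False)
      # Thresholding alone can destroy positive definiteness; the projection restores it
      estimate = project_eigenvalues(soft_threshold_offdiag(S, lam=0.1), eps=1e-3)
      print(np.linalg.eigvalsh(estimate).min())   # >= eps up to rounding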

  7. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.

  8. Solving the differential biochemical Jacobian from metabolomics covariance data.

    PubMed

    Nägele, Thomas; Mair, Andrea; Sun, Xiaoliang; Fragner, Lena; Teige, Markus; Weckwerth, Wolfram

    2014-01-01

    High-throughput molecular analysis has become an integral part of organismal systems biology. In contrast, because a systematic linkage of the data with functional and predictive theoretical models of the underlying metabolic network is missing, the understanding of the resulting complex data sets lags far behind. Here, we present a biomathematical method addressing this problem by using metabolomics data for the inverse calculation of a biochemical Jacobian matrix, thereby linking computer-based genome-scale metabolic reconstruction and in vivo metabolic dynamics. The incongruity of metabolome coverage by typical metabolite profiling approaches and genome-scale metabolic reconstruction was solved by the design of superpathways to define a metabolic interaction matrix. A differential biochemical Jacobian was calculated using an approach which links this metabolic interaction matrix and the covariance of metabolomics data satisfying a Lyapunov equation. The predictions of the differential Jacobian from real metabolomic data were found to be correct by testing the corresponding enzymatic activities. Moreover, it is demonstrated that the predictions of the biochemical Jacobian matrix allow for the design of parameter optimization strategies for ODE-based kinetic models of the system. The presented concept combines dynamic modelling strategies with large-scale steady state profiling approaches without the explicit knowledge of individual kinetic parameters. In summary, the presented strategy allows for the identification of regulatory key processes in the biochemical network directly from metabolomics data and is a fundamental achievement for the functional interpretation of metabolomics data.
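
    The central computation is linear algebra: given a covariance matrix C and a fluctuation matrix D, a Jacobian J is recovered from the Lyapunov relation J C + C J^T = -2 D, which is linear in J. A minimal sketch is given below, using synthetic matrices rather than metabolomics data and a generic least-squares solve rather than the authors' implementation; note that the relation does not pin down J uniquely, so the round-trip check is on the relation itself.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      def jacobian_from_covariance(C, D):
          """Solve J C + C J^T = -2 D for J by vectorizing the linear map."""
          n = C.shape[0]
          A = np.zeros((n * n, n * n))
          for k in range(n * n):
              E = np.zeros((n, n))
              E.flat[k] = 1.0                      # basis matrix for one entry of J
              A[:, k] = (E @ C + C @ E.T).ravel()  # its contribution to J C + C J^T
          j, *_ = np.linalg.lstsq(A, (-2.0 * D).ravel(), rcond=None)
          return j.reshape(n, n)

      # Round trip on a synthetic stable system: build C from a known Jacobian,
      # then recover a Jacobian consistent with (C, D)
      rng = np.random.default_rng(4)
      J_true = -np.eye(3) + 0.2 * rng.normal(size=(3, 3))
      D = np.eye(3)
      C = solve_continuous_lyapunov(J_true, -2.0 * D)   # solves J C + C J^T = -2 D for C
      J_est = jacobian_from_covariance(C, D)
      print(np.allclose(J_est @ C + C @ J_est.T, -2.0 * D, atol=1e-8))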

  9. Solving the Differential Biochemical Jacobian from Metabolomics Covariance Data

    PubMed Central

    Nägele, Thomas; Mair, Andrea; Sun, Xiaoliang; Fragner, Lena; Teige, Markus; Weckwerth, Wolfram

    2014-01-01

    High-throughput molecular analysis has become an integral part of organismal systems biology. In contrast, because a systematic linkage of the data with functional and predictive theoretical models of the underlying metabolic network is missing, the understanding of the resulting complex data sets lags far behind. Here, we present a biomathematical method addressing this problem by using metabolomics data for the inverse calculation of a biochemical Jacobian matrix, thereby linking computer-based genome-scale metabolic reconstruction and in vivo metabolic dynamics. The incongruity of metabolome coverage by typical metabolite profiling approaches and genome-scale metabolic reconstruction was solved by the design of superpathways to define a metabolic interaction matrix. A differential biochemical Jacobian was calculated using an approach which links this metabolic interaction matrix and the covariance of metabolomics data satisfying a Lyapunov equation. The predictions of the differential Jacobian from real metabolomic data were found to be correct by testing the corresponding enzymatic activities. Moreover, it is demonstrated that the predictions of the biochemical Jacobian matrix allow for the design of parameter optimization strategies for ODE-based kinetic models of the system. The presented concept combines dynamic modelling strategies with large-scale steady state profiling approaches without the explicit knowledge of individual kinetic parameters. In summary, the presented strategy allows for the identification of regulatory key processes in the biochemical network directly from metabolomics data and is a fundamental achievement for the functional interpretation of metabolomics data. PMID:24695071

  10. Filling in the Missing Links.

    ERIC Educational Resources Information Center

    Kemper, Susan

    1982-01-01

    Describes two experiments where readers were asked to restore missing actions and physical and mental states to short narratives. Although some deletions resulted in violations of the event chain taxonomy while others did not, in both cases readers used knowledge of possible causal sequences to repair gaps in stories. (Author/MES)

  11. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although RSNs are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity.
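
    A compact version of this pipeline can be written with scikit-learn's Ledoit-Wolf estimator: shrink the sample covariance so that it is invertible, take its inverse (the precision matrix), and rescale to partial correlations. The random numbers below stand in for region-by-time BOLD signals.

      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(5)
      X = rng.normal(size=(150, 60))       # 150 time points x 60 regions (p close to n)

      precision = LedoitWolf().fit(X).precision_    # inverse of the shrunk covariance
      d = np.sqrt(np.diag(precision))
      partial_corr = -precision / np.outer(d, d)    # standard precision-to-partial-correlation map
      np.fill_diagonal(partial_corr, 1.0)
      print(partial_corr.shape)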

  12. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization

    PubMed Central

    Brier, Matthew R.; Mitra, Anish; McCarthy, John E.; Ances, Beau M.; Snyder, Abraham Z.

    2015-01-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although RSNs are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity. PMID:26208872

  13. Missing data: prevalence and reporting practices.

    PubMed

    Bodner, Todd E

    2006-12-01

    Results are described for a survey assessing prevalence of missing data and reporting practices in studies with missing data in a random sample of empirical research journal articles from the PsychINFO database for the year 1999, two years prior to the publication of a special section on missing data in Psychological Methods. Analysis indicates missing data problems were found in about one-third of the studies. Further, analytical methods and reporting practices varied widely for studies with missing data. One may consider these results as baseline data to assess progress as reporting standards evolve for studies with missing data. Some potential reporting standards are discussed.

  14. Economical phase-covariant cloning of qudits

    SciTech Connect

    Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara

    2005-04-01

    We derive the optimal N{yields}M phase-covariant quantum cloning for equatorial states in dimension d with M=kd+N, k integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an 'economical' cloning machine, which works without ancilla.

  15. Monitoring: The missing piece

    SciTech Connect

    Bjorkland, Ronald

    2013-11-15

    The U.S. National Environmental Policy Act (NEPA) of 1969 ushered in an era of more robust attention to environmental impacts resulting from larger scale federal projects. The number of other countries that have adopted NEPA's framework is evidence of the appeal of this type of environmental legislation. Mandates to review environmental impacts, identify alternatives, and provide mitigation plans before commencement of the project are at the heart of NEPA. Such project reviews have resulted in the development of a vast number of reports and large volumes of project-specific data that potentially can be used to better understand the components and processes of the natural environment and provide guidance for improved and efficient environmental protection. However, the environmental assessment (EA), or the more robust and intensive environmental impact statement (EIS), required for most major projects is more often than not developed to satisfy the procedural aspects of the NEPA legislation while failing to provide the needed guidance for improved decision-making. While NEPA legislation recommends monitoring of project activities, this activity is not mandated, and in those situations where it has been incorporated, the monitoring showed that the EIS was inaccurate in direction and/or magnitude of the impact. Many reviews of NEPA have suggested that monitoring of all project phases, from design through decommissioning, should be incorporated. Information gathered through a well-developed monitoring program can be managed in databases and benefit not only the specific project but also provide guidance on how to better design and implement future activities intended to protect and enhance the natural environment. -- Highlights: • NEPA statutes created a profound environmental protection legislative framework. • Contrary to intent, NEPA does not provide for definitive project monitoring. • Robust project monitoring is essential for enhanced

  16. The Board's missing link.

    PubMed

    Montgomery, Cynthia A; Kaufman, Rhonda

    2003-03-01

    If a dam springs several leaks, there are various ways to respond. One could assiduously plug the holes, for instance. Or one could correct the underlying weaknesses, a more sensible approach. When it comes to corporate governance, for too long we have relied on the first approach. But the causes of many governance problems lie well below the surface--specifically, in critical relationships that are not structured to support the players involved. In other words, the very foundation of the system is flawed. And unless we correct the structural problems, surface changes are unlikely to have a lasting impact. When shareholders, management, and the board of directors work together as a system, they provide a powerful set of checks and balances. But the relationship between shareholders and directors is fraught with weaknesses, undermining the entire system's equilibrium. As the authors explain, the exchange of information between these two players is poor. Directors, though elected by shareholders to serve as their agents, aren't individually accountable to the investors. And shareholders--for a variety of reasons--have failed to exert much influence over boards. In the end, directors are left with the Herculean task of faithfully representing shareholders whose preferences are unclear, and shareholders have little say about who represents them and few mechanisms through which to create change. The authors suggest several ways to improve the relationship between shareholders and directors: Increase board accountability by recording individual directors' votes on key corporate resolutions; separate the positions of chairman and CEO; reinvigorate shareholders; and give boards funding to pay for outside experts who can provide perspective on crucial issues.

  17. Statistical analysis with missing exposure data measured by proxy respondents: a misclassification problem within a missing-data problem.

    PubMed

    Shardell, Michelle; Hicks, Gregory E

    2014-11-10

    In studies of older adults, researchers often recruit proxy respondents, such as relatives or caregivers, when study participants cannot provide self-reports (e.g., because of illness). Proxies are usually only sought to report on behalf of participants with missing self-reports; thus, either a participant self-report or proxy report, but not both, is available for each participant. Furthermore, the missing-data mechanism for participant self-reports is not identifiable and may be nonignorable. When exposures are binary and participant self-reports are conceptualized as the gold standard, substituting error-prone proxy reports for missing participant self-reports may produce biased estimates of outcome means. Researchers can handle this data structure by treating the problem as one of misclassification within the stratum of participants with missing self-reports. Most methods for addressing exposure misclassification require validation data, replicate data, or an assumption of nondifferential misclassification; other methods may result in an exposure misclassification model that is incompatible with the analysis model. We propose a model that makes none of the aforementioned requirements and still preserves model compatibility. Two user-specified tuning parameters encode the exposure misclassification model. Two proposed approaches estimate outcome means standardized for (potentially) high-dimensional covariates using multiple imputation followed by propensity score methods. The first method is parametric and uses maximum likelihood to estimate the exposure misclassification model (i.e., the imputation model) and the propensity score model (i.e., the analysis model); the second method is nonparametric and uses boosted classification and regression trees to estimate both models. We apply both methods to a study of elderly hip fracture patients.

  18. A covariance NMR toolbox for MATLAB and OCTAVE.

    PubMed

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE.
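
    Independently of the toolbox itself, the direct covariance transform is nearly a one-liner: the covariance spectrum is the matrix square root of S^T S, where S holds a 2D data set that has been Fourier transformed along the direct dimension only. The sketch below uses random numbers in place of real NMR data.

      import numpy as np
      from scipy.linalg import sqrtm

      rng = np.random.default_rng(6)
      S = rng.normal(size=(64, 256))   # 64 indirect increments x 256 direct points

      C = sqrtm(S.T @ S).real          # direct covariance spectrum, 256 x 256
      print(C.shape)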

  19. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    PubMed

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline.

  20. Adjusting for covariate effects on classification accuracy using the covariate-adjusted receiver operating characteristic curve.

    PubMed

    Janes, Holly; Pepe, Margaret S

    2009-06-01

    Recent scientific and technological innovations have produced an abundance of potential markers that are being investigated for their use in disease screening and diagnosis. In evaluating these markers, it is often necessary to account for covariates associated with the marker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures or aspects of specimen handling. In this paper, we propose the covariate-adjusted receiver operating characteristic curve, a measure of covariate-adjusted classification accuracy. Nonparametric and semiparametric estimators are proposed, asymptotic distribution theory is provided and finite sample performance is investigated. For illustration we characterize the age-adjusted discriminatory accuracy of prostate-specific antigen as a biomarker for prostate cancer.
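
    One common construction of the covariate-adjusted ROC curve uses placement values: each case's marker value is referenced to the control distribution within its own covariate stratum, and the adjusted curve is the empirical distribution of those placement values. The sketch below assumes a discrete covariate and synthetic marker data; it follows the general placement-value idea, not the authors' specific nonparametric or semiparametric estimators.

      import numpy as np

      def aroc(t_grid, case_y, case_z, ctrl_y, ctrl_z):
          """Covariate-adjusted ROC evaluated at false-positive rates t_grid."""
          pv = np.array([
              np.mean(ctrl_y[ctrl_z == z] >= y)   # stratum-specific placement value
              for y, z in zip(case_y, case_z)
          ])
          return np.array([np.mean(pv <= t) for t in t_grid])

      rng = np.random.default_rng(9)
      z_ctrl = rng.integers(0, 3, 400)            # three covariate strata (e.g., age bands)
      y_ctrl = rng.normal(z_ctrl, 1.0)            # marker level shifts with the covariate
      z_case = rng.integers(0, 3, 200)
      y_case = rng.normal(z_case + 1.0, 1.0)      # cases shifted up within each stratum
      t = np.linspace(0, 1, 6)
      print(aroc(t, y_case, z_case, y_ctrl, z_ctrl))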

  1. Background error covariance modelling for convective-scale variational data assimilation

    NASA Astrophysics Data System (ADS)

    Petrie, R. E.

    An essential component in data assimilation is the background error covariance matrix (B). This matrix regularizes the ill-posed data assimilation problem, describes the confidence of the background state and spreads information. Since the B-matrix is too large to represent explicitly, it must be modelled. In variational data assimilation it is essentially a climatological approximation of the true covariances. Such a conventional covariance model additionally relies on the imposition of balance conditions. A toy model which is derived from the Euler equations (by making appropriate simplifications and introducing tuneable parameters) is used as a convective-scale system to investigate these issues. Its behaviour is shown to exhibit large-scale geostrophic and hydrostatic balance while permitting small-scale imbalance. A control variable transform (CVT) approach to modelling the B-matrix, where the control variables are taken to be the normal modes (NM) of the linearized model, is investigated. This approach is attractive for convective-scale covariance modelling as it allows for unbalanced as well as appropriately balanced relationships. Although the NM-CVT is not applied to a data assimilation problem directly, it is shown to be a viable approach to convective-scale covariance modelling. A new mathematically rigorous method to incorporate flow-dependent error covariances with the otherwise static B-matrix estimate is also proposed. This is an extension to the reduced rank Kalman filter (RRKF), where its Hessian singular vector calculation is replaced by an ensemble estimate of the covariances, and is known as the ensemble RRKF (EnRRKF). Ultimately it is hoped that together the NM-CVT and the EnRRKF would improve the predictability of small-scale features in convective-scale weather forecasting through the relaxation of inappropriate balance and the inclusion of flow-dependent covariances.

  2. Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; Silander, Tomi

    A new information-theoretically justified approach to missing data estimation for multivariate categorical data was studied. The approach is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in this case is the set of multinomial models with…

  3. Missed Opportunities: But a New Century Is Starting.

    ERIC Educational Resources Information Center

    Corn, Anne L.

    1999-01-01

    This article describes critical events that have shaped gifted education, including: closing of one-room schoolhouses, the industrial revolution, the space race, the civil rights movement, legislation for special education, growth in technology and information services, educational research, and advocacy. Missed opportunities and future…

  4. Characteristics of HIV patients who missed their scheduled appointments

    PubMed Central

    Nagata, Delsa; Gutierrez, Eliana Battaggia

    2016-01-01

    OBJECTIVE: To analyze whether sociodemographic characteristics, consultations and care in special services are associated with scheduled infectious diseases appointments missed by people living with HIV. METHODS: This cross-sectional and analytical study included 3,075 people living with HIV who had at least one scheduled appointment with an infectologist at a specialized health unit in 2007. A secondary data base from the Hospital Management & Information System was used. The outcome variable was missing a scheduled medical appointment. The independent variables were sex, age, appointments in specialized and available disciplines, hospitalizations at the Central Institute of the Clinical Hospital at the Faculdade de Medicina of the Universidade de São Paulo, antiretroviral treatment and change of infectologist. Crude and multiple association analyses were performed among the variables, with a statistical significance of p ≤ 0.05. RESULTS: More than a third (38.9%) of the patients missed at least one of their scheduled infectious diseases appointments; 70.0% of the patients were male. The rate of missed appointments was 13.9%, albeit with no observed association between sex and absences. Age was inversely associated with missed appointments. Not undertaking antiretroviral treatment, having unscheduled infectious diseases consultations or social services care and being hospitalized at the Central Institute were directly associated with missed appointments. CONCLUSIONS: The Hospital Management & Information System proved to be a useful tool for developing indicators related to the quality of health care of people living with HIV. Other informational systems, which are often developed for administrative purposes, can also be useful for local and regional management and for evaluating the quality of care provided for patients living with HIV. PMID:26786472

  5. Non-parametric estimation for baseline hazards function and covariate effects with time-dependent covariates.

    PubMed

    Gao, Feng; Manatunga, Amita K; Chen, Shande

    2007-02-20

    In many biomedical and epidemiologic studies, estimating the hazard function is of interest. The Breslow estimator is commonly used for estimating the integrated baseline hazard, but it requires the functional form of covariate effects to be correctly specified. It is generally difficult to identify the true functional form of covariate effects in the presence of time-dependent covariates. To provide a complementary method to the traditional proportional hazards model, we propose a tree-type method which enables simultaneous estimation of both the baseline hazard function and the effects of time-dependent covariates. Our interest is focused on exploring potential data structures rather than on formal hypothesis testing. The proposed method approximates the baseline hazard and covariate effects with step functions. The jump points in time and in covariate space are searched via an algorithm based on the improvement of the full log-likelihood function. In contrast to most other estimating methods, the proposed method estimates the hazard function rather than the integrated hazard. The method is applied to model the risk of withdrawal in a clinical trial that evaluates an anti-depression treatment in preventing the development of clinical depression. Finally, the performance of the method is evaluated by several simulation studies.

  6. Covariance data for {sup 232}Th in the resolved resonance region from 0 to 4 keV

    SciTech Connect

    Leal, L. C.; Derrien, H.; Arbanas, G.; Larson, N. M.; Wiarda, D.

    2006-07-01

    This paper reports on the generation and testing of the covariance matrix associated with the resonance parameter evaluation for {sup 232}Th up to 4 keV. [1] Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the US Evaluated Nuclear Data Library, ENDF/B. Uncertainty files in the ENDF/B library are obtained from analysis of experimental data and are stored as variance and covariance data. In this paper, we address the generation of covariance data in the resonance region via the computer code SAMMY, which is used in the evaluation of experimental data in the resolved and unresolved resonance energy regions. The resolved resonance parameter covariance matrix for {sup 232}Th, obtained using the retroactive approach, is also presented here. (authors)

  7. Construction of Covariance Functions with Variable Length Fields

    NASA Technical Reports Server (NTRS)

    Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven

    2005-01-01

    This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere to stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than in the baseline GEOS-4 system.
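
    As an analogue of a covariance model parametrized by a tunable length field, the sketch below uses the Gibbs construction, a standard way to obtain a valid nonstationary correlation function from a position-dependent length scale. The article develops its own multi-level covariance families, so this is only meant to convey the idea of a length field that grows from troposphere to stratosphere; the height grid and length values are illustrative.

      import numpy as np

      def gibbs_correlation(z, L):
          """Nonstationary Gaussian-type correlation with position-dependent length L(z)."""
          Li, Lj = np.meshgrid(L, L, indexing="ij")
          zi, zj = np.meshgrid(z, z, indexing="ij")
          pref = np.sqrt(2.0 * Li * Lj / (Li**2 + Lj**2))
          return pref * np.exp(-((zi - zj) ** 2) / (Li**2 + Lj**2))

      z = np.linspace(0.0, 30.0, 31)     # height in km
      L = 1.0 + 0.2 * z                  # larger correlation lengths aloft
      C = gibbs_correlation(z, L)
      print(C.shape, bool(np.all(np.linalg.eigvalsh(C) > -1e-10)))   # valid (PSD) matrix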

  8. Methods for Mediation Analysis with Missing Data

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including list wise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  9. A Covariance Generation Methodology for Fission Product Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.

    2016-03-01

    Recent safety and economic concerns for modern nuclear reactor applications have fueled a strong interest in improving and completing basic nuclear data evaluations. It has become clear that the accuracy of our predictive simulation models is strongly affected by our knowledge of input data. Therefore, strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) give no correlations for fission yields, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered by a convolution of different models representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution, and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239 and Pu-241.

  10. Direct Neutron Capture Calculations with Covariant Density Functional Theory Inputs

    NASA Astrophysics Data System (ADS)

    Zhang, Shi-Sheng; Peng, Jin-Peng; Smith, Michael S.; Arbanas, Goran; Kozub, Ray L.

    2014-09-01

    Predictions of direct neutron capture are of vital importance for simulations of nucleosynthesis in supernovae, merging neutron stars, and other astrophysical environments. We calculate the direct capture cross sections for E1 transitions using nuclear structure information from a covariant density functional theory as input for the FRESCO coupled-channels reaction code. We find good agreement of our predictions with experimental cross section data on the double closed-shell targets 16O, 48Ca, and 90Zr, and the exotic nucleus 36S. Extensions of the technique for unstable nuclei and for large-scale calculations will be discussed. Supported by the U.S. Dept. of Energy, Office of Nuclear Physics.

  11. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods, namely listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation, were examined in terms of…

  12. The Impact of Missing Data on the Detection of Nonuniform Differential Item Functioning

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2011-01-01

    Missing information is a ubiquitous aspect of data analysis, including responses to items on cognitive and affective instruments. Although the broader statistical literature describes missing data methods, relatively little work has focused on this issue in the context of differential item functioning (DIF) detection. Such prior research has…

  13. A Primer for Handling Missing Values in the Analysis of Education and Training Data

    ERIC Educational Resources Information Center

    Gemici, Sinan; Bednarz, Alice; Lim, Patrick

    2012-01-01

    Quantitative research in vocational education and training (VET) is routinely affected by missing or incomplete information. However, the handling of missing data in published VET research is often sub-optimal, leading to a real risk of generating results that can range from being slightly biased to being plain wrong. Given that the growing…

  14. Missing Data and Multiple Imputation: An Unbiased Approach

    NASA Technical Reports Server (NTRS)

    Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.

    2014-01-01

    The default method of dealing with missing data in statistical analyses is to only use the complete observations (complete case analysis), which can lead to unexpected bias when data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI), which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine statistical distributions for each piece of missing data. Then multiple datasets are created by randomly drawing on the distributions for each piece of missing data. Since MI is efficient, only a limited number of imputed datasets, usually fewer than 20, are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and then results are combined to get overall estimates of effect. A simulation study will be presented to show the results of using the default complete case analysis and MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there may be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches. Each unit increase in CO2 (mmHg) resulted in a doubling in the odds of reported headaches.
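
    The MI-plus-pooling workflow described above can be illustrated in a few lines. The fragment below assumes synthetic data with a covariate missing at random given the outcome, draws M = 20 imputations with scikit-learn's IterativeImputer (sample_posterior=True makes each imputation a random draw), refits the same regression each time, and combines the results with Rubin's rules; it is a generic illustration, not the study's actual analysis.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n = 300
      x = rng.normal(size=n)
      y = 1.0 + 2.0 * x + rng.normal(size=n)
      x_obs = x.copy()
      x_obs[(rng.random(n) < 0.6) & (y > np.median(y))] = np.nan  # MAR: missingness depends on observed y

      M = 20
      estimates, variances = [], []
      for m in range(M):
          imp = IterativeImputer(sample_posterior=True, random_state=m)
          x_imp = imp.fit_transform(np.column_stack([x_obs, y]))[:, 0]
          fit = LinearRegression().fit(x_imp.reshape(-1, 1), y)
          resid = y - fit.predict(x_imp.reshape(-1, 1))
          se2 = (resid @ resid / (n - 2)) / ((x_imp - x_imp.mean()) ** 2).sum()
          estimates.append(fit.coef_[0])
          variances.append(se2)

      q_bar = np.mean(estimates)            # pooled slope estimate
      u_bar = np.mean(variances)            # within-imputation variance
      b = np.var(estimates, ddof=1)         # between-imputation variance
      total = u_bar + (1 + 1 / M) * b       # Rubin's rules total variance
      print(q_bar, np.sqrt(total))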

  15. Missing data estimation in morphometrics: how much is too much?

    PubMed

    Clavel, Julien; Merceron, Gildas; Escarguel, Gilles

    2014-03-01

    Fossil-based estimates of diversity and evolutionary dynamics mainly rely on the study of morphological variation. Unfortunately, organism remains are often altered by post-mortem taphonomic processes such as weathering or distortion. Such a loss of information often prevents quantitative multivariate description and statistically-controlled comparisons of extinct species based on morphometric data. A common way to deal with missing data involves imputation methods that directly fill the missing cases with model estimates. Over recent years, several empirically-determined thresholds for the maximum acceptable proportion of missing values have been proposed in the literature, whereas other studies showed that this limit actually depends on various properties of the study data set and of the selected imputation method, and is by no means generalizable. We evaluate the relative performances of seven multiple imputation (MI) techniques through a simulation-based analysis under three distinct patterns of missing data distribution. Overall, Fully Conditional Specification and Expectation-Maximization algorithms provide the best compromises between imputation accuracy and coverage probability. MI techniques appear remarkably robust to the violation of basic assumptions such as the occurrence of taxonomically or anatomically biased patterns of missing data distribution, making differences in simulation results between the three patterns of missing data distribution much smaller than differences between the individual MI techniques. Based on these results, rather than proposing a new (set of) threshold value(s), we develop an approach combining the use of MIs with procrustean superimposition of principal component analysis results, in order to directly visualize the effect of individual missing data imputation on an ordinated space. We provide an R function for users to implement the proposed procedure.

  16. Combining contingency tables with missing dimensions.

    PubMed

    Dominici, F

    2000-06-01

    We propose a methodology for estimating the cell probabilities in a multiway contingency table by combining partial information from a number of studies when not all of the variables are recorded in all studies. We jointly model the full set of categorical variables recorded in at least one of the studies, and we treat the variables that are not reported as missing dimensions of the study-specific contingency table. For example, we might be interested in combining several cohort studies in which the incidence in the exposed and nonexposed groups is not reported for all risk factors in all studies, while the overall number of cases and the cohort size are always available. To account for study-to-study variability, we adopt a Bayesian hierarchical model. At the first stage of the model, the observation stage, data are modeled by a multinomial distribution with a fixed total number of observations. At the second stage, we use the logistic normal (LN) distribution to model variability in the study-specific cell probabilities. Using this model and data augmentation techniques, we reconstruct the contingency table for each study regardless of which dimensions are missing, and we estimate population parameters of interest. Our hierarchical procedure borrows strength from all the studies and accounts for correlations among the cell probabilities. The main difficulty in combining studies recording different variables is in maintaining a consistent interpretation of parameters across studies. The approach proposed here overcomes this difficulty and at the same time addresses the uncertainty arising from the missing dimensions. We apply our modeling strategy to analyze data on air pollution and mortality from 1987 to 1994 for six U.S. cities by combining six cross-classifications of low, medium, and high levels of mortality counts, particulate matter, ozone, and carbon monoxide, with the complication that four of the six cities do not report all the air pollution variables.

  17. Missed Appendicitis: Mimicking Urologic Symptoms

    PubMed Central

    Akhavizadegan, Hamed

    2012-01-01

    Appendicitis, a common disease, has varied presentations, which makes its diagnosis difficult. This paper presents two cases of missed appendicitis with completely urologic presentations and describes what helped us reach the correct diagnosis. The first case, with symptoms fully related to the kidney, and the second, mimicking epididymo-orchitis, hindered prompt diagnosis. The right-sided location of the pain, relapsing fever, repeated physical examinations, and resistance to medical treatment were the main clues that helped us make the correct diagnosis. PMID:23326748

  18. Nuclear Forensics Analysis with Missing and Uncertain Data

    SciTech Connect

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
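    The generation step described above can be mimicked in a few lines: draw each missing entry from the empirical distribution of the observed values of the same property, optionally perturbing it by an assumed error distribution, and repeat to obtain multiple completed copies for training. The Python sketch below is a hypothetical rendering of that idea, not the actual MCBDG implementation; the function name and toy data are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def complete_copies(table, n_copies=10, err_sd=0.0):
            """Fill NaNs per column by resampling that column's observed values,
            optionally adding an assumed Gaussian error; returns completed copies."""
            copies = []
            for _ in range(n_copies):
                filled = table.copy()
                for j in range(table.shape[1]):
                    col = table[:, j]
                    hole = np.isnan(col)
                    draws = rng.choice(col[~hole], size=hole.sum(), replace=True)
                    filled[hole, j] = draws + rng.normal(scale=err_sd, size=hole.sum())
                copies.append(filled)
            return copies

        # toy "database": 6 samples x 3 properties, with most of one property absent
        db = rng.normal(size=(6, 3))
        db[[0, 1, 2, 4], 1] = np.nan
        for c in complete_copies(db, n_copies=3, err_sd=0.1):
            print(c.round(2))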

  19. What is the difference between missing completely at random and missing at random?

    PubMed

    Bhaskaran, Krishnan; Smeeth, Liam

    2014-08-01

    The terminology describing missingness mechanisms is confusing. In particular the meaning of 'missing at random' is often misunderstood, leading researchers faced with missing data problems away from multiple imputation, a method with considerable advantages. The purpose of this article is to clarify how 'missing at random' differs from 'missing completely at random' via an imagined dialogue between a clinical researcher and statistician.

  20. Lorentz Covariant Distributions with Spectral Conditions

    SciTech Connect

    Zinoviev, Yury M.

    2007-11-14

    The properties of the vacuum expectation values of products of quantum fields are formulated in the book [1]. The vacuum expectation values of products of quantum fields would be the Fourier transforms of Lorentz covariant tempered distributions with supports in the product of the closed upper light cones. Lorentz invariant distributions are studied in the papers [2]-[4]. The authors of these papers sought to describe Lorentz invariant distributions in terms of distributions given on the orbit space of the Lorentz group. This orbit space has a complicated structure. It is noted in [5] that a tempered distribution with support in the closed upper light cone may be represented as the action of some power of the wave operator on a differentiable function with support in the closed upper light cone. For the description of the Lorentz covariant differentiable functions, the boundary of the closed upper light cone is not important: the measure of this boundary is zero.

  1. Chiral four-dimensional heterotic covariant lattices

    NASA Astrophysics Data System (ADS)

    Beye, Florian

    2014-11-01

    In the covariant lattice formalism, chiral four-dimensional heterotic string vacua are obtained from certain even self-dual lattices which completely decompose into a left-mover and a right-mover lattice. The main purpose of this work is to classify all right-mover lattices that can appear in such a chiral model, and to study the corresponding left-mover lattices using the theory of lattice genera. In particular, the Smith-Minkowski-Siegel mass formula is employed to calculate a lower bound on the number of left-mover lattices. Also, the known relationship between asymmetric orbifolds and covariant lattices is considered in the context of our classification.

  2. On covariance structure in noisy, big data

    NASA Astrophysics Data System (ADS)

    Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.

    2013-09-01

    Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.

  3. Torsion and geometrostasis in covariant superstrings

    SciTech Connect

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions.

  4. Discrete symmetries in covariant loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Wilson-Ewing, Edward

    2012-09-01

    We study time-reversal and parity—on the physical manifold and in internal space—in covariant loop gravity. We consider a minor modification of the Holst action which makes it transform coherently under such transformations. The classical theory is not affected but the quantum theory is slightly different. In particular, the simplicity constraints are slightly modified and this restricts orientation flips in a spin foam to occur only across degenerate regions, thus reducing the sources of potential divergences.

  5. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex-valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty in the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
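    The validation strategy mentioned at the end is easy to reproduce in the symmetric case, where the Jacobian of a simple eigenvalue with respect to the entries of the parent matrix reduces to the outer product of its unit eigenvector with itself. The sketch below is a toy under that assumption, not the thesis' general complex-valued expressions: it checks the closed form against forward finite differences and then maps an assumed covariance on the matrix entries to a first-order eigenvalue variance.

        import numpy as np

        rng = np.random.default_rng(2)
        n, eps = 4, 1e-6
        A = rng.normal(size=(n, n)); A = (A + A.T) / 2   # symmetric: real, simple spectrum

        lam, V = np.linalg.eigh(A)
        k = n - 1                                        # track the largest eigenvalue
        v = V[:, k]                                      # unit eigenvector

        J_analytic = np.outer(v, v)                      # d(lambda_k)/dA_ij = v_i v_j

        J_fd = np.zeros((n, n))                          # forward finite differencing
        for i in range(n):
            for j in range(n):
                Ap = A.copy(); Ap[i, j] += eps
                J_fd[i, j] = (np.linalg.eigvals(Ap).real.max() - lam[k]) / eps

        print("max |analytic - finite difference|:", np.abs(J_analytic - J_fd).max())

        # first-order uncertainty mapping: Var(lambda_k) = g' Sigma g with g = vec(J)
        Sigma = 0.01 * np.eye(n * n)                     # assumed covariance of the entries of A
        g = J_analytic.ravel()
        print("eigenvalue variance:", g @ Sigma @ g)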

  6. Linear Covariance Analysis for a Lunar Lander

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael

    2017-01-01

    A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.

  7. Covariant quantization of the CBS superparticle

    NASA Astrophysics Data System (ADS)

    Grassi, P. A.; Policastro, G.; Porrati, M.

    2001-07-01

    The quantization of the Casalbuoni-Brink-Schwarz superparticle is performed in an explicitly covariant way using the antibracket formalism. Since an infinite number of ghost fields are required, within a suitable off-shell twistor-like formalism, we are able to fix the gauge of each ghost sector without modifying the physical content of the theory. The computation reveals that the antibracket cohomology contains only the physical degrees of freedom.

  8. Twisted covariant noncommutative self-dual gravity

    SciTech Connect

    Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.

    2008-12-15

    A twisted covariant formulation of noncommutative self-dual gravity is presented. The formulation for constructing twisted noncommutative Yang-Mills theories is used. It is shown that the noncommutative torsion is solved at any order of the θ expansion in terms of the tetrad and some extra fields of the theory. In the process the first order expansion in θ for the Plebanski action is explicitly obtained.

  9. Genomic Variants Revealed by Invariably Missing Genotypes in Nelore Cattle

    PubMed Central

    da Silva, Joaquim Manoel; Giachetto, Poliana Fernanda; da Silva, Luiz Otávio Campos; Cintra, Leandro Carrijo; Paiva, Samuel Rezende; Caetano, Alexandre Rodrigues; Yamagishi, Michel Eduardo Beleza

    2015-01-01

    High density genotyping panels have been used in a wide range of applications. From population genetics to genome-wide association studies, this technology still offers the lowest cost and the most consistent solution for generating SNP data. However, in spite of the application, part of the generated data is always discarded from final datasets based on quality control criteria used to remove unreliable markers. Some discarded data consists of markers that failed to generate genotypes, labeled as missing genotypes. A subset of missing genotypes that occur in the whole population under study may be caused by technical issues but can also be explained by the presence of genomic variations that are in the vicinity of the assayed SNP and that prevent genotyping probes from annealing. The latter case may contain relevant information because these missing genotypes might be used to identify population-specific genomic variants. In order to assess which case is more prevalent, we used Illumina HD Bovine chip genotypes from 1,709 Nelore (Bos indicus) samples. We found 3,200 missing genotypes among the whole population. NGS re-sequencing data from 8 sires were used to verify the presence of genomic variations within their flanking regions in 81.56% of these missing genotypes. Furthermore, we discovered 3,300 novel SNPs/Indels, 31% of which are located in genes that may affect traits of importance for the genetic improvement of cattle production. PMID:26305794

  10. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
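    The propagation PCAT performs can be pictured with a toy discrete-time loop: push the error covariance through the dynamics and process noise (the Lyapunov recursion P <- Phi P Phi' + Q), then contract it at each measurement. The two-state pointing model below (position and rate, one position sensor) is invented for illustration and stands in for SIM's far more complex multi-rate, multi-loop system; the point is that a single covariance run replaces thousands of Monte Carlo trials.

        import numpy as np

        dt = 0.1
        Phi = np.array([[1.0, dt], [0.0, 1.0]])        # state transition: position, rate
        Q = np.diag([0.0, 1e-4])                       # process noise per step (assumed)
        H = np.array([[1.0, 0.0]])                     # sensor observes position only
        R = np.array([[1e-2]])                         # sensor noise variance (assumed)

        P = np.diag([1.0, 1e-2])                       # initial error covariance
        for _ in range(200):
            P = Phi @ P @ Phi.T + Q                    # time propagation (Lyapunov step)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            P = (np.eye(2) - K @ H) @ P                # measurement update

        print("steady-state pointing error, 1-sigma:", np.sqrt(P[0, 0]))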

  11. Shrinkage covariance matrix approach for microarray data

    NASA Astrophysics Data System (ADS)

    Karjanto, Suryaefiza; Aripin, Rasimah

    2013-04-01

    Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) measured on just dozens of samples, owing to various constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate in this setting. A standard multivariate test statistic for comparing means between two groups is Hotelling's T2 statistic, which requires that the number of observations (n) exceed the number of genes (p); in microarray studies, however, it is common that n < p, which leads to a biased estimate of the covariance matrix. In this study, Hotelling's T2 statistic with a shrinkage approach to estimating the covariance matrix is proposed for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests using a widely analysed diabetes data set as illustration. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
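    A minimal version of this idea can be assembled from a shrinkage covariance estimator and the usual two-sample T2 formula. The Python sketch below uses scikit-learn's Ledoit-Wolf estimator for the shrinkage step (an assumption on our part; the abstract does not name a specific estimator), which keeps the pooled covariance well conditioned even though n < p. In practice the null distribution of the resulting statistic is nonstandard and is usually calibrated by permutation.

        import numpy as np
        from sklearn.covariance import LedoitWolf

        rng = np.random.default_rng(3)
        p, n1, n2 = 200, 15, 15                        # genes >> samples, as in microarrays
        X1 = rng.normal(size=(n1, p))                  # group 1 expression (toy data)
        X2 = rng.normal(size=(n2, p)) + 0.5            # group 2, mean-shifted

        d = X1.mean(axis=0) - X2.mean(axis=0)
        pooled = np.vstack([X1 - X1.mean(axis=0), X2 - X2.mean(axis=0)])
        S = LedoitWolf().fit(pooled).covariance_       # invertible shrinkage estimate

        T2 = (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(S, d)
        print("shrinkage-based Hotelling T2:", round(float(T2), 2))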

  12. All covariance controllers for linear discrete-time systems

    NASA Technical Reports Server (NTRS)

    Hsieh, Chen; Skelton, Robert E.

    1990-01-01

    The set of covariances that a linear discrete-time plant with a specified-order controller can have is characterized. The controllers that assign such covariances to any linear discrete-time system are given explicitly in closed form. The freedom in these covariance controllers is explicit and is parameterized by two orthogonal matrices. By appropriately choosing these free parameters, additional system objectives can be achieved without altering the state covariance, and the stability of the closed-loop system is guaranteed.

  13. Factorization of the Discrete Noise Covariance Matrix for Plans,

    DTIC Science & Technology

    1991-02-01

    This report presents the exact formulation of the discrete driving noise covariance matrix Qk needed to propagate the covariance matrix in the Kalman filter used by PLANS, together with the approximate decomposition of Qk required to use the Biermann-Agee-Turner formulation of the Kalman filter.
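    One standard route to both ingredients named here, sketched below under our own toy model rather than the report's, is Van Loan's matrix-exponential method, which yields the exact discrete Phi and Qk from a continuous-time model, followed by a Cholesky factorization of Qk of the kind that square-root filter mechanizations require.

        import numpy as np
        from scipy.linalg import expm, cholesky

        # continuous-time model xdot = F x + G w, E[w w'] = W (a toy double integrator)
        F = np.array([[0.0, 1.0], [0.0, 0.0]])
        G = np.array([[0.0], [1.0]])
        W = np.array([[0.2]])
        dt = 0.5
        n = F.shape[0]

        # Van Loan: one matrix exponential gives the exact discrete Phi and Qk
        M = np.block([[-F, G @ W @ G.T], [np.zeros((n, n)), F.T]]) * dt
        E = expm(M)
        Phi = E[n:, n:].T
        Qk = Phi @ E[:n, n:]                           # exact discrete driving-noise covariance

        # factorized (square-root) form used by Cholesky/UD filter mechanizations
        L = cholesky(Qk + 1e-12 * np.eye(n), lower=True)   # tiny jitter: Qk may be near-singular
        print("Qk =\n", Qk.round(5))
        print("L L' reproduces Qk:", np.allclose(L @ L.T, Qk, atol=1e-9))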

  14. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  15. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.

  16. Covariate Selection in Propensity Scores Using Outcome Proxies

    ERIC Educational Resources Information Center

    Kelcey, Ben

    2011-01-01

    This study examined the practical problem of covariate selection in propensity scores (PSs) given a predetermined set of covariates. Because the bias reduction capacity of a confounding covariate is proportional to the concurrent relationships it has with the outcome and treatment, particular focus is set on how we might approximate…

  17. Covariate Imbalance and Precision in Measuring Treatment Effects

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2011-01-01

    Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…

  18. MISS- Mice on International Space Station

    NASA Astrophysics Data System (ADS)

    Falcetti, G. C.; Schiller, P.

    2005-08-01

    The use of rodents for scientific research to bridge the gap between cellular biology and human physiology is a new challenge within the history of successful developments of biological facilities. The ESA-funded MISS Phase A/B study is aimed at developing a design concept for an animal holding facility able to support experimentation with mice on board the International Space Station (ISS). The MISS facility is composed of two main parts: 1. the MISS Rack, to perform scientific experiments on board the ISS; and 2. the MISS Animals Transport Container (ATC), to transport animals from ground to orbit and vice versa. The MISS facility design takes into account guidelines and recommendations used for mice well-being in ground laboratories. A summary of the MISS Rack and MISS ATC design concept is hereafter provided.

  19. Berkson's bias, selection bias, and missing data.

    PubMed

    Westreich, Daniel

    2012-01-01

    Although Berkson's bias is widely recognized in the epidemiologic literature, it remains underappreciated as a model of both selection bias and bias due to missing data. Simple causal diagrams and 2 × 2 tables illustrate how Berkson's bias connects to collider bias and selection bias more generally, and show the strong analogies between Berksonian selection bias and bias due to missing data. In some situations, considerations of whether data are missing at random or missing not at random are less important than the causal structure of the missing data process. Although dealing with missing data always relies on strong assumptions about unobserved variables, the intuitions built with simple examples can provide a better understanding of approaches to missing data in real-world situations.
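    The collider structure behind Berkson's bias is easy to demonstrate by simulation: generate an exposure and a disease that are independent in the population, let admission (the collider) depend on both, and compare correlations before and after conditioning on admission. All parameters in the sketch below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000
        exposure = rng.normal(size=n)
        disease = rng.normal(size=n)                   # independent of exposure by construction

        # hospital admission becomes more likely with either condition (the collider)
        admitted = rng.random(n) < 1 / (1 + np.exp(-(exposure + disease - 2)))

        r_all = np.corrcoef(exposure, disease)[0, 1]
        r_adm = np.corrcoef(exposure[admitted], disease[admitted])[0, 1]
        print(f"correlation in full population: {r_all:+.3f}")   # ~ 0
        print(f"correlation among the admitted: {r_adm:+.3f}")   # spurious and negative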

  20. Missing data in the exposure of interest and marginal structural models: a simulation study based on the Framingham Heart Study.

    PubMed

    Shortreed, Susan M; Forbes, Andrew B

    2010-02-20

    Missing data are common in longitudinal studies and can occur in the exposure of interest. There has been little work assessing the impact of missing data in marginal structural models (MSMs), which are used to estimate the effect of an exposure history on an outcome when time-dependent confounding is present. We design a series of simulations based on the Framingham Heart Study data set to investigate the impact of missing data in the primary exposure of interest in a complex, realistic setting. We use a standard application of MSMs to estimate the causal odds ratio of a specific activity history on outcome. We report and discuss the results of four missing data methods, under seven possible missing data structures, including scenarios in which an unmeasured variable predicts missing information. In all missing data structures, we found that a complete case analysis, where all subjects with missing exposure data are removed from the analysis, provided the least bias. An analysis that censored individuals at the first occasion of missing exposure and includes a censorship model as well as a propensity model when creating the inverse probability weights also performed well. The presence of an unmeasured predictor of missing data only slightly increased bias, except in the situation such that the exposure had a large impact on missing data and the unmeasured variable had a large impact on missing data and outcome. A discussion of the results is provided using causal diagrams, showing the usefulness of drawing such diagrams before conducting an analysis.

  1. A new estimation with minimum trace of asymptotic covariance matrix for incomplete longitudinal data with a surrogate process.

    PubMed

    Chen, Baojiang; Qin, Jing

    2013-11-30

    Missing data are a very common problem in medical and social studies, especially when data are collected longitudinally. It is a challenging problem to utilize observed data effectively, and many papers on missing data problems can be found in the statistical literature. It is well known that inverse probability weighted estimation is neither efficient nor robust. On the other hand, the doubly robust (DR) method can improve both efficiency and robustness. DR estimation requires a missing data model (i.e., a model for the probability that data are observed) and a working regression model (i.e., a model for the outcome variable given covariates and surrogate variables). Because the DR estimating function has mean zero for any parameters in the working regression model when the missing data model is correctly specified, in this paper we derive a formula for the estimator of the parameters of the working regression model that yields the optimally efficient estimator of the marginal mean model (the parameters of interest) when the missing data model is correctly specified. Furthermore, the proposed method also inherits the DR property. Simulation studies demonstrate the greater efficiency of the proposed method compared with the standard DR method. A longitudinal dementia data set is used for illustration.
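    For readers unfamiliar with the generic DR construction the authors build on, the sketch below computes the augmented inverse-probability-weighted (AIPW) estimator of a mean when the outcome is MAR, with logistic and linear working models. The simulated data and model choices are ours; this shows the standard estimator only, not the optimized working-model parameters proposed in the paper.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(5)
        n = 5000
        x = rng.normal(size=(n, 1))
        y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)    # true mean of y is 1
        p_obs = 1 / (1 + np.exp(-(1.0 - x[:, 0])))      # MAR: observation depends on x only
        r = rng.random(n) < p_obs                       # r = 1 where y is observed

        pi_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]   # missingness model
        m_hat = LinearRegression().fit(x[r], y[r]).predict(x)            # working regression

        y_fill = np.where(r, y, 0.0)                    # unobserved y never enters the sum
        aipw = np.mean(r * y_fill / pi_hat - (r - pi_hat) / pi_hat * m_hat)
        print("AIPW estimate of E[y]:", round(float(aipw), 3))

    The estimate stays consistent if either pi_hat or m_hat is correctly specified, which is the double robustness referred to above.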

  2. Comparing multiple imputation methods for systematically missing subject-level data.

    PubMed

    Kline, David; Andridge, Rebecca; Kaizar, Eloise

    2015-12-17

    When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Explicating the Conditions Under Which Multilevel Multiple Imputation Mitigates Bias Resulting from Random Coefficient-Dependent Missing Longitudinal Data.

    PubMed

    Gottfredson, Nisha C; Sterba, Sonya K; Jackson, Kristina M

    2017-01-01

    Random coefficient-dependent (RCD) missingness is a non-ignorable mechanism through which missing data can arise in longitudinal designs. RCD, for which we cannot test, is a problematic form of missingness that occurs if subject-specific random effects correlate with propensity for missingness or dropout. Particularly when covariate missingness is a problem, investigators typically handle missing longitudinal data by using single-level multiple imputation procedures implemented with long-format data, which ignores within-person dependency entirely, or implemented with wide-format (i.e., multivariate) data, which ignores some aspects of within-person dependency. When either of these standard approaches to handling missing longitudinal data is used, RCD missingness leads to parameter bias and incorrect inference. We explain why multilevel multiple imputation (MMI) should alleviate bias induced by a RCD missing data mechanism under conditions that contribute to stronger determinacy of random coefficients. We evaluate our hypothesis with a simulation study. Three design factors are considered: intraclass correlation (ICC; ranging from .25 to .75), number of waves (ranging from 4 to 8), and percent of missing data (ranging from 20 to 50%). We find that MMI greatly outperforms the single-level wide-format (multivariate) method for imputation under a RCD mechanism. For the MMI analyses, bias was most alleviated when the ICC is high, there were more waves of data, and when there was less missing data. Practical recommendations for handling longitudinal missing data are suggested.

  4. Acquiring observation error covariance information for land data assimilation systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent work has presented the initial application of adaptive filtering techniques to land surface data assimilation systems. Such techniques are motivated by our current lack of knowledge concerning the structure of large-scale error in either land surface modeling output or remotely-sensed estimat...

  5. Allowing for uncertainty due to missing continuous outcome data in pairwise and network meta-analysis.

    PubMed

    Mavridis, Dimitris; White, Ian R; Higgins, Julian P T; Cipriani, Andrea; Salanti, Georgia

    2015-02-28

    Missing outcome data are commonly encountered in randomized controlled trials and hence may need to be addressed in a meta-analysis of multiple trials. A common and simple approach to deal with missing data is to restrict analysis to individuals for whom the outcome was obtained (complete case analysis). However, estimated treatment effects from complete case analyses are potentially biased if informative missing data are ignored. We develop methods for estimating meta-analytic summary treatment effects for continuous outcomes in the presence of missing data for some of the individuals within the trials. We build on a method previously developed for binary outcomes, which quantifies the degree of departure from a missing at random assumption via the informative missingness odds ratio. Our new model quantifies the degree of departure from missing at random using either an informative missingness difference of means or an informative missingness ratio of means, both of which relate the mean value of the missing outcome data to that of the observed data. We propose estimating the treatment effects, adjusted for informative missingness, and their standard errors by a Taylor series approximation and by a Monte Carlo method. We apply the methodology to examples of both pairwise and network meta-analysis with multi-arm trials.
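    The informative missingness difference of means enters an arm-level estimate as a simple tilt: the missing outcomes are modeled as the observed mean shifted by an assumed amount. The toy sketch below (invented numbers, and no propagation of uncertainty, which the paper handles by a Taylor series approximation or Monte Carlo) shows the resulting sensitivity analysis for one trial arm.

        import numpy as np

        # one trial arm: observed outcomes plus a count of dropouts with missing outcomes
        y_obs = np.array([7.2, 6.8, 8.1, 7.5, 6.9, 7.7])
        n_obs, n_mis = len(y_obs), 4

        for delta in [-1.0, 0.0, 1.0]:                 # assumed IMDoM values (sensitivity range)
            # mean of the missing outcomes is modeled as the observed mean plus delta
            mu = (n_obs * y_obs.mean() + n_mis * (y_obs.mean() + delta)) / (n_obs + n_mis)
            print(f"IMDoM = {delta:+.1f} -> adjusted arm mean = {mu:.3f}")

    Setting delta to zero recovers the estimate one would obtain under a missing at random assumption.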

  6. Abnormalities in Structural Covariance of Cortical Gyrification in Parkinson's Disease

    PubMed Central

    Xu, Jinping; Zhang, Jiuquan; Zhang, Jinlei; Wang, Yue; Zhang, Yanling; Wang, Jian; Li, Guanglin; Hu, Qingmao; Zhang, Yuanchao

    2017-01-01

    Although abnormal cortical morphology and connectivity between brain regions (structural covariance) have been reported in Parkinson's disease (PD), the topological organizations of large-scale structural brain networks are still poorly understood. In this study, we investigated large-scale structural brain networks in a sample of 37 PD patients and 34 healthy controls (HC) by assessing the structural covariance of cortical gyrification with the local gyrification index (lGI). We demonstrated prominent small-world properties of the structural brain networks for both groups. Compared with the HC group, PD patients showed significantly increased integrated characteristic path length and integrated clustering coefficient, as well as decreased integrated global efficiency in structural brain networks. Distinct distributions of hub regions were identified between the two groups, with more hub regions in the frontal cortex in PD patients. Moreover, the modular analyses revealed significantly decreased integrated regional efficiency in the lateral Fronto-Insula-Temporal module, and increased integrated regional efficiency in the Parieto-Temporal module in the PD group as compared to the HC group. In summary, our study demonstrated altered topological properties of structural networks at the global, regional, and modular levels in PD patients. These findings suggest that the structural networks of PD patients have a suboptimal topological organization, resulting in less effective integration of information between brain regions. PMID:28326021

  7. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method of Berry and Sauer. However, our method is more flexible, since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.

  8. Hydrographic responses to regional covariates across the Kara Sea

    NASA Astrophysics Data System (ADS)

    Mäkinen, Jussi; Vanhatalo, Jarno

    2016-12-01

    The Kara Sea is a shelf sea in the Arctic Ocean with strong spatiotemporal hydrographic variation driven by river discharge, air pressure, and sea ice. There is a lack of information about the effects of environmental variables on surface hydrography in different regions of the Kara Sea. We use a hierarchical spatially varying coefficient model to study the variation of sea surface temperature (SST) and salinity (SSS) in the Kara Sea between the years 1980 and 2000. The model allows us to study the effects of climatic (Arctic Oscillation index (AO)) and seasonal (river discharge and ice concentration) environmental covariates on hydrography. The hydrographic responses to covariates vary considerably between different regions of the Kara Sea. River discharge decreases SSS in the shallow shelf area and has a neutral effect in the northern Kara Sea. The responses of SST and SSS to the AO show the effects of different wind and air pressure conditions on water circulation and hence on hydrography. Ice concentration has a constant effect across the Kara Sea. We estimated the average SST and SSS in the Kara Sea in 1980-2000. The average August SST over the Kara Sea in 1995-2000 was higher than the respective average in 1980-1984 with 99.9% probability, and August SSS decreased between these time periods with 77% probability. We found support for the hypothesis that the winter season AO has an impact on the summer season hydrography, and temporal trends may be related to the varying level of the winter season AO index.

  9. Handling of missing data to improve the mining of large feed databases.

    PubMed

    Maroto-Molina, F; Gómez-Cabrera, A; Guerrero-Ginel, J E; Garrido-Varo, A; Sauvant, D; Tran, G; Heuzé, V; Pérez-Marín, D C

    2013-01-01

    Feed databases often have missing data. Despite their potentially major effect on data analysis (e.g., as a source of biased results and loss of statistical power), database managers and nutrition researchers have paid little attention to missing data. This study evaluated various methods of handling missing data using mining outputs from a database containing data on chemical composition and nutritive value for 18,864 alfalfa samples. A complete reference dataset was obtained comprising the 2,303 cases with no missing data for the attributes CP, crude fiber (CF), NDF, ADF and ADL. This dataset was used to simulate 2 types of missing data (at random and not at random), each with 2 loss intensities (33 and 66%), thus yielding a total of 4 incomplete datasets. Missing data from these datasets were handled using 2 deletion methods and 4 imputation methods, and outputs in terms of the identification and typing of alfalfa (using ANOVA and descriptive statistics) and of correlations between attributes (using regressions) were compared with outputs from the complete dataset. Imputation methods, particularly model-based versions, were found to perform better than deletion methods in terms of maximizing information use and minimizing bias although the extent of differences between methods depended on the type of missing data. The best approximation to the uncertainty value was provided by multiple imputation methods. It was concluded that the choice of the most suitable method for handling missing data depended both on the type of missing data and on the purpose of data analysis.

  10. On the missing axiom of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    D'Ariano, Giacomo Mauro

    2006-01-01

    The debate on the nature of quantum probabilities in relation to Quantum Non Locality has elevated Quantum Mechanics to the level of an Operational Epistemic Theory. In such context the quantum superposition principle has an extraneous non epistemic nature. This leads us to seek purely operational foundations for Quantum Mechanics, from which to derive the current mathematical axiomatization based on Hilbert spaces. In the present work I present a set of axioms of purely operational nature, based on a general definition of "the experiment", the operational/epistemic archetype of information retrieval from reality. As we will see, this starting point logically entails a series of notions [state, conditional state, local state, pure state, faithful state, instrument, propensity (i.e. "effect"), dynamical and informational equivalence, dynamical and informational compatibility, predictability, discriminability, programmability, locality, a-causality, rank of the state, maximally chaotic state, maximally entangled state, informationally complete propensity, etc.], along with a set of rules (addition, convex combination, partial orderings, … ), which, far from being of quantum origin as often considered, instead constitute the universal syntactic manual of the operational/epistemic approach. The missing ingredient is, of course, the quantum superposition axiom for probability amplitudes: for this I propose some substitute candidates of purely operational/epistemic nature.

  11. Anomalous lack of decoherence of the macroscopic quantum superpositions based on phase-covariant quantum cloning.

    PubMed

    De Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolò

    2009-09-04

    We show that all macroscopic quantum superpositions (MQS) based on phase-covariant quantum cloning are characterized by an anomalously high resilience to decoherence processes. The analysis supports the results of recent MQS experiments and leads us to a useful conjecture regarding the realization of complex decoherence-free structures for quantum information, such as the quantum computer.

  12. Anomalous Lack of Decoherence of the Macroscopic Quantum Superpositions Based on Phase-Covariant Quantum Cloning

    NASA Astrophysics Data System (ADS)

    de Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolò

    2009-09-01

    We show that all macroscopic quantum superpositions (MQS) based on phase-covariant quantum cloning are characterized by an anomalously high resilience to decoherence processes. The analysis supports the results of recent MQS experiments and leads us to a useful conjecture regarding the realization of complex decoherence-free structures for quantum information, such as the quantum computer.

  13. Realization of a universal and phase-covariant quantum cloning machine in separate cavities

    SciTech Connect

    Fang Baolong; Song Qingming; Ye Liu

    2011-04-15

    We present a scheme to realize a special quantum cloning machine in separate cavities. The quantum cloning machine can copy quantum information from a photon pulse to two distant atoms. By choosing different parameters, the method can perform optimal symmetric (asymmetric) universal quantum cloning and optimal symmetric (asymmetric) phase-covariant cloning.

  14. A covariance-based anomaly detector for polarimetric remote sensing applications

    NASA Astrophysics Data System (ADS)

    Romano, Joao M.; Rosario, Dalton

    2014-05-01

    This paper proposes a new anomaly detection algorithm for polarimetric remote sensing applications, based on the M-Box covariance test, that takes advantage of key features found in a multi-polarimetric data cube. The paper demonstrates: 1) that independent polarization measurements contain information suitable for discriminating manmade objects from natural clutter; 2) an analysis of the variability exhibited by manmade objects relative to natural clutter; 3) a comparison of the proposed M-Box covariance test with the Stokes parameters S0 and S1, DoLP, RX-Stokes, and PCA RX-Stokes; and 4) that the data used for the comparison span a full 24-hour measurement.

  15. Schur Complement Inequalities for Covariance Matrices and Monogamy of Quantum Correlations

    NASA Astrophysics Data System (ADS)

    Lami, Ludovico; Hirche, Christoph; Adesso, Gerardo; Winter, Andreas

    2016-11-01

    We derive fundamental constraints for the Schur complement of positive matrices, which provide an operator strengthening to recently established information inequalities for quantum covariance matrices, including strong subadditivity. This allows us to prove general results on the monogamy of entanglement and steering quantifiers in continuous variable systems with an arbitrary number of modes per party. A powerful hierarchical relation for correlation measures based on the log-determinant of covariance matrices is further established for all Gaussian states, which has no counterpart among quantities based on the conventional von Neumann entropy.

  16. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  17. Cosmology of a covariant Galilean field.

    PubMed

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.

  18. Minimal covariant observables identifying all pure states

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Heinosaari, Teiko; Toigo, Alessandro

    2013-09-01

    It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has minimally 4d-4 outcomes or slightly fewer (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have the minimal number of outcomes. It is shown that the existence of this kind of observable depends on the dimension of the Hilbert space.

  19. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  20. Covariant harmonic oscillators and coupled harmonic oscillators

    NASA Technical Reports Server (NTRS)

    Han, Daesoo; Kim, Young S.; Noz, Marilyn E.

    1995-01-01

    It is shown that the system of two coupled harmonic oscillators shares its basic symmetry properties with the covariant harmonic oscillator formalism, which provides a concise description of the basic relativistic hadronic features observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.

  1. Covariant change of signature in classical relativity

    NASA Astrophysics Data System (ADS)

    Ellis, G. F. R.

    1992-10-01

    This paper gives a covariant formalism enabling investigation of the possibility of change of signature in classical General Relativity, when the geometry is that of a Robertson-Walker universe. It is shown that such changes are compatible with the Einstein field equations, both in the case of a barotropic fluid and of a scalar field. A criterion is given for when such a change of signature should take place in the scalar field case. Some examples show the kind of resulting exact solutions of the field equations.

  2. Learning through Feature Prediction: An Initial Investigation into Teaching Categories to Children with Autism through Predicting Missing Features

    ERIC Educational Resources Information Center

    Sweller, Naomi

    2015-01-01

    Individuals with autism have difficulty generalising information from one situation to another, a process that requires the learning of categories and concepts. Category information may be learned through: (1) classifying items into categories, or (2) predicting missing features of category items. Predicting missing features has to this point been…

  3. Covariation and phenotypic integration in chemical communication displays: biosynthetic constraints and eco-evolutionary implications.

    PubMed

    Junker, Robert R; Kuppler, Jonas; Amo, Luisa; Blande, James D; Borges, Renee M; van Dam, Nicole M; Dicke, Marcel; Dötterl, Stefan; Ehlers, Bodil K; Etl, Florian; Gershenzon, Jonathan; Glinwood, Robert; Gols, Rieta; Groot, Astrid T; Heil, Martin; Hoffmeister, Mathias; Holopainen, Jarmo K; Jarau, Stefan; John, Lena; Kessler, Andre; Knudsen, Jette T; Kost, Christian; Larue-Kontic, Anne-Amélie C; Leonhardt, Sara Diana; Lucas-Barbosa, Dani; Majetic, Cassie J; Menzel, Florian; Parachnowitsch, Amy L; Pasquet, Rémy S; Poelman, Erik H; Raguso, Robert A; Ruther, Joachim; Schiestl, Florian P; Schmitt, Thomas; Tholl, Dorothea; Unsicker, Sybille B; Verhulst, Niels; Visser, Marcel E; Weldegergis, Berhane T; Köllner, Tobias G

    2017-03-03

    Chemical communication is ubiquitous. The identification of conserved structural elements in visual and acoustic communication is well established, but comparable information on chemical communication displays (CCDs) is lacking. We assessed the phenotypic integration of CCDs in a meta-analysis to characterize patterns of covariation in CCDs and identified functional or biosynthetically constrained modules. Poorly integrated plant CCDs (i.e. low covariation between scent compounds) support the notion that plants often utilize one or few key compounds to repel antagonists or to attract pollinators and enemies of herbivores. Animal CCDs (mostly insect pheromones) were usually more integrated than those of plants (i.e. stronger covariation), suggesting that animals communicate via fixed proportions among compounds. Both plant and animal CCDs were composed of modules, which are groups of strongly covarying compounds. Biosynthetic similarity of compounds revealed biosynthetic constraints in the covariation patterns of plant CCDs. We provide a novel perspective on chemical communication and a basis for future investigations on structural properties of CCDs. This will facilitate identifying modules and biosynthetic constraints that may affect the outcome of selection and thus provide a predictive framework for evolutionary trajectories of CCDs in plants and animals.

  4. Non-linear shrinkage estimation of large-scale structure covariance

    NASA Astrophysics Data System (ADS)

    Joachimi, Benjamin

    2017-03-01

    In many astrophysical settings, covariance matrices of large data sets have to be determined empirically from a finite number of mock realizations. The resulting noise degrades inference and precludes it completely if there are fewer realizations than data points. This work applies a recently proposed non-linear shrinkage estimator of covariance to a realistic example from large-scale structure cosmology. After optimizing its performance for the usage in likelihood expressions, the shrinkage estimator yields subdominant bias and variance comparable to that of the standard estimator with a factor of ∼50 less realizations. This is achieved without any prior information on the properties of the data or the structure of the covariance matrix, at a negligible computational cost.

  5. Are all biases missing data problems?

    PubMed

    Howe, Chanelle J; Cain, Lauren E; Hogan, Joseph W

    2015-09-01

    Estimating causal effects is a frequent goal of epidemiologic studies. Traditionally, there have been three established systematic threats to consistent estimation of causal effects. These three threats are bias due to confounders, selection, and measurement error. Confounding, selection, and measurement bias have typically been characterized as distinct types of biases. However, each of these biases can also be characterized as missing data problems that can be addressed with missing data solutions. Here we describe how the aforementioned systematic threats arise from missing data as well as review methods and their related assumptions for reducing each bias type. We also link the assumptions made by the reviewed methods to the missing completely at random (MCAR) and missing at random (MAR) assumptions made in the missing data framework that allow for valid inferences to be made based on the observed, incomplete data.

  6. Evolutionary Characteristics of Missing Proteins: Insights into the Evolution of Human Chromosomes Related to Missing-Protein-Encoding Genes.

    PubMed

    Xu, Aishi; Li, Guang; Yang, Dong; Wu, Songfeng; Ouyang, Hongsheng; Xu, Ping; He, Fuchu

    2015-12-04

    Although the "missing protein" is a temporary concept in C-HPP, the biological reasons why these proteins are "missing" could be an important clue in evolutionary studies. Here we classified missing-protein-encoding genes into two groups: genes encoding PE2 proteins (with transcript evidence) and genes encoding PE3/4 proteins (with no transcript evidence). These missing-protein-encoding genes are distributed unevenly among different chromosomes, chromosomal regions, or gene clusters. In terms of evolutionary features, PE3/4 genes tend to be young, to spread across nonhomologous chromosomal regions, and to evolve at higher rates. Interestingly, the proportion of singletons among PE3/4 genes is higher than the proportion of singletons among all genes (background) and among OTCSGs (organ, tissue, cell type-specific genes). More importantly, most of the paralogous PE3/4 genes belong to newly duplicated members of paralogous gene groups, which mainly contribute to special biological functions, such as "smell perception". These functions are heavily restricted to specific cell types, tissues, or developmental stages, acting as the new functional requirements that facilitated the emergence of the missing-protein-encoding genes during evolution. In addition, criteria for proteins with extremely special physical-chemical properties were first set up based on the properties of PE2 proteins, and the evolutionary characteristics of those proteins were explored. Overall, the evolutionary analyses of missing-protein-encoding genes are expected to be highly instructive for proteomics and functional studies in the future.

  7. Causal Learning Mechanisms in Very Young Children: Two-, Three-, and Four-Year-Olds Infer Causal Relations from Patterns of Variation and Covariation.

    ERIC Educational Resources Information Center

    Gopnik, Alison; Sobel, David M.; Schulz, Laura E.; Glymour, Clark

    2001-01-01

    Investigated in 3 studies whether 2- to 4-year-olds make accurate causal inferences on the basis of patterns of variation and covariation. Found that all three age groups considered information from various patterns of variation and covariation in judgments regarding two objects and activation of a machine. Three- and 4-year-olds used the…

  8. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings [Bouchaud et al.; Stanley et al.], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In the former, e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in those works lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
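    The "random bulk" of such spectra is what the Marchenko-Pastur law describes: for pure-noise returns with N assets and T observations, the eigenvalues of the sample correlation matrix concentrate in [(1 - sqrt(N/T))^2, (1 + sqrt(N/T))^2]. The sketch below checks this numerically; the dimensions are arbitrary stand-ins for a real portfolio.

        import numpy as np

        rng = np.random.default_rng(6)
        N, T = 100, 400                                # number of assets, length of series
        q = N / T
        returns = rng.normal(size=(T, N))              # pure-noise "returns"

        C = np.corrcoef(returns, rowvar=False)
        eigs = np.linalg.eigvalsh(C)

        lam_min, lam_max = (1 - q ** 0.5) ** 2, (1 + q ** 0.5) ** 2
        inside = np.mean((eigs >= lam_min) & (eigs <= lam_max))
        print(f"MP support: [{lam_min:.2f}, {lam_max:.2f}]; spectrum inside: {inside:.2%}")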

  9. Covariant perturbations in a multifluid cosmological medium

    NASA Astrophysics Data System (ADS)

    Dunsby, Peter K. S.; Bruni, Marco; Ellis, George F. R.

    1992-08-01

    In a series of recent papers, a new covariant formalism was introduced to treat inhomogeneities in any spacetime. The variables introduced in these papers are gauge-invariant with respect to a Robertson-Walker background spacetime because they vanish identically in such models, and they have a transparent physical meaning. Exact evolution equations were found for these variables, and the linearized form of these equations was obtained, showing that they give the standard results for a barotropic perfect fluid. In this paper we extend this formalism to the general case of multicomponent fluid sources with interactions between them. We show, using the tilted formalism of King and Ellis (1973), that choosing either the energy frame or the particle frame gives rise to a set of physically well-defined covariant and gauge-invariant variables which describe density and velocity perturbations, both for the total fluid and its constituent components. We then derive a complete set of equations for these variables and show, through harmonic analysis, that they are equivalent to those of Bardeen (1980) and of Kodama and Sasaki (1984). We discuss a number of interesting applications, including the case where the universe is filled with a mixture of baryons and radiation, coupled through Thomson scattering, and we derive solutions for the density and velocity perturbations in the large-scale limit. We also correct a number of errors in the previous literature.

  10. Modeling Covariance Matrices via Partial Autocorrelations

    PubMed Central

    Daniels, M.J.; Pourahmadi, M.

    2009-01-01

    We study the role of partial autocorrelations in the reparameterization and parsimonious modeling of a covariance matrix. The work is motivated by and tries to mimic the phenomenal success of the partial autocorrelation function (PACF) in model formulation, removing the positive-definiteness constraint on the autocorrelation function of a stationary time series and in reparameterizing the stationarity-invertibility domain of ARMA models. It turns out that once an order is fixed among the variables of a general random vector, the above properties continue to hold and follow from establishing a one-to-one correspondence between a correlation matrix and its associated matrix of partial autocorrelations. Connections between the latter and the parameters of the modified Cholesky decomposition of a covariance matrix are discussed. Graphical tools similar to partial correlograms for model formulation and various priors based on the partial autocorrelations are proposed. We develop frequentist/Bayesian procedures for modelling correlation matrices, illustrate them using a real dataset, and explore their properties via simulations. PMID:20161018
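
    As an illustration of the reparameterization (a sketch under an assumed variable ordering, not the authors' code), each partial autocorrelation of a correlation matrix can be read off the precision matrix of the submatrix spanning the two variables and the ones between them:

      import numpy as np

      def partial_autocorrelations(R):
          """Map a correlation matrix R (variables in a fixed order) to its matrix of
          partial autocorrelations: entry (i, j), i < j, is the correlation between
          variables i and j given the intermediate variables i+1, ..., j-1."""
          d = R.shape[0]
          P = np.eye(d)
          for i in range(d - 1):
              for j in range(i + 1, d):
                  idx = list(range(i, j + 1))               # block {i, i+1, ..., j}
                  prec = np.linalg.inv(R[np.ix_(idx, idx)])
                  # Partial correlation of the first and last variables in the block
                  P[i, j] = P[j, i] = -prec[0, -1] / np.sqrt(prec[0, 0] * prec[-1, -1])
          return P

      R = np.array([[1.0, 0.5, 0.3],
                    [0.5, 1.0, 0.4],
                    [0.3, 0.4, 1.0]])
      print(partial_autocorrelations(R))   # entries vary freely in (-1, 1)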

  11. Unsupervised segmentation of polarimetric SAR data using the covariance matrix

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Rama; Dubois, Pascale C.

    1992-01-01

    A method for unsupervised segmentation of polarimetric synthetic aperture radar (SAR) data into classes of homogeneous microwave polarimetric backscatter characteristics is presented. Classes of polarimetric backscatter are selected on the basis of a multidimensional fuzzy clustering of the logarithm of the parameters composing the polarimetric covariance matrix. The clustering procedure uses both polarimetric amplitude and phase information, is adapted to the presence of image speckle, and does not require an arbitrary weighting of the different polarimetric channels; it also provides a partitioning of each data sample used for clustering into multiple clusters. Given the classes of polarimetric backscatter, the entire image is classified using a maximum a posteriori polarimetric classifier. Four-look polarimetric SAR complex data of lava flows and of sea ice acquired by the NASA/JPL airborne polarimetric radar (AIRSAR) are segmented using this technique. The results are discussed and compared with those obtained using supervised techniques.

  12. Spatial regression with covariate measurement error: A semiparametric approach.

    PubMed

    Huque, Md Hamidul; Bondell, Howard D; Carroll, Raymond J; Ryan, Louise M

    2016-09-01

    Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, the spatially defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, the classical measurement error theory is inapplicable in the context of spatial modeling because of the presence of spatial correlation among the observations. We propose a semiparametric regression approach to obtain bias-corrected estimates of regression parameters and derive their large sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate the approach using data on Ischemic Heart Disease (IHD). Both the simulations and the practical application demonstrate that the proposed method can be effective in practice.

  13. Multiple imputation: dealing with missing data.

    PubMed

    de Goeij, Moniek C M; van Diepen, Merel; Jager, Kitty J; Tripepi, Giovanni; Zoccali, Carmine; Dekker, Friedo W

    2013-10-01

    In many fields, including the field of nephrology, missing data are unfortunately an unavoidable problem in clinical/epidemiological research. The most common methods for dealing with missing data are complete case analysis (excluding patients with missing data), mean substitution (replacing missing values of a variable with the average of known values for that variable), and last observation carried forward. However, these methods have severe drawbacks potentially resulting in biased estimates and/or standard errors. In recent years, a new method has arisen for dealing with missing data called multiple imputation. This method predicts missing values based on other data present in the same patient. This procedure is repeated several times, resulting in multiple imputed data sets. Thereafter, estimates and standard errors are calculated in each imputation set and pooled into one overall estimate and standard error. The main advantage of this method is that missing data uncertainty is taken into account. Another advantage is that the method of multiple imputation gives unbiased results when data are missing at random, which is the most common type of missing data in clinical practice, whereas conventional methods do not. However, the method of multiple imputation has scarcely been used in medical literature. We, therefore, encourage authors to do so in the future when possible.
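
    A minimal sketch of this workflow in Python (scikit-learn's IterativeImputer is used here as one possible imputation engine; the data and the analysis model are illustrative): impute m times, estimate in each completed data set, and pool with Rubin's rules.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))
      X[:, 2] += X[:, 0]                        # outcome related to a predictor
      X[rng.random(X.shape) < 0.2] = np.nan     # 20% of values missing at random

      m = 5
      estimates, variances = [], []
      for k in range(m):
          imputed = IterativeImputer(sample_posterior=True, random_state=k).fit_transform(X)
          # Analysis step: slope of column 2 on column 0 by least squares
          A = np.column_stack([np.ones(len(imputed)), imputed[:, 0]])
          beta, res, *_ = np.linalg.lstsq(A, imputed[:, 2], rcond=None)
          sigma2 = res[0] / (len(imputed) - 2)
          estimates.append(beta[1])
          variances.append(sigma2 * np.linalg.inv(A.T @ A)[1, 1])

      # Rubin's rules: combine within- and between-imputation variability
      qbar = np.mean(estimates)
      W, B = np.mean(variances), np.var(estimates, ddof=1)
      total_var = W + (1 + 1 / m) * B
      print(f"pooled slope = {qbar:.3f} +/- {np.sqrt(total_var):.3f}")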

  14. Missing value imputation strategies for metabolomics data.

    PubMed

    Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral

    2015-12-01

    Missing values can arise for different reasons, and depending on their origin they should be considered and dealt with in different ways. In this research, four methods of imputation have been compared with respect to their effects on the normality and variance of data, on statistical significance, and on the approximation of a suitable threshold to accept missing data as truly missing. Additionally, the effects of different strategies for controlling familywise error rate or false discovery, and how they work with the different strategies for missing value imputation, have been evaluated. Missing values were found to affect the normality and variance of data, and k-means nearest neighbour imputation was the best method tested for restoring this. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and a strategy has been proposed that balances the optimal imputation strategy (k-means nearest neighbour) against the best approximation for positioning real zeros.
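
    A k-nearest-neighbour imputation of the kind evaluated above can be sketched with scikit-learn (mock intensity data; the study's own pipeline is not reproduced here):

      import numpy as np
      from sklearn.impute import KNNImputer

      rng = np.random.default_rng(0)
      data = rng.lognormal(mean=2.0, sigma=0.5, size=(100, 20))  # mock metabolite intensities
      mask = rng.random(data.shape) < 0.15                       # knock out 15% of the values
      incomplete = np.where(mask, np.nan, data)

      completed = KNNImputer(n_neighbors=5, weights="distance").fit_transform(incomplete)

      rmse = np.sqrt(np.mean((completed[mask] - data[mask]) ** 2))
      print(f"imputation RMSE on the held-out entries: {rmse:.2f}")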

  15. Sea-surface salinity: the missing measurement

    NASA Astrophysics Data System (ADS)

    Stocker, Erich F.; Koblinsky, Chester

    2003-04-01

    Even the youngest child knows that the sea is salty. Yet, routine, global information about the degree of saltiness and the distribution of the salinity is not available. Indeed, sea surface salinity is a key missing measurement in global change research. Salinity influences circulation and links the ocean to global change and the water cycle. Space-based remote sensing of important global change ocean parameters such as sea-surface temperature and water-cycle parameters such as precipitation has been available to the research community, but a space-based global sensing of salinity has been missing. In July 2002, the National Aeronautics and Space Administration (NASA) announced that the Aquarius mission, focused on the global measurement of sea surface salinity, is one of the missions approved under its ESSP-3 program. Aquarius will begin a risk-reduction phase during 2003. Aquarius will carry a multi-beam 1.4 GHz (L-band) radiometer used for retrieving salinity. It also will carry a 1.2 GHz (L-band) scatterometer used for measuring surface roughness. Aquarius is tentatively scheduled for a 2006 launch into an 8-day Sun-synchronous orbit. Aquarius' key science data product will be a monthly, global surface salinity map at 100 km resolution with an accuracy of 0.2 practical salinity units. Aquarius will have a 3-year operational period. Among other things, global salinity data will permit estimates of sea surface density, or buoyancy, that drives the ocean's three-dimensional circulation.

  16. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…

  17. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    PubMed

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.

  18. Evaluating covariance in prognostic and system health management applications

    NASA Astrophysics Data System (ADS)

    Menon, Sandeep; Jin, Xiaohang; Chow, Tommy W. S.; Pecht, Michael

    2015-06-01

    Developing a diagnostic and prognostic health management system involves analyzing system parameters monitored during the lifetime of the system. This data analysis may involve multiple steps, including data reduction, feature extraction, clustering and classification, building control charts, identification of anomalies, and modeling and predicting parameter degradation in order to evaluate the state of health for the system under investigation. Evaluating the covariance between the monitored system parameters allows for better understanding of the trends in monitored system data, and therefore it is an integral part of the data analysis. Typically, a sample covariance matrix is used to evaluate the covariance between monitored system parameters. The monitored system data are often sensor data, which are inherently noisy. The noise in sensor data can lead to inaccurate evaluation of the covariance in data using a sample covariance matrix. This paper examines approaches to evaluate covariance, including the minimum volume ellipsoid, the minimum covariance determinant, and the nearest neighbor variance estimation. When the performance of these approaches was evaluated on datasets with increasing percentage of Gaussian noise, it was observed that the nearest neighbor variance estimation exhibited the most stable estimates of covariance. To improve the accuracy of covariance estimates using nearest neighbor-based methodology, a modified approach for the nearest neighbor variance estimation technique is developed in this paper. Case studies based on data analysis steps involved in prognostic solutions are developed in order to compare the performance of the covariance estimation methodologies discussed in the paper.
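
    The kind of comparison described above can be prototyped with standard estimators (a sketch: scikit-learn's MinCovDet implements the minimum covariance determinant, while the paper's nearest neighbor variance estimator is not part of standard libraries):

      import numpy as np
      from sklearn.covariance import EmpiricalCovariance, MinCovDet

      rng = np.random.default_rng(2)
      n = 500
      true_cov = np.array([[1.0, 0.6],
                           [0.6, 1.0]])
      X = rng.multivariate_normal([0.0, 0.0], true_cov, size=n)

      # Contaminate 10% of the samples with large Gaussian "sensor" noise
      outliers = rng.choice(n, size=n // 10, replace=False)
      X[outliers] += rng.normal(scale=5.0, size=(len(outliers), 2))

      for name, est in [("sample", EmpiricalCovariance()), ("MCD", MinCovDet(random_state=0))]:
          err = np.linalg.norm(est.fit(X).covariance_ - true_cov)
          print(f"{name} covariance error (Frobenius norm): {err:.2f}")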

  19. The Impact of Covariate Measurement Error on Risk Prediction

    PubMed Central

    Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna

    2015-01-01

    In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses’ Health Study. PMID:25865315
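
    A toy simulation of point (ii) above (hypothetical effect sizes; not the paper's study design): replacing the true covariate by an error-prone surrogate lowers the AUC of the fitted risk model.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 5000
      x = rng.normal(size=n)                       # true (costly) covariate
      w = x + rng.normal(size=n)                   # inexpensive error-prone surrogate
      p = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))       # true risk model
      y = rng.binomial(1, p)

      for name, z in [("true covariate", x), ("surrogate", w)]:
          fit = LogisticRegression().fit(z.reshape(-1, 1), y)
          auc = roc_auc_score(y, fit.predict_proba(z.reshape(-1, 1))[:, 1])
          print(f"AUC with {name}: {auc:.3f}")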

  20. Toward a Mexican eddy covariance network for carbon cycle science

    NASA Astrophysics Data System (ADS)

    Vargas, Rodrigo; Yépez, Enrico A.

    2011-09-01

    First Annual MexFlux Principal Investigators Meeting; Hermosillo, Sonora, Mexico, 4-8 May 2011; The carbon cycle science community has organized a global network, called FLUXNET, to measure the exchange of energy, water, and carbon dioxide (CO2) between ecosystems and the atmosphere using the eddy covariance technique. This network has provided unprecedented information for carbon cycle science and global climate change but is mostly represented by study sites in the United States and Europe. Thus, there is an important gap in measurements and understanding of ecosystem dynamics in other regions of the world that are seeing rapid change in land use. Researchers met under the sponsorship of Red Temática de Ecosistemas and Consejo Nacional de Ciencia y Tecnologia (CONACYT) to discuss strategies to establish a Mexican eddy covariance network (MexFlux) by identifying researchers, study sites, and scientific goals. During the meeting, attendees noted that 10 study sites have been established in Mexico with more than 30 combined years of information. Study sites span from new sites installed during 2011 to others with 6 to 9 years of measurements. Sites with the longest measurement records are located in Baja California Sur (established by Walter Oechel in 2002) and Sonora (established by Christopher Watts in 2005); both are semiarid ecosystems. MexFlux sites represent a variety of ecosystem types, including Mediterranean and sarcocaulescent shrublands in Baja California; oak woodland, subtropical shrubland, tropical dry forest, and a grassland in Sonora; tropical dry forests in Jalisco and Yucatan; a managed grassland in San Luis Potosi; and a managed pine forest in Hidalgo. Sites are maintained with individual researchers' funds from Mexican government agencies (e.g., CONACYT) and international collaborations, but no coordinated funding exists for a long-term program.

  1. Bayesian Inference for Multivariate Meta-regression with a Partially Observed Within-Study Sample Covariance Matrix

    PubMed Central

    Yao, Hui; Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2015-01-01

    Multivariate meta-regression models are commonly used in settings where the response variable is naturally multi-dimensional. Such settings are common in cardiovascular and diabetes studies where the goal is to study cholesterol levels once a certain medication is given. In this setting, the natural multivariate endpoint is Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). In this paper, we examine study level (aggregate) multivariate meta-data from 26 Merck sponsored double-blind, randomized, active or placebo-controlled clinical trials on adult patients with primary hypercholesterolemia. Our goal is to develop a methodology for carrying out Bayesian inference for multivariate meta-regression models with study level data when the within-study sample covariance matrix S for the multivariate response data is partially observed. Specifically, the proposed methodology is based on postulating a multivariate random effects regression model with an unknown within-study covariance matrix Σ in which we treat the within-study sample correlations as missing data, the standard deviations of the within-study sample covariance matrix S are assumed observed, and given Σ, S follows a Wishart distribution. Thus, we treat the off-diagonal elements of S as missing data, and these missing elements are sampled from the appropriate full conditional distribution in a Markov chain Monte Carlo (MCMC) sampling scheme via a novel transformation based on partial correlations. We further propose several structures (models) for Σ, which allow for borrowing strength across different treatment arms and trials. The proposed methodology is assessed using simulated as well as real data, and the results are shown to be quite promising. PMID:26257452

  2. Bayesian Inference for Multivariate Meta-regression with a Partially Observed Within-Study Sample Covariance Matrix.

    PubMed

    Yao, Hui; Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2015-06-01

    Multivariate meta-regression models are commonly used in settings where the response variable is naturally multi-dimensional. Such settings are common in cardiovascular and diabetes studies where the goal is to study cholesterol levels once a certain medication is given. In this setting, the natural multivariate endpoint is Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). In this paper, we examine study level (aggregate) multivariate meta-data from 26 Merck sponsored double-blind, randomized, active or placebo-controlled clinical trials on adult patients with primary hypercholesterolemia. Our goal is to develop a methodology for carrying out Bayesian inference for multivariate meta-regression models with study level data when the within-study sample covariance matrix S for the multivariate response data is partially observed. Specifically, the proposed methodology is based on postulating a multivariate random effects regression model with an unknown within-study covariance matrix Σ in which we treat the within-study sample correlations as missing data, the standard deviations of the within-study sample covariance matrix S are assumed observed, and given Σ, S follows a Wishart distribution. Thus, we treat the off-diagonal elements of S as missing data, and these missing elements are sampled from the appropriate full conditional distribution in a Markov chain Monte Carlo (MCMC) sampling scheme via a novel transformation based on partial correlations. We further propose several structures (models) for Σ, which allow for borrowing strength across different treatment arms and trials. The proposed methodology is assessed using simulated as well as real data, and the results are shown to be quite promising.

  3. The prevention and handling of the missing data.

    PubMed

    Kang, Hyun

    2013-05-01

    Even in a well-designed and controlled study, missing data occurs in almost all research. Missing data can reduce the statistical power of a study and can produce biased estimates, leading to invalid conclusions. This manuscript reviews the problems and types of missing data, along with the techniques for handling missing data. The mechanisms by which missing data occurs are illustrated, and the methods for handling the missing data are discussed. The paper concludes with recommendations for the handling of missing data.

  4. Using Principal Components as Auxiliary Variables in Missing Data Estimation.

    PubMed

    Howard, Waylon J; Rhemtulla, Mijke; Little, Todd D

    2015-01-01

    To deal with missing data that arise due to participant nonresponse or attrition, methodologists have recommended an "inclusive" strategy where a large set of auxiliary variables are used to inform the missing data process. In practice, the set of possible auxiliary variables is often too large. We propose using principal components analysis (PCA) to reduce the number of possible auxiliary variables to a manageable number. A series of Monte Carlo simulations compared the performance of the inclusive strategy with eight auxiliary variables (inclusive approach) to the PCA strategy using just one principal component derived from the eight original variables (PCA approach). We examined the influence of four independent variables: magnitude of correlations, rate of missing data, missing data mechanism, and sample size on parameter bias, root mean squared error, and confidence interval coverage. Results indicate that the PCA approach results in unbiased parameter estimates and potentially more accuracy than the inclusive approach. We conclude that using the PCA strategy to reduce the number of auxiliary variables is an effective and practical way to reap the benefits of the inclusive strategy in the presence of many possible auxiliary variables.
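
    A sketch of the PCA strategy (illustrative data and model): extract the first principal component of the auxiliary variables and carry it into the imputation model in place of the full auxiliary set.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(4)
      n = 300
      aux = rng.normal(size=(n, 8)) + rng.normal(size=(n, 1))  # eight correlated auxiliaries
      y = aux.mean(axis=1) + rng.normal(scale=0.5, size=n)     # analysis variable
      y[rng.random(n) < 0.3] = np.nan                          # 30% nonresponse on y

      # Reduce the auxiliary set to a single principal component
      pc1 = PCA(n_components=1).fit_transform(aux)

      # Impute y with the component as the sole auxiliary variable
      Z = np.column_stack([y, pc1])
      y_completed = IterativeImputer(random_state=0).fit_transform(Z)[:, 0]
      print(f"mean of y after PCA-auxiliary imputation: {y_completed.mean():.3f}")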

  5. Haplotype and missing data inference in nuclear families.

    PubMed

    Lin, Shin; Chakravarti, Aravinda; Cutler, David J

    2004-08-01

    Determining linkage phase from population samples with statistical methods is accurate only within regions of high linkage disequilibrium (LD). Yet, affected individuals in a genetic mapping study, including those involving cases and controls, may share sequences identical-by-descent stretching on the order of 10s to 100s of kilobases, quite possibly over regions of low LD in the population. At the same time, inferring phase from nuclear families may be hampered by missing family members, missing genotypes, and the noninformativity of certain genotype patterns. In this study, we reformulate our previous haplotype reconstruction algorithm, and its associated computer program, to phase parents with information derived from population samples as well as from their offspring. In applications of our algorithm to 100-kb stretches, simulated in accordance to a Wright-Fisher model with typical levels of LD in humans, we find that phase reconstruction for 160 trios with 10% missing data is highly accurate (>90%) over the entire length. Furthermore, our algorithm can estimate allelic status for missing data at high accuracy (>95%). Finally, the input capacity of the program is vast, easily handling thousands of segregating sites in > or = 1000 chromosomes.

  6. Annual Coded Wire Program Missing Production Groups, 1996 Annual Report.

    SciTech Connect

    Pastor, S.M.

    1997-07-01

    In 1989 the Bonneville Power Administration (BPA) began funding the evaluation of production groups of juvenile anadromous fish not being coded-wire tagged for other programs. These groups were the "Missing Production Groups". Production fish released by the US Fish and Wildlife Service (USFWS) without representative coded-wire tags during the 1980s are indicated as blank spaces on the survival graphs in this report. The objectives of the "Missing Production Groups" program are: to estimate the total survival of each production group, to estimate the contribution of each production group to various fisheries, and to prepare an annual report for all USFWS hatcheries in the Columbia River basin. Coded-wire tag recovery information will be used to evaluate the relative success of individual brood stocks. This information can also be used by salmon harvest managers to develop plans to allow the harvest of excess hatchery fish while protecting threatened, endangered, or other stocks of concern.

  7. Noisy covariance matrices and portfolio optimization II

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2003-03-01

    Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r = n/T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the

  8. LSimpute: accurate estimation of missing values in microarray data with least squares methods.

    PubMed

    Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge

    2004-02-20

    Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that consistently are more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those produced by our novel methods. The results indicate that on average, the estimates from our best performing LSimpute method are at least as

  9. AFCI-2.0 Library of Neutron Cross Section Covariances

    SciTech Connect

    Herman, M.; Herman,M.; Oblozinsky,P.; Mattoon,C.; Pigni,M.; Hoblit,S.; Mughabghab,S.F.; Sonzogni,A.; Talou,P.; Chadwick,M.B.; Hale.G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G.

    2011-06-26

    A neutron cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to the users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for 238,240Pu and 241Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.

  10. Spatially covariant theories of a transverse, traceless graviton: Formalism

    NASA Astrophysics Data System (ADS)

    Khoury, Justin; Miller, Godfrey E. J.; Tolley, Andrew J.

    2012-04-01

    General relativity is a generally covariant, locally Lorentz covariant theory of two transverse, traceless graviton degrees of freedom. According to a theorem of Hojman, Kuchař, and Teitelboim, modifications of general relativity must either introduce new degrees of freedom or violate the principle of local Lorentz covariance. In this paper, we explore modifications of general relativity that retain the same graviton degrees of freedom, and therefore explicitly break Lorentz covariance. Motivated by cosmology, the modifications of interest maintain explicit spatial covariance. In spatially covariant theories of the graviton, the physical Hamiltonian density obeys an analogue of the renormalization group equation which encodes invariance under flow through the space of conformally equivalent spatial metrics. This paper is dedicated to setting up the formalism of our approach and applying it to a realistic class of theories. Forthcoming work will apply the formalism more generally.

  11. Evaluation of the Covariance Matrix of Estimated Resonance Parameters

    NASA Astrophysics Data System (ADS)

    Becker, B.; Capote, R.; Kopecky, S.; Massimi, C.; Schillebeeckx, P.; Sirakov, I.; Volev, K.

    2014-04-01

    In the resonance region, nuclear resonance parameters are mostly obtained by a least-squares adjustment of a model to experimental data. Derived parameters can be mutually correlated through the adjustment procedure as well as through common experimental or model uncertainties. In this contribution we investigate four different methods to propagate the additional covariance caused by experimental or model uncertainties into the evaluation of the covariance matrix of the estimated parameters: (1) including the additional covariance into the experimental covariance matrix based on calculated or theoretical estimates of the data; (2) including the uncertainty-affected parameter in the adjustment procedure; (3) evaluation of the full covariance matrix by Monte Carlo sampling of the common parameter; and (4) retroactively including the additional covariance by using the marginalization procedure of Habert et al.
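
    Method (3) can be sketched generically (a hypothetical linear model stands in for the resonance model; the numbers are illustrative): repeat the least-squares adjustment while sampling the common parameter, and take the covariance of the fitted parameters over the replicates.

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.linspace(0.0, 10.0, 40)
      truth = 2.0 + 0.5 * x                      # stand-in model for the cross section
      A = np.column_stack([np.ones_like(x), x])  # design matrix of the adjustment
      scale_unc = 0.02                           # 2% common normalization uncertainty

      params = []
      for _ in range(2000):
          scale = 1.0 + rng.normal(scale=scale_unc)                # sample the common parameter
          y = scale * truth + rng.normal(scale=0.05, size=x.size)  # add statistical noise
          params.append(np.linalg.lstsq(A, y, rcond=None)[0])      # least-squares adjustment

      cov = np.cov(np.array(params), rowvar=False)
      print("parameter covariance including the common uncertainty:\n", cov)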

  12. Tackling missing data in community health studies using additive LS-SVM classifier.

    PubMed

    Wang, Guanjin; Deng, Zhaohong; Choi, Kup-Sze

    2016-12-01

    Missing data is a common issue in community health and epidemiological studies. Direct removal of samples with missing data can lead to reduced sample size and information bias, which deteriorates the significance of the results. While data imputation methods are available to deal with missing data, they are limited in performance and could introduce noise into the dataset. Instead of data imputation, a novel method based on an additive least squares support vector machine (LS-SVM) is proposed in this paper for predictive modeling when the input features of the model contain missing data. The method also determines simultaneously the influence of the features with missing values on the classification accuracy using the fast leave-one-out cross-validation strategy. The performance of the method is evaluated by applying it to predict the quality of life (QOL) of elderly people using health data collected in the community. The dataset involves demographics, socioeconomic status, health history and the outcomes of health assessments of 444 community-dwelling elderly people, with 5% to 60% of data missing in some of the input features. The QOL is measured using a standard questionnaire of the World Health Organization. Results show that the proposed method outperforms four conventional methods for handling missing data (case deletion, feature deletion, mean imputation and K-nearest neighbor imputation), with the average QOL prediction accuracy reaching 0.7418. It is potentially a promising technique for tackling missing data in community health research and other applications.

  13. Power series evaluation of transition and covariance matrices.

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1972-01-01

    Power series solutions to the matrix covariance differential equation and the transition differential equation are reexamined. Truncation error bounds are derived which are computationally attractive and which extend previous results. Polynomial approximations are obtained by exploiting the functional equations satisfied by the transition and covariance matrices. The series-functional equation propagation technique represents a fast and accurate alternative to the numerical integration of the time-invariant transition and covariance equations.
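
    For the transition matrix, the idea reduces to a truncated Taylor series for exp(A*dt), after which the functional (semigroup) equation propagates the solution cheaply; a minimal sketch (the paper's truncation error bounds are not reproduced):

      import numpy as np

      def transition_matrix(A, dt, order=10):
          """Truncated power series Phi(dt) = exp(A*dt) = sum_k (A*dt)^k / k!."""
          Phi = np.eye(A.shape[0])
          term = np.eye(A.shape[0])
          for k in range(1, order + 1):
              term = term @ (A * dt) / k        # next term of the series
              Phi += term
          return Phi

      A = np.array([[0.0, 1.0],
                    [-2.0, -0.3]])              # time-invariant system matrix
      Phi = transition_matrix(A, dt=0.1)

      # Functional equation Phi(2*dt) = Phi(dt) @ Phi(dt) used for fast propagation
      residual = np.linalg.norm(transition_matrix(A, 0.2) - Phi @ Phi)
      print(f"semigroup identity residual: {residual:.2e}")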

  14. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  15. Infilling missing hydrological data - methods and consequences

    NASA Astrophysics Data System (ADS)

    Bardossy, A.; Pegram, G. G.

    2013-12-01

    Hydrological observations are often incomplete - equipment malfunction, transmission errors and other technical problems lead to unwanted gaps in observation time series. Furthermore, due to financial and organizational problems, many observation networks are in continuous decline. As an ameliorating stratagem, short time gaps can be filled using information from other locations. The statistics of abandoned stations provide useful information for the process of extending records. In this contribution the authors present different methods for infilling gaps: nearest neighbours, simple and multiple linear regression, black-box methods (fuzzy and neural nets), Expectation Maximization, and copula-based estimation. The methods are used at different time scales for infilling precipitation, from daily through pentads and months to years. The copula-based estimation provides not only an estimator for the expected value, but also a probability distribution for each of the missing values. Thus the method can be used for conditional simulation of realizations. Observed precipitation data from the Cape region in South Africa are used to illustrate the intercomparison of the methodologies. The consequences of using [or not using] infilling and data extension are illustrated using a hydrological modelling example from South-West Germany.
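
    The simplest of these options, infilling a gap at one station by linear regression on a neighbouring station, can be sketched as follows (synthetic data; the station series and gap position are illustrative):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(6)
      n = 365
      neighbour = rng.gamma(shape=2.0, scale=3.0, size=n)        # nearby gauge
      target = 0.8 * neighbour + rng.normal(size=n)              # correlated gauge
      observed = target.copy()
      gap = slice(100, 130)                                      # month-long gap
      observed[gap] = np.nan

      # Calibrate the regression on the jointly observed period
      ok = ~np.isnan(observed)
      model = LinearRegression().fit(neighbour[ok, None], observed[ok])

      # Infill the gap from the neighbouring record
      observed[gap] = model.predict(neighbour[gap, None])
      rmse = np.sqrt(np.mean((observed[gap] - target[gap]) ** 2))
      print(f"infilling RMSE over the gap: {rmse:.2f}")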

  16. The relationship between electronic nursing care reminders and missed nursing care.

    PubMed

    Piscotty, Ronald J; Kalisch, Beatrice

    2014-10-01

    The purpose of the study was to explore relationships between nurses' perceptions of the impact of health information technology on their clinical practice in the acute care setting, their use of electronic nursing care reminders, and episodes of missed nursing care. The study aims were accomplished with a descriptive design using adjusted correlations. A convenience sample (N = 165) of medical and/or surgical, intensive care, and intermediate care RNs working on acute care hospital units participated in the study. Nurses from 19 eligible nursing units were invited to participate. Adjusted relationships using hierarchical multiple regression analyses indicated significant negative relationships between missed nursing care and nursing care reminders and perceptions of health information technology. The adjusted correlations support the hypotheses that there is a relationship between nursing care reminder usage and missed nursing care and a relationship between health information technology and missed nursing care. The relationships are negative, indicating that nurses who rate higher levels of reminder usage and health information technology have decreased reports of missed nursing care. The study found a significant relationship between nursing care reminders usage and decreased amounts of missed nursing care. The findings can be used in a variety of improvement endeavors, such as encouraging nurses to utilize nursing care reminders, aid information system designers when designing nursing care reminders, and assist healthcare organizations in assessing the impact of technology on nursing practice.

  17. EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.

    SciTech Connect

    HERMAN, M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H.; CHO, Y.-S.

    2007-04-22

    The EMPIRE code system is being extended to cover the resolved and unresolved resonance region employing proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large scale calculations and their impact on nuclear model parameters are discussed along with the exciting perspectives offered by parallel supercomputing.

  18. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images provides a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the relevance of the number of processed frames, the frames' quality, the atmosphere conditions and the detection noise on the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.

  19. Conformal killing tensors and covariant Hamiltonian dynamics

    SciTech Connect

    Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.

  20. Covariant generalization of cosmological perturbation theory

    SciTech Connect

    Enqvist, Kari; Hoegdahl, Janne; Nurmi, Sami; Vernizzi, Filippo

    2007-01-15

    We present an approach to cosmological perturbations based on a covariant perturbative expansion between two worldlines in the real inhomogeneous universe. As an application, at an arbitrary order we define an exact scalar quantity which describes the inhomogeneities in the number of e-folds on uniform density hypersurfaces and which is conserved on all scales for a barotropic ideal fluid. We derive a compact form for its conservation equation at all orders and assign it a simple physical interpretation. To make a comparison with the standard perturbation theory, we develop a method to construct gauge-invariant quantities in a coordinate system at arbitrary order, which we apply to derive the form of the nth order perturbation in the number of e-folds on uniform density hypersurfaces and its exact evolution equation. On large scales, this provides the gauge-invariant expression for the curvature perturbation on uniform density hypersurfaces and its evolution equation at any order.

  1. Covariates of Craving in Actively Drinking Alcoholics

    PubMed Central

    Chakravorty, Subhajit; Kuna, Samuel T.; Zaharakis, Nikola; O’Brien, Charles P.; Kampman, Kyle M.; Oslin, David

    2010-01-01

    The goal of this cross-sectional study was to assess the relationship of alcohol craving with biopsychosocial and addiction factors that are clinically pertinent to alcoholism treatment. Alcohol craving was assessed in 315 treatment-seeking, alcohol dependent subjects using the PACS questionnaire. Standard validated questionnaires were used to evaluate a variety of biological, addiction, psychological, psychiatric, and social factors. Individual covariates of craving included age, race, problematic consequences of drinking, heavy drinking, motivation for change, mood disturbance, sleep problems, and social supports. In a multivariate analysis (R2 = .34), alcohol craving was positively associated with mood disturbance, heavy drinking, readiness for change, and negatively associated with age. The results from this study suggest that alcohol craving is a complex phenomenon influenced by multiple factors. PMID:20716308

  2. Unknown input and state estimation for linear discrete-time systems with missing measurements and correlated noises

    NASA Astrophysics Data System (ADS)

    Shu, Huisheng; Zhang, Sijing; Shen, Bo; Liu, Yurong

    2016-07-01

    This paper is concerned with the problem of simultaneous input and state estimation for a class of linear discrete-time systems with missing measurements and correlated noises. The missing measurements occur in a random way and are governed by a series of mutually independent random variables obeying a certain Bernoulli distribution. The process and measurement noises under consideration are correlated at the same time instant. Our attention is focused on the design of recursive estimators for both input and state such that, for all missing measurements and correlated noises, the estimators are unbiased and the estimation error covariances are minimized. This objective is achieved using direct algebraic operation and the design algorithm for the desired estimators is given. Finally, an illustrative example is presented to demonstrate the effectiveness of the proposed design scheme.
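
    A reduced sketch of the state-estimation part (a standard Kalman filter that simply skips the update when the Bernoulli arrival indicator is zero; the paper's joint input estimator and its treatment of correlated noises are not reproduced):

      import numpy as np

      rng = np.random.default_rng(7)
      A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition matrix
      C = np.array([[1.0, 0.0]])               # measurement matrix
      Q, R = 0.01 * np.eye(2), np.array([[0.25]])
      p_receive = 0.8                          # Bernoulli probability of receiving a measurement

      x = np.zeros(2)                          # true state
      xh, P = np.zeros(2), np.eye(2)           # estimate and its error covariance
      for t in range(100):
          x = A @ x + rng.multivariate_normal([0.0, 0.0], Q)
          xh, P = A @ xh, A @ P @ A.T + Q      # prediction step
          if rng.random() < p_receive:         # measurement arrives this step
              y = C @ x + rng.normal(scale=np.sqrt(R[0, 0]))
              K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # Kalman gain
              xh = xh + (K @ (y - C @ xh)).ravel()
              P = (np.eye(2) - K @ C) @ P
      print("final absolute state error:", np.abs(x - xh))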

  3. Methods for Handling Missing Secondary Respondent Data

    ERIC Educational Resources Information Center

    Young, Rebekah; Johnson, David

    2013-01-01

    Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…

  4. Modeling Nonignorable Missing Data in Speeded Tests

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models: one for the observed response data and…

  5. Comparison of floating chamber and eddy covariance measurements of lake greenhouse gas fluxes

    NASA Astrophysics Data System (ADS)

    Podgrajsek, E.; Sahlée, E.; Bastviken, D.; Holst, J.; Lindroth, A.; Tranvik, L.; Rutgersson, A.

    2013-11-01

    Fluxes of carbon dioxide (CO2) and methane (CH4) from lakes may have a large impact on the magnitude of the terrestrial carbon sink. Traditionally lake fluxes have been measured using the floating chamber (FC) technique; however, several recent studies use the eddy covariance (EC) method. We present simultaneous flux measurements using both methods at lake Tämnaren in Sweden during field campaigns in 2011 and 2012. Only very few similar studies exist. For CO2 flux, the two methods agree relatively well during some periods, but deviate substantially at other times. The large discrepancies might be caused by heterogeneity of partial pressure of CO2 (pCO2w) in the EC flux footprint. The methods agree better for CH4 fluxes. It is, however, clear that short-term discontinuous FC measurements are likely to miss important high flux events.

  6. Comparison of floating chamber and eddy covariance measurements of lake greenhouse gas fluxes

    NASA Astrophysics Data System (ADS)

    Podgrajsek, E.; Sahlée, E.; Bastviken, D.; Holst, J.; Lindroth, A.; Tranvik, L.; Rutgersson, A.

    2014-08-01

    Fluxes of carbon dioxide (CO2) and methane (CH4) from lakes may have a large impact on the magnitude of the terrestrial carbon sink. Traditionally lake fluxes have been measured using the floating chamber (FC) technique; however, several recent studies use the eddy covariance (EC) method. We present simultaneous flux measurements using both methods at lake Tämnaren in Sweden during field campaigns in 2011 and 2012. Only very few similar studies exist. For CO2 flux, the two methods agree relatively well during some periods, but deviate substantially at other times. The large discrepancies might be caused by heterogeneity of partial pressure of CO2 (pCO2w) in the EC flux footprint. The methods agree better for CH4 fluxes. It is, however, clear that short-term discontinuous FC measurements are likely to miss important high flux events.

  7. Tests of homoscedasticity, normality, and missing completely at random for incomplete multivariate data.

    PubMed

    Jamshidian, Mortaza; Jalal, Siavash

    2010-12-01

    Test of homogeneity of covariances (or homoscedasticity) among several groups has many applications in statistical analysis. In the context of incomplete data analysis, tests of homoscedasticity among groups of cases with identical missing data patterns have been proposed to test whether data are missing completely at random (MCAR). These tests of MCAR require large sample sizes n and/or large group sample sizes n(i), and they usually fail when applied to non-normal data. Hawkins (1981) proposed a test of multivariate normality and homoscedasticity that is an exact test for complete data when n(i) are small. This paper proposes a modification of this test for complete data to improve its performance, and extends its application to test of homoscedasticity and MCAR when data are multivariate normal and incomplete. Moreover, it is shown that the statistic used in the Hawkins test in conjunction with a nonparametric k-sample test can be used to obtain a nonparametric test of homoscedasticity that works well for both normal and non-normal data. It is explained how a combination of the proposed normal-theory Hawkins test and the nonparametric test can be employed to test for homoscedasticity, MCAR, and multivariate normality. Simulation studies show that the newly proposed tests generally outperform their existing competitors in terms of Type I error rejection rates. Also, a power study of the proposed tests indicates good power. The proposed methods use appropriate missing data imputations to impute missing data. Methods of multiple imputation are described and one of the methods is employed to confirm the result of our single imputation methods. Examples are provided where multiple imputation enables one to identify a group or groups whose covariance matrices differ from the majority of other groups.

  8. Performance of internal covariance estimators for cosmic shear correlation functions

    SciTech Connect

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.

  9. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGES

    Friedrich, O.; Seitz, S.; Eifler, T. F.; ...

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.

  10. Simultaneous Multiple Response Regression and Inverse Covariance Matrix Estimation via Penalized Gaussian Maximum Likelihood.

    PubMed

    Lee, Wonyul; Liu, Yufeng

    2012-10-01

    Multivariate regression is a common statistical tool for practical problems. Many multivariate regression techniques are designed for univariate response cases. For problems with multiple response variables available, one common approach is to apply the univariate response regression technique separately on each response variable. Although it is simple and popular, the univariate response approach ignores the joint information among response variables. In this paper, we propose three new methods for utilizing joint information among response variables. All methods are in a penalized likelihood framework with weighted L(1) regularization. The proposed methods provide sparse estimators of the conditional inverse covariance matrix of the response vector given explanatory variables as well as sparse estimators of regression parameters. Our first approach is to estimate the regression coefficients with plug-in estimated inverse covariance matrices, and our second approach is to estimate the inverse covariance matrix with plug-in estimated regression parameters. Our third approach is to estimate both simultaneously. Asymptotic properties of these methods are explored. Our numerical examples demonstrate that the proposed methods perform competitively in terms of prediction, variable selection, as well as inverse covariance matrix estimation.
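
    The penalized-likelihood idea behind the covariance part of these methods is closely related to the graphical lasso; a sketch with scikit-learn (this is a single-block analogue, not the authors' joint regression/covariance estimator):

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(8)
      # Sparse true precision matrix encoding a chain dependence structure
      prec = np.eye(4) + 0.4 * (np.eye(4, k=1) + np.eye(4, k=-1))
      cov = np.linalg.inv(prec)
      X = rng.multivariate_normal(np.zeros(4), cov, size=400)

      model = GraphicalLasso(alpha=0.05).fit(X)
      print("estimated sparse precision matrix:\n", np.round(model.precision_, 2))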

  11. A flexible model for association analysis in sibships with missing genotype data.

    PubMed

    Dudbridge, Frank; Holmans, Peter A; Wilson, Scott G

    2011-05-01

    A common design in family-based association studies consists of siblings without parents. Several methods have been proposed for analysis of sibship data, but they mostly do not allow for missing data, such as haplotype phase or untyped markers. On the other hand, general methods for nuclear families with missing data are computationally intensive when applied to sibships, since every family has missing parents that could have many possible genotypes. We propose a computationally efficient model for sibships by conditioning on the sets of alleles transmitted into the sibship by each parent. This means that the likelihood can be written only in terms of transmitted alleles and we do not have to sum over all possible untransmitted alleles when they cannot be deduced from the siblings. The model naturally accommodates missing data and admits standard theory of estimation, testing, and inclusion of covariates. Our model is quite robust to population stratification and can test for association in the presence of linkage. We show that our model has similar power to FBAT for single marker analysis and improved power for haplotype analysis. Compared to summing over all possible untransmitted alleles, we achieve similar power with considerable reductions in computation time.

  12. Structural covariance of neostriatal and limbic regions in patients with obsessive–compulsive disorder

    PubMed Central

    Subirà, Marta; Cano, Marta; de Wit, Stella J.; Alonso, Pino; Cardoner, Narcís; Hoexter, Marcelo Q.; Kwon, Jun Soo; Nakamae, Takashi; Lochner, Christine; Sato, João R.; Jung, Wi Hoon; Narumoto, Jin; Stein, Dan J.; Pujol, Jesus; Mataix-Cols, David; Veltman, Dick J.; Menchón, José M.; van den Heuvel, Odile A.; Soriano-Mas, Carles

    2016-01-01

    Background: Frontostriatal and frontoamygdalar connectivity alterations in patients with obsessive–compulsive disorder (OCD) have been typically described in functional neuroimaging studies. However, structural covariance, or volumetric correlations across distant brain regions, also provides network-level information. Altered structural covariance has been described in patients with different psychiatric disorders, including OCD, but to our knowledge, alterations within frontostriatal and frontoamygdalar circuits have not been explored. Methods: We performed a mega-analysis pooling structural MRI scans from the Obsessive–compulsive Brain Imaging Consortium and assessed whole-brain voxel-wise structural covariance of 4 striatal regions (dorsal and ventral caudate nucleus, and dorsal-caudal and ventral-rostral putamen) and 2 amygdalar nuclei (basolateral and centromedial-superficial). Images were preprocessed with the standard pipeline of voxel-based morphometry studies using Statistical Parametric Mapping software. Results: Our analyses involved 329 patients with OCD and 316 healthy controls. Patients showed increased structural covariance between the left ventral-rostral putamen and the left inferior frontal gyrus/frontal operculum region. This finding had a significant interaction with age; the association held only in the subgroup of older participants. Patients with OCD also showed increased structural covariance between the right centromedial-superficial amygdala and the ventromedial prefrontal cortex. Limitations: This was a cross-sectional study. Because this is a multisite data set analysis, participant recruitment and image acquisition were performed in different centres. Most patients were taking medication, and treatment protocols differed across centres. Conclusion: Our results provide evidence for structural network–level alterations in patients with OCD involving 2 frontosubcortical circuits of relevance for the disorder and indicate that structural…

  13. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case when a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two year design and then, based on the observed two year design after segment losses have been taken into account, choose the best possible three year design having the observed two year parent design.

  14. Handling Missing Data in Research Studies of Instructional Technology.

    ERIC Educational Resources Information Center

    Oh, Jeong-Eun

    Missing data is an important issue that is discussed across many fields. In order to understand the issues caused by missing data, this paper reviews the types of missing data and problems caused by missing data. Also, to understand how missing data are handled in instructional technology research, articles published in "Educational Media…

  15. Missing Data in Substance Abuse Treatment Research: Current Methods and Modern Approaches

    PubMed Central

    McPherson, Sterling; Barbosa-Leiker, Celestina; Burns, G. Leonard; Howell, Donelle; Roll, John

    2013-01-01

    Two common procedures for the treatment of missing information, listwise deletion and positive urine analysis (UA) imputation (e.g., if the participant fails to provide urine for analysis, then score the UA positive), may result in significant biases during the interpretation of treatment effects. To compare these approaches and to offer a possible alternative, these two procedures were compared to the multiple imputation (MI) procedure with publicly available data from a recent clinical trial. Listwise deletion, single imputation (i.e., positive UA imputation), and MI missing data procedures were used to comparatively examine the effect of two different buprenorphine/naloxone tapering schedules (7- or 28-day) for opioid addiction on the likelihood of a positive UA (Clinical Trial Network 0003; Ling et al., 2009). The listwise deletion of missing data resulted in a nonsignificant effect for the taper while the positive UA imputation procedure resulted in a significant effect, replicating the original findings by Ling et al. (2009). Although the MI procedure also resulted in a significant effect, the effect size was meaningfully smaller and the standard errors meaningfully larger when compared to the positive UA procedure. This study demonstrates that the researcher can obtain markedly different results depending on how the missing data are handled. Missing data theory suggests that listwise deletion and single imputation procedures should not be used to account for missing information, and that MI has advantages with respect to internal and external validity when the assumption of missing at random can be reasonably supported. PMID:22329556
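
    A toy contrast between listwise deletion and multiple imputation in the spirit of the comparison above, using scikit-learn's IterativeImputer with Rubin's rules for pooling; the data set, the missing-at-random mechanism (missingness depends on the fully observed x), and the number of imputations are invented for illustration:

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(1)
        n = 500
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(size=n)
        y[x > 0.5] = np.nan                  # missing at random given x
        data = np.column_stack([x, y])

        # Listwise deletion: complete rows only (biased here, because
        # missingness depends on x, which is correlated with y).
        complete = data[~np.isnan(data).any(axis=1)]
        print("listwise mean of y:", complete[:, 1].mean())

        # Multiple imputation: m completed data sets, pooled by Rubin's rules.
        m = 20
        est, within = [], []
        for i in range(m):
            imp = IterativeImputer(sample_posterior=True, random_state=i)
            filled = imp.fit_transform(data)
            est.append(filled[:, 1].mean())
            within.append(filled[:, 1].var(ddof=1) / n)
        q_bar = np.mean(est)                           # pooled estimate
        b = np.var(est, ddof=1)                        # between-imputation variance
        t = np.mean(within) + (1 + 1 / m) * b          # total variance
        print("MI mean of y:", q_bar, "SE:", np.sqrt(t))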

  16. Covariant Spectator Theory: Foundations and Applications A Mini-Review of the Covariant Spectator Theory

    SciTech Connect

    Alfred Stadler, Franz Gross

    2010-10-01

    We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a $\phi^4$-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.

  17. Methods for estimation of covariance matrices and covariance components for the Hanford Waste Vitrification Plant Process

    SciTech Connect

    Bryan, M.F.; Piepel, G.F.; Simpson, D.B.

    1996-03-01

    The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to vitrify transuranic and high-level radioactive waste in borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimation of variance components; those based on analysis of variance are discussed, as are the strengths and weaknesses of this approach.

  18. Assessing Trait Covariation and Morphological Integration on Phylogenies Using Evolutionary Covariance Matrices

    PubMed Central

    Adams, Dean C.; Felice, Ryan N.

    2014-01-01

    Morphological integration describes the degree to which sets of organismal traits covary with one another. Morphological covariation may be evaluated at various levels of biological organization, but when characterizing such patterns across species at the macroevolutionary level, phylogeny must be taken into account. We outline an analytical procedure based on the evolutionary covariance matrix that allows species-level patterns of morphological integration among structures defined by sets of traits to be evaluated while accounting for the phylogenetic relationships among taxa, providing a flexible and robust complement to related phylogenetic independent contrasts based approaches. Using computer simulations under a Brownian motion model we show that statistical tests based on the approach display appropriate Type I error rates and high statistical power for detecting known levels of integration, and these trends remain consistent for simulations using different numbers of species, and for simulations that differ in the number of trait dimensions. Thus, our procedure provides a useful means of testing hypotheses of morphological integration in a phylogenetic context. We illustrate the utility of this approach by evaluating evolutionary patterns of morphological integration in head shape for a lineage of Plethodon salamanders, and find significant integration between cranial shape and mandible shape. Finally, computer code written in R for implementing the procedure is provided. PMID:24728003

  19. Contextualized Network Analysis: Theory and Methods for Networks with Node Covariates

    NASA Astrophysics Data System (ADS)

    Binkiewicz, Norbert M.

    Biological and social systems consist of myriad interacting units. The interactions can be intuitively represented in the form of a graph or network. Measurements of these graphs can reveal the underlying structure of these interactions, which provides insight into the systems that generated the graphs. Moreover, in applications such as neuroconnectomics, social networks, and genomics, graph data is accompanied by contextualizing measures on each node. We leverage these node covariates to help uncover latent communities, using a modification of spectral clustering. Statistical guarantees are provided under a joint mixture model called the node contextualized stochastic blockmodel, including a bound on the mis-clustering rate. For most simulated conditions, covariate assisted spectral clustering yields superior results relative to both regularized spectral clustering without node covariates and an adaptation of canonical correlation analysis. We apply covariate assisted spectral clustering to large brain graphs derived from diffusion MRI, using the node locations or neurological regions as covariates. In both cases, covariate assisted spectral clustering yields clusters that are easier to interpret neurologically. A low rank update algorithm is developed to reduce the computational cost of determining the tuning parameter for covariate assisted spectral clustering. As simulations demonstrate, the low rank update algorithm increases the speed of covariate assisted spectral clustering up to ten-fold, while practically matching the clustering performance of the standard algorithm. Graphs with node attributes are sometimes accompanied by ground truth labels that align closely with the latent communities in the graph. We consider the example of a mouse retina neuron network accompanied by the neuron spatial location and neuronal cell types. In this example, the neuronal cell type is considered a ground truth label. Current approaches for defining neuronal cell type vary
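
    A compressed sketch of covariate-assisted spectral clustering as described above; the regularization choice, the squaring of the Laplacian to keep the similarity positive semi-definite, and the tuning value of alpha are assumptions of this sketch rather than the dissertation's implementation:

        import numpy as np
        from sklearn.cluster import KMeans

        def casc(A, X, alpha, k):
            """Covariate-assisted spectral clustering (minimal sketch).

            A : (n, n) symmetric adjacency matrix; X : (n, p) node covariates;
            alpha : weight of the covariate term; k : number of clusters.
            """
            deg = A.sum(axis=1)
            tau = deg.mean()                            # regularizer (assumed choice)
            s = 1.0 / np.sqrt(deg + tau)
            L = s[:, None] * A * s[None, :]             # regularized graph Laplacian
            M = L @ L + alpha * (X @ X.T)               # covariate-assisted similarity
            _, vecs = np.linalg.eigh(M)
            U = vecs[:, -k:]                            # leading k eigenvectors
            U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
            return KMeans(n_clusters=k, n_init=10).fit_predict(U)

        # Toy usage: a sparse random graph with two groups of 2-d covariates.
        rng = np.random.default_rng(0)
        n = 60
        A = (rng.random((n, n)) < 0.08).astype(float)
        A = np.triu(A, 1); A = A + A.T
        X = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(3, 1, (n // 2, 2))])
        labels = casc(A, X, alpha=0.1, k=2)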

  20. A review of the handling of missing longitudinal outcome data in clinical trials.

    PubMed

    Powney, Matthew; Williamson, Paula; Kirkham, Jamie; Kolamunnage-Dona, Ruwanthi

    2014-06-19

    The aim of this review was to establish the frequency with which trials take into account missingness, and to discover what methods trialists use for adjustment in randomised controlled trials with longitudinal measurements. Failing to address the problems that can arise from missing outcome data can result in misleading conclusions. Missing data should be addressed through a sensitivity analysis of the complete case analysis results. One hundred publications of randomised controlled trials with longitudinal measurements were selected randomly from trial publications from the years 2005 to 2012. Information was extracted from these trials, including whether reasons for dropout were reported, what methods were used for handling the missing data, whether there was any explanation of the methods for missing data handling, and whether a statistician was involved in the analysis. The main focus of the review was on missing data post dropout rather than missing interim data. Of all the papers in the study, 9 (9%) had no missing data. More than half of the papers included in the study failed to make any attempt to explain the reasons for their choice of missing data handling method. Of the papers with clear missing data handling methods, 44 papers (50%) used adequate methods of missing data handling, whereas 30 (34%) of the papers used missing data methods which may not have been appropriate. In the remaining 17 papers (19%), it was difficult to assess the validity of the methods used. An imputation method was used in 18 papers (20%). Multiple imputation methods were introduced in 1987 and are an efficient way of accounting for missing data in general, and yet only 4 papers used these methods. Out of the 18 papers which used imputation, only 7 displayed the results as a sensitivity analysis of the complete case analysis results. 61% of the papers that used an imputation method explained the reasons for their chosen method. Just under a third of the papers made no reference…

  1. Missed nursing care: errors of omission.

    PubMed

    Kalisch, Beatrice J; Landstrom, Gay; Williams, Reg Arthur

    2009-01-01

    This study examines what nursing care is missed and why. A sample of 459 nurses in 3 hospitals completed the Missed Nursing Care (MISSCARE) Survey. Assessment was reported to be missed by 44% of respondents, while interventions, basic care, and planning were reported to be missed by > 70% of the survey respondents. Reasons for missed care were labor resources (85%), material resources (56%), and communication (38%). A comparison of the hospitals showed consistency across all 3 hospitals. Associate degree nurses reported more missed care than baccalaureate-prepared and diploma-educated nurses. The results of this study lead to the conclusion that a large proportion of all hospitalized patients are being placed in jeopardy because of missed nursing care or errors of omission. Furthermore, changes in Centers for Medicare and Medicaid Services (CMS) regulations, which will eliminate payment for acute care services when any one of a common set of complications occurs, such as pressure ulcers and patient falls, point to serious cost implications for hospitals.

  2. Computer codes for checking, plotting and processing of neutron cross-section covariance data and their application

    SciTech Connect

    Sartori, E.

    1992-12-31

    This paper presents a brief review of computer codes concerned with checking, plotting, processing and using covariances of neutron cross-section data. It concentrates on those available from the computer code information centers of the United States and the OECD/Nuclear Energy Agency. Emphasis will also be placed on codes using covariances for specific applications such as uncertainty analysis, data adjustment and data consistency analysis. Recent evaluations contain neutron cross section covariance information for all isotopes of major importance for technological applications of nuclear energy. It is therefore important that the available software tools needed for taking advantage of this information are widely known, as they permit the determination of better safety margins and allow the optimization of more economical designs of nuclear energy systems.

  3. 40 CFR 98.245 - Procedures for estimating missing data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... estimating missing data. For missing feedstock flow rates, product flow rates, and carbon contents, use the same procedures as for missing flow rates and carbon contents for fuels as specified in § 98.35....

  4. Conditional Covariance Theory and Detect for Polytomous Items

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2007-01-01

    This paper extends the theory of conditional covariances to polytomous items. It has been proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, given an appropriately chosen composite is positive if, and only if, the two items…

  5. Perturbative approach to covariance matrix of the matter power spectrum

    SciTech Connect

    Mohammed, Irshad; Seljak, Uros; Vlah, Zvonimir

    2016-06-30

    We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10% level up to $k \sim 1\,h\,\mathrm{Mpc}^{-1}$. We show that all the connected components are dominated by the large-scale modes ($k < 0.1\,h\,\mathrm{Mpc}^{-1}$), regardless of the value of the wavevectors $k, k'$ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher $k$ it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
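
    The disconnected (Gaussian) part that this decomposition starts from has a standard closed form, Cov(k_i, k_i) = 2 P(k_i)^2 / N_modes(k_i); a sketch with an assumed survey volume, binning, and toy power spectrum follows:

        import numpy as np

        def gaussian_power_covariance(k, pk, dk, volume):
            """Disconnected (Gaussian) part of the power spectrum covariance.

            Diagonal approximation: Cov(k_i, k_i) = 2 P(k_i)^2 / N_modes(k_i),
            with N_modes = V * 4*pi*k^2*dk / (2*pi)^3 independent Fourier modes.
            """
            n_modes = volume * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi) ** 3
            return np.diag(2.0 * pk**2 / n_modes)

        k = np.linspace(0.05, 1.0, 20)                # h/Mpc, illustrative binning
        pk = 2e4 * (k / 0.05) ** -1.5                 # toy power spectrum amplitude
        cov_g = gaussian_power_covariance(k, pk, k[1] - k[0], volume=1e9)  # (Mpc/h)^3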

  6. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  7. Covariate-Based Assignment to Treatment Groups: Some Simulation Results.

    ERIC Educational Resources Information Center

    Jain, Ram B.; Hsu, Tse-Chi

    1980-01-01

    Six estimators of treatment effect when assignment to treatment groups is based on the covariate are compared in terms of empirical standard errors and percent relative bias. Results show that the simple analysis of covariance estimator is not always appropriate. (Author/GK)

  8. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  9. Performance of internal covariance estimators for cosmic shear correlation functions

    NASA Astrophysics Data System (ADS)

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2016-03-01

    Data re-sampling methods such as delete-one jackknife, bootstrap or the sub-sample covariance are common tools for estimating the covariance of large-scale structure probes. We investigate different implementations of these methods in the context of cosmic shear two-point statistics. Using lognormal simulations of the convergence field and the corresponding shear field we generate mock catalogues of a known and realistic covariance. For a survey of ∼5000 deg² we find that jackknife, if implemented by deleting sub-volumes of galaxies, provides the most reliable covariance estimates. Bootstrap, in the common implementation of drawing sub-volumes of galaxies, strongly overestimates the statistical uncertainties. In a forecast for the complete 5-yr Dark Energy Survey, we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ωm-σ8 plane as measured with internally estimated covariance matrices is on average ≳85 per cent of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ_8 ∼ σ_8 Ω_m^{0.5} derived from internally estimated covariances is ∼90 per cent of the true uncertainty.

  10. Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies

    ERIC Educational Resources Information Center

    Chen, Jianshen; Kaplan, David

    2015-01-01

    Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…

  11. Covariation Is a Poor Measure of Molecular Coevolution.

    PubMed

    Talavera, David; Lovell, Simon C; Whelan, Simon

    2015-09-01

    Recent developments in the analysis of amino acid covariation are leading to breakthroughs in protein structure prediction, protein design, and prediction of the interactome. It is assumed that observed patterns of covariation are caused by molecular coevolution, where substitutions at one site affect the evolutionary forces acting at neighboring sites. Our theoretical and empirical results cast doubt on this assumption. We demonstrate that the strongest coevolutionary signal is a decrease in evolutionary rate and that unfeasibly long times are required to produce coordinated substitutions. We find that covarying substitutions are mostly found on different branches of the phylogenetic tree, indicating that they are independent events that may or may not be attributable to coevolution. These observations undermine the hypothesis that molecular coevolution is the primary cause of the covariation signal. In contrast, we find that the pairs of residues with the strongest covariation signal tend to have low evolutionary rates, and that it is this low rate that gives rise to the covariation signal. Slowly evolving residue pairs are disproportionately located in the protein's core, which explains covariation methods' ability to detect pairs of residues that are close in three dimensions. These observations lead us to propose the "coevolution paradox": The strength of coevolution required to cause coordinated changes means the evolutionary rate is so low that such changes are highly unlikely to occur. As modern covariation methods may lead to breakthroughs in structural genomics, it is critical to recognize their biases and limitations.

  12. Empirical Performance of Covariates in Education Observational Studies

    ERIC Educational Resources Information Center

    Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate

    2017-01-01

    This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…

  13. Perturbative approach to covariance matrix of the matter power spectrum

    NASA Astrophysics Data System (ADS)

    Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir

    2017-04-01

    We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (supersample variance) and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ∼ 1 h Mpc⁻¹. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc⁻¹), regardless of the value of the wave vectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k, it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.

  14. Choosing covariates in the analysis of clinical trials.

    PubMed

    Beach, M L; Meier, P

    1989-12-01

    Much of the literature on clinical trials emphasizes the importance of adjusting the results for any covariates (baseline variables) for which randomization fails to produce nearly exact balance, but the literature is very nearly devoid of recipes for assessing the consequences of such adjustments. Several years ago, Paul Canner presented an approximate expression for the effect of a covariate adjustment, and he considered its use in the selection of covariates. With the aid of Canner's equation, using both formal analysis and simulation, the impact of covariate adjustment is further explored. Unless tight control over the analysis plans is established in advance, covariate adjustment can lead to seriously misleading inferences. Illustrations from the clinical trials literature are provided.

  15. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
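
    A compressed numpy sketch in the spirit of this factor-plus-thresholding construction: keep the leading principal components as the common-factor part and soft-threshold the off-diagonal residual covariance. The constant-level threshold shown is a simplified stand-in for the paper's adaptive thresholding, and all parameter values are invented:

        import numpy as np

        def factor_covariance(returns, n_factors, threshold):
            """Factor-based covariance with a thresholded residual covariance.

            Keeps the leading eigen-pairs of the sample covariance as the
            common-factor part and applies entrywise soft thresholding to the
            off-diagonal residual ("idiosyncratic") covariance.
            """
            s = np.cov(returns, rowvar=False)
            w, v = np.linalg.eigh(s)
            lead = v[:, -n_factors:] * w[-n_factors:]      # scaled eigenvectors
            factor_part = lead @ v[:, -n_factors:].T
            resid = s - factor_part
            off = resid - np.diag(np.diag(resid))
            off = np.sign(off) * np.maximum(np.abs(off) - threshold, 0.0)
            return factor_part + np.diag(np.diag(resid)) + off

        # Toy usage on white-noise "returns" (200 periods, 30 assets).
        rng = np.random.default_rng(0)
        cov_hat = factor_covariance(rng.normal(size=(200, 30)),
                                    n_factors=3, threshold=0.02)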

  16. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  17. UDU(T) covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU(T), where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
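
    The factorization itself is compact. Below is a minimal textbook-style sketch (not the flight algorithms reviewed in the paper) that computes P = U D U(T) for a symmetric positive-definite P, with U unit upper triangular and D diagonal:

        import numpy as np

        def udu_factorize(P):
            """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
            with U unit upper triangular: a classic U-D filter building block."""
            n = P.shape[0]
            U = np.eye(n)
            d = np.zeros(n)
            # Work backwards: column j only needs the already-finished columns k > j.
            for j in range(n - 1, -1, -1):
                d[j] = P[j, j] - np.sum(d[j + 1:] * U[j, j + 1:] ** 2)
                for i in range(j):
                    U[i, j] = (P[i, j]
                               - np.sum(d[j + 1:] * U[i, j + 1:] * U[j, j + 1:])) / d[j]
            return U, d

        # Quick check on a random symmetric positive-definite matrix.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(5, 5))
        P = A @ A.T + 5 * np.eye(5)
        U, d = udu_factorize(P)
        assert np.allclose(U @ np.diag(d) @ U.T, P)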

  18. Cutaneous melanocytic lesions: do not miss the invisible gorilla.

    PubMed

    Prieto, Victor G

    2012-07-01

    Of all pathology fields, the analysis of melanocytic lesions has one of the highest rates of review for legal reasons, particularly regarding the distinction between nevus and melanoma. Among the most frequently involved are desmoplastic melanoma, nevoid melanoma, and Spitz nevus versus spitzoid melanoma. Therefore, it follows that pathologists and dermatopathologists should pay special attention when dealing with such type of lesions. This review article will emphasize a number of clinical, histologic, and immunohistochemical features we believe are essential when evaluating lesions whose differential diagnosis includes melanoma/nevus. Furthermore, we want to stress the importance of examining the entire slide within the context of all available information in order to not miss the invisible gorilla in the slide. Regarding this apparently bizarre choice to illustrate these problems (to not miss an invisible gorilla), we request the reader to continue reading this article to find out why.

  19. Missing... presumed at random: cost-analysis of incomplete data.

    PubMed

    Briggs, Andrew; Clark, Taane; Wolstenholme, Jane; Clarke, Philip

    2003-05-01

    When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost-analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.

  20. Nuclear Forensics Analysis with Missing and Uncertain Data

    DOE PAGES

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
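
    A much-simplified sketch of the imputation idea: draw each missing entry from the empirical distribution of the observed values in its column, and emit several completed copies of the database for training. MCBDG additionally folds in empirical or assumed error distributions; the table and missing fraction below are invented:

        import numpy as np

        def generate_completed_databases(table, n_copies, rng):
            """Fill NaNs in each column by drawing from that column's observed
            values; returns n_copies completed copies of the table, so a learner
            can be trained across them to average over imputation uncertainty."""
            copies = []
            for _ in range(n_copies):
                filled = table.copy()
                for j in range(table.shape[1]):
                    col = table[:, j]
                    missing = np.isnan(col)
                    observed = col[~missing]
                    filled[missing, j] = rng.choice(observed, size=missing.sum())
                copies.append(filled)
            return copies

        rng = np.random.default_rng(2)
        table = rng.normal(size=(100, 6))
        table[rng.random(table.shape) < 0.6] = np.nan   # ~60% absent, as in SFCOMPO
        completed = generate_completed_databases(table, n_copies=10, rng=rng)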

  1. Testing of NASA LaRC Materials under MISSE 6 and MISSE 7 Missions

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S.

    2009-01-01

    The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment for several months. MISSE missions provide an opportunity for developing space-qualifiable materials. Two lasers and a few optical components from NASA Langley Research Center (LaRC) were included in the MISSE 6 mission for long term exposure. MISSE 6 items were characterized and packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability under launch conditions. MISSE 6 was transported to the International Space Station (ISS) via STS-123 on March 11, 2008. The astronauts successfully attached the PEC to external handrails of the ISS and opened the PEC for long term exposure to the space environment. The current plan is to bring the MISSE 6 PEC back to Earth via the STS-128 mission scheduled for launch in August 2009. Currently, preparations for launching the MISSE 7 mission are progressing. Laser and lidar components assembled on a flight-worthy platform are included from NASA LaRC. MISSE 7 is scheduled to be launched on the STS-129 mission. This paper will briefly review recent efforts on the MISSE 6 and MISSE 7 missions at NASA Langley Research Center (LaRC).

  2. Estimation of Covariances on Prompt Fission Neutron Spectra and Impact of the PFNS Model on the Vessel Fluence

    NASA Astrophysics Data System (ADS)

    Berge, Léonie; Litaize, Olivier; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Pénéliau, Yannick; Regnier, David

    2016-02-01

    As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They are of great influence on the final covariance matrix, and therefore, on the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the importance of the choice of fission spectrum model for a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particle emission during fission.

  3. Generalized Covariant Gyrokinetic Dynamics of Magnetoplasmas

    SciTech Connect

    Cremaschini, C.; Tessarotto, M.; Nicolini, P.; Beklemishev, A.

    2008-12-31

    A basic prerequisite for the investigation of relativistic astrophysical magnetoplasmas, occurring typically in the vicinity of massive stellar objects (black holes, neutron stars, active galactic nuclei, etc.), is the accurate description of single-particle covariant dynamics, based on gyrokinetic theory (Beklemishev et al., 1999-2005). Provided radiation-reaction effects are negligible, this is usually based on the assumption that both the space-time metric and the EM fields (in particular the magnetic field) are suitably prescribed and are considered independent of single-particle dynamics, while allowing for the possible presence of gravitational/EM perturbations driven by plasma collective interactions which may naturally arise in such systems. The purpose of this work is the formulation of a generalized gyrokinetic theory based on the synchronous variational principle recently pointed out (Tessarotto et al., 2007), which makes it possible to satisfy exactly the physical realizability condition for the four-velocity. The theory developed here includes the treatment of nonlinear perturbations (gravitational and/or EM) characterized locally, i.e., in the rest frame of a test particle, by short wavelength and high frequency. A basic feature of the approach is to ensure the validity of the theory for both large and vanishing parallel electric fields. It is shown that the correct treatment of EM perturbations occurring in the presence of an intense background magnetic field generally implies the appearance of appropriate four-velocity corrections, which are essential for the description of single-particle gyrokinetic dynamics.

  4. Holographic bound in covariant loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Tamaki, Takashi

    2016-07-01

    We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulas which relate physical quantities such as horizon area to the parameter characterizing holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulas. These results tell us that the holographic bound is satisfied in the large area limit and that the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we also consider Bose-Einstein statistics and show that the above formulas are also useful in this case. By applying the formulas, we can understand intrinsic features of the Bose-Einstein condensate, which corresponds to the case in which the horizon area consists almost entirely of punctures in the ground state. When this phenomenon occurs, the area is approximately constant with respect to the parameter characterizing the temperature. When it breaks down, the area increases rapidly, suggesting a phase transition from quantum to classical area.

  5. General covariance from the quantum renormalization group

    NASA Astrophysics Data System (ADS)

    Shyam, Vasudev

    2017-03-01

    The quantum renormalization group (QRG) is a realization of holography through a coarse-graining prescription that maps the beta functions of a quantum field theory thought to live on the "boundary" of some space to holographic actions in the "bulk" of this space. A consistency condition will be proposed that translates into general covariance of the gravitational theory in the D +1 dimensional bulk. This emerges from the application of the QRG on a planar matrix field theory living on the D dimensional boundary. This will be a particular form of the Wess-Zumino consistency condition that the generating functional of the boundary theory needs to satisfy. In the bulk, this condition forces the Poisson bracket algebra of the scalar and vector constraints of the dual gravitational theory to close in a very specific manner, namely, the manner in which the corresponding constraints of general relativity do. A number of features of the gravitational theory will be fixed as a consequence of this form of the Poisson bracket algebra. In particular, it will require the metric beta function to be of the gradient form.

  6. Frame Indifferent (Truly Covariant) Formulation of Electrodynamics

    NASA Astrophysics Data System (ADS)

    Christov, Christo

    2010-10-01

    The electromagnetic field is considered from the point of view of continuum mechanics. It is shown that Maxwell's equations are mathematically strict corollaries of the equations of motion of an elastic incompressible liquid. If the concept of frame indifference (material invariance) is applied to the model of an elastic liquid, then the partial time derivatives have to be replaced by the convective time derivative in the momentum equations, and by the Oldroyd upper-convected derivative in the constitutive relation. The convective/convected terms involve the velocity at a point of the field, and as a result, when deriving the Maxwell form of the equations, one arrives at equations which contain both the terms of Maxwell's equations and the so-called laws of motional EMF: Faraday's, Oersted-Ampere's, and the Lorentz-force law. Thus a unification of electromagnetism is achieved. Since the new model is frame indifferent, it is truly covariant in the sense that the governing system is invariant when changing to a coordinate frame that can accelerate or even deform in time.

  7. Predicting the risk of toxic blooms of golden alga from cell abundance and environmental covariates

    USGS Publications Warehouse

    Patino, Reynaldo; VanLandeghem, Matthew M.; Denny, Shawn

    2016-01-01

    Golden alga (Prymnesium parvum) is a toxic haptophyte that has caused considerable ecological damage to marine and inland aquatic ecosystems worldwide. Studies focused primarily on laboratory cultures have indicated that toxicity is poorly correlated with the abundance of golden alga cells. This relationship, however, has not been rigorously evaluated in the field where environmental conditions are much different. The ability to predict toxicity using readily measured environmental variables and golden alga abundance would allow managers rapid assessments of ichthyotoxicity potential without laboratory bioassay confirmation, which requires additional resources to accomplish. To assess the potential utility of these relationships, several a priori models relating lethal levels of golden alga ichthyotoxicity to golden alga abundance and environmental covariates were constructed. Model parameters were estimated using archived data from four river basins in Texas and New Mexico (Colorado, Brazos, Red, Pecos). Model predictive ability was quantified using cross-validation, sensitivity, and specificity, and the relative ranking of environmental covariate models was determined by Akaike Information Criterion values and Akaike weights. Overall, abundance was a generally good predictor of ichthyotoxicity as cross validation of golden alga abundance-only models ranged from ∼ 80% to ∼ 90% (leave-one-out cross-validation). Environmental covariates improved predictions, especially the ability to predict lethally toxic events (i.e., increased sensitivity), and top-ranked environmental covariate models differed among the four basins. These associations may be useful for monitoring as well as understanding the abiotic factors that influence toxicity during blooms.
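
    A toy version of this modeling workflow: candidate logistic models relating lethal toxicity to abundance and one environmental covariate, ranked by AIC and scored for sensitivity. The data, coefficients, and covariate choice are invented and do not reflect the study's basins:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 400
        log_abund = rng.normal(4.0, 1.0, n)        # log10 golden alga cells/mL (toy)
        salinity = rng.normal(2.0, 0.5, n)         # illustrative covariate
        logit = -12.0 + 2.5 * log_abund + 1.0 * salinity
        toxic = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

        # Candidate a priori models, ranked by AIC as in the study's approach.
        X1 = sm.add_constant(log_abund)
        X2 = sm.add_constant(np.column_stack([log_abund, salinity]))
        m1 = sm.Logit(toxic, X1).fit(disp=0)
        m2 = sm.Logit(toxic, X2).fit(disp=0)
        print("abundance only AIC:", m1.aic)
        print("abundance + covariate AIC:", m2.aic)

        # Sensitivity: fraction of truly toxic events flagged at p >= 0.5.
        pred = m2.predict(X2) >= 0.5
        sensitivity = (pred & (toxic == 1)).sum() / (toxic == 1).sum()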

  8. Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II

    SciTech Connect

    Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch.

    2015-01-15

    The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.

  9. Correcting eddy-covariance flux underestimates over a grassland.

    SciTech Connect

    Twine, T. E.; Kustas, W. P.; Norman, J. M.; Cook, D. R.; Houser, P. R.; Meyers, T. P.; Prueger, J. H.; Starks, P. J.; Wesely, M. L.; Environmental Research; Univ. of Wisconsin at Madison; DOE; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administration

    2000-06-08

    Independent measurements of the major energy balance flux components are not often consistent with the principle of conservation of energy. This is referred to as a lack of closure of the surface energy balance. Most results in the literature have shown the sum of sensible and latent heat fluxes measured by eddy covariance to be less than the difference between net radiation and soil heat fluxes. This under-measurement of sensible and latent heat fluxes by eddy-covariance instruments has occurred in numerous field experiments and among many different manufacturers of instruments. Four eddy-covariance systems consisting of the same models of instruments were set up side-by-side during the Southern Great Plains 1997 Hydrology Experiment and all systems under-measured fluxes by similar amounts. One of these eddy-covariance systems was collocated with three other types of eddy-covariance systems at different sites; all of these systems under-measured the sensible and latent-heat fluxes. The net radiometers and soil heat flux plates used in conjunction with the eddy-covariance systems were calibrated independently and measurements of net radiation and soil heat flux showed little scatter for various sites. The 10% absolute uncertainty in available energy measurements was considerably smaller than the systematic closure problem in the surface energy budget, which varied from 10 to 30%. When available-energy measurement errors are known and modest, eddy-covariance measurements of sensible and latent heat fluxes should be adjusted for closure. Although the preferred method of energy balance closure is to maintain the Bowen ratio, the method for obtaining closure appears to be less important than assuring that eddy-covariance measurements are consistent with conservation of energy. Based on numerous measurements over a sorghum canopy, carbon dioxide fluxes, which are measured by eddy covariance, are underestimated by the same factor as eddy covariance evaporation
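
    The Bowen-ratio closure adjustment mentioned above amounts to a one-line rescaling that preserves H/LE while forcing H + LE to match the available energy; the flux values below are invented:

        # Bowen-ratio closure: scale H and LE by a common factor so that
        # H + LE matches the available energy Rn - G, preserving H/LE.
        rn, g = 450.0, 40.0        # net radiation, soil heat flux (W m^-2)
        h, le = 120.0, 220.0       # eddy-covariance sensible, latent heat (W m^-2)

        factor = (rn - g) / (h + le)
        h_adj, le_adj = factor * h, factor * le
        print(f"closure factor {factor:.2f}; adjusted H={h_adj:.0f}, LE={le_adj:.0f} W m^-2")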

  10. A Bayesian Semiparametric Multivariate Causal Model, with Automatic Covariate Selection and for Possibly-Nonignorable Missing Data

    ERIC Educational Resources Information Center

    Karabatsos, G.; Walker, S.G.

    2010-01-01

    Causal inference is central to educational research, where in data analysis the aim is to learn the causal effects of educational treatments on academic achievement, to evaluate educational policies and practice. Compared to a correlational analysis, a causal analysis enables policymakers to make more meaningful statements about the efficacy of…

  11. Generation of covariance data among values from a single set of experiments

    SciTech Connect

    Smith, D.L.

    1992-12-01

    Modern nuclear data evaluation methods demand detailed uncertainty information for all input results to be considered. It can be shown from basic statistical principles that provision of a covariance matrix for a set of data provides the necessary information for its proper consideration in the context of other included experimental data and/or a priori representations of the physical parameters in question. This paper examines how an experimenter should go about preparing the covariance matrix for any single experimental data set he intends to report. The process involves detailed examination of the experimental procedures, identification of all error sources (both random and systematic), and consideration of any internal discrepancies. Some specific examples are given to illustrate the methods and principles involved.
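
    For the common case of uncorrelated statistical errors plus one fully correlated systematic (for example, a shared normalization), the covariance matrix such a report would contain takes a simple form; the values below are hypothetical:

        import numpy as np

        # Hypothetical measured cross sections with statistical (uncorrelated)
        # errors and one fully correlated systematic (normalization) component.
        values = np.array([1.02, 0.97, 1.10, 1.25])
        stat = np.array([0.02, 0.02, 0.03, 0.03])      # random errors
        sys_frac = 0.04                                # 4% common normalization error
        sys = sys_frac * values

        # C_ij = delta_ij * stat_i**2 + sys_i * sys_j
        cov = np.diag(stat**2) + np.outer(sys, sys)
        corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))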

  12. Generation of covariance data among values from a single set of experiments

    SciTech Connect

    Smith, D.L.

    1992-01-01

    Modern nuclear data evaluation methods demand detailed uncertainty information for all input results to be considered. It can be shown from basic statistical principles that provision of a covariance matrix for a set of data provides the necessary information for its proper consideration in the context of other included experimental data and/or a priori representations of the physical parameters in question. This paper examines how an experimenter should go about preparing the covariance matrix for any single experimental data set he intends to report. The process involves detailed examination of the experimental procedures, identification of all error sources (both random and systematic), and consideration of any internal discrepancies. Some specific examples are given to illustrate the methods and principles involved.

  13. High-resolution cortical dipole layer imaging based on noise covariance matrix.

    PubMed

    Hori, Junichi; Watanabe, Satoru

    2009-01-01

    We have investigated suitable spatial filters for inverse estimation of cortical dipole imaging from the scalp electroencephalogram. The effects of incorporating statistical information of noise into inverse procedures were examined by computer simulations and experimental studies. The parametric projection filter (PPF) was applied to an inhomogeneous three-sphere volume conductor head model. The noise covariance matrix was estimated by applying independent component analysis (ICA) to the scalp potentials. Moreover, the sampling method of the noise information was examined for calculating the noise covariance matrix. The simulation results suggest that the spatial resolution was improved while the effect of noise was suppressed by including the separated noise at the time instant of imaging and by adjusting the number of samples according to the signal-to-noise ratio.

  14. A Covariance Analysis Tool for Assessing Fundamental Limits of SIM Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2007-01-01

    This paper presents a performance analysis of the instrument pointing control system for NASA's Space Interferometer Mission (SIM). SIM has a complex pointing system that uses a fast steering mirror in combination with a multirate control architecture to blend feed forward information with feedback information. A pointing covariance analysis tool (PCAT) is developed specifically to analyze systems with such complexity. The development of PCAT as a mathematical tool for covariance analysis is outlined in the paper. PCAT is then applied to studying performance of SIM's science pointing system. The analysis reveals and clearly delineates a fundamental limit that exists for SIM pointing performance. The limit is especially stringent for dim star targets. Discussion of the nature of the performance limit is provided, and methods are suggested to potentially improve pointing performance.

  15. MISSE 6-Testing Materials in Space

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S; Kinard, William H.

    2008-01-01

    The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment by placing them in the space environment for several months. In this paper, a few materials and components from NASA Langley Research Center (LaRC) that have been flown on the MISSE 6 mission will be discussed. These include laser and optical elements for photonic devices. The pre-characterized MISSE 6 materials were packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability under launch conditions. Subsequently, the MISSE 6 PEC was transported by the STS-123 mission to the International Space Station (ISS) on March 11, 2008. The astronauts successfully attached the PEC to external handrails and opened the PEC for long term exposure to the space environment.

  16. Some Activities of MISSE 6 Mission

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S.

    2009-01-01

    The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment for several months. In this paper, a few laser and optical elements from NASA Langley Research Center (LaRC) that have been flown on the MISSE 6 mission will be discussed. These items were characterized and packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability under launch conditions. Subsequently, the MISSE 6 PEC was transported by the STS-123 mission to the International Space Station (ISS) on March 11, 2008. The astronauts successfully attached the PEC to external handrails and opened the PEC for long term exposure to the space environment. The plan is to retrieve the MISSE 6 PEC on the STS-128 mission in August 2009.

  17. Discovery of a missing disease spreader

    NASA Astrophysics Data System (ADS)

    Maeno, Yoshiharu

    2011-10-01

    This study presents a method to discover an outbreak of an infectious disease in a region for which data are missing, but which is at work as a disease spreader. Node discovery for the spread of an infectious disease is defined as discriminating between the nodes neighboring a missing disease-spreader node and the rest, given a dataset on the number of cases. The spread is described by stochastic differential equations. A perturbation theory quantifies the impact of the missing spreader on the moments of the number of cases. Statistical discriminators examine the mid-body or tail-ends of the probability density function and search for the disturbance from the missing spreader. They are tested with computationally synthesized datasets, and applied to the SARS outbreak and flu pandemic.

  18. Plastic Surgeons Often Miss Patients' Mental Disorders

    MedlinePlus

    https://medlineplus.gov/news/fullstory_163120.html -- Nearly one in 10 patients seeking facial plastic surgery suffers from a mental illness that distorts ...

  19. Diet History Questionnaire II: Missing & Error Codes

    Cancer.gov

    A missing code indicates that the respondent skipped a question when a response was required. An error character indicates that the respondent marked two or more responses to a question where only one answer was appropriate.

  20. Clustering with Missing Values: No Imputation Required

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
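
    Not the KSC algorithm itself, but the marginalization baseline it is compared against is easy to sketch: a clustering distance computed only over features that both items actually observe (illustrative code, hypothetical data):

      import numpy as np

      def masked_distance(x, y):
          """Squared Euclidean distance over co-observed features only,
          rescaled by the fraction observed (a marginalization heuristic)."""
          mask = ~np.isnan(x) & ~np.isnan(y)
          if not mask.any():
              return np.inf
          d = np.sum((x[mask] - y[mask]) ** 2)
          return d * x.size / mask.sum()   # rescale for missing dimensions

      # Tiny example with NaN marking missing values.
      a = np.array([1.0, np.nan, 2.0])
      b = np.array([1.5, 0.5, np.nan])
      print(masked_distance(a, b))  # uses only the first feature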

  1. An introduction to modern missing data analyses.

    PubMed

    Baraldi, Amanda N; Enders, Craig K

    2010-02-01

    A great deal of recent methodological research has focused on two modern missing data analysis methods: maximum likelihood and multiple imputation. These approaches are advantageous relative to traditional techniques (e.g., deletion and mean imputation) because they require less stringent assumptions and mitigate the pitfalls of those older methods. This article explains the theoretical underpinnings of missing data analyses, gives an overview of traditional missing data techniques, and provides accessible descriptions of maximum likelihood and multiple imputation. In particular, the article focuses on maximum likelihood estimation and presents two analysis examples from the Longitudinal Study of American Youth data, one of which includes a description of the use of auxiliary variables. Finally, the paper illustrates ways that researchers can use intentional, or planned, missing data to enhance their research designs.
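
    As a concrete, if minimal, illustration of the multiple-imputation idea -- impute several times, analyze each completed data set, pool the results -- here is a sketch using scikit-learn's IterativeImputer on hypothetical data; a production analysis would typically use a dedicated package and Rubin's pooling rules:

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))
      X[rng.random(X.shape) < 0.2] = np.nan   # 20% missing at random

      # Create m=5 completed data sets with different random seeds, then
      # pool an estimate (here, the mean of column 0) across imputations.
      estimates = []
      for seed in range(5):
          imp = IterativeImputer(sample_posterior=True, random_state=seed)
          X_complete = imp.fit_transform(X)
          estimates.append(X_complete[:, 0].mean())

      print("pooled estimate:", np.mean(estimates))
      print("between-imputation variance:", np.var(estimates, ddof=1))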

  2. Missed Radiation Therapy and Cancer Recurrence

    Cancer.gov

    Patients who miss radiation therapy sessions during cancer treatment have an increased risk of their disease returning, even if they eventually complete their course of radiation treatment, according to a new study.

  3. Estimated Environmental Exposures for MISSE-7B

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Moore, Chip; Norwood, Joseph K.; Henrie, Ben; DeGroh, Kim

    2012-01-01

    This paper details the 18-month environmental exposure for Materials International Space Station Experiment 7B (MISSE-7B) ram and wake sides. This includes atomic oxygen, ultraviolet radiation, particulate radiation, thermal cycling, meteoroid/space debris impacts, and observed contamination. Atomic oxygen fluence was determined by measured mass and thickness loss of polymers of known reactivity. Diodes sensitive to ultraviolet light actively measured solar radiation incident on the experiment. Comparisons to earlier MISSE flights are discussed.
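
    The atomic oxygen fluence determination described above follows a standard witness-sample calculation: fluence equals mass loss divided by exposed area, material density, and erosion yield. A sketch with Kapton-like values (numbers illustrative, not MISSE-7B measurements):

      # Atomic oxygen fluence from witness-sample mass loss:
      #   F = dm / (A * rho * Ey)
      # Values below are illustrative, not MISSE-7B data.
      dm = 12.0e-6          # mass loss, kg
      A = 4.0e-4            # exposed area, m^2
      rho = 1420.0          # Kapton density, kg/m^3
      Ey_cm3 = 3.0e-24      # erosion yield, cm^3/atom
      Ey = Ey_cm3 * 1e-6    # convert to m^3/atom

      fluence = dm / (A * rho * Ey)   # atoms/m^2
      print(f"AO fluence: {fluence:.2e} atoms/m^2")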

  4. Winnicott and Lacan: a missed encounter?

    PubMed

    Vanier, Alain

    2012-04-01

    Winnicott was able to say that Lacan's paper on the mirror stage "had certainly influenced" him, while Lacan argued that he found his object a in Winnicott's transitional object. By following the development of their personal relations, as well as of their theoretical discussions, it is possible to argue that this was a missed encounter--yet a happily missed one, since the misunderstandings of their theoretical exchanges allowed each of them to clarify concepts otherwise difficult to discern.

  5. Defense genes missing from the flight division.

    PubMed

    Magor, Katharine E; Miranzo Navarro, Domingo; Barber, Megan R W; Petkau, Kristina; Fleming-Canepa, Ximena; Blyth, Graham A D; Blaine, Alysson H

    2013-11-01

    Birds have a smaller repertoire of immune genes than mammals. In our efforts to study antiviral responses to influenza in avian hosts, we have noted key genes that appear to be missing. As a result, we speculate that birds have impaired detection of viruses and intracellular pathogens. Birds are missing TLR8, a detector for single-stranded RNA. Chickens also lack RIG-I, the intracellular detector for single-stranded viral RNA. Riplet, an activator for RIG-I, is also missing in chickens. IRF3, the nuclear activator of interferon-beta in the RIG-I pathway, is missing in birds. Downstream of interferon (IFN) signaling, some of the antiviral effectors are missing, including ISG15, ISG54, and ISG56 (IFITs). Birds have only three antibody isotypes, and IgD is missing. Ducks, but not chickens, make an unusual truncated IgY antibody that is missing the Fc fragment. Chickens have an expanded family of LILR leukocyte receptor genes, called CHIR genes, with hundreds of members, including several that encode IgY Fc receptors. Intriguingly, LILR homologues appear to be missing in ducks, including these IgY Fc receptors. The truncated IgY in ducks and the duplicated IgY receptor genes in chickens may both have resulted from selective pressure by a pathogen on IgY-FcR interactions. Birds have a minimal MHC, and the TAP transport and presentation of peptides on MHC class I is constrained, limiting function. Perhaps removing some constraint, ducks appear to lack tapasin, a chaperone involved in loading peptides on MHC class I. Finally, the absence of lymphotoxin-alpha and -beta may account for the observed lack of lymph nodes in birds. As illustrated by these examples, the picture that emerges is one of some impairment of the immune response to viruses in birds, either a cause or a consequence of the host-pathogen arms race and the long evolutionary relationship of birds and RNA viruses.

  6. Interaction between previous beliefs and cue predictive value in covariation-based causal induction.

    PubMed

    Catena, Andrés; Maldonado, Antonio; Perales, José C; Cándido, Antonio

    2008-06-01

    The main aim of this work was to show the impact of preexisting causal beliefs on causal induction from cause-effect co-occurrence information, when several cues compete with each other for predicting the same effect. Two different causal scenarios -- one social (a), the other medical (b) -- were used to check the generality of the effects. In Experiments 1a and 1b, participants were provided information on the co-occurrence of a two-cause compound and an effect, but not about the potential relationship between each cause by its own and the effect. As expected, prior beliefs -- induced by means of instructions -- strongly modulated the causal strength assigned to each element of the compound. In Experiments 2a and 2b, covariation evidence was provided, not only about the predictive value of the two-cause compound, but also about one of the elements of the compound. When this evidence was available, prior beliefs had less impact on judgments, and these were mostly guided by the relative predictive value of the cue. These results demonstrate the involvement of inferential integrative mechanisms in the generation of causal knowledge and show that single covariation detection mechanisms -- either rule-based or associative -- are insufficient to account for human causal judgment. At the same time, the fact that the power of new covariational evidence to change prior beliefs depended on the availability of information on the relative (conditional) predictive value of the target candidate cause suggests that causal knowledge derived from information on causal mechanisms and from covariation probably share a common representational basis.

  7. MISSE 1 and 2 Tray Temperature Measurements

    NASA Technical Reports Server (NTRS)

    Harvey, Gale A.; Kinard, William H.

    2006-01-01

    The Materials International Space Station Experiment (MISSE 1 & 2) was deployed August 10, 2001 and retrieved July 30, 2005. The experiment is a cooperative endeavor by NASA-LaRC, NASA-GRC, NASA-MSFC, NASA-JSC, the Materials Laboratory at the Air Force Research Laboratory, and the Boeing Phantom Works. The objective of the experiment is to evaluate the performance, stability, and long-term survivability of materials and components planned for use by NASA and DOD on future LEO, synchronous orbit, and interplanetary space missions. Temperature is an important parameter in the evaluation of space environmental effects on materials. MISSE 1 & 2 carried autonomous temperature data loggers to measure the temperature of each of the four experiment trays. The MISSE tray-temperature data loggers have one external thermistor data channel and a 12-bit digital converter. The MISSE experiment trays were exposed to the ISS space environment for nearly four times the nominal design lifetime of the experiment. Nevertheless, all of the data loggers provided useful temperature measurements of MISSE. The temperature measurement system has been discussed in a previous paper. This paper presents temperature measurements of MISSE payload experiment carriers (PECs) 1 and 2 experiment trays.
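
    The logger hardware details are not given in the abstract, but converting a 12-bit ADC reading from a thermistor divider into temperature is a standard exercise; a sketch using the beta-parameter thermistor model (all component values hypothetical, not the MISSE hardware):

      import math

      # Hypothetical 12-bit logger: thermistor in a divider with R_FIXED,
      # beta-model thermistor (R0 at T0 = 25 C). Values are illustrative.
      V_REF, R_FIXED = 3.3, 10_000.0
      R0, BETA, T0 = 10_000.0, 3950.0, 298.15

      def counts_to_celsius(counts, full_scale=4095):
          v = V_REF * counts / full_scale                  # counts -> volts
          r_therm = R_FIXED * v / (V_REF - v)              # divider -> ohms
          inv_t = 1.0 / T0 + math.log(r_therm / R0) / BETA # beta equation
          return 1.0 / inv_t - 273.15

      print(f"{counts_to_celsius(2048):.1f} C")  # mid-scale reading ~ 25 C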

  8. Perfect Phylogeny Problems with Missing Values.

    PubMed

    Kirkpatrick, Bonnie; Stevens, Kristian

    2014-01-01

    The perfect phylogeny problem is of central importance to both evolutionary biology and population genetics. Missing values are a common occurrence in both sequence and genotype data, but they make the problem of finding a perfect phylogeny NP-hard even for binary characters. We introduce new and efficient perfect phylogeny algorithms for broad classes of binary and multistate data with missing values. Specifically, we address binary missing data consistent with the rich data hypothesis (RDH) introduced by Halperin and Karp and give an efficient algorithm for enumerating phylogenies. This algorithm is useful for computing the probability of data with missing values under the coalescent model. In addition, we use the partition intersection (PI) graph and chordal graph theory to generalize the RDH to multistate characters with missing values. For a bounded number of states, we provide a fixed-parameter tractable algorithm for the perfect phylogeny problem with missing data. Utilizing the PI graph, we are able to show that under multiple biologically motivated models for character data, our generalized RDH holds with high probability, and we evaluate our results with extensive empirical analysis.
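
    Not the RDH enumeration algorithm of the paper, but the classic compatibility check it builds on: for binary characters, a perfect phylogeny exists iff no pair of sites exhibits all four gametes, and with missing values one can only test rows where both sites are observed -- which is exactly why missingness makes the full problem hard. A minimal sketch:

      import numpy as np
      from itertools import combinations

      def four_gamete_conflicts(M):
          """Return pairs of binary sites showing all four gametes.
          Missing entries (np.nan) are skipped, so passing this test is
          only necessary, not sufficient, for a perfect phylogeny."""
          conflicts = []
          for i, j in combinations(range(M.shape[1]), 2):
              rows = ~np.isnan(M[:, i]) & ~np.isnan(M[:, j])
              gametes = {(a, b) for a, b in M[rows][:, [i, j]]}
              if len(gametes) == 4:
                  conflicts.append((i, j))
          return conflicts

      M = np.array([[0, 0, 1],
                    [0, 1, np.nan],
                    [1, 0, 0],
                    [1, 1, 1.0]])
      print(four_gamete_conflicts(M))  # sites 0 and 1 show all four gametes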

  9. Misconceptions on Missing Data in RAD-seq Phylogenetics with a Deep-scale Example from Flowering Plants.

    PubMed

    Eaton, Deren A R; Spriggs, Elizabeth L; Park, Brian; Donoghue, Michael J

    2016-10-18

    Restriction-site associated DNA (RAD) sequencing and related methods rely on the conservation of enzyme recognition sites to isolate homologous DNA fragments for sequencing, with the consequence that mutations disrupting these sites lead to missing information. There is thus a clear expectation for how missing data should be distributed, with fewer loci recovered between more distantly related samples. This observation has led to a related expectation: that RAD-seq data are insufficiently informative for resolving deeper scale phylogenetic relationships. Here we investigate the relationship between missing information among samples at the tips of a tree and information at edges within it. We re-analyze and review the distribution of missing data across ten RAD-seq data sets and carry out simulations to determine expected patterns of missing information. We also present new empirical results for the angiosperm clade Viburnum (Adoxaceae, with a crown age >50 Ma) for which we examine phylogenetic information at different depths in the tree and with varied sequencing effort. The total number of loci, the proportion that are shared, and phylogenetic informativeness varied dramatically across the examined RAD-seq data sets. Insufficient or uneven sequencing coverage accounted for similar proportions of missing data as dropout from mutation-disruption. Simulations reveal that mutation-disruption, which results in phylogenetically distributed missing data, can be distinguished from the more stochastic patterns of missing data caused by low sequencing coverage. In Viburnum, doubling sequencing coverage nearly doubled the number of parsimony informative sites, and increased by >10X the number of loci with data shared across >40 taxa. Our analysis leads to a set of practical recommendations for maximizing phylogenetic information in RAD-seq studies. [hierarchical redundancy; phylogenetic informativeness; quartet informativeness; Restriction-site associated DNA (RAD

  10. Inflation in general covariant theory of gravity

    NASA Astrophysics Data System (ADS)

    Huang, Yongqing; Wang, Anzhong; Wu, Qiang

    2012-10-01

    In this paper, we study inflation in the framework of the nonrelativistic general covariant theory of Hořava-Lifshitz gravity with the projectability condition and an arbitrary coupling constant λ. We find that the Friedmann-Robertson-Walker (FRW) universe is necessarily flat in such a setup. We work out explicitly the linear perturbations of the flat FRW universe without specializing to a particular gauge and find that the perturbations are different from those obtained in general relativity, because of the presence of the high-order spatial derivative terms. Applying the general formulas to a single scalar field, we show that in the sub-horizon regions, the metric and scalar field are tightly coupled and have the same oscillating frequencies. In the super-horizon regions, the perturbations become adiabatic, and the comoving curvature perturbation is constant. We also calculate the power spectra and indices of both the scalar and tensor perturbations, and express them explicitly in terms of the slow-roll parameters and the coupling constants of the high-order spatial derivative terms. In particular, we find that the perturbations, of both scalar and tensor, are almost scale-invariant and, with some reasonable assumptions on the coupling coefficients, the spectral index of the tensor perturbation is the same as that given in the minimal scenario in general relativity (GR), whereas the index for the scalar perturbation in general depends on λ and differs from the standard GR value. The ratio of the scalar and tensor power spectra depends on the high-order spatial derivative terms and can differ significantly from that of GR.
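
    For reference, the textbook single-field slow-roll results in GR, against which the paper's modified indices are compared (these are the standard formulas, not the Hořava-Lifshitz expressions derived in the paper):

      n_s - 1 \simeq -6\epsilon + 2\eta, \qquad
      n_t \simeq -2\epsilon, \qquad
      r \equiv \frac{\mathcal{P}_t}{\mathcal{P}_s} \simeq 16\epsilon

    where \epsilon and \eta are the usual slow-roll parameters of the inflaton potential.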

  11. Schwinger mechanism in linear covariant gauges

    NASA Astrophysics Data System (ADS)

    Aguilar, A. C.; Binosi, D.; Papavassiliou, J.

    2017-02-01

    In this work we explore the applicability of a special gluon mass generating mechanism in the context of the linear covariant gauges. In particular, the implementation of the Schwinger mechanism in pure Yang-Mills theories hinges crucially on the inclusion of massless bound-state excitations in the fundamental nonperturbative vertices of the theory. The dynamical formation of such excitations is controlled by a homogeneous linear Bethe-Salpeter equation, whose nontrivial solutions have been studied only in the Landau gauge. Here, the form of this integral equation is derived for general values of the gauge-fixing parameter, under a number of simplifying assumptions that reduce the degree of technical complexity. The kernel of this equation consists of fully dressed gluon propagators, for which recent lattice data are used as input, and of three-gluon vertices dressed by a single form factor, which is modeled by means of certain physically motivated Ansätze. The gauge-dependent terms contributing to this kernel impose considerable restrictions on the infrared behavior of the vertex form factor; specifically, only infrared finite Ansätze are compatible with the existence of nontrivial solutions. When such Ansätze are employed, the numerical study of the integral equation reveals a continuity in the type of solutions as one varies the gauge-fixing parameter, indicating a smooth departure from the Landau gauge. Instead, the logarithmically divergent form factor displaying the characteristic "zero crossing," while perfectly consistent in the Landau gauge, has to undergo a dramatic qualitative transformation away from it, in order to yield acceptable solutions. The possible implications of these results are briefly discussed.

  12. Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins

    NASA Astrophysics Data System (ADS)

    Tolwinski-Ward, S. E.; Wang, D.

    2015-12-01

    Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for answers to questions such as:

    - Does the relationship between different categories of TCs differ statistically by basin?
    - Which climatic predictors have significant relationships with TC activity in each basin?
    - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability?
    - How can a portfolio of insured property be optimized across space to minimize risk?

    Although we present results of our model applied to TCs, the framework generalizes to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
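
    The full Bayesian multivariate model is beyond a snippet, but its basic building block -- a Poisson regression of annual basin counts on climate indices -- can be sketched with statsmodels (synthetic data; the index names are placeholders, not the paper's predictors):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n_years = 50
      enso = rng.normal(size=n_years)   # placeholder climate index
      amo = rng.normal(size=n_years)    # placeholder climate index

      # Synthetic annual TC counts whose log-rate depends on the indices.
      lam = np.exp(1.5 + 0.4 * enso - 0.2 * amo)
      counts = rng.poisson(lam)

      X = sm.add_constant(np.column_stack([enso, amo]))
      fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      print(fit.summary())   # coefficients ~ (1.5, 0.4, -0.2) up to noise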

  13. Action recognition from video using feature covariance matrices.

    PubMed

    Guo, Kai; Ishwar, Prakash; Konrad, Janusz

    2013-06-01

    We propose a general framework for fast and accurate recognition of actions in video using empirical covariance matrices of features. A dense set of spatio-temporal feature vectors is computed from video to provide a localized description of the action and subsequently aggregated into an empirical covariance matrix that compactly represents the action. Two supervised learning methods for action recognition are developed using feature covariance matrices. Common to both methods is the transformation of the classification problem in the closed convex cone of covariance matrices into an equivalent problem in the vector space of symmetric matrices via the matrix logarithm. The first method applies nearest-neighbor classification using a suitable Riemannian metric for covariance matrices. The second method approximates the logarithm of a query covariance matrix by a sparse linear combination of the logarithms of training covariance matrices; the action label is then determined from the sparse coefficients. Both methods achieve state-of-the-art classification performance on several datasets and are robust to action variability, viewpoint changes, and low object resolution. The proposed framework is conceptually simple and has low storage and computational requirements, making it attractive for real-time implementation.
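
    A minimal sketch of the log-Euclidean nearest-neighbor idea described above (random features stand in for the spatio-temporal descriptors; the paper's Riemannian metric and sparse-coding variants are not reproduced):

      import numpy as np
      from scipy.linalg import logm

      def action_descriptor(features):
          """Empirical covariance of the feature vectors, mapped to the
          vector space of symmetric matrices via the matrix logarithm."""
          C = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
          return logm(C).real

      rng = np.random.default_rng(3)
      train = [(action_descriptor(rng.normal(size=(500, 8))), label)
               for label in ("wave", "run") for _ in range(5)]

      query = action_descriptor(rng.normal(size=(500, 8)))
      # Nearest neighbor under the log-Euclidean (Frobenius) distance.
      label = min(train, key=lambda t: np.linalg.norm(query - t[0]))[1]
      print("predicted action:", label)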

  14. Identifying significant covariates for anti-HIV treatment response: mechanism-based differential equation models and empirical semiparametric regression models.

    PubMed

    Huang, Yangxin; Liang, Hua; Wu, Hulin

    2008-10-15

    In this paper, a mechanism-based ordinary differential equation (ODE) model and a flexible semiparametric regression model are employed to identify the significant covariates for antiretroviral response in AIDS clinical trials. We consider the treatment effect as a function of three factors (or covariates): pharmacokinetics, drug adherence, and susceptibility. Both clinical and simulated data examples are given to illustrate the two modeling approaches. We found that the ODE model is more powerful for modeling the mechanism-based nonlinear relationship between treatment effects and virological response biomarkers. The ODE model is also better at identifying the significant factors for virological response, although it is slightly liberal and tends to include more factors (or covariates) in the model. The semiparametric mixed-effects regression model is very flexible for fitting the virological response data, but it is too liberal in identifying factors for the virological response and may sometimes miss the correct ones. The ODE model is also biologically justifiable and well suited to predictions and simulations for various biological scenarios. The limitations of ODE models include the high cost of computation and the requirement of biological assumptions that may not be easy to validate. The methodologies reviewed in this paper are also generally applicable to studies of other viruses such as hepatitis B virus or hepatitis C virus.
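
    The specific ODE model of the paper is not reproduced here, but a standard three-compartment viral dynamics system of the kind such analyses build on can be integrated in a few lines (parameter values illustrative only, not fitted to any trial):

      import numpy as np
      from scipy.integrate import odeint

      def viral_dynamics(y, t, lam, d, k, delta, p, c, eff):
          """Target cells T, infected cells I, virus V; 'eff' is a
          constant treatment efficacy (illustrative, not estimated)."""
          T, I, V = y
          dT = lam - d * T - (1 - eff) * k * T * V
          dI = (1 - eff) * k * T * V - delta * I
          dV = p * I - c * V
          return [dT, dI, dV]

      t = np.linspace(0, 100, 1000)
      y0 = [1e6, 0.0, 1e3]   # cells/mL, cells/mL, copies/mL
      sol = odeint(viral_dynamics, y0, t,
                   args=(1e4, 0.01, 2.4e-8, 1.0, 3000.0, 23.0, 0.7))
      print("viral load at day 100: %.3g copies/mL" % sol[-1, 2])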

  15. Hawking radiation, covariant boundary conditions, and vacuum states

    SciTech Connect

    Banerjee, Rabin; Kulkarni, Shailesh

    2009-04-15

    The basic characteristics of the covariant chiral current and the covariant chiral energy-momentum tensor are obtained from a chiral effective action. These results are used to justify the covariant boundary condition used in recent approaches to computing the Hawking flux from chiral gauge and gravitational anomalies. We also discuss a connection of our results with the conventional calculation of nonchiral currents and stress tensors in different (Unruh, Hartle-Hawking and Boulware) states.

  16. Covariance Generation Using CONRAD and SAMMY Computer Codes

    SciTech Connect

    Leal, Luiz C; Derrien, Herve; De Saint Jean, C; Noguere, G; Ruggieri, J M

    2009-01-01

    Covariances in the resolved resonance region can be generated using the computer codes CONRAD and SAMMY. These codes use formalisms derived from the R-matrix methodology together with the generalized least-squares technique to obtain resonance parameters along with their covariances. Results of covariance calculations for a simple case, the s-wave resonance parameters of {sup 48}Ti in the energy region 10{sup -5} eV to 300 keV, are compared. The retroactive approach included in CONRAD and SAMMY was used.
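
    The generalized least-squares step mentioned above has a compact linear-algebra core: given a sensitivity matrix G, data covariance V, and prior parameter covariance M, the updated parameter covariance is (G^T V^{-1} G + M^{-1})^{-1}. A toy numerical sketch (matrices hypothetical):

      import numpy as np

      # Hypothetical 2-parameter, 3-observation GLS update.
      G = np.array([[1.0, 0.5],
                    [0.3, 1.2],
                    [0.8, 0.1]])         # sensitivities d(data)/d(params)
      V = np.diag([0.04, 0.09, 0.01])    # data covariance
      M = np.diag([1.0, 1.0])            # prior parameter covariance

      Vinv = np.linalg.inv(V)
      post_cov = np.linalg.inv(G.T @ Vinv @ G + np.linalg.inv(M))
      print("posterior parameter covariance:\n", post_cov)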

  17. Reverse attenuation in interaction terms due to covariate measurement error.

    PubMed

    Muff, Stefanie; Keller, Lukas F

    2015-11-01

    Covariate measurement error may cause biases in parameters of regression coefficients in generalized linear models. The influence of measurement error on interaction parameters has, however, only rarely been investigated in depth, and if so, attenuation effects were reported. In this paper, we show that also reverse attenuation of interaction effects may emerge, namely when heteroscedastic measurement error or sampling variances of a mismeasured covariate are present, which are not unrealistic scenarios in practice. Theoretical findings are illustrated with simulations. A Bayesian approach employing integrated nested Laplace approximations is suggested to model the heteroscedastic measurement error and covariate variances, and an application shows that the method is able to reveal approximately correct parameter estimates.

  18. Covariance matrices and applications to the field of nuclear data

    SciTech Connect

    Smith, D.L.

    1981-11-01

    A student's introduction to covariance error analysis and least-squares evaluation of data is provided. It is shown that the basic formulas used in error propagation can be derived from a consideration of the geometry of curvilinear coordinates. Procedures for deriving covariances for scalar and vector functions of several variables are presented. Proper methods for reporting experimental errors and for deriving covariance matrices from these errors are indicated. The generalized least-squares method for evaluating experimental data is described. Finally, the use of least-squares techniques in data-fitting applications is discussed. Specific examples of the various procedures are presented to clarify the concepts.
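
    The error-propagation formulas such an introduction derives reduce, for a function f(x) of correlated inputs, to the familiar sandwich rule C_f = J C_x J^T with J the Jacobian. A numerical sketch for a scalar function (values hypothetical):

      import numpy as np

      # f(x, y) = x * y with correlated inputs; propagate the covariance.
      x, y = 2.0, 3.0
      C = np.array([[0.01, 0.002],
                    [0.002, 0.04]])     # input covariance (hypothetical)

      J = np.array([[y, x]])            # Jacobian of f at (x, y)
      var_f = (J @ C @ J.T)[0, 0]       # sandwich rule: J C J^T
      print(f"f = {x * y:.2f} +/- {np.sqrt(var_f):.3f}")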

  19. The importance of covariance in nuclear data uncertainty propagation studies

    SciTech Connect

    Benstead, J.

    2012-07-01

    A study has been undertaken to investigate what proportion of the uncertainty propagated through plutonium critical assembly calculations is due to the covariances between the fission cross section in different neutron energy groups. The calculated uncertainties on k{sub eff} show that the presence of covariances between the cross section in different neutron energy groups accounts for approximately 27-37% of the propagated uncertainty due to the plutonium fission cross section. The study also confirmed the validity of employing the sandwich equation, with associated sensitivity and covariance data, instead of a Monte Carlo sampling approach, for calculating uncertainties in linearly varying systems. (authors)
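
    The sandwich equation referenced above combines a per-group sensitivity vector S with the cross-section covariance matrix C as var(k) = S C S^T; the off-diagonal terms of C are exactly what the study quantifies. A toy illustration (numbers hypothetical, chosen only to show the mechanics):

      import numpy as np

      # Hypothetical 3-group sensitivities of k_eff to the fission cross
      # section, and a group-to-group relative covariance matrix.
      S = np.array([0.10, 0.25, 0.15])
      C = np.array([[0.0004, 0.0002, 0.0001],
                    [0.0002, 0.0009, 0.0003],
                    [0.0001, 0.0003, 0.0016]])

      var_full = S @ C @ S                      # sandwich rule, full C
      var_diag = S @ np.diag(np.diag(C)) @ S    # ignoring group covariances
      share = 1 - var_diag / var_full
      print(f"off-diagonal terms contribute {share:.0%} of the variance")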

  20. Reconstruction of missing daily streamflow data using dynamic regression models

    NASA Astrophysics Data System (ADS)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring, and even short gaps in these records can lead to markedly different analysis outputs. Reconstructing the missing portions of incomplete data sets is therefore an important, and challenging, step for environmental modeling, engineering, and research applications. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when only daily streamflow data are available. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models called a dynamic regression model. This model exploits the linear relationship with neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that it yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
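
    A compact sketch of the dynamic-regression idea: regress the target station on a correlated neighbor, then model the residuals with an ARIMA structure; here via statsmodels' ARIMA with an exogenous regressor (synthetic series, not the Durance data):

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(11)
      n = 500
      neighbor = 50 + 10 * np.sin(np.arange(n) / 30) + rng.normal(0, 1, n)

      # Synthetic target: linear in the neighbor plus AR(1) residuals.
      resid = np.zeros(n)
      for t in range(1, n):
          resid[t] = 0.7 * resid[t - 1] + rng.normal(0, 0.5)
      target = 5 + 0.8 * neighbor + resid

      # Dynamic regression: ARIMA errors around a regression on the
      # neighbor station (order (1,0,0) matches the AR(1) residuals above).
      fit = ARIMA(target[:400], exog=neighbor[:400], order=(1, 0, 0)).fit()

      # "Reconstruct" a gap at t = 400..499 from the neighbor's record.
      recon = fit.forecast(steps=100, exog=neighbor[400:])
      print("RMSE over the gap:",
            np.sqrt(np.mean((recon - target[400:]) ** 2)))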