Science.gov

Sample records for missing covariate information

  1. Sequential BART for imputation of missing covariates.

    PubMed

    Xu, Dandan; Daniels, Michael J; Winterstein, Almut G

    2016-07-01

    To conduct comparative effectiveness research using electronic health records (EHR), many covariates are typically needed to adjust for selection and confounding biases. Unfortunately, it is typical to have missingness in these covariates. Using only cases with complete covariates will result in considerable efficiency losses and likely bias. Here, we consider covariates that are missing at random, with a missing data mechanism that may or may not depend on the response. Standard methods for multiple imputation can either fail to capture nonlinear relationships or suffer from incompatibility and uncongeniality issues. We explore a flexible Bayesian nonparametric approach to impute the missing covariates, which involves factoring the joint distribution of the covariates with missingness into a set of sequential conditionals and applying Bayesian additive regression trees to model each of these univariate conditionals. Using data augmentation, the posterior for each conditional can be sampled simultaneously. We provide details on the computational algorithm and make comparisons to other methods, including parametric sequential imputation and two versions of multiple imputation by chained equations. We illustrate the proposed approach on EHR data from an affiliated tertiary care institution to examine factors related to hyperglycemia. PMID:26980459
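
    The sequential-conditional factorization lends itself to a short sketch. The code below is our illustration, not the authors' implementation: scikit-learn's GradientBoostingRegressor stands in as a rough surrogate for BART (a faithful version would draw each imputation from a BART posterior), and all names and settings are assumptions.

    ```python
    # Sketch of sequential-conditional imputation; gradient boosted trees
    # stand in for BART, which would supply genuine posterior draws.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def sequential_impute(X, y, n_sweeps=5, seed=0):
        """Cycle through covariate columns, imputing each from the others and y."""
        rng = np.random.default_rng(seed)
        X = X.copy()
        miss = np.isnan(X)
        col_means = np.nanmean(X, axis=0)
        for j in range(X.shape[1]):          # crude starting values: column means
            X[miss[:, j], j] = col_means[j]
        for _ in range(n_sweeps):
            for j in range(X.shape[1]):      # one univariate conditional per column
                if not miss[:, j].any():
                    continue
                Z = np.column_stack([np.delete(X, j, axis=1), y])
                obs = ~miss[:, j]
                fit = GradientBoostingRegressor().fit(Z[obs], X[obs, j])
                sd = np.std(X[obs, j] - fit.predict(Z[obs]))
                # add residual noise so imputations are draws, not point estimates
                X[miss[:, j], j] = fit.predict(Z[miss[:, j]]) + rng.normal(0, sd, miss[:, j].sum())
        return X
    ```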

  2. Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.

    PubMed

    Li, Shanshan; Ning, Yang

    2015-09-01

    Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes, when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitations of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves where the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator which utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that the estimator remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. PMID:25891918
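
    At a single cutoff, and ignoring the time-dependent and augmented (doubly robust) refinements described above, the simple weighted estimator reduces to an inverse-probability-weighted average. A minimal sketch, with the selection probabilities p_obs treated as known and all names ours:

    ```python
    # Sketch of the simple inverse-probability-weighted estimator at one cutoff.
    import numpy as np

    def ipw_tpr_fpr(marker, event, observed, p_obs, cutoff):
        """TPR/FPR weighting each observed subject by 1/selection probability."""
        marker = np.nan_to_num(marker)   # missing markers get zero weight below anyway
        w = observed / p_obs             # w = 0 where the biomarker is missing
        pos = (marker > cutoff).astype(float)
        tpr = np.sum(w * pos * event) / np.sum(w * event)
        fpr = np.sum(w * pos * (1 - event)) / np.sum(w * (1 - event))
        return tpr, fpr
    ```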

  3. A Semiparametric Missing-Data-Induced Intensity Method for Missing Covariate Data in Individually Matched Case–Control Studies

    PubMed Central

    Gebregziabher, Mulugeta; Langholz, Bryan

    2010-01-01

    In individually matched case–control studies, when some covariates are incomplete, an analysis based on the complete data may result in a large loss of information, both in the missing and in the completely observed variables. This usually results in bias and a loss of efficiency. In this article, we propose a new method for handling the problem of missing covariate data based on a missing-data-induced intensity approach when the missingness mechanism does not depend on case–control status, and show that this leads to a generalization of the missing indicator method. We derive the asymptotic properties of the estimates from the proposed method and, using an extensive simulation study, assess the finite sample performance in terms of bias, efficiency, and 95% confidence coverage under several missing data scenarios. We also make comparisons with complete-case analysis (CCA) and some missing data methods that have been proposed previously. Our results indicate that, under the assumption of predictable missingness, the suggested method provides valid estimation of parameters, is more efficient than CCA, and is competitive with other, more complex methods of analysis. A case–control study of multiple myeloma risk and a polymorphism in the receptor Inter-Leukin-6 (IL-6-α) is used to illustrate our findings. PMID:19751251

  4. Diagnostic Measures for the Cox Regression Model with Missing Covariates

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Ming-Hui

    2015-01-01

    This paper investigates diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for the Cox regression model. Our diagnostics include case-deletion measures, conditional martingale residuals, and score residuals. The Q-distance is proposed to examine the effects of deleting individual observations on the estimates of finite-dimensional and infinite-dimensional parameters. Conditional martingale residuals are used to construct goodness of fit statistics for testing possible misspecification of the model assumptions. A resampling method is developed to approximate the p-values of the goodness of fit statistics. Simulation studies are conducted to evaluate our methods, and a real data set is analyzed to illustrate their use. PMID:26903666

  5. Multiple imputation for IPD meta-analysis: allowing for heterogeneity and studies with missing covariates.

    PubMed

    Quartagno, M; Carpenter, J R

    2016-07-30

    Recently, multiple imputation has been proposed as a tool for individual patient data meta-analysis with sporadically missing observations, and it has been suggested that within-study imputation is usually preferable. However, such within-study imputation cannot handle variables that are completely missing within studies. Further, if some of the contributing studies are relatively small, it may be appropriate to share information across studies when imputing. In this paper, we develop and evaluate a joint modelling approach to multiple imputation of individual patient data in meta-analysis, with an across-study probability distribution for the study-specific covariance matrices. This retains the flexibility to allow for between-study heterogeneity when imputing, while also allowing (i) information on the covariance matrix to be shared across studies when this is appropriate, and (ii) imputation of variables that are wholly missing from studies. Simulation results show both equivalent performance to the within-study imputation approach where this is valid, and good results in more general, practically relevant scenarios with studies of very different sizes, non-negligible between-study heterogeneity, and wholly missing variables. We illustrate our approach using data from an individual patient data meta-analysis of hypertension trials. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26681666

  6. Comparison of Two Approaches for Handling Missing Covariates in Logistic Regression

    ERIC Educational Resources Information Center

    Peng, Chao-Ying Joanne; Zhu, Jin

    2008-01-01

    For the past 25 years, methodological advances have been made in missing data treatment. Most published work has focused on missing data in dependent variables under various conditions. The present study seeks to fill the void by comparing two approaches for handling missing data in categorical covariates in logistic regression: the…

  7. Music Information Services System (MISS).

    ERIC Educational Resources Information Center

    Rao, Paladugu V.

    Music Information Services System (MISS) was developed at the Eastern Illinois University Library to manage the sound recording collection. Operating in a batch mode, MISS keeps track of the inventory of sound recordings, generates necessary catalogs to facilitate the use of the sound recordings, and provides specialized bibliographies of sound…

  8. A New Approach to Handle Missing Covariate Data in Twin Research : With an Application to Educational Achievement Data.

    PubMed

    Schwabe, Inga; Boomsma, Dorret I; Zeeuw, Eveline L de; Berg, Stéphanie M van den

    2016-07-01

    The often-used ACE model, which decomposes phenotypic variance into additive genetic (A), common-environmental (C), and unique-environmental (E) parts, can be extended to include covariates. Collection of these variables, however, often leads to a large amount of missing data, for example when self-reports (e.g. questionnaires) are not fully completed. The usual approach to handling missing covariate data in twin research results in reduced power to detect statistical effects, as only the phenotypic and covariate data of individual twins with complete data can be used. Here we present a full information approach to handling missing covariate data that makes it possible to use all available data. A simulation study shows that, independent of the missingness scenario, the number of covariates, or the amount of missingness, the full information approach is more powerful than the usual approach. To illustrate the new method, we applied it to test scores on a Dutch national school achievement test (Eindtoets Basisonderwijs), administered in the final grade of primary school, for 990 twin pairs. The effects of school-aggregated measures (e.g. school denomination, pedagogical philosophy, school size) and of the sex of a twin on these test scores were tested. None of the covariates had a significant effect on individual differences in test scores. PMID:26687147

  9. Clustered data analysis under miscategorized ordinal outcomes and missing covariates.

    PubMed

    Roy, Surupa; Rana, Subrata; Das, Kalyan

    2016-08-15

    The primary objective of this article is the analysis of clustered ordinal models in which complete information on one or more covariates is unavailable. In addition, we focus on the analysis of miscategorized data, which arise in many situations because outcomes are often classified into a category that does not truly reflect their actual state. A general model structure is assumed to accommodate the information that is obtained via surrogate variables. The theoretical motivation developed while analyzing an orthodontic data set collected to investigate the effects of age, sex, and food habit on the extent of plaque deposit. The model we propose is quite flexible and is capable of handling the additional noise, such as miscategorization and missingness, that occurs most frequently in such data. A new two-step approach is proposed to estimate the parameters of the model. A rigorous simulation study has also been carried out to justify the validity of the model taken up for analysis. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26215983

  10. A Bayesian proportional hazards regression model with non-ignorably missing time-varying covariates

    PubMed Central

    Bradshaw, Patrick T.; Ibrahim, Joseph G.; Gammon, Marilie D.

    2010-01-01

    Missing covariate data is common in observational studies of time to an event, especially when covariates are repeatedly measured over time. Failure to account for the missing data can lead to bias or loss of efficiency, especially when the data are non-ignorably missing. Previous work has focused on the case of fixed covariates rather than those that are repeatedly measured over the follow-up period, so here we present a selection model that allows for proportional hazards regression with time-varying covariates when some covariates may be non-ignorably missing. We develop a fully Bayesian model and obtain posterior estimates of the parameters via the Gibbs sampler in WinBUGS. We illustrate our model with an analysis of post-diagnosis weight change and survival after breast cancer diagnosis in the Long Island Breast Cancer Study Project (LIBCSP) follow-up study. Our results indicate that post-diagnosis weight gain is associated with lower all-cause and breast cancer specific survival among women diagnosed with new primary breast cancer. Our sensitivity analysis showed only slight differences between models with different assumptions on the missing data mechanism yet the complete case analysis yielded markedly different results. PMID:20960582

  11. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  12. Multiple imputation of missing covariate values in multilevel models with random slopes: a cautionary note.

    PubMed

    Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander

    2016-06-01

    Multiple imputation (MI) has become one of the main procedures used to treat missing data, but the guidelines from the methodological literature are not easily transferred to multilevel research. For models including random slopes, proper MI can be difficult, especially when the covariate values are partially missing. In the present article, we discuss applications of MI in multilevel random-coefficient models, theoretical challenges posed by slope variation, and the current limitations of standard MI software. Our findings from three simulation studies suggest that (a) MI is able to recover most parameters, but is currently not well suited to capture slope variation entirely when covariate values are missing; (b) MI offers reasonable estimates for most parameters, even in smaller samples or when its assumptions are not met; and… PMID:25939979

  13. Imputation of missing covariate values in epigenome-wide analysis of DNA methylation data

    PubMed Central

    Wu, Chong; Demerath, Ellen W.; Pankow, James S.; Bressler, Jan; Fornage, Myriam; Grove, Megan L.; Chen, Wei; Guan, Weihua

    2016-01-01

    DNA methylation is a widely studied epigenetic mechanism and alterations in methylation patterns may be involved in the development of common diseases. Unlike inherited changes in genetic sequence, variation in site-specific methylation varies by tissue, developmental stage, and disease status, and may be impacted by aging and exposure to environmental factors, such as diet or smoking. These non-genetic factors are typically included in epigenome-wide association studies (EWAS) because they may be confounding factors to the association between methylation and disease. However, missing values in these variables can lead to reduced sample size and decrease the statistical power of EWAS. We propose a site selection and multiple imputation (MI) method to impute missing covariate values and to perform association tests in EWAS. Then, we compare this method to an alternative projection-based method. Through simulations, we show that the MI-based method is slightly conservative, but provides consistent estimates for effect size. We also illustrate these methods with data from the Atherosclerosis Risk in Communities (ARIC) study to carry out an EWAS between methylation levels and smoking status, in which missing cell type compositions and white blood cell counts are imputed. PMID:26890800

  14. Bayesian semiparametric nonlinear mixed-effects joint models for data with skewness, missing responses, and measurement errors in covariates.

    PubMed

    Huang, Yangxin; Dagne, Getachew

    2012-09-01

    It is common practice to analyze complex longitudinal data using semiparametric nonlinear mixed-effects (SNLME) models with a normal distribution. The normality assumption on model errors may unrealistically obscure important features of subject variations. To partially explain between- and within-subject variations, covariates are usually introduced in such models, but some covariates may be measured with substantial errors. Moreover, the responses may be missing, and the missingness may be nonignorable. Inferential procedures can be complicated dramatically when data with skewness, missing values, and measurement error are observed. In the literature, there has been considerable interest in accommodating skewness, incompleteness, or covariate measurement error in such models, but there has been relatively little work concerning all three features simultaneously. In this article, our objective is to address the simultaneous impact of skewness, missingness, and covariate measurement error by jointly modeling the response and covariate processes based on a flexible Bayesian SNLME model. The method is illustrated using a real AIDS data set to compare potential models with various scenarios and different distribution specifications. PMID:22150787

  15. Dealing with missing covariates in epidemiologic studies: a comparison between multiple imputation and a full Bayesian approach.

    PubMed

    Erler, Nicole S; Rizopoulos, Dimitris; Rosmalen, Joost van; Jaddoe, Vincent W V; Franco, Oscar H; Lesaffre, Emmanuel M E H

    2016-07-30

    Incomplete data are generally a challenge to the analysis of most large studies. The current gold standard to account for missing data is multiple imputation, and more specifically multiple imputation with chained equations (MICE). Numerous studies have been conducted to illustrate the performance of MICE for missing covariate data. The results show that the method works well in various situations. However, less is known about its performance in more complex models, specifically when the outcome is multivariate as in longitudinal studies. In current practice, the multivariate nature of the longitudinal outcome is often neglected in the imputation procedure, or only the baseline outcome is used to impute missing covariates. In this work, we evaluate the performance of MICE using different strategies to include a longitudinal outcome into the imputation models and compare it with a fully Bayesian approach that jointly imputes missing values and estimates the parameters of the longitudinal model. Results from simulation and a real data example show that MICE requires the analyst to correctly specify which components of the longitudinal process need to be included in the imputation models in order to obtain unbiased results. The full Bayesian approach, on the other hand, does not require the analyst to explicitly specify how the longitudinal outcome enters the imputation models. It performed well under different scenarios. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27042954
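
    As a point of reference for the MICE side of this comparison, the sketch below shows a chained-equations style imputation in which the outcome enters the imputation model, the requirement the paper highlights. It uses scikit-learn's IterativeImputer on a cross-sectional toy problem, not the longitudinal setting of the paper; variable names and parameters are assumptions.

    ```python
    # Generic MICE-style sketch: the outcome is included in the imputation model.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(1)
    n = 500
    age = rng.normal(50, 10, n)
    bmi = 22 + 0.1 * age + rng.normal(0, 2, n)
    y = 1.0 + 0.5 * bmi - 0.02 * age + rng.normal(0, 1, n)   # outcome
    bmi[rng.random(n) < 0.3] = np.nan                        # 30% missing covariate

    data = np.column_stack([age, bmi, y])                    # outcome enters the model
    completed = [
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(data)
        for m in range(5)                                    # five imputed data sets
    ]
    # each completed data set is analyzed separately; estimates are then
    # pooled with Rubin's rules
    ```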

  16. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 3 2012-04-01 2012-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  17. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 3 2013-04-01 2013-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  18. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  19. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 3 2014-04-01 2014-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  20. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 3 2011-04-01 2011-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  1. Jointly Modeling Event Time and Skewed-Longitudinal Data with Missing Response and Mismeasured Covariate for AIDS Studies.

    PubMed

    Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian

    2015-01-01

    In longitudinal studies it is often of interest to investigate how a marker repeatedly measured over time is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of model errors in the longitudinal model is a routine assumption, but it may unrealistically obscure important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates, such as CD4 cell count, may be measured with substantial errors. Moreover, the responses may be subject to nonignorable missingness. Statistical analysis may be complicated dramatically in longitudinal-survival joint models where longitudinal data with skewness, missing values, and measurement errors are observed. In this article, we relax the distributional assumptions for the longitudinal models using a skewed (parametric) distribution and an unspecified (nonparametric) distribution with a Dirichlet process prior, and address the simultaneous influence of skewness, missingness, covariate measurement error, and the time-to-event process by jointly modeling three components (the response process with missing values, the covariate process with measurement errors, and the time-to-event process) linked through random effects that characterize the underlying individual-specific longitudinal processes in a Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound, in comparison with potential models under various scenarios and different distributional specifications. PMID:24905593

  2. Multiple imputation of missing covariates with non-linear effects and interactions: an evaluation of statistical methods

    PubMed Central

    2012-01-01

    Background Multiple imputation is often used for missing data. When a model contains as covariates more than one function of a variable, it is not obvious how best to impute missing values in these covariates. Consider a regression with outcome Y and covariates X and X². In 'passive imputation' a value X* is imputed for X and then X² is imputed as (X*)². A recent proposal is to treat X² as 'just another variable' (JAV) and impute X and X² under multivariate normality. Methods We use simulation to investigate the performance of three methods that can easily be implemented in standard software: 1) linear regression of X on Y to impute X then passive imputation of X²; 2) the same regression but with predictive mean matching (PMM); and 3) JAV. We also investigate the performance of analogous methods when the analysis involves an interaction, and study the theoretical properties of JAV. The application of the methods when complete or incomplete confounders are also present is illustrated using data from the EPIC Study. Results JAV gives consistent estimation when the analysis is linear regression with a quadratic or interaction term and X is missing completely at random. When X is missing at random, JAV may be biased, but this bias is generally less than for passive imputation and PMM. Coverage for JAV was usually good when bias was small. However, in some scenarios with a more pronounced quadratic effect, bias was large and coverage poor. When the analysis was logistic regression, JAV's performance was sometimes very poor. PMM generally improved on passive imputation, in terms of bias and coverage, but did not eliminate the bias. Conclusions Given the current state of available software, JAV is the best of a set of imperfect imputation methods for linear regression with a quadratic or interaction effect, but should not be used for logistic regression. PMID:22489953
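
    The contrast between passive imputation and JAV is easy to reproduce in simulation. The sketch below is our own toy version of the quadratic case, using conditional-mean imputation in place of proper stochastic draws, so it illustrates the mechanics rather than reproducing the paper's results.

    ```python
    # Simulation sketch: passive imputation of X² versus JAV, with X missing
    # completely at random and imputation models involving Y, as in method 1).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 20_000
    x = rng.normal(size=n)
    y = 1 + x + 0.5 * x**2 + rng.normal(size=n)
    miss = rng.random(n) < 0.4                     # X missing completely at random

    b = np.polyfit(y[~miss], x[~miss], 1)          # regress X on Y
    x_imp = x.copy()
    x_imp[miss] = np.polyval(b, y[miss])
    X_passive = np.column_stack([np.ones(n), x_imp, x_imp**2])  # square the imputation

    b2 = np.polyfit(y[~miss], x[~miss] ** 2, 1)    # JAV: X² is "just another variable"
    x2_imp = x**2
    x2_imp[miss] = np.polyval(b2, y[miss])         # NOT (x_imp)²; that is JAV's point
    X_jav = np.column_stack([np.ones(n), x_imp, x2_imp])

    for name, X in (("passive", X_passive), ("JAV", X_jav)):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        print(name, beta.round(3))                 # true coefficients: 1, 1, 0.5
    ```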

  3. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249
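
    The proration problem the authors describe can be demonstrated in a few lines. The following simulation is ours, not the article's: it shows how averaging the available items shifts the expected scale score when item means are unequal, even under MCAR.

    ```python
    # Sketch of the proration bias: under MCAR, averaging available items still
    # biases the scale score when item means are unequal. Setup is illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000
    item_means = np.array([1.0, 2.0, 3.0, 4.0])      # unequal item means
    items = rng.normal(item_means, 1.0, size=(n, 4))
    full_score = items.mean(axis=1)                   # complete-data scale score

    obs = items.copy()
    obs[rng.random(n) < 0.5, 3] = np.nan              # item 4 MCAR for half the sample
    prorated = np.nanmean(obs, axis=1)                # prorated scale score

    print(round(full_score.mean(), 2), round(prorated.mean(), 2))
    # ~2.50 vs ~2.25: the prorated mean drifts toward the retained items' mean
    ```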

  4. MISSE in the Materials and Processes Technical Information System (MAPTIS )

    NASA Technical Reports Server (NTRS)

    Burns, DeWitt; Finckenor, Miria; Henrie, Ben

    2013-01-01

    Materials International Space Station Experiment (MISSE) data is now being collected and distributed through the Materials and Processes Technical Information System (MAPTIS) at Marshall Space Flight Center in Huntsville, Alabama. MISSE data has been instrumental in many programs and continues to be an important source of data for the space community. To facilitate greater access to the MISSE data, the International Space Station (ISS) program office and MAPTIS are working to gather this data into a central location. The MISSE database contains information about materials, samples, and flights, along with pictures, PDFs, Excel files, Word documents, and other file types. Major capabilities of the system are access control, browsing, searching, reports, and record comparison. The search capability looks within any searchable file, so data can still be retrieved even if the desired metadata has not been associated with it. Other functionality will continue to be added to the MISSE database as the Athena Platform is expanded.

  5. Information Gaps: The Missing Links to Learning.

    ERIC Educational Resources Information Center

    Adams, Carl R.

    Communication takes place when a speaker conveys new information to the listener. In second language teaching, information gaps motivate students to use and learn the target language in order to obtain information. The resulting interactive language use may develop affective bonds among the students. A variety of classroom techniques are available…

  6. Funding information technology: a missed market.

    PubMed

    Rux, P

    1998-10-01

    Information technology is driving business and industry into the future. This is the essence of reengineering, process innovation, downsizing, etc. Non-profits, schools, libraries, etc. need to follow or risk losing efficiency. However, to get their fair share of information technology, they need help with funding. PMID:10187237

  7. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  8. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...

  9. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...

  10. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...

  11. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...

  12. 38 CFR 1.705 - Restrictions on use of missing children information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...

  13. Informed conditioning on clinical covariates increases power in case-control association studies.

    PubMed

    Zaitlen, Noah; Lindström, Sara; Pasaniuc, Bogdan; Cornelis, Marilyn; Genovese, Giulio; Pollack, Samuela; Barton, Anne; Bickeböller, Heike; Bowden, Donald W; Eyre, Steve; Freedman, Barry I; Friedman, David J; Field, John K; Groop, Leif; Haugen, Aage; Heinrich, Joachim; Henderson, Brian E; Hicks, Pamela J; Hocking, Lynne J; Kolonel, Laurence N; Landi, Maria Teresa; Langefeld, Carl D; Le Marchand, Loic; Meister, Michael; Morgan, Ann W; Raji, Olaide Y; Risch, Angela; Rosenberger, Albert; Scherf, David; Steer, Sophia; Walshaw, Martin; Waters, Kevin M; Wilson, Anthony G; Wordsworth, Paul; Zienolddiny, Shanbeh; Tchetgen, Eric Tchetgen; Haiman, Christopher; Hunter, David J; Plenge, Robert M; Worthington, Jane; Christiani, David C; Schaumberg, Debra A; Chasman, Daniel I; Altshuler, David; Voight, Benjamin; Kraft, Peter; Patterson, Nick; Price, Alkes L

    2012-01-01

    Genetic case-control association studies often include data on clinical covariates, such as body mass index (BMI), smoking status, or age, that may modify the underlying genetic risk of case or control samples. For example, in type 2 diabetes, odds ratios for established variants estimated from low-BMI cases are larger than those estimated from high-BMI cases. An unanswered question is how to use this information to maximize statistical power in case-control studies that ascertain individuals on the basis of phenotype (case-control ascertainment) or phenotype and clinical covariates (case-control-covariate ascertainment). While current approaches improve power in studies with random ascertainment, they often lose power under case-control ascertainment and fail to capture available power increases under case-control-covariate ascertainment. We show that an informed conditioning approach, based on the liability threshold model with parameters informed by external epidemiological information, fully accounts for disease prevalence and non-random ascertainment of phenotype as well as covariates and provides a substantial increase in power while maintaining a properly controlled false-positive rate. Our method outperforms standard case-control association tests with or without covariates, tests of gene × covariate interaction, and previously proposed tests for dealing with covariates in ascertained data, with especially large improvements in the case of case-control-covariate ascertainment. We investigate empirical case-control studies of type 2 diabetes, prostate cancer, lung cancer, breast cancer, rheumatoid arthritis, age-related macular degeneration, and end-stage kidney disease over a total of 89,726 samples. In these datasets, informed conditioning outperforms logistic regression for 115 of the 157 known associated variants investigated (P-value = 1 × 10⁻⁹). The improvement varied across diseases with a 16% median increase in χ² test statistics and a…

  14. Estimating Missing Features to Improve Multimedia Information Retrieval

    SciTech Connect

    Bagherjeiran, A; Love, N S; Kamath, C

    2006-09-28

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
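
    A minimal sketch of the query-completion idea, assuming a database with paired text and image features; the estimator, feature dimensions, and distance-based retrieval here are all illustrative stand-ins rather than the paper's method.

    ```python
    # Sketch of completing a partial (text-only) query by estimating the missing
    # image features from the database, then retrieving with the full query.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(5)
    n, d_text, d_img = 200, 16, 8
    text_db = rng.normal(size=(n, d_text))
    img_db = 0.5 * text_db[:, :d_img] + rng.normal(0, 0.1, (n, d_img))  # correlated modalities

    to_img = KNeighborsRegressor(n_neighbors=5).fit(text_db, img_db)

    query_text = rng.normal(size=(1, d_text))
    query_img = to_img.predict(query_text)          # estimate of the missing modality
    full_query = np.hstack([query_text, query_img])

    db = np.hstack([text_db, img_db])
    ranking = np.argsort(np.linalg.norm(db - full_query, axis=1))
    print(ranking[:5])                              # top-5 retrieved items
    ```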

  15. Electronic pharmacopoeia: a missed opportunity for safe opioid prescribing information?

    PubMed

    Lapoint, Jeff; Perrone, Jeanmarie; Nelson, Lewis S

    2014-03-01

    Errors in prescribing of dangerous medications, such as extended release or long acting (ER/LA) opioid formulations, remain an important cause of patient harm. Prescribing errors often relate to the failure to note warnings regarding contraindications and drug interactions. Many prescribers utilize electronic pharmacopoeia (EP) to improve medication ordering. The purpose of this study is to assess the ability of commonly used apps to provide accurate safety information about the boxed warning for ER/LA opioids. We evaluated a convenience sample of six popular EP apps available for the iPhone and an online reference for the presence of relevant safety warnings. We accessed the dosing information for each of six ER/LA medications and assessed for the presence of an easily identifiable indication that a boxed warning was present, even if the warning itself was not provided. The prominence of precautionary drug information presented to the user was assessed for each app. Provided information was classified based on whether the warning appeared in the ordering pathway, was located separately but within the prescriber's view, or was available in a separate screen of the drug information but not highlighted. Each program provided a consistent level of warning information for each of the six ER/LA medications. Only 2/7 programs placed a warning in line with dosing information (level 1); 3/7 offered a level 2 warning, and 1/7 offered a level 3 warning. One program made no mention of a boxed warning. Most EP apps isolate important safety warnings, and this represents a missed opportunity to improve prescribing practices. PMID:24081616

  16. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation

    PubMed Central

    Mazza, Gina L.; Enders, Craig K.; Ruehlman, Linda S.

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe an FIML approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249

  17. Slope Estimation of Covariates that Influence Renal Outcome following Renal Transplant Adjusting for Informative Right Censoring

    PubMed Central

    Jaffa, Miran A.; Jaffa, Ayad A; Lipsitz, Stuart R.

    2015-01-01

    A new statistical model is proposed to estimate population and individual slopes that are adjusted for covariates and informative right censoring. Individual slopes are assumed to have a mean that depends on the population slope for the covariates. The number of observations for each individual is modeled as a truncated discrete distribution with mean dependent on the individual subjects' slopes. Our simulation study results indicated that the associated bias and mean squared errors for the proposed model were comparable to those associated with the model that only adjusts for informative right censoring. The proposed model was illustrated using a renal transplant data set to estimate population slopes for covariates that could impact the outcome of renal function following renal transplantation. PMID:25729124

  18. Multidimensional mutual information methods for the analysis of covariation in multiple sequence alignments

    PubMed Central

    2014-01-01

    Background Several methods are available for the detection of covarying positions from a multiple sequence alignment (MSA). If the MSA contains a large number of sequences, information about the proximities between residues derived from covariation maps can be sufficient to predict a protein fold. However, in many cases the structure is already known, and information on the covarying positions can be valuable to understand the protein mechanism and dynamic properties. Results In this study we have sought to determine whether a multivariate (multidimensional) extension of traditional mutual information (MI) can be an additional tool to study covariation. The performance of two multidimensional MI (mdMI) methods, designed to remove the effect of ternary/quaternary interdependencies, was tested with a set of 9 MSAs each containing <400 sequences, and was shown to be comparable to that of the newest methods based on maximum entropy/pseudolikelihood statistical models of protein sequences. However, while all the methods tested detected a similar number of covarying pairs among the residues separated by < 8 Å in the reference X-ray structures, there was on average less than 65% overlap between the top scoring pairs detected by methods that are based on different principles. Conclusions Given the large variety of structure and evolutionary history of different proteins, it is possible that a single best method to detect covariation in all proteins does not exist, and that for each protein family the best information can be derived by merging/comparing results obtained with different methods. This approach may be particularly valuable in those cases in which the size of the MSA is small or the quality of the alignment is low, leading to significant differences in the pairs detected by different methods. PMID:24886131

  19. Filtering of monthly GRACE gravity field solutions using time variable decorrelation by incorporating full covariance information

    NASA Astrophysics Data System (ADS)

    Horvath, Alexander; Murböck, Michael; Pail, Roland; Horwath, Martin

    2016-04-01

    Estimating mass trends in Antarctica and other regions as accurately as possible from global GRACE gravity field solutions calls for the best possible post-processing strategies. Decorrelation filters employing static covariance information have been developed in the past (e.g. the DDK filter series by Jürgen Kusche, 2007 & 2009), but covariance information for a decade-long recent time series was not publicly available (except for the ITG-GRACE2010 series) until the publication of the ITSG temporal gravity field model in October 2014. With this work we aim to use this time series, with its correlation structures evolving over time due to changing mission configuration (e.g. orbital height) and instrument characteristics. Proper reduction of correlated errors is a crucial step towards trend estimation. For this purpose we analyzed the existing series of DDK filters based on static or simplified assumptions on the correlation structure of spherical harmonic coefficients and target signals. To analyze the potential gain of using month-to-month full covariance information, we tested the impact of certain simplifications (e.g. the ones applied for the DDK filters) with respect to the full covariance information in a closed-loop simulator. Based on the outcome of the simulated results we computed new time variable decorrelation (VADER) filters using full error covariance information and investigated the impact on basin-scale mass change estimates in the Antarctic region. The work presented includes a comprehensive assessment of the filter performance, accompanied by an intercomparison of the mass change estimates based on the VADER filter solutions against those obtained from DDK, Swenson & Wahr type, and other filters, as well as independently derived results from, e.g., radar altimetry.

  20. Operation reliability assessment for cutting tools by applying a proportional covariate model to condition monitoring information.

    PubMed

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  1. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  2. Three schemes of remote information concentration based on ancilla-free phase-covariant telecloning

    NASA Astrophysics Data System (ADS)

    Bai, Ming-qiang; Peng, Jia-Yin; Mo, Zhi-Wen

    2014-05-01

    In this paper, remote information concentration, the reverse process of optimal asymmetric economical phase-covariant telecloning (OAEPCT), is investigated. The OAEPCT is different from the reverse process of optimal universal telecloning. It is shown that quantum information distributed via the OAEPCT procedure can be remotely concentrated back to a single qubit with a certain probability via several quantum channels. In these schemes, we adopt Bell measurement to measure the joint systems and use projective measurement and positive operator-valued measures to recover the original quantum state. The results show that non-maximally entangled quantum resources can be applied to information concentration.

  3. a Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from TM images in which true information is lost to blocking clouds and missing data stripes. Because water is continuously distributed under natural conditions, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different disturbances from clouds and missing data stripes are simulated. Water information is extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.

  4. Information Literacy: The Missing Link in Early Childhood Education

    ERIC Educational Resources Information Center

    Heider, Kelly L.

    2009-01-01

    The rapid growth of information over the last 30 or 40 years has made it impossible for educators to prepare students for the future without teaching them how to be effective information managers. The American Library Association refers to those students who manage information effectively as "information literate." Information literacy instruction…

  5. Fraction of Missing Information (γ) at Different Missing Data Fractions in the 2012 NAMCS Physician Workflow Mail Survey*

    PubMed Central

    Pan, Qiyuan; Wei, Rong

    2016-01-01

    In his 1987 classic book on multiple imputation (MI), Rubin used the fraction of missing information, γ, to define the relative efficiency (RE) of MI as RE = (1 + γ/m)^(−1/2), where m is the number of imputations, leading to the conclusion that a small m (≤5) would be sufficient for MI. However, evidence has been accumulating that many more imputations are needed. Why would the apparently sufficient m deduced from the RE actually be too small? The answer may lie with γ. In this research, γ was determined at fractions of missing data (δ) of 4%, 10%, 20%, and 29% using the 2012 Physician Workflow Mail Survey of the National Ambulatory Medical Care Survey (NAMCS). The γ values were strikingly small, ranging on the order of 10⁻⁶ to 0.01. As δ increased, γ usually increased but sometimes decreased. How the data were analysed had the dominant effect on γ, overshadowing the effect of δ. The results suggest that it is impossible to predict γ using δ and that it may not be appropriate to use the γ-based RE to determine a sufficient m.
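
    Rubin's relative-efficiency formula is easy to evaluate directly, which makes the paper's point concrete: at the tiny γ values reported here, the formula declares even very small m nearly fully efficient. A quick sketch:

    ```python
    # Rubin's relative efficiency for m imputations at fraction of missing
    # information gamma: RE = (1 + gamma/m)^(-1/2).
    def relative_efficiency(gamma: float, m: int) -> float:
        return (1.0 + gamma / m) ** -0.5

    for gamma in (1e-6, 0.01, 0.5):
        print(gamma, [round(relative_efficiency(gamma, m), 4) for m in (3, 5, 20, 100)])
    # with gamma in the 1e-6 to 0.01 range reported above, even m = 3 is
    # essentially fully efficient; this is why RE alone can justify a small m
    ```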

  6. Open Informational Ecosystems: The Missing Link for Sharing Educational Resources

    ERIC Educational Resources Information Center

    Kerres, Michael; Heinen, Richard

    2015-01-01

    Open educational resources are not available "as such". Their provision relies on a technological infrastructure of related services that can be described as an informational ecosystem. A closed informational ecosystem keeps educational resources within its boundary. An open informational ecosystem relies on the concurrence of…

  7. Exchanging Missing Information in Tasks: Old and New Interpretations

    ERIC Educational Resources Information Center

    Jenks, Christopher Joseph

    2009-01-01

    Information gap tasks have played a key role in applied linguistics (Pica, 2005). For example, extensive research has been conducted using information gap tasks to elicit second language data. Yet, despite their prominent role in research and pedagogy, there is still much to be investigated with regard to what information gap tasks offer research…

  8. Covariances of evaluated nuclear data based upon uncertainty information of experimental data and nuclear models

    SciTech Connect

    Poenitz, W.P.; Peelle, R.W.

    1986-11-17

    A straightforward derivation is presented for the covariance matrix of evaluated cross sections based on the covariance matrix of the experimental data and propagation through nuclear model parameters. 10 refs.

  9. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc. PMID:27176912
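
    The core idea of FIML, that each case contributes the likelihood of exactly the values it observed, fits in a short sketch. The following is a generic bivariate-normal illustration, not the article's multilevel SEM workflow; the simulated data and all names are ours.

    ```python
    # Sketch of FIML for a bivariate normal with missing values: each case
    # contributes the likelihood of exactly the coordinates it has observed.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    def neg_loglik(theta, data):
        mu = theta[:2]
        L = np.array([[theta[2], 0.0], [theta[3], theta[4]]])  # Cholesky factor
        cov = L @ L.T + 1e-8 * np.eye(2)                       # keep cov positive definite
        nll = 0.0
        for row in data:
            obs = ~np.isnan(row)
            if obs.any():
                nll -= multivariate_normal.logpdf(row[obs], mu[obs], cov[np.ix_(obs, obs)])
        return nll

    rng = np.random.default_rng(4)
    data = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.6], [0.6, 2.0]], size=300)
    data[rng.random(300) < 0.3, 1] = np.nan        # 30% of variable 2 missing

    res = minimize(neg_loglik, x0=[0, 0, 1, 0, 1], args=(data,), method="Nelder-Mead")
    print(res.x[:2])                               # FIML estimates of the two means
    ```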

  10. Modeling Achievement Trajectories when Attrition Is Informative

    ERIC Educational Resources Information Center

    Feldman, Betsy J.; Rabe-Hesketh, Sophia

    2012-01-01

    In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…

  11. Individual Information-Centered Approach for Handling Physical Activity Missing Data

    ERIC Educational Resources Information Center

    Kang, Minsoo; Rowe, David A.; Barreira, Tiago V.; Robinson, Terrance S.; Mahar, Matthew T.

    2009-01-01

    The purpose of this study was to validate individual information (II)-centered methods for handling missing data, using data samples of 118 middle-aged adults and 91 older adults equipped with Yamax SW-200 pedometers and Actigraph accelerometers for 7 days. We used a semisimulation approach to create six data sets: three physical activity outcome…

  12. The Effect of "Missing" Information on Children's Retention of Fast-Mapped Labels

    ERIC Educational Resources Information Center

    Wilkinson, Krista M.; Mazzitelli, Kim

    2003-01-01

    This paper explores "fast mapping", one of several processes that have been proposed to be involved in the rapid vocabulary expansion observed in the preschool years. An adaptation of a receptive word matching task examined how well children retained a just-mapped relation between word and referent when some information was later missing.…

  13. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  14. The Effect of "Missing" Information on Children's Retention of Fast-Mapped Labels.

    ERIC Educational Resources Information Center

    Wilkinson, Krista M.; Mazzitelli, Kim

    2003-01-01

    Explores "fast mapping," one of several processes that have been proposed to be involved in the rapid vocabulary expansion observed in the preschool years. An adaptation of a receptive word matching task examined how well children retained just-mapped relation between word and referent when some information was later missing. (Author/VWL)

  15. Relying on Your Own Best Judgment: Imputing Values to Missing Information in Decision Making.

    ERIC Educational Resources Information Center

    Johnson, Richard D.; And Others

    Processes involved in making estimates of the value of missing information that could help in a decision making process were studied. Hypothetical purchases of ground beef were selected for the study as such purchases have the desirable property of quantifying both the price and quality. A total of 150 students at the University of Iowa rated the…

  16. Restoration of the missing pixel information caused by contrails in multispectral remotely sensed imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Daxiang; Zhang, Chuanrong; Li, Weidong; Cromley, Robert; Hanink, Dean; Civco, Daniel; Travis, David

    2014-01-01

    Although removing the pixels covered by contrails and their shadows and restoring the missing information at the locations in remotely sensed imagery are important to understand contrails' effects on climate change, there are no such studies in the current literature. This study investigates the restoration of the missing information of the pixels caused by contrails in multispectral remotely sensed Landsat 5 TM imagery using a cokriging approach. Interpolation results and several validation methods show that it is practical to use the cokriging approach to restore the contrail-covered pixels in the multispectral remotely sensed imagery. Compared to ordinary kriging, the results are improved by taking advantage of both the spatial information in the original imagery and information from the secondary imagery.
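
    The study relies on cokriging, which adds information from a secondary image; the sketch below shows only the ordinary-kriging baseline it is compared against, with hypothetical pixel coordinates and values and an assumed exponential variogram. A gap pixel is predicted as a weighted average of nearby observed pixels, the weights solving the kriging system.

```python
# Ordinary kriging sketch; coordinates, values and the variogram
# parameters are all illustrative assumptions.
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, rng=15.0):
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

def ordinary_krige(coords, vals, target):
    n = len(vals)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)               # semivariances between data points
    A[n, n] = 0.0                          # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]          # kriging weights (sum to 1)
    return w @ vals

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
vals = np.array([0.31, 0.35, 0.28, 0.40])  # observed reflectances around a gap
print(ordinary_krige(coords, vals, np.array([1.0, 1.0])))
```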

  17. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  18. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  19. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  20. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  1. ICON: 3D reconstruction with 'missing-information' restoration in biological electron tomography.

    PubMed

    Deng, Yuchen; Chen, Yu; Zhang, Yan; Wang, Shengliu; Zhang, Fa; Sun, Fei

    2016-07-01

    Electron tomography (ET) plays an important role in revealing biological structures, ranging from the macromolecular to the subcellular scale. Due to limited tilt angles, ET reconstruction always suffers from 'missing wedge' artifacts, which severely weaken further biological interpretation. In this work, we developed an algorithm called Iterative Compressed-sensing Optimized Non-uniform fast Fourier transform reconstruction (ICON) based on the theory of compressed-sensing and the assumption of sparsity of biological specimens. ICON can significantly restore the missing information in comparison with other reconstruction algorithms. More importantly, we used the leave-one-out method to verify the validity of the restored information for both simulated and experimental data. The significant improvement in sub-tomogram averaging by ICON indicates its great potential in the future application of high-resolution structural determination of macromolecules in situ. PMID:27079261
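
    ICON couples compressed sensing with a non-uniform FFT; the toy iteration below is not the authors' algorithm, only an illustration of the general mechanism of restoring unmeasured Fourier regions by alternating a real-space constraint (here plain non-negativity) with consistency on the measured coefficients.

```python
# Toy missing-region restoration by alternating projections; all sizes
# and the "wedge" mask are illustrative.
import numpy as np

def restore_missing_wedge(meas_ft, measured_mask, n_iter=200):
    img = np.real(np.fft.ifft2(meas_ft))            # start: unmeasured = 0
    for _ in range(n_iter):
        img = np.clip(img, 0.0, None)               # real-space constraint
        ft = np.fft.fft2(img)
        ft[measured_mask] = meas_ft[measured_mask]  # keep measured data fixed
        img = np.real(np.fft.ifft2(ft))
    return img

rng = np.random.default_rng(1)
truth = np.clip(rng.normal(size=(64, 64)), 0.0, None)
mask = np.ones((64, 64), dtype=bool)
mask[:, 20:44] = False                              # crude stand-in for a wedge
restored = restore_missing_wedge(np.fft.fft2(truth) * mask, mask)
```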

  2. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
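
    A minimal sketch of the FAST idea as described here, with a hypothetical trajectory array: the pseudo-ensemble is simply a moving window of model states, and its sample covariance plays the role of the background error covariance.

```python
# FAST-style covariance sketch from a single trajectory; `traj` is a
# hypothetical (time x state) array.
import numpy as np

def fast_covariance(traj, t, window=10):
    ens = traj[max(0, t - window):t + 1]   # states inside the moving window
    anom = ens - ens.mean(axis=0)          # anomalies about the window mean
    return anom.T @ anom / max(len(ens) - 1, 1)

rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(size=(500, 4)), axis=0)
B = fast_covariance(traj, t=250)           # flow-dependent background covariance
```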

  3. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  4. Probing phase-space noncommutativity through quantum beating, missing information, and the thermodynamic limit

    NASA Astrophysics Data System (ADS)

    Bernardini, A. E.; Bertolami, O.

    2013-07-01

    In this work we examine the effect of phase-space noncommutativity on some typically quantum properties such as quantum beating, quantum information, and decoherence. To exemplify these issues we consider the two-dimensional noncommutative quantum harmonic oscillator whose component behavior we monitor in time. This procedure allows us to determine how the noncommutative parameters are related to the missing information quantified by the linear quantum entropy and by the mutual information between the relevant Hilbert space coordinates. Particular questions concerning the thermodynamic limit of some relevant properties are also discussed in order to evidence the effects of noncommutativity. Finally, through an analogy with the Zeeman effect, we identify how some aspects of the axial symmetry of the problem suggest the possibility of decoupling the noncommutative quantum perturbations from unperturbed commutative well-known solutions.

  5. Miss Heroin.

    ERIC Educational Resources Information Center

    Riley, Bernice

    This script, with music, lyrics and dialog, was written especially for youngsters to inform them of the potential dangers of various drugs. The author, who teaches in an elementary school in Harlem, New York, offers Miss Heroin as her answer to the expressed opinion that most drug and alcohol information available is either too simplified and…

  6. Statistical Inference for Regression Models with Covariate Measurement Error and Auxiliary Information.

    PubMed

    You, Jinhong; Zhou, Haibo

    2009-01-01

    We consider statistical inference on a regression model in which some covariables are measured with errors together with an auxiliary variable. The proposed estimation for the regression coefficients is based on a set of estimating equations. This new method alleviates some drawbacks of previously proposed estimators, including the requirement of undersmoothing the regressor functions over the auxiliary variable and the restriction that the other covariables be observed exactly, among others. The large sample properties of the proposed estimator are established. We further propose a jackknife estimation, which consists of deleting one estimating equation (instead of one observation) at a time. We show that the jackknife estimator of the regression coefficients and the estimating-equations-based estimator are asymptotically equivalent. Simulations show that the jackknife estimator has smaller bias when the sample size is small or moderate. In addition, the jackknife estimation also provides a consistent estimator of the asymptotic covariance matrix, which is robust to heteroscedasticity. We illustrate these methods by applying them to a real data set from marketing science. PMID:22199460
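
    A sketch of the delete-one-equation jackknife, using ordinary least-squares estimating equations purely for illustration (in the paper each equation involves the auxiliary variable, so deleting an equation is not the same as deleting an observation):

```python
# Delete-one-equation jackknife sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(size=100)

def solve(idx):
    # Solves the estimating equations sum_i x_i (y_i - x_i' b) = 0 over idx.
    return np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

m = len(y)
full = solve(np.arange(m))
leave_one = np.array([solve(np.delete(np.arange(m), i)) for i in range(m)])
dev = leave_one - leave_one.mean(axis=0)
beta_jack = m * full - (m - 1) * leave_one.mean(axis=0)  # bias-corrected estimate
cov_jack = (m - 1) / m * dev.T @ dev                     # jackknife covariance
```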

  7. Covariant Transform

    NASA Astrophysics Data System (ADS)

    Kisil, Vladimir V.

    2011-03-01

    Dedicated to the memory of Cora Sadosky The paper develops theory of covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable or vacuum vectors which are not admissible. Covariant transform extends an applicability of the popular wavelets construction to classic examples like the Hardy space H2, Banach spaces, covariant functional calculus and many others.

  8. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    PubMed

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

    In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted with Matérn models using both weighted least squares and maximum likelihood estimation, as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the more appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results suggest some guidelines for sewage monitoring when a geostatistical analysis of the data is intended: it is important to treat anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion. PMID:25345922
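
    A minimal sketch of the weighted-least-squares variogram fit favoured by the study, with a hypothetical sample variogram and an exponential model (the Matérn family at smoothness 1/2) standing in for the paper's Matérn fits; each lag is weighted by its number of point pairs.

```python
# WLS variogram fitting sketch; lag centers, empirical semivariances and
# pair counts are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, nugget, psill, rng):
    return nugget + psill * (1.0 - np.exp(-h / rng))

h = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
gamma_emp = np.array([0.12, 0.20, 0.31, 0.38, 0.42, 0.44])
npairs = np.array([800, 760, 700, 610, 450, 300])

# sigma ~ 1/sqrt(n_pairs) weights well-populated lags more heavily.
params, _ = curve_fit(exp_variogram, h, gamma_emp,
                      p0=[0.05, 0.4, 5.0], sigma=1.0 / np.sqrt(npairs))
nugget, psill, practical_range = params
```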

  9. FW: An R Package for Finlay–Wilkinson Regression that Incorporates Genomic/Pedigree Information and Covariance Structures Between Environments

    PubMed Central

    Lian, Lian; de los Campos, Gustavo

    2015-01-01

    The Finlay–Wilkinson regression (FW) is a popular method among plant breeders to describe genotype by environment interaction. The standard implementation is a two-step procedure that uses environment (sample) means as covariates in a within-line ordinary least squares (OLS) regression. This procedure can be suboptimal for at least four reasons: (1) in the first step environmental means are typically estimated without considering genetic-by-environment interactions, (2) in the second step uncertainty about the environmental means is ignored, (3) estimation is performed regarding lines and environment as fixed effects, and (4) the procedure does not incorporate genetic (either pedigree-derived or marker-derived) relationships. Su et al. proposed to address these problems using a Bayesian method that allows simultaneous estimation of environmental and genotype parameters, and allows incorporation of pedigree information. In this article we: (1) extend the model presented by Su et al. to allow integration of genomic information [e.g., single nucleotide polymorphism (SNP)] and covariance between environments, (2) present an R package (FW) that implements these methods, and (3) illustrate the use of the package using examples based on real data. The FW R package implements both the two-step OLS method and a full Bayesian approach for Finlay–Wilkinson regression with a very simple interface. Using a real wheat data set we demonstrate that the prediction accuracy of the Bayesian approach is consistently higher than the one achieved by the two-step OLS method. PMID:26715095
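
    The two-step OLS procedure that the package implements alongside the Bayesian model can be sketched on a synthetic line-by-environment table (the package itself is in R; the sketch below uses Python for illustration): environment means are computed first, then each line's phenotype is regressed on those means, the slope measuring that line's environmental sensitivity.

```python
# Two-step OLS Finlay-Wilkinson sketch on synthetic phenotypes.
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(size=(5, 8)) + np.linspace(-1.0, 1.0, 8)  # 5 lines x 8 environments

env_mean = y.mean(axis=0)                                # step 1: environment effects
X = np.column_stack([np.ones_like(env_mean), env_mean])
for i, yi in enumerate(y):                               # step 2: within-line OLS
    intercept, slope = np.linalg.lstsq(X, yi, rcond=None)[0]
    print(f"line {i}: sensitivity slope b = {slope:.2f}")
```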

  10. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems involving exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For some applications, directly involving the ratio between precipitation and some other quantity, lack of complete information can result in poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors given by the lack of data in specific time windows. Our analyses included both a classical chronological pairing approach between rainfall stations and a probability-based approach, which accounted for the probability of exceedance of rain depths measured at two or multiple stations. Our analyses proved that it is not clear a priori which method performs best. Rather, this selection should be made considering the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is to discuss the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which have an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions. The methods were…
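
    The tail-modelling step under discussion can be sketched by fitting a Generalized Pareto distribution to excesses over a high threshold; the rainfall series below is synthetic and the 95% threshold is an arbitrary choice.

```python
# Peaks-over-threshold fit of a heavy-tailed model to synthetic rainfall.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rain = stats.gamma.rvs(a=0.6, scale=8.0, size=5000, random_state=rng)

u = np.quantile(rain, 0.95)                    # tail threshold
excess = rain[rain > u] - u
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)
# A larger fitted shape means a heavier tail, i.e. more weight on the
# extreme events that drive recharge-discharge behaviour.
```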

  11. [The Hospital Information System of the Brazilian Unified National Health System: a performance evaluation for auditing maternal near miss].

    PubMed

    Nakamura-Pereira, Marcos; Mendes-Silva, Wallace; Dias, Marcos Augusto Bastos; Reichenheim, Michael E; Lobato, Gustavo

    2013-07-01

    This study aimed to investigate the performance of the Hospital Information System of the Brazilian Unified National Health System (SIH-SUS) in identifying cases of maternal near miss in a hospital in Rio de Janeiro, Brazil, in 2008. Cases were identified by reviewing medical records of pregnant and postpartum women admitted to the hospital. The search for potential near miss events in the SIH-SUS database relied on a list of procedures and codes from the International Classification of Diseases, 10th revision (ICD-10) that were consistent with this diagnosis. The patient chart review identified 27 cases, while 70 potential occurrences of near miss were detected in the SIH-SUS database. However, only 5 of 70 were "true cases" of near miss according to the chart review, which corresponds to a sensitivity of 18.5% (95%CI: 6.3-38.1), specificity of 94.3% (95%CI: 92.8-95.6), area under the ROC of 0.56 (95%CI: 0.48-0.63), and positive predictive value of 10.1% (95%CI: 4.7-20.3). These findings suggest that SIH-SUS does not appear appropriate for monitoring maternal near miss. PMID:23843001

  12. Missed bleeding events after ticagrelor in PEGASUS trial: Massive non-compliance, information censoring, or both?

    PubMed

    Serebruany, Victor; Tomek, Ales

    2016-07-15

    The PEGASUS trial reported a reduction of the composite primary endpoint after conventional 180 mg/daily ticagrelor (CT) and lower-dose 120 mg/daily ticagrelor (LT), at the expense of extra bleeding. Following approval of CT and LT for the long-term secondary prevention indication, a recent FDA review verified some bleeding outcomes in PEGASUS. We compared the risks after CT and LT against placebo across seven TIMI scale variables and nine bleeding categories considered serious adverse events (SAE), in light of the PEGASUS drug discontinuation rates (DDR). The DDR in all PEGASUS arms was high, reaching an astronomical 32% for CT. The distribution of some outcomes (TIMI major, trauma, epistaxis, iron deficiency, hemoptysis, and anemia) was reasonable. However, TIMI minor events were heavily underreported when compared to similar trials. Other bleedings (intracranial, spontaneous, hematuria, and gastrointestinal) appear sporadic, lacking the expected dose-dependent impact of CT and LT. A few SAE outcomes (fatal, ecchymosis, hematoma, bruises, bleeding) paradoxically showed more bleeding after LT than after CT. Many bleeding outcomes were probably missed in PEGASUS, potentially due to massive non-compliance, information censoring, or both. The FDA must improve the reporting of trial outcomes, especially in the sponsor-controlled environment, when DDR and incomplete follow-up rates are high. PMID:27128533

  13. Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed.

    PubMed

    Ehinger, Krista A; Allen, Kala; Wolfe, Jeremy M

    2016-05-01

    Previous work has shown that human observers discount or neglect cast shadows in natural and artificial scenes across a range of visual tasks. This is a reasonable strategy for a visual system designed to recognize objects under a range of lighting conditions, since cast shadows are not intrinsic properties of the scene-they look different (or disappear entirely) under different lighting conditions. However, cast shadows can convey useful information about the three-dimensional shapes of objects and their spatial relations. In this study, we investigated how well people detect changes to cast shadows, presented in natural scenes in a change blindness paradigm, and whether shadow changes that imply the movement or disappearance of an object are more easily noticed than shadow changes that imply a change in lighting. In Experiment 1, a critical object's shadow was removed, rotated to another direction, or shifted down to suggest that the object was floating. All of these shadow changes were noticed less often than changes to physical objects or surfaces in the scene, and there was no difference in the detection rates for the three types of changes. In Experiment 2, the shadows of visible or occluded objects were removed from the scenes. Although removing the cast shadow of an occluded object could be seen as an object deletion, both types of shadow changes were noticed less often than deletions of the visible, physical objects in the scene. These results show that even informative shadow changes are missed, suggesting that cast shadows are discounted fairly early in the processing of natural scenes. PMID:26846753

  14. Estimating model and observation error covariance information for land data assimilation systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In order to operate efficiently, data assimilation systems require accurate assumptions concerning the statistical magnitude and cross-correlation structure of error in model forecasts and assimilated observations. Such information is seldom available for the operational implementation of land data ...

  15. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  16. Predicting New Hampshire Indoor Radon Concentrations from geologic information and other covariates

    SciTech Connect

    Apte, M.G.; Price, P.N.; Nero, A.V.; Revzan, K.L.

    1998-05-01

    Generalized geologic province information and data on house construction were used to predict indoor radon concentrations in New Hampshire (NH). A mixed-effects regression model was used to predict the geometric mean (GM) short-term radon concentrations in 259 NH towns. Bayesian methods were used to avoid over-fitting and to minimize the effects of small sample variation within towns. Data from a random survey of short-term radon measurements, individual residence building characteristics, along with geologic unit information, and average surface radium concentration by town, were variables used in the model. Predicted town GM short-term indoor radon concentrations for detached houses with usable basements range from 34 Bq/m³ (1 pCi/l) to 558 Bq/m³ (15 pCi/l), with uncertainties of about 30%. A geologic province consisting of glacial deposits and marine sediments, was associated with significantly elevated radon levels, after adjustment for radium concentration, and building type. Validation and interpretation of results are discussed.
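
    The Bayesian stabilization of small-town estimates can be sketched as simple partial pooling, assuming known within- and between-town variances (the study's mixed-effects model additionally adjusts for building characteristics and geologic province):

```python
# Partial-pooling sketch: town means of log radon shrink toward the
# state mean in proportion to how little data each town has.
import numpy as np

def shrink_town_means(log_radon, town_ids, tau2=0.30):
    grand = log_radon.mean()
    sigma2 = log_radon.var()                  # crude pooled within-town variance
    post = {}
    for t in np.unique(town_ids):
        x = log_radon[town_ids == t]
        w = tau2 / (tau2 + sigma2 / len(x))   # more data -> less shrinkage
        post[t] = w * x.mean() + (1 - w) * grand
    return post                               # shrunken town means (log scale)

rng = np.random.default_rng(6)
towns = rng.integers(0, 50, size=400)
logs = rng.normal(np.log(100.0), 0.6, size=400)
post = shrink_town_means(logs, towns)         # exp() recovers geometric means
```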

  17. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. PMID:25707010

  18. An Upper Bound on High Speed Satellite Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects was available, either some default shape had to be used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.

  19. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  20. Predicting top-L missing links with node and link clustering information in large-scale networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhihao; Lin, Youfang; Wan, Huaiyu; Jamil, Waleed

    2016-08-01

    Networks are mathematical structures that are universally used to describe a large variety of complex systems, such as social, biological, and technological systems. The prediction of missing links in incomplete complex networks aims to estimate the likelihood of the existence of a link between a pair of nodes. Various topological features of networks have been applied to develop link prediction methods. However, the exploration of features of links is still limited. In this paper, we demonstrate the power of node and link clustering information in predicting top-L missing links. In the existing literature, link prediction algorithms have only been tested on small-scale and middle-scale networks. The network scale factor has not attracted the same level of attention. In our experiments, we test the proposed method on three groups of networks. For small-scale networks, since the structures are not very complex, advanced methods cannot perform significantly better than classical methods. For middle-scale networks, the proposed index, combining both node and link clustering information, starts to demonstrate its advantages. In many networks, combining both node and link clustering information can improve the link prediction accuracy a great deal. Large-scale networks with more than 100 000 links have rarely been tested previously. Our experiments on three large-scale networks show that local clustering information based methods outperform other methods, and link clustering information can further improve the accuracy of node clustering information based methods, in particular for networks with a broad distribution of the link clustering coefficient.
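
    A sketch in the spirit of the proposed index (not the authors' exact formula): candidate links are scored by their common neighbors, with each neighbor weighted by its clustering coefficient so that densely interconnected neighborhoods count for more.

```python
# Clustering-weighted common-neighbor link prediction sketch.
import networkx as nx

def clustering_score(G, u, v, clust):
    return sum(1.0 + clust[w] for w in nx.common_neighbors(G, u, v))

G = nx.karate_club_graph()
clust = nx.clustering(G)
candidates = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
top_L = sorted(candidates,
               key=lambda e: clustering_score(G, e[0], e[1], clust),
               reverse=True)[:10]             # the predicted top-L missing links
```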

  1. Effects of missing low-frequency information on ptychographic and plane-wave coherent diffraction imaging.

    PubMed

    Liu, Haigang; Xu, Zijian; Zhang, Xiangzhi; Wu, Yanqing; Guo, Zhi; Tai, Renzhong

    2013-04-10

    In coherent diffractive imaging (CDI) experiments, a beamstop (BS) is commonly used to extend the exposure time of the charge-coupled detector and obtain high-angle diffraction signals. However, the negative effect of a large BS is also evident, causing low-frequency signals to be missed and making CDI reconstruction unstable or causing it to fail. We performed a systematic simulation investigation of the effects of BSs on the quality of reconstructed images from both plane-wave and ptychographic CDI (PCDI). For the same imaging quality, we found that ptychography can tolerate BSs that are at least 20 times larger than those for plane-wave CDI. For PCDI, a larger overlap ratio and a smaller illumination spot can significantly increase the imaging robustness to the negative influence of BSs. Our results provide guidelines for the usage of BSs in CDI, especially in PCDI experiments, which can help to further improve the spatial resolution of PCDI. PMID:23670772

  2. A fully covariant information-theoretic ultraviolet cutoff for scalar fields in expanding Friedmann Robertson Walker spacetimes

    NASA Astrophysics Data System (ADS)

    Kempf, A.; Chatwin-Davies, A.; Martin, R. T. W.

    2013-02-01

    While a natural ultraviolet cutoff, presumably at the Planck length, is widely assumed to exist in nature, it is nontrivial to implement a minimum length scale covariantly. This is because the presence of a fixed minimum length needs to be reconciled with the ability of Lorentz transformations to contract lengths. In this paper, we implement a fully covariant Planck scale cutoff by cutting off the spectrum of the d'Alembertian. In this scenario, consistent with Lorentz contractions, wavelengths that are arbitrarily smaller than the Planck length continue to exist. However, the dynamics of modes of wavelengths that are significantly smaller than the Planck length possess a very small bandwidth. This has the effect of freezing the dynamics of such modes. While both wavelengths and bandwidths are frame dependent, Lorentz contraction and time dilation conspire to make the freezing of modes of trans-Planckian wavelengths covariant. In particular, we show that this ultraviolet cutoff can be implemented covariantly also in curved spacetimes. We focus on Friedmann Robertson Walker spacetimes and their much-discussed trans-Planckian question: The physical wavelength of each comoving mode was smaller than the Planck scale at sufficiently early times. What was the mode's dynamics then? Here, we show that in the presence of the covariant UV cutoff, the dynamical bandwidth of a comoving mode is essentially zero up until its physical wavelength starts exceeding the Planck length. In particular, we show that under general assumptions, the number of dynamical degrees of freedom of each comoving mode all the way up to some arbitrary finite time is actually finite. Our results also open the way to calculating the impact of this natural UV cutoff on inflationary predictions for the cosmic microwave background.

  3. Working with Missing Values

    ERIC Educational Resources Information Center

    Acock, Alan C.

    2005-01-01

    Less than optimum strategies for missing values can produce biased estimates, distorted statistical power, and invalid conclusions. After reviewing traditional approaches (listwise, pairwise, and mean substitution), selected alternatives are covered including single imputation, multiple imputation, and full information maximum likelihood…

  4. Missing Genetic Information in Case-Control Family Data with General Semi-Parametric Shared Frailty Model

    PubMed Central

    Graber-Naidich, Anna; Malone, Kathleen E.; Hsu, Li

    2011-01-01

    Case-control family data are now widely used to examine the role of gene-environment interactions in the etiology of complex diseases. In these types of studies, exposure levels are obtained retrospectively and, frequently, information on most risk factors of interest is available on the probands but not on their relatives. In this work we consider correlated failure time data arising from population-based case-control family studies with missing genotypes of relatives. We present a new method for estimating the age-dependent marginalized hazard function. The proposed technique has two major advantages: (1) it is based on the pseudo full likelihood function rather than a pseudo composite likelihood function, which usually suffers from substantial efficiency loss; (2) the cumulative baseline hazard function is estimated using a two-stage estimator instead of an iterative process. We assess the performance of the proposed methodology with simulation studies, and illustrate its utility on a real data example. PMID:21153764

  5. Lack of patient-reported outcomes assessment in phase III breast cancer studies: a missed opportunity for informed decision making.

    PubMed

    Blinder, Victoria S

    2014-01-01

    A phase III study comparing capecitabine monotherapy to combination treatment with capecitabine and sunitinib in patients with metastatic breast cancer failed to demonstrate a benefit in terms of progression-free or overall survival. Both regimens were reasonably well tolerated with some differences noted in the specific toxicity profiles. However, the study failed to incorporate an assessment of patient-reported outcomes (PROs) such as self-reported pain, quality of life, or employment outcomes. This is a missed opportunity. If more clinical trials included such measures, they would provide valuable information to patients and clinicians choosing from a wide array of available and otherwise similarly effective systemic therapies for metastatic breast cancer. PMID:25841482

  6. Slide Presentations as Speech Suppressors: When and Why Learners Miss Oral Information

    ERIC Educational Resources Information Center

    Wecker, Christof

    2012-01-01

    The objective of this study was to test whether information presented on slides during presentations is retained at the expense of information presented only orally, and to investigate part of the conditions under which this effect occurs, and how it can be avoided. Such an effect could be expected and explained either as a kind of redundancy…

  7. News and Views: Missing the boat; Wanted: geophysics for teachers; Government seeks geo-engineering information

    NASA Astrophysics Data System (ADS)

    2008-08-01

    The Institute of Physics is seeking short summaries of geophysics topics to support school teachers, in a move aimed at boosting the teaching and awareness of geophysics in schools. The UK government Department of Innovation, Universities, Science and Skills is seeking information on geo-engineering as a case study within its major enquiry into engineering. The field described by the DUISS is broad and covers areas in which geophysicists may be working and in a position to supply useful information.

  8. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...

  9. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...

  10. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...

  11. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...

  12. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. PMID:27198876
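
    The projection step can be sketched as stratified resampling: smoking habits observed in the case-control subset are drawn at random onto cohort members within matching strata. Here only a birth-decade stratum is used and all data are synthetic; the actual study matches on richer information and repeats the projection many times.

```python
# Monte Carlo projection of smoking habits onto a cohort, by stratum.
import numpy as np

rng = np.random.default_rng(7)

def project_smoking(cohort_strata, cc_strata, cc_smoking):
    imputed = np.empty(len(cohort_strata), dtype=cc_smoking.dtype)
    for s in np.unique(cohort_strata):
        donors = cc_smoking[cc_strata == s]   # observed habits in this stratum
        idx = cohort_strata == s
        imputed[idx] = rng.choice(donors, size=idx.sum(), replace=True)
    return imputed

cohort = rng.integers(1930, 1950, size=1000) // 10    # birth-decade strata
cc = rng.integers(1930, 1950, size=120) // 10
habits = rng.choice(np.array(["never", "former", "current"]), size=120)
projected = project_smoking(cohort, cc, habits)
# Repeating this (e.g., 200 times) and refitting the risk model each time
# yields the parameter distributions described above.
```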

  13. Missing Mechanism Information

    ERIC Educational Resources Information Center

    Tryon, Warren W.

    2009-01-01

    The first recommendation Kazdin made for advancing the psychotherapy research knowledge base, improving patient care, and reducing the gulf between research and practice was to study the mechanisms of therapeutic change. He noted, "The study of mechanisms of change has received the least attention even though understanding mechanisms may well be…

  14. [Benefit of the Internet as an information tool in case of oral contraceptive miss. Survey of 1964 women visiting the website www.g-oubliemapilule.com].

    PubMed

    Lemaître, Sophie; Collier, Francis; Hulin, Vincent

    2009-12-20

    The aim of this study was to evaluate the utility of the website http://www.g-oubliemapilule.com/, which presents the recommendations of the French Haute Autorité de santé on what to do after a missed oral contraceptive pill. This prospective epidemiologic study was conducted using an online questionnaire available at http://www.g-oubliemapilule.com/. The results highlight the poor quality of the information provided by physicians: 40% of physicians do not explain what to do after a missed pill during the first medical visit for an oral contraceptive prescription, and physicians do not ask about missed pills during follow-up in three quarters of cases. Furthermore, even when women find and fully understand the advice on what to do after a missed pill, a majority do not follow it: 60% of the women who should use a condom during the 7 days following the missed pill do not use one, and 86% of the women who should use the emergency contraceptive pill do not use it. The reason most often invoked (in a third of cases) to justify this behaviour is the assumption that the risk of pregnancy is too low. These results help to explain the gap between the theoretical efficacy (Pearl index: 0.3%) and the real-world efficacy (8%) of the oral contraceptive pill. Finally, the website http://www.g-oubliemapilule.com/ is a useful and well-understood additional tool, but it cannot replace medical follow-up. PMID:20085215

  15. A class of covariate-dependent spatiotemporal covariance functions

    PubMed Central

    Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.

    2014-01-01

    In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
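
    One minimal way to let a covariance function depend on local covariate information, offered purely as a conceptual sketch (the paper's class is considerably richer): scale a stationary correlation by standard deviations driven by a site covariate, a product form that stays positive definite.

```python
# Covariate-dependent covariance sketch: sd(x(s)) * sd(x(s')) * rho(|s - s'|).
import numpy as np

def cov_fn(s1, s2, x1, x2, rho_range=50.0):
    sd = lambda x: np.exp(0.3 * x)            # scale driven by, e.g., altitude
    d = np.linalg.norm(np.asarray(s1, float) - np.asarray(s2, float))
    return sd(x1) * sd(x2) * np.exp(-d / rho_range)

print(cov_fn([0.0, 0.0], [10.0, 5.0], x1=0.5, x2=-1.2))  # standardized covariates
```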

  16. OPAC Missing Record Retrieval.

    ERIC Educational Resources Information Center

    Johnson, Karl E.

    1996-01-01

    When the Higher Education Library Information Network of Rhode Island transferred members' bibliographic data into a shared online public access catalog (OPAC), 10% of the University of Rhode Island's monograph records were missing. This article describes the consortium's attempts to retrieve records from the database and the effectiveness of…

  17. 'Miss Frances', 'Miss Gail' and 'Miss Sandra' Crapemyrtles

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Agricultural Research Service, United States Department of Agriculture, announces the release to nurserymen of three new crapemyrtle cultivars named 'Miss Gail', 'Miss Frances', and 'Miss Sandra'. ‘Miss Gail’ resulted from a cross-pollination between ‘Catawba’ as the female parent and ‘Arapaho’ ...

  18. Help for Finding Missing Children.

    ERIC Educational Resources Information Center

    McCormick, Kathleen

    1984-01-01

    Efforts to locate missing children have expanded from a federal law allowing for entry of information into an F.B.I. computer system to companion bills before Congress for establishing a national missing child clearinghouse and a Justice Department center to help in conducting searches. Private organizations are also involved. (KS)

  19. Imputation by the mean score should be avoided when validating a Patient Reported Outcomes questionnaire by a Rasch model in presence of informative missing data

    PubMed Central

    2011-01-01

    Background: Nowadays, more and more clinical scales consisting of responses given by patients to a set of items (Patient Reported Outcomes - PRO) are validated with models based on Item Response Theory, and more specifically with a Rasch model. Missing data are frequent in validation samples. The aim of this paper is to compare sixteen methods for handling missing data (mainly based on simple imputation) in the context of psychometric validation of PRO by a Rasch model. The main indexes used for validation by a Rasch model are compared. Methods: A simulation study was performed covering several scenarios, notably whether the missing values are informative or not and the rate of missing data. Results: Several imputation methods bias the psychometric indexes (generally, imputation artificially improves the apparent psychometric qualities of the scale). In particular, this is the case for the method based on the Personal Mean Score (PMS), which is the most commonly used imputation method in practice. Conclusions: Several imputation methods should be avoided, in particular PMS imputation. From a general point of view, it is important to use an imputation method that considers both the ability of the patient (measured, for example, by his or her score) and the difficulty of the item (measured, for example, by its rate of favourable responses). Another recommendation is to always include a random component in the imputation method, because such a process reduces bias. Finally, analysis without imputation of the missing data (available-case analysis) is an interesting alternative to simple imputation in this context. PMID:21756330
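
    The contrast can be sketched as follows; the response matrix is hypothetical and the "recommended" variant is only one of many schemes that use both person and item information together with a random draw.

```python
# PMS imputation (to be avoided) versus a person-and-item scheme with a
# random component, on a synthetic persons x items binary matrix.
import numpy as np

rng = np.random.default_rng(8)
resp = rng.binomial(1, 0.6, size=(50, 10)).astype(float)
resp[rng.random(resp.shape) < 0.15] = np.nan

def pms_impute(resp):
    out = resp.copy()
    i, j = np.where(np.isnan(resp))
    out[i, j] = np.nanmean(resp, axis=1)[i]   # person mean only, no randomness
    return out

def person_item_random_impute(resp):
    out = resp.copy()
    i, j = np.where(np.isnan(resp))
    p = (np.nanmean(resp, axis=1)[i] + np.nanmean(resp, axis=0)[j]) / 2
    out[i, j] = rng.binomial(1, np.clip(p, 0.0, 1.0))  # random draw reduces bias
    return out
```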

  20. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
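
    In software, the slotting scheme such a processor implements for Poisson-sampled data looks roughly as follows; slot width and sizes are arbitrary here. Lagged velocity products accumulate into fixed-width lag slots, and the interarrival histogram normalizes each slot.

```python
# Slotted autocovariance sketch for randomly sampled velocity data;
# `t` must be sorted arrival times, `u` the sampled velocities.
import numpy as np

def slotted_autocovariance(t, u, n_slots=512, dt=1e-4):
    cov = np.zeros(n_slots)
    counts = np.zeros(n_slots, dtype=int)
    du = u - u.mean()
    for i in range(len(t)):
        for j in range(i, len(t)):
            k = int((t[j] - t[i]) / dt)       # lag slot index
            if k >= n_slots:
                break                         # sorted times: later lags only grow
            cov[k] += du[i] * du[j]
            counts[k] += 1
    return cov / np.maximum(counts, 1), counts

rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0.0, 1.0, size=400))  # Poisson-like sampling in time
u = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.normal(size=400)
acov, hist = slotted_autocovariance(t, u, n_slots=64, dt=1e-3)
```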

  1. A discrete time event-history approach to informative drop-out in mixed latent Markov models with covariates.

    PubMed

    Bartolucci, Francesco; Farcomeni, Alessio

    2015-03-01

    Mixed latent Markov (MLM) models represent an important tool of analysis of longitudinal data when response variables are affected by time-fixed and time-varying unobserved heterogeneity, in which the latter is accounted for by a hidden Markov chain. In order to avoid bias when using a model of this type in the presence of informative drop-out, we propose an event-history (EH) extension of the latent Markov approach that may be used with multivariate longitudinal data, in which one or more outcomes of a different nature are observed at each time occasion. The EH component of the resulting model is referred to the interval-censored drop-out, and bias in MLM modeling is avoided by correlated random effects, included in the different model components, which follow common latent distributions. In order to perform maximum likelihood estimation of the proposed model by the expectation-maximization algorithm, we extend the usual forward-backward recursions of Baum and Welch. The algorithm has the same complexity as the one adopted in cases of non-informative drop-out. We illustrate the proposed approach through simulations and an application based on data coming from a medical study about primary biliary cirrhosis in which there are two outcomes of interest, one continuous and the other binary. PMID:25227970
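
    For orientation, the forward recursion that the extended Baum-Welch recursions generalize can be sketched in a few lines; `emis[t, k]`, the likelihood of the time-t outcomes given latent state k, is hypothetical here.

```python
# Scaled forward recursion for a latent Markov chain log-likelihood.
import numpy as np

def forward_loglik(init, trans, emis):
    alpha = init * emis[0]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()                      # rescale to avoid underflow
    for e in emis[1:]:
        alpha = (alpha @ trans) * e
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

init = np.array([0.6, 0.4])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emis = np.array([[0.7, 0.1], [0.6, 0.2], [0.1, 0.8]])  # T x K outcome likelihoods
print(forward_loglik(init, trans, emis))
```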

  2. Galilean covariant harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Horzela, Andrzej; Kapuscik, Edward

    1993-01-01

    A Galilean covariant approach to the classical mechanics of a single particle is described. Within the proposed formalism, all non-covariant force laws are rejected; acting forces are instead defined covariantly by differential equations. Such an approach leads beyond standard classical mechanics and gives an example of non-Newtonian mechanics. It is shown that the exactly solvable linear system of differential equations defining the forces contains the Galilean covariant description of the harmonic oscillator as a particular case. Additionally, it is demonstrated that in Galilean covariant classical mechanics the validity of Newton's second law of dynamics implies Hooke's law and vice versa. It is shown that the kinetic and total energies transform differently with respect to Galilean transformations.

  3. Comparison between estimation of breeding values and fixed effects using Bayesian and empirical BLUP estimation under selection on parents and missing pedigree information

    PubMed Central

    Schenkel, Flávio S; Schaeffer, Lawrence R; Boettcher, Paul J

    2002-01-01

    Bayesian (via Gibbs sampling) and empirical BLUP (EBLUP) estimation of fixed effects and breeding values were compared by simulation. Combinations of two simulation models (with or without effect of contemporary group (CG)), three selection schemes (random, phenotypic and BLUP selection), two levels of heritability (0.20 and 0.50) and two levels of pedigree information (0% and 15% randomly missing) were considered. Populations consisted of 450 animals spread over six discrete generations. An infinitesimal additive genetic animal model was assumed while simulating data. EBLUP and Bayesian estimates of CG effects and breeding values were, in all situations, essentially the same with respect to Spearman's rank correlation between true and estimated values. Bias and mean square error (MSE) of EBLUP and Bayesian estimates of CG effects and breeding values showed the same pattern over the range of simulated scenarios. Methods were not biased by phenotypic and BLUP selection when pedigree information was complete, albeit MSE of estimated breeding values increased for situations where CG effects were present. Estimation of breeding values by Bayesian and EBLUP was similarly affected by joint effect of phenotypic or BLUP selection and randomly missing pedigree information. For both methods, bias and MSE of estimated breeding values and CG effects substantially increased across generations. PMID:11929624

  4. Covariant mutually unbiased bases

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro

    2016-06-01

    The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which are the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2^r covariance can still be achieved by restricting to proper subgroups of the symplectic group, that constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.

  5. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects was available, either some default shape had to be used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful Pc upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations), or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum Pc. Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object, and the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum Pc. But what if error covariance information for one of the two objects is not available? In that case the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The various usual methods of finding a maximum Pc do…
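
    The quantity being bounded can be sketched directly: in the collision plane, Pc is the integral of the combined-covariance Gaussian over the hard-body circle around the predicted miss position. The numbers below are illustrative; an upper-bound search would then vary the assumed components of the unknown covariance up to its critical value.

```python
# Collision-plane Pc sketch: bivariate Gaussian integrated over the
# hard-body circle. Miss vector, covariance and radius are illustrative.
import numpy as np
from scipy import integrate, stats

def collision_probability(miss, cov, hbr):
    pdf = stats.multivariate_normal(mean=miss, cov=cov).pdf
    val, _ = integrate.dblquad(
        lambda y, x: pdf([x, y]),
        -hbr, hbr,
        lambda x: -np.sqrt(hbr**2 - x**2),
        lambda x: np.sqrt(hbr**2 - x**2))
    return val

miss = [120.0, 40.0]                          # metres, in the collision plane
cov = [[200.0**2, 0.0], [0.0, 80.0**2]]       # combined covariance (assumed)
print(collision_probability(miss, cov, hbr=20.0))
```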

  6. A hierarchical nest survival model integrating incomplete temporally varying covariates.

    PubMed

    Converse, Sarah J; Royle, J Andrew; Adler, Peter H; Urbanek, Richard P; Barzen, Jeb A

    2013-11-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the…
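
    A stripped-down sketch of the daily nest survival building block, assuming a logit-linear effect of the covariate and a plain normal draw standing in for the parallel covariate model (the paper embeds both pieces in a joint hierarchical Bayesian analysis):

```python
# Daily nest survival log-likelihood with a gap-filled covariate.
import numpy as np

rng = np.random.default_rng(10)

def nest_loglik(beta, survived_days, failed, x, x_mu=0.0, x_sd=1.0):
    # Fill covariate gaps with draws from a simple stand-in model.
    x = np.where(np.isnan(x), rng.normal(x_mu, x_sd, size=len(x)), x)
    s = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))   # daily survival prob.
    ll = np.sum(np.log(s[:survived_days]))               # days survived
    if failed:
        ll += np.log(1.0 - s[survived_days])             # the failure day
    return ll

x = np.array([0.2, np.nan, 0.5, 0.9, np.nan])            # daily insect index
print(nest_loglik(np.array([3.0, -1.0]), survived_days=4, failed=True, x=x))
```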

  9. The covariate-adjusted frequency plot.

    PubMed

    Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K

    2016-04-01

    Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included in the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter. PMID:23376964
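
    The construction is simple to implement: fit any count model with covariates, compute each observation's fitted probability for every count value, and sum these over the data set to obtain model-based frequencies comparable to the observed ones. The sketch below does this for a Poisson regression on simulated data; the data and variable names are illustrative, not from the article.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                       # a continuous covariate
mu = np.exp(0.5 + 0.8 * z)
y = rng.poisson(mu)

X = sm.add_constant(z)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.fittedvalues                    # fitted mean per observation

ks = np.arange(y.max() + 1)
# covariate-adjusted frequency for count k: sum_i P(Y_i = k | mu_hat_i)
adjusted = np.array([poisson.pmf(k, mu_hat).sum() for k in ks])
observed = np.bincount(y, minlength=len(ks))
for k in ks:
    print(f"count {k:2d}: observed {observed[k]:4d}  adjusted {adjusted[k]:7.1f}")
```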

  10. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
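
    Simulation-extrapolation (SIMEX) is easy to sketch: repeatedly add extra measurement error of known variance lambda * sigma_u^2 to the mismeasured covariate, record how the naive estimate changes with lambda, and extrapolate the trend back to lambda = -1 (no measurement error). Below is a generic numpy SIMEX demonstration for a simple regression slope, with invented data and a quadratic extrapolant; it is not the estimator studied in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_u = 2000, 0.8
x = rng.normal(size=n)                    # true covariate
w = x + rng.normal(0, sigma_u, n)         # mismeasured covariate
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)   # true slope = 2

def slope(xx):
    return np.polyfit(xx, y, 1)[0]

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lams:
    # average over B remeasurements with added noise of variance lam*sigma_u^2
    b = np.mean([slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n))
                 for _ in range(50)])
    est.append(b)

# quadratic extrapolation of slope(lambda) back to lambda = -1
coef = np.polyfit(lams, est, 2)
print(f"naive slope {est[0]:.3f}, SIMEX slope {np.polyval(coef, -1.0):.3f}")
```

    The naive slope is attenuated toward zero by the measurement error; the extrapolated value approximately recovers the true slope of 2.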

  11. Addressing spectroscopic quality of covariant density functional theory

    NASA Astrophysics Data System (ADS)

    Afanasjev, A. V.

    2015-03-01

    The spectroscopic quality of covariant density functional theory has been assessed by analyzing the accuracy and theoretical uncertainties in the description of spectroscopic observables. Such analysis is first presented for the energies of the single-particle states in spherical and deformed nuclei. It is also shown that the inclusion of particle-vibration coupling improves the description of the energies of predominantly single-particle states in medium- and heavy-mass spherical nuclei. However, the remaining differences between theory and experiment clearly indicate missing physics and missing terms in covariant energy density functionals. The uncertainties in the predictions of the position of the two-neutron drip line depend sensitively on the uncertainties in the predictions of the energies of the single-particle states. On the other hand, many spectroscopic observables in well-deformed nuclei at the ground state and at finite spin only weakly depend on the choice of covariant energy density functional.

  12. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
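
    The basic (simple) covariance map is just the shot-resolved correction of a product of two spectra, cov(x, y) = <S(x)S(y)> - <S(x)><S(y)> taken over many laser shots. A minimal numpy sketch, with synthetic shot data standing in for time-of-flight spectra:

```python
import numpy as np

rng = np.random.default_rng(3)
shots, bins = 5000, 128

# Synthetic data: two TOF peaks (bins 30 and 90) whose intensities are
# correlated shot-to-shot, e.g. fragments from the same Coulomb explosion.
common = rng.poisson(5.0, shots)
spectra = rng.poisson(1.0, (shots, bins)).astype(float)
spectra[:, 30] += common
spectra[:, 90] += common

# Simple covariance map: <S(x)S(y)> - <S(x)><S(y)> over shots.
mean = spectra.mean(axis=0)
cov_map = spectra.T @ spectra / shots - np.outer(mean, mean)
print("correlated pair (30, 90):  ", round(cov_map[30, 90], 2))
print("uncorrelated pair (30, 60):", round(cov_map[30, 60], 2))
```

    Partial covariance additionally subtracts the part of each signal correlated with a monitored fluctuating parameter, such as pulse energy, which is why it suits free-electron-laser experiments.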

  13. Missing persons-missing data: the need to collect antemortem dental records of missing persons.

    PubMed

    Blau, Soren; Hill, Anthony; Briggs, Christopher A; Cordner, Stephen M

    2006-03-01

    incorporated into the National Coroners Information System (NCIS) managed, on behalf of Australia's Coroners, by the Victorian Institute of Forensic Medicine. The existence of the NCIS would ensure operational collaboration in the implementation of the system and cost savings to Australian policing agencies involved in missing person inquiries. The implementation of such a database would facilitate timely and efficient reconciliation of clinical and postmortem dental records and have subsequent social and financial benefits. PMID:16566776

  14. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26888093
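
    The idea can be sketched compactly. For brevity the sketch below substitutes a lognormal regression of variance on log sample size for the paper's gamma meta-regression, and the imputation is "improper" (the regression parameters are not redrawn in each imputation), which understates uncertainty; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 30                                       # number of studies
n_arm = rng.integers(20, 200, k)             # per-arm sample sizes
logn = np.log(n_arm)
var_i = np.exp(1.0 - 0.3 * logn + rng.normal(0, 0.2, k))  # reported variances
reported = rng.random(k) < 0.6               # 40% of variances are missing
diff = rng.normal(0.3, np.sqrt(2 * var_i / n_arm))        # mean differences

# Regress log variance on a study-level covariate (log sample size),
# using only the studies that reported variances.
b, a = np.polyfit(logn[reported], np.log(var_i[reported]), 1)
resid_sd = np.std(np.log(var_i[reported]) - (a + b * logn[reported]), ddof=2)

M, est, within = 20, [], []
for _ in range(M):
    v = var_i.copy()
    miss = ~reported
    # draw one imputation: regression prediction plus residual noise
    v[miss] = np.exp(a + b * logn[miss] + rng.normal(0, resid_sd, miss.sum()))
    w = n_arm / (2 * v)                      # inverse-variance weights
    est.append(np.sum(w * diff) / np.sum(w))
    within.append(1.0 / np.sum(w))

total_var = np.mean(within) + (1 + 1 / M) * np.var(est, ddof=1)  # Rubin's rules
print(f"MI estimate {np.mean(est):.3f} +/- {np.sqrt(total_var):.3f}")
```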

  15. Misunderstanding analysis of covariance.

    PubMed

    Miller, G A; Chapman, J P

    2001-02-01

    Despite numerous technical treatments in many venues, analysis of covariance (ANCOVA) remains a widely misused approach to dealing with substantive group differences on potential covariates, particularly in psychopathology research. Published articles reach unfounded conclusions, and some statistics texts neglect the issue. The problem with ANCOVA in such cases is reviewed. In many cases, there is no means of achieving the superficially appealing goal of "correcting" or "controlling for" real group differences on a potential covariate. In hopes of curtailing misuse of ANCOVA and promoting appropriate use, a nontechnical discussion is provided, emphasizing a substantive confound rarely articulated in textbooks and other general presentations, to complement the mathematical critiques already available. Some alternatives are discussed for contexts in which ANCOVA is inappropriate or questionable. PMID:11261398
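
    A small simulation makes the core problem concrete: when groups genuinely differ on a covariate, "controlling for" it can manufacture group effects, particularly when the covariate is measured with error. In the invented example below the outcome depends only on the confounder, yet ANCOVA on its unreliable measurement reports a strong "adjusted" group effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
group = rng.integers(0, 2, n)                 # two pre-existing groups
x = rng.normal(group * 1.0, 1.0)              # groups really differ on X
y = 2.0 * x + rng.normal(0, 1, n)             # Y depends on X only, not group
w = x + rng.normal(0, 1.0, n)                 # covariate measured with error

# ANCOVA-style adjustment: regress Y on group plus the observed covariate W.
X = sm.add_constant(np.column_stack([group, w]))
fit = sm.OLS(y, X).fit()
print(f"spurious adjusted group effect: {fit.params[1]:.2f} "
      f"(p = {fit.pvalues[1]:.1e}); true group effect is 0")
```

    With a covariate reliability of 0.5, the under-adjustment leaves roughly half of the confounded difference attributed to group, which no amount of data can fix.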

  16. Reconciling Covariances with Reliable Orbital Uncertainty

    NASA Astrophysics Data System (ADS)

    Folcik, Z.; Lue, A.; Vatsky, J.

    2011-09-01

    There is a common suspicion that formal covariances do not represent a realistic measure of orbital uncertainties. By devising metrics for measuring the representations of orbit error, we assess under what circumstances such lore is justified as well as the root cause of the discrepancy between the mathematics of orbital uncertainty and its practical implementation. We offer a scheme by which formal covariances may be adapted to be an accurate measure of orbital uncertainties and show how that adaptation performs against both simulated and real space-object data. We also apply these covariance adaptation methods to the process of observation association using many simulated and real data test cases. We demonstrate that covariance-informed observation association can be reliable, even in the case when only two tracks are available. Satellite breakup and collision event catalog maintenance could benefit from the automation made possible with these association methods.

  17. The covariant chiral ring

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Troost, Jan

    2016-03-01

    We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N = (4, 4) supersymmetry in two dimensions. For seed target spaces K3 and T^4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.

  18. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  19. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  20. 78 FR 55123 - Submission for Review: We Need Information About Your Missing Payment, RI 38-31

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... may also be reported to OPM by a telephone call. Analysis Agency: Retirement Operations, Retirement... information on those who are to respond, including through the use of appropriate automated,...

  1. Using Analysis of Covariance (ANCOVA) with Fallible Covariates

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Aguinis, Herman

    2011-01-01

    Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…

  2. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits RAILROAD RETIREMENT BOARD INTERNAL ADMINISTRATION, POLICY AND PROCEDURES USE OF PENALTY MAIL TO ASSIST IN... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about...

  3. What Is Missing in Counseling Research? Reporting Missing Data

    ERIC Educational Resources Information Center

    Sterner, William R.

    2011-01-01

    Missing data have long been problematic in quantitative research. Despite the statistical and methodological advances made over the past 3 decades, counseling researchers fail to provide adequate information on this phenomenon. Interpreting the complex statistical procedures and esoteric language seems to be a contributing factor. An overview of…

  4. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
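
    The estimator underlying all-mode averaging fits in a few lines: average a cheap, covariant approximation over many source positions and correct its bias with a few exact measurements, O_imp = <O_exact - O_approx>_few + <O_approx>_many. The numpy toy below (not lattice QCD code; the observables are invented scalars) shows the structure and why the result stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(6)

def approx(e):
    """Cheap, highly correlated but biased approximation of the exact value."""
    return 0.9 * e + 0.05

n_exact, n_cheap = 10, 400
e_all = 1.0 + rng.normal(0, 1.0, n_cheap)   # exact observable, E[e] = 1.0
a_all = approx(e_all)                        # cheap evaluation on every sample

# Bias correction from the few exact solves, plus the mean of many cheap ones.
correction = np.mean(e_all[:n_exact] - a_all[:n_exact])
ama = correction + a_all.mean()
print(f"AMA estimate {ama:.3f} vs naive exact-only {e_all[:n_exact].mean():.3f}")
```

    Because E[e - a] + E[a] = E[e], the estimator is unbiased for any approximation; the variance gain comes from the strong correlation between the cheap and exact evaluations.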

  5. Covariant deformed oscillator algebras

    NASA Technical Reports Server (NTRS)

    Quesne, Christiane

    1995-01-01

    The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU_q(n) x SU_q(m)-covariant q-bosonic algebra is discussed in some detail.
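
    For reference, the simplest concrete example of such a structure (quoted from the standard q-oscillator literature, not from this paper) is the single-mode Biedenharn-Macfarlane q-boson algebra:

```latex
a\,a^{\dagger} - q\,a^{\dagger}a = q^{-N}, \qquad
[N, a^{\dagger}] = a^{\dagger}, \qquad [N, a] = -a,
% number states: a^{\dagger}a\,|n\rangle = [n]_q\,|n\rangle,
% with the q-bracket [n]_q = (q^{n} - q^{-n})/(q - q^{-1}).
```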

  6. Partial covariate adjusted regression

    PubMed Central

    Şentürk, Damla; Nguyen, Danh V.

    2008-01-01

    Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed (Şentürk and Müller, 2005). The available data has been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set. PMID:20126296

  7. The Bayesian Covariance Lasso

    PubMed Central

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G.

    2012-01-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data. PMID:24551316
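
    BCLASSO itself is Bayesian, but the frequentist penalized-likelihood special case it generalizes, the graphical lasso, is available off the shelf and gives a feel for the n < d setting; the sketch below uses scikit-learn with simulated data (the penalty value is an arbitrary choice).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.default_rng(7)
d, n = 20, 60                                  # n only modestly exceeds d
prec = make_sparse_spd_matrix(d, alpha=0.9, random_state=0)  # true sparse precision
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

gl = GraphicalLasso(alpha=0.1).fit(X)          # l1-penalized precision estimate
est_prec = gl.precision_
print("nonzeros in true precision:", int(np.sum(prec != 0)))
print("nonzeros in estimate:      ", int(np.sum(np.abs(est_prec) > 1e-4)))
```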

  8. Influence of neglected covariances on the estimation of Earth rotation parameters, geophysical excitation functions and second degree gravity field coefficients

    NASA Astrophysics Data System (ADS)

    Heiker, Andrea; Kutterer, Hansjörg

    2010-05-01

    The Earth rotation variability is redundantly described by the combination of Earth rotation parameters (polar motion and length of day), geophysical excitation functions and second degree gravity field coefficients. There exist some publications regarding the comparison of the Earth rotation parameters and excitation functions. However, most authors do not make use of the redundancy. In addition, existing covariances between the input parameters are not considered. As shown in previous publications, we use the redundancy for the independent mutual validation of the Earth rotation parameters, excitation functions and second degree gravity field coefficients based on an extended Gauss-Markov model and least-squares adjustment. The work regarding the mutual validation is performed within the project P9 "Combined analysis and validation of Earth rotation models and observations" of the Research Unit FOR 584 ("Earth rotation and global dynamic processes"), which is funded by the German Research Foundation (DFG); see also the abstract "Combined Analysis and Validation of Earth Rotation Models and Observations". The adjustment model is determined first by the joint functional relations between the parameters and second by the stochastic model of the input data. A variance-covariance component estimation is included in the adjustment model. The functional model is based on the linearized Euler-Liouville equation. The construction of an appropriate stochastic model is prevented in practice by insufficient knowledge of variances and covariances. However, some numerical results derived from arbitrarily chosen stochastic models indicate that the stochastic model may be crucial for a correct estimation. The missing information is approximated by analyzing the input data. Synthetic variance-covariance matrices are constructed by considering empirical auto- and cross-correlation functions. The influence of neglected covariances is quantified and discussed by comparing the results derived

  9. Impact of the 235U Covariance Data in Benchmark Calculations

    SciTech Connect

    Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.

  10. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
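
    The test-statistic part of this pipeline can be sketched directly: squared Mahalanobis distances of propagated-state position errors should follow a chi-squared distribution with 3 degrees of freedom, and an ECDF goodness-of-fit test checks that. In the toy below, scipy's KS test stands in for whichever specific ECDF GOF test the study used, and the covariances are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 400
P = np.diag([4.0, 1.0, 0.25])                 # formal position covariance, km^2

# Simulated "true" errors drawn from a covariance 1.5x the formal one,
# i.e. the formal covariance is optimistic (too small).
err = rng.multivariate_normal(np.zeros(3), 1.5 * P, size=n)

Pinv = np.linalg.inv(P)
m2 = np.einsum('ij,jk,ik->i', err, Pinv, err)  # squared Mahalanobis distances

# ECDF goodness-of-fit against the chi-squared(3) hypothesis
ks = stats.kstest(m2, stats.chi2(df=3).cdf)
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.2e}")
# A tiny p-value flags an unrealistic covariance; process noise would then
# be added (state noise compensation) until the test passes.
```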

  11. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  12. Covariance Analysis of Gamma Ray Spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-01

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.

  14. Covariant magnetic connection hypersurfaces

    NASA Astrophysics Data System (ADS)

    Pegoraro, F.

    2016-04-01

    In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved 'magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields.

  15. OD Covariance in Conjunction Assessment: Introduction and Issues

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.; Duncan, M.

    2015-01-01

    The primary and secondary covariances are combined and projected into the conjunction plane (the plane perpendicular to the relative velocity vector at TCA). The primary object is placed on the x-axis at (miss distance, 0) and is represented by a circle whose radius equals the sum of the two spacecraft circumscribing radii. The z-axis is perpendicular to the x-axis in the conjunction plane. Pc is the portion of the combined error ellipsoid that falls within the hard-body radius circle.

  16. Covariance-enhanced discriminant analysis

    PubMed Central

    XU, PEIRONG; ZHU, JI; ZHU, LIXING; LI, YI

    2016-01-01

    Summary Linear discriminant analysis has been widely used to characterize or separate multiple classes via linear combinations of features. However, the high dimensionality of features from modern biological experiments defies traditional discriminant analysis techniques. Possible interfeature correlations present additional challenges and are often underused in modelling. In this paper, by incorporating possible interfeature correlations, we propose a covariance-enhanced discriminant analysis method that simultaneously and consistently selects informative features and identifies the corresponding discriminable classes. Under mild regularity conditions, we show that the method can achieve consistent parameter estimation and model selection, and can attain an asymptotically optimal misclassification rate. Extensive simulations have verified the utility of the method, which we apply to a renal transplantation trial.
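
    The paper's estimator jointly selects features and discriminable classes while modeling interfeature correlation; as a quick off-the-shelf point of comparison (not the authors' method), scikit-learn's shrinkage LDA also stabilizes the within-class covariance when features outnumber samples. Data here are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n, d = 80, 200                        # many more features than samples
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :5] += y[:, None] * 1.0          # only 5 informative features
X[:, :5] += rng.normal(size=(n, 1)) * 0.5   # shared noise -> correlated features

# 'lsqr' + Ledoit-Wolf shrinkage keeps the within-class covariance invertible
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy with shrinkage LDA: {acc:.2f}")
```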

  17. Web-Based Self-Assessment Health Tools: Who Are the Users and What Is the Impact of Missing Input Information?

    PubMed Central

    Cobain, Mark R; Newson, Rachel S

    2014-01-01

    Background Web-based health applications, such as self-assessment tools, can aid in the early detection and prevention of diseases. However, there are concerns as to whether such tools actually reach users with elevated disease risk (where prevention efforts are still viable), and whether inaccurate or missing information on risk factors may lead to incorrect evaluations. Objective This study aimed to (1) evaluate whether a Web-based cardiovascular disease (CVD) risk communication tool (Heart Age tool) was reaching users at risk of developing CVD, (2) assess the impact of awareness of total cholesterol (TC), HDL-cholesterol (HDL-C), and systolic blood pressure (SBP) values on the risk estimates, and (3) identify the key predictors of awareness and reporting of physiological risk factors. Methods Heart Age is a tool available via a free open access website. Data from 2,744,091 first-time users aged 21-80 years with no prior heart disease were collected from 13 countries in 2009-2011. Users self-reported demographic and CVD risk factor information. Based on these data, an individual’s 10-year CVD risk was calculated according to Framingham CVD risk models and translated into a Heart Age. This is the age for which the individual’s reported CVD risk would be considered “normal”. Depending on the availability of known TC, HDL-C, and SBP values, different algorithms were applied. The impact of awareness of TC, HDL-C, and SBP values on Heart Age was determined using a subsample that had complete risk factor information. Results Heart Age users (N=2,744,091) were mostly in their 20s (22.76%) and 40s (23.99%), female (56.03%), had multiple (mean 2.9, SD 1.4) risk factors, and a Heart Age exceeding their chronological age (mean 4.00, SD 6.43 years). The proportion of users unaware of their TC, HDL-C, or SBP values was high (77.47%, 93.03%, and 46.55% respectively). Lacking awareness of physiological risk factor values led to overestimation of Heart Age by an average 2

  18. Stardust Navigation Covariance Analysis

    NASA Astrophysics Data System (ADS)

    Menon, Premkumar R.

    2000-01-01

    The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by the Jet Propulsion Laboratory (JPL). This paper describes the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and its trajectory is provided first, followed by a discussion of the covariance procedure and models. Predicted accuracies are examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter Stardust will return to Earth, where the Sample Return Capsule (SRC) will separate and land at the Utah Test and Training Range (UTTR) on January 15, 2006. The spacecraft itself, however, will be deflected into a heliocentric orbit. The mission is divided into three phases for the covariance analysis: 1) launch to EGA, 2) EGA to Wild-2 encounter, and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers

  19. COVARIANCE ASSISTED SCREENING AND ESTIMATION

    PubMed Central

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-01-01

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X′X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model. PMID:25541567

  20. Covariance Spectroscopy Applied to Nuclear Radiation Detection

    SciTech Connect

    Trainham, R., Tinsley, J., Keegan, R., Quam, W.

    2011-09-01

    Covariance spectroscopy is a method of processing second order moments of data to obtain information that is usually absent from average spectra. In nuclear radiation detection it represents a generalization of nuclear coincidence techniques. Correlations and fluctuations in data encode valuable information about radiation sources, transport media, and detection systems. Gaining access to the extra information can help to untangle complicated spectra, uncover overlapping peaks, accelerate source identification, and even sense directionality. Correlations existing at the source level are particularly valuable since many radioactive isotopes emit correlated gammas and neutrons. Correlations also arise from interactions within detector systems, and from scattering in the environment. In particular, correlations from Compton scattering and pair production within a detector array can be usefully exploited in scenarios where direct measurement of source correlations would be unfeasible. We present a covariance analysis of a few experimental data sets to illustrate the utility of the concept.

  1. Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials

    DOE Data Explorer

    The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOE's Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity / uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release).[Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10^-5 eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for 7Li, 232Th, 233,235,238U and 239Pu as well as 223,224,225,226Ra, while natZn is replaced by 64,66,67,68,70Zn"

  2. Invariance of covariances arises out of noise

    NASA Astrophysics Data System (ADS)

    Grytskyy, D.; Tetzlaff, T.; Diesmann, M.; Helias, M.

    2013-01-01

    Correlated neural activity is a known feature of the brain [1] and evidence increases that it is closely linked to information processing [2]. The temporal shape of covariances has early been related to synaptic interactions and to common input shared by pairs of neurons [3]. Recent theoretical work explains the small magnitude of covariances in inhibition dominated recurrent networks by active decorrelation [4, 5, 6]. For binary neurons the mean-field approach takes random fluctuations into account to accurately predict the average activity in such networks [7] and expressions for covariances follow from a master equation [8], both briefly reviewed here for completeness. In our recent work we have shown how to map different network models, including binary networks, onto linear dynamics [9]. Binary neurons with a strong non-linear Heaviside gain function are inaccessible to the classical treatment [8]. Here we show how random fluctuations generated by the network effectively linearize the system and implement a self-regulating mechanism, that renders population-averaged covariances independent of the interaction strength and keeps the system away from instability.

  3. The incredible shrinking covariance estimator

    NASA Astrophysics Data System (ADS)

    Theiler, James

    2012-05-01

    Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
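
    The estimator in question has the form C(alpha) = (1 - alpha) * S + alpha * T for a sample covariance S and an underfit target T, with alpha chosen by cross-validation. The abstract's point is that direct leave-one-out cross-validation is expensive and worth approximating; the brute-force loop below (diagonal target, grid search on held-out Gaussian likelihood, simulated data) is exactly the computation the Hoffbeck-Landgrebe approximation avoids.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(10)
d, n = 30, 50                                   # high dimension, small sample
true_cov = np.diag(np.linspace(1, 5, d)) + 0.3
X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)

def shrunk(S, alpha):
    # overfit sample covariance blended with an underfit diagonal target
    return (1 - alpha) * S + alpha * np.diag(np.diag(S))

def loo_ll(alpha):
    # leave-one-out log-likelihood of each held-out sample
    ll = 0.0
    for i in range(n):
        Xi = np.delete(X, i, axis=0)
        S = np.cov(Xi, rowvar=False, bias=True)
        ll += multivariate_normal(Xi.mean(axis=0), shrunk(S, alpha),
                                  allow_singular=True).logpdf(X[i])
    return ll

alphas = np.linspace(0.05, 0.95, 10)
best = alphas[int(np.argmax([loo_ll(a) for a in alphas]))]
print(f"LOOC-selected shrinkage parameter: {best:.2f}")
```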

  4. Covariant Electrodynamics in Vacuum

    NASA Astrophysics Data System (ADS)

    Wilhelm, H. E.

    1990-05-01

    The generalized Galilei covariant Maxwell equations and their EM field transformations are applied to the vacuum electrodynamics of a charged particle moving with an arbitrary velocity v in an inertial frame with EM carrier (ether) of velocity w. In accordance with the Galilean relativity principle, all velocities have absolute meaning (relative to the ether frame with isotropic light propagation), and the relative velocity of two bodies is defined by the linear relation u_G = v_1 - v_2. It is shown that the electric equipotential surfaces of a charged particle are compressed in the direction parallel to its relative velocity v - w (mechanism for physical length contraction of bodies). The magnetic field H(r, t) excited in the ether by a charge e moving uniformly with velocity v is related to its electric field E(r, t) by the equation H = ε_0 (v - w) × E / [1 + w · (v - w)/c_0^2], which shows that (i) a magnetic field is excited only if the charge moves relative to the ether, and (ii) the magnetic field is weak if |v - w| is not comparable to the velocity of light c_0. It is remarkable that a charged particle can excite EM shock waves in the ether if |v - w| > c_0. This condition is realizable for anti-parallel charge and ether velocities if |v| > c_0 - |w|, i.e., even if |v| is subluminal. The possibility of this Cerenkov effect in the ether is discussed for terrestrial and galactic situations.

  5. Replacing a Missing Tooth

    MedlinePlus

    ... majority of patients with clefts will require full orthodontic treatment, especially if the cleft has passed through ... later replacement of the missing lateral incisor. During orthodontic treatment, an artificial tooth may be attached to ...

  6. Linear covariance analysis for gimbaled pointing systems

    NASA Astrophysics Data System (ADS)

    Christensen, Randall S.

    Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems where the information needs are different from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important to gimbaled pointing systems where not just the variance but also the spectrum of the pointing error impacts the performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitory with Monte Carlo analysis are used to gain further insight into the system. These include the creation
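
    At its core, linear covariance analysis propagates the error covariance of a closed-loop system directly, with no Monte Carlo sampling: P(k+1) = F P(k) F^T + Q for propagation, plus a Kalman-style update when a measurement arrives. The fragment below propagates a two-state (angle, rate) pointing-error covariance under process noise and periodic encoder measurements; the dynamics and noise values are invented for illustration and are far simpler than the flexible-body models in this work.

```python
import numpy as np

dt = 0.01                                    # time step, s
F = np.array([[1.0, dt],                     # angle <- angle + rate*dt
              [0.0, 1.0]])
Q = np.diag([0.0, (1e-4)**2])                # rate random walk, (rad/s)^2
H = np.array([[1.0, 0.0]])                   # angle encoder measurement
R = np.array([[(5e-5)**2]])                  # encoder noise, rad^2

P = np.diag([(1e-3)**2, (1e-4)**2])          # initial error covariance
for k in range(1, 1001):
    P = F @ P @ F.T + Q                      # covariance propagation
    if k % 100 == 0:                         # measurement once per second
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ H) @ P          # Kalman covariance update

print(f"steady 1-sigma pointing error: {np.sqrt(P[0, 0]) * 1e6:.1f} microrad")
```

    A single pass through this recursion replaces an entire Monte Carlo ensemble, which is what makes the comprehensive trade studies mentioned above tractable.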

  7. The missing link.

    PubMed

    Dracup, Kathleen

    2002-06-01

    The uniqueness of nursing research is derived from the philosophical view of the individual as a biopsychosocial being. Nurse scientists are prepared to illuminate the linkages among the biophysiological, psychological, and social domains, and this study is much enhanced by the increasing availability of valid and reliable biomarkers. Researchers need to develop expertise in the use of biomarkers and secure appropriate funding for their use. Missing links may be missing no longer. PMID:12122766

  8. Missing Drivers with Dementia: Antecedents and Recovery

    PubMed Central

    Rowe, Meredeth A.; Greenblum, Catherine A.; Boltz, Marie; Galvin, James E.

    2013-01-01

    OBJECTIVES To determine the circumstances in which persons with dementia become lost while driving, how missing drivers are found, and how Silver Alert notifications are instrumental in those discoveries. DESIGN A retrospective, descriptive study. SETTING Retrospective record review. PARTICIPANTS Conducted using 156 records from the Florida Silver Alert program for the time period October 2008 through May 2010. These alerts were issued in Florida for a missing driver with dementia. MEASUREMENTS Information derived from the reports on characteristics of the missing driver, antecedents to the missing event, and discovery of a missing driver. RESULTS AND CONCLUSION The majority of missing drivers were males, with ages ranging from 58-94, who were being cared for by a spouse. Most drivers became lost on routine, caregiver-sanctioned trips to usual locations. Only 15% were in the act of driving when found, with most being found in or near a parked car, and the large majority were found by law enforcement officers. Only 40% were found in the county in which they went missing and 10% were found in a different state. Silver Alert notifications were most effective for law enforcement; citizen alerts resulted in a few discoveries. There was a 5% mortality rate in the study population, with those living alone more likely to be found dead than alive. An additional 15% were found in dangerous situations, such as stopped on railroad tracks. Thirty-two percent had documented driving or dangerous errors such as driving the wrong way or into secluded areas, or walking in or near roadways. PMID:23134069

  9. Sensitivity of missing values in classification tree for large sample

    NASA Astrophysics Data System (ADS)

    Hasan, Norsida; Adam, Mohd Bakri; Mustapha, Norwati; Abu Bakar, Mohd Rizam

    2012-05-01

    Missing values either in predictor or in response variables are a very common problem in statistics and data mining. Cases with missing values are often ignored, which results in loss of information and possible bias. The objective of our research was to investigate the sensitivity of missing data in a classification tree model for large samples. Data were obtained from one of the high-level educational institutions in Malaysia. Students' background data were randomly eliminated and a classification tree was used to predict students' degree classification. The results showed that for large samples, the structure of the classification tree was sensitive to missing values, especially for samples containing more than ten percent missing values.
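
    A small experiment of the same flavor can be run with scikit-learn: delete an increasing fraction of covariate values at random, impute, refit the tree, and watch the holdout accuracy respond. The dataset below is a synthetic stand-in for the institutional data, and mean imputation is just one simple choice of handling strategy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for rate in [0.0, 0.05, 0.10, 0.20, 0.30]:
    Xm = Xtr.copy()
    mask = rng.random(Xm.shape) < rate       # eliminate values at random
    Xm[mask] = np.nan
    model = make_pipeline(SimpleImputer(strategy="mean"),
                          DecisionTreeClassifier(max_depth=6, random_state=0))
    acc = model.fit(Xm, ytr).score(Xte, yte)
    print(f"missing rate {rate:4.0%}: holdout accuracy {acc:.3f}")
```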

  10. Covariant Closed String Coherent States

    SciTech Connect

    Hindmarsh, Mark; Skliros, Dimitri

    2011-02-25

    We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops.

  12. Covariance tracking: architecture optimizations for embedded systems

    NASA Astrophysics Data System (ADS)

    Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan

    2014-12-01

    Covariance matching techniques have recently grown in interest due to their good performance for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, they can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review of different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on two dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15 and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
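
    The region covariance descriptor at the heart of these trackers is straightforward: build a per-pixel feature vector, then describe a region by the covariance matrix of those features. The numpy sketch below uses the common (x, y, intensity, |Ix|, |Iy|) feature set for a grayscale patch; the exact feature choice varies by application.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a grayscale image patch (H x W array)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]                 # pixel coordinates
    iy, ix = np.gradient(patch.astype(float))   # intensity gradients
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()])  # 5 x N
    return np.cov(feats)                        # 5 x 5 descriptor

rng = np.random.default_rng(11)
patch = rng.random((32, 32))
C = region_covariance(patch)
print("descriptor shape:", C.shape)   # compact 5x5, independent of patch size
```

    Because the descriptor is a symmetric positive-definite matrix, matching is usually done with a Riemannian distance on the SPD manifold rather than a plain Euclidean norm, which is one source of the heavy computation the article discusses.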

  13. Development of covariance capabilities in EMPIRE code

    SciTech Connect

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
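
    The stochastic route, Monte Carlo propagation of model parameter uncertainties, can be sketched generically: draw parameter sets from their assumed covariance, evaluate the reaction model for each draw, and take the sample covariance of the computed cross sections. The toy model below is a stand-in, not EMPIRE itself:

        import numpy as np

        def model(p, E):
            # Stand-in for a nuclear reaction model: cross section vs energy.
            return p[0] * np.exp(-p[1] * E)

        E = np.linspace(0.1, 20.0, 50)              # energy grid (illustrative)
        p0 = np.array([2.0, 0.1])                   # best-fit model parameters
        Cp = np.array([[4e-2, 1e-3],                # assumed parameter covariance
                       [1e-3, 4e-4]])

        rng = np.random.default_rng(42)
        draws = rng.multivariate_normal(p0, Cp, size=5000)
        xs = np.array([model(p, E) for p in draws])
        C_xs = np.cov(xs, rowvar=False)             # 50 x 50 cross-section covariance
        print(np.sqrt(np.diag(C_xs))[:3] / model(p0, E)[:3])   # relative uncertainties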

  14. Development of Covariance Capabilities in EMPIRE Code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-12-15

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  15. RNA sequence analysis using covariance models.

    PubMed Central

    Eddy, S R; Durbin, R

    1994-01-01

    We describe a general approach to several RNA sequence analysis problems using probabilistic models that flexibly describe the secondary structure and primary sequence consensus of an RNA sequence family. We call these models 'covariance models'. A covariance model of tRNA sequences is an extremely sensitive and discriminative tool for searching for additional tRNAs and tRNA-related sequences in sequence databases. A model can be built automatically from an existing sequence alignment. We also describe an algorithm for learning a model and hence a consensus secondary structure from initially unaligned example sequences and no prior structural information. Models trained on unaligned tRNA examples correctly predict tRNA secondary structure and produce high-quality multiple alignments. The approach may be applied to any family of small RNA sequences. PMID:8029015
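
    The covariation these models capture is the compensatory base-pairing signal between alignment columns. Independent of the paper's model machinery, that signal can be seen directly as mutual information between column pairs; in the toy alignment below (hypothetical, constructed so that columns 0-7 and 1-6 are base-paired) the covarying pairs rank first:

        from collections import Counter
        from itertools import combinations
        from math import log2

        aln = ["GGCAAGCC",   # toy alignment: column 0 pairs with 7, 1 with 6
               "GCCAAGGC",
               "CGCAAGCG",
               "CCCAAGGG"]

        def mutual_info(pair):
            i, j = pair
            n = len(aln)
            fi = Counter(s[i] for s in aln)
            fj = Counter(s[j] for s in aln)
            fij = Counter((s[i], s[j]) for s in aln)
            return sum(c / n * log2((c / n) / (fi[a] * fj[b] / n**2))
                       for (a, b), c in fij.items())

        ranked = sorted(combinations(range(8), 2), key=mutual_info, reverse=True)
        print(ranked[:2])   # [(0, 7), (1, 6)] -- the base-paired columns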

  16. Covariance Matrix Evaluations for Independent Mass Fission Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.; Sumini, M.

    2015-01-01

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of 235U(nth, f) and 239Pu(nth, f) reactions.
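
    The Bayesian generalized least squares step has a closed form once the model response is linearized around the prior parameters; a generic sketch of that update (the design matrix, prior and data below are illustrative, not the evaluation's actual inputs):

        import numpy as np

        # Linearized model d ~ G p, prior p ~ N(p0, C0), data d with covariance Cd.
        G = np.array([[1.0, 0.5],
                      [0.3, 1.0],
                      [0.8, 0.2]])
        p0 = np.array([1.0, 2.0])
        C0 = np.diag([0.10, 0.20])
        d = np.array([2.1, 2.2, 1.3])
        Cd = 0.05 * np.eye(3)

        # Bayesian GLS update (equivalently a Kalman update on the parameters):
        K = C0 @ G.T @ np.linalg.inv(G @ C0 @ G.T + Cd)
        p_post = p0 + K @ (d - G @ p0)
        C_post = C0 - K @ G @ C0
        print(p_post, np.sqrt(np.diag(C_post)))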

  17. Covariance Matrix Evaluations for Independent Mass Fission Yields

    SciTech Connect

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of ²³⁵U(nth, f) and ²³⁹Pu(nth, f) reactions.

  18. Restoration of HST images with missing data

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Missing data are a fairly common problem when restoring Hubble Space Telescope observations of extended sources. On Wide Field and Planetary Camera images cosmic ray hits and CCD hot spots are the prevalent causes of data losses, whereas on Faint Object Camera images data are lost due to reseaux marks, blemishes, areas of saturation and the omnipresent frame edges. This contribution discusses a technique for 'filling in' missing data by statistical inference using information from the surrounding pixels. The major gain consists in minimizing adverse spill-over effects to the restoration in areas neighboring those where data are missing. When the mask delineating the support of 'missing data' is made dynamic, cosmic ray hits, etc. can be detected on the fly during restoration.

  19. Shrinkage estimators for covariance matrices.

    PubMed

    Daniels, M J; Kass, R E

    2001-12-01

    Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically depend on the covariance matrix estimate.
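
    For the shrink-toward-structure idea, a widely available analogue is the Ledoit-Wolf estimator, which shrinks the sample covariance toward a scaled identity with a data-chosen intensity. This is a related estimator used here for illustration, not the exact combined eigenvalue-and-structure shrinkage developed in the paper:

        import numpy as np
        from sklearn.covariance import LedoitWolf

        rng = np.random.default_rng(0)
        p, n = 24, 30                        # small sample relative to dimension
        truth = 0.3 + 0.7 * np.eye(p)        # compound-symmetry-like truth
        X = rng.multivariate_normal(np.zeros(p), truth, size=n)

        sample = np.cov(X, rowvar=False)
        lw = LedoitWolf().fit(X)

        # Shrinkage pulls the extreme sample eigenvalues back toward the center.
        print("sample eigenvalues:", np.linalg.eigvalsh(sample)[[0, -1]])
        print("shrunk eigenvalues:", np.linalg.eigvalsh(lw.covariance_)[[0, -1]])
        print("data-chosen shrinkage intensity:", round(lw.shrinkage_, 3))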

  20. Automatic Classification of Variable Stars in Catalogs with Missing Data

    NASA Astrophysics Data System (ADS)

    Pichara, Karim; Protopapas, Pavlos

    2013-11-01

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
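
    The iterate-impute-reestimate loop is in the same family as chained-equation imputation; a compact stand-in for the full Bayesian-network treatment (not the authors' code) is scikit-learn's IterativeImputer feeding a downstream classifier:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        X, y = make_classification(n_samples=500, n_features=8, random_state=0)
        rng = np.random.default_rng(0)
        X[rng.random(X.shape) < 0.3] = np.nan   # 30% of feature values missing

        clf = make_pipeline(IterativeImputer(max_iter=10, random_state=0),
                            LogisticRegression(max_iter=1000))
        print(f"5-fold accuracy with imputation: "
              f"{cross_val_score(clf, X, y, cv=5).mean():.3f}")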

  1. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    SciTech Connect

    Pichara, Karim; Protopapas, Pavlos

    2013-11-10

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  2. Missing Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Hough, S. E.; Martin, S.

    2013-12-01

    The occurrence of three earthquakes with Mw greater than 8.8, and six earthquakes larger than Mw8.5, since 2004 has raised interest in the long-term rate of great earthquakes. Past studies have focused on rates since 1900, which roughly marks the start of the instrumental era. Yet substantial information is available for earthquakes prior to 1900. A re-examination of the catalog of global historical earthquakes reveals a paucity of Mw ≥ 8.5 events during the 18th and 19th centuries compared to the rate during the instrumental era (Hough, 2013, JGR), suggesting that the magnitudes of some documented historical earthquakes have been underestimated, with approximately half of all Mw≥8.5 earthquakes missing or underestimated in the 19th century. Very large (Mw≥8.5) magnitudes have traditionally been estimated for historical earthquakes only from tsunami observations given a tautological assumption that all such earthquakes generate significant tsunamis. Magnitudes would therefore tend to be underestimated for deep megathrust earthquakes that generated relatively small tsunamis, deep earthquakes within continental collision zones, earthquakes that produced tsunamis that were not documented, outer rise events, and strike-slip earthquakes such as the 11 April 2012 Sumatra event. We further show that, where magnitudes of historical earthquakes are estimated from earthquake intensities using the Bakun and Wentworth (1997, BSSA) method, magnitudes of great earthquakes can be significantly underestimated. Candidate 'missing' great 19th century earthquakes include the 1843 Lesser Antilles earthquake, which recent studies suggest was significantly larger than initial estimates (Feuillet et al., 2012, JGR; Hough, 2013), and an 1841 Kamchatka event, for which Mw9 was estimated by Gusev and Shumilina (2004, Izv. Phys. Solid Ear.). We consider cumulative moment release rates during the 19th century compared to that during the 20th and 21st centuries, using both the Hough

  3. Missed opportunities in crystallography.

    PubMed

    Dauter, Zbigniew; Jaskolski, Mariusz

    2014-09-01

    Scrutinized from the perspective of time, the giants in the history of crystallography more than once missed a nearly obvious chance to make another great discovery, or went in the wrong direction. This review analyzes such missed opportunities focusing on macromolecular crystallographers (using Perutz, Pauling, Franklin as examples), although cases of particular historical (Kepler), methodological (Laue, Patterson) or structural (Pauling, Ramachandran) relevance are also described. Linus Pauling, in particular, is presented several times in different circumstances, as a man of vision, oversight, or even blindness. His example underscores the simple truth that also in science incessant creativity is inevitably connected with some probability of fault. PMID:24814223

  4. Partial covariance mapping techniques at FELs

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek

    2014-05-01

    The development of free-electron lasers (FELs) is driven by the desire to access the structure and chemical dynamics of biomolecules with atomic resolution. Short, intense FEL pulses have the potential to record x-ray diffraction images before the molecular structure is destroyed by radiation damage. However, even during the shortest, few-femtosecond pulses currently available, there are some significant changes induced by massive ionisation and onset of Coulomb explosion. To interpret the diffraction images it is vital to gain insight into the electronic and nuclear dynamics during multiple core and valence ionisations that compete with Auger cascades. This paper focuses on a technique that is capable of probing these processes. The covariance mapping technique is well suited to the high intensity and low repetition rate of FEL pulses. While the multitude of charges ejected at each pulse overwhelms conventional coincidence methods, an improved technique of partial covariance mapping can cope with hundreds of photoelectrons or photoions detected at each FEL shot. The technique, however, often reveals spurious, uninteresting correlations that spoil the maps. This work will discuss the strengths and limitations of various forms of covariance mapping techniques. Quantitative information extracted from the maps will be linked to theoretical modelling of ionisation and fragmentation paths. Special attention will be given to critical experimental parameters, such as counting rate, FEL intensity fluctuations, vacuum impurities or detector efficiency and nonlinearities. Methods of assessing and optimising signal-to-noise ratio will be described. Emphasis will be put on possible future developments such as multidimensional covariance mapping, compensation for various experimental instabilities and improvements in the detector response. This work has been supported by the EPSRC, UK (grants EP/F021232/1 and EP/I032517/1).
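
    For shot-resolved observables X and Y and a fluctuating common parameter I (e.g. FEL pulse energy), the partial covariance map subtracts the intensity-driven correlations from the plain covariance map: pcov(X, Y; I) = cov(X, Y) - cov(X, I) cov(I, Y) / var(I). A sketch on synthetic shots (all signal shapes illustrative):

        import numpy as np

        rng = np.random.default_rng(3)
        n_shots, n_bins = 2000, 64
        I = rng.gamma(5.0, 1.0, n_shots)             # fluctuating pulse intensity
        X = np.outer(I, rng.random(n_bins)) + rng.normal(0, 0.5, (n_shots, n_bins))
        Y = np.outer(I, rng.random(n_bins)) + rng.normal(0, 0.5, (n_shots, n_bins))

        def cov(a, b):
            a = a - a.mean(axis=0)
            b = b - b.mean(axis=0)
            return a.T @ b / (len(a) - 1)

        plain = cov(X, Y)                    # dominated by intensity correlations
        partial = (plain -
                   np.outer(cov(X, I[:, None]), cov(I[:, None], Y)) / I.var(ddof=1))
        print(np.abs(plain).max(), np.abs(partial).max())  # partial map is cleaner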

  5. Miss Dove Rediviva.

    ERIC Educational Resources Information Center

    Hawley, Richard A.

    1995-01-01

    Suggests that a way out of the current malaise of American education may be to locate educational excellence in accessible American fiction. Discusses Frances Gray Patton's "Good Morning, Miss Dove," in which the central character is an elementary school geography teacher. (RS)

  6. The impact of sociodemographic, treatment, and work support on missed work after breast cancer diagnosis

    PubMed Central

    Mujahid, Mahasin S.; Janz, Nancy K.; Hawley, Sarah T.; Griggs, Jennifer J.; Hamilton, Ann S.; Katz, Steven J.

    2016-01-01

    Work loss is a potential adverse consequence of cancer. There is limited research on patterns and correlates of paid work after diagnosis of breast cancer, especially among ethnic minorities. Women with non-metastatic breast cancer diagnosed from June 2005 to May 2006 who reported to the Los Angeles County SEER registry were identified and asked to complete the survey after initial treatment (median time from diagnosis = 8.9 months). Latina and African American women were over-sampled. Analyses were restricted to women working at the time of diagnosis, <65 years of age, and who had complete covariate information (N = 589). The outcome of the study was missed paid work (missed ≤1 month, missed >1 month, stopped altogether). Approximately 44, 24, and 32% of women missed ≤1 month, missed >1 month, or stopped working, respectively. African Americans and Latinas were more likely to stop working when compared with Whites [ORs for stopping work vs. missing ≤1 month: 3.0 and 3.4, respectively (P < 0.001)]. Women receiving mastectomy and those receiving chemotherapy were also more likely to stop working, independent of sociodemographic and treatment factors [ORs for stopping work vs. missing ≤1 month: 4.2, P < 0.001; 7.9, P < 0.001, respectively]. Not having a flexible work schedule available through work was detrimental to continued working [OR for stopping work: 18.9, P < 0.001, after adjusting for sociodemographic and treatment factors]. Many women stop working altogether after a diagnosis of breast cancer, particularly if they are racial/ethnic minorities, receive chemotherapy, or are employed in an unsupportive work setting. Health care providers need to be aware of these adverse consequences of breast cancer diagnosis and initial treatment. PMID:19360466

  7. Hierarchical Bayesian Spatio-Temporal Interpolation including Covariates

    NASA Astrophysics Data System (ADS)

    Hussain, Ijaz; Mohsin, Muhammad; Spoeck, Gunter; Pilz, Juergen

    2010-05-01

    The space-time interpolation of precipitation makes a significant contribution to river control, reservoir operations, forestry interests, flash flood watches, etc. Changes in environmental and spatial covariates make space-time estimation of precipitation a challenging task. In our earlier paper [1], we used a transformed hierarchical Bayesian space-time interpolation method for predicting the amount of precipitation. In the present paper, we modify the method of [2] to include covariates which vary with respect to space and time. The proposed method is applied to estimating space-time monthly precipitation in the monsoon periods during 1974-2000. The 27 years of monthly average data of precipitation, temperature, humidity and wind speed are obtained from 51 monitoring stations in Pakistan. The average monthly precipitation is used as the response variable, and temperature, humidity and wind speed are used as time-varying covariates. Moreover, the spatial covariates elevation, latitude and longitude of the same monitoring stations are also included. The cross-validation method is used to compare the results of transformed hierarchical Bayesian spatio-temporal interpolation with and without the environmental and spatial covariates. The software of [3] is modified to incorporate environmental and spatial covariates. It is observed that the transformed hierarchical Bayesian method including covariates provides more accurate estimates than the transformed hierarchical Bayesian method without covariates. Moreover, five potential monitoring sites are selected based on a maximum entropy sampling design approach. References [1] I. Hussain, J. Pilz, G. Spoeck and H.L. Yu. Spatio-Temporal Interpolation of Precipitation during Monsoon Periods in Pakistan. Submitted to Advances in Water Resources, 2009. [2] N.D. Le, W. Sun, and J.V. Zidek, Bayesian multivariate spatial interpolation with data missing by design. Journal of the Royal Statistical Society, Series B (Methodological)

  8. Are Eddy Covariance series stationary?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral analysis via a discrete Fourier transform is used often to examine eddy covariance series for cycles (eddies) of interest. Generally the analysis is performed on hourly or half-hourly data sets collected at 10 or 20 Hz. Each original series is often assumed to be stationary. Also automated ...
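
    A first stationarity screen of this kind can be made by comparing variance and spectral content across sub-blocks of a record; a minimal periodogram sketch on a synthetic 20 Hz half-hour series (the signal content is illustrative):

        import numpy as np

        fs = 20.0                                  # Hz, typical EC sampling rate
        t = np.arange(0, 1800.0, 1 / fs)           # one half-hour record
        w = (np.sin(2 * np.pi * 0.05 * t)
             + np.random.default_rng(0).normal(0, 1, t.size))

        for k, block in enumerate(np.split(w, 2)):     # compare the two halves
            block = block - block.mean()
            psd = np.abs(np.fft.rfft(block)) ** 2
            freqs = np.fft.rfftfreq(block.size, d=1 / fs)
            print(f"half {k}: var={block.var():.3f}, "
                  f"spectral peak at {freqs[np.argmax(psd[1:]) + 1]:.4f} Hz")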

  9. Gaussian covariance matrices for anisotropic galaxy clustering measurements

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio

    2016-04-01

    Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
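
    In the isotropic Fourier-space limit, the Gaussian model reduces to the familiar mode-counting result Var[P(k)] = 2 (P(k) + 1/nbar)^2 / N_k, with N_k ~ V k^2 dk / (4 pi^2) independent modes per bin. The sketch below only illustrates this diagonal scaling; volume, density and power spectrum are toy values, mode-counting conventions vary, and the paper's full multipole/wedge expressions are considerably richer:

        import numpy as np

        V = 1.0e9        # survey volume, (Mpc/h)^3 (illustrative)
        nbar = 3.0e-4    # galaxy number density, (h/Mpc)^3
        k = np.linspace(0.02, 0.30, 15)
        dk = k[1] - k[0]
        P = 2.0e4 / (1.0 + (k / 0.1) ** 2)          # toy power spectrum

        N_modes = V * k**2 * dk / (4.0 * np.pi**2)  # independent modes per bin
        var_P = 2.0 * (P + 1.0 / nbar) ** 2 / N_modes
        print(np.sqrt(var_P) / P)                   # fractional error per k bin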

  10. New capabilities for processing covariance data in resonance region

    SciTech Connect

    Wiarda, D.; Dunn, M. E.; Greene, N. M.; Larson, N. M.; Leal, L. C.

    2006-07-01

    The AMPX [1] code system is a modular system of FORTRAN computer programs that relate to nuclear analysis, with a primary emphasis on tasks associated with the production and use of multigroup and continuous-energy cross sections. The module PUFF-III within this code system handles the creation of multigroup covariance data from ENDF information. The resulting covariances are saved in COVERX format [2]. We recently expanded the capabilities of PUFF-III to include full handling of covariance data in the resonance region (resolved as well as unresolved). The new program handles all resonance covariance formats in File 32 except for the long-range covariance subsections. The new program has been named PUFF-IV. To our knowledge, PUFF-IV is the first processing code that can address both the new ENDF format for resolved resonance parameters and the new ENDF 'compact' covariance format. The existing code base was rewritten in Fortran 90 to allow for a more modular design. Results are identical between the new and old versions within rounding errors, where applicable. Automatic test cases have been added to ensure that consistent results are generated across computer systems. (authors)

  11. Realization of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2005-12-15

    In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable 'fidelity'. We present a general scheme which realizes the 1→3 phase covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization encoded qubits, the first ever realized with photons, is reported.

  12. 40 CFR 98.85 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) or the annual organic carbon content of raw materials are missing, facilities must undertake a new... each missing value of monthly raw material consumption the substitute data value must be the best available estimate of the monthly raw material consumption based on information used for accounting...

  13. Multiple imputation of covariates by fully conditional specification: Accommodating the substantive model

    PubMed Central

    Seaman, Shaun R; White, Ian R; Carpenter, James R

    2015-01-01

    Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. PMID:24525487
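
    The compatibility requirement amounts to drawing the missing covariate from a density proportional to (imputation model) x (substantive-model likelihood); one transparent sampler for that product is rejection sampling. A toy sketch for a logistic substantive model with one partially observed covariate (all models and coefficients below are illustrative, not the paper's algorithm as implemented in the Stata software):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000
        x = rng.normal(0, 1, n)
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))  # substantive model
        miss = rng.random(n) < 0.3          # x missing (toy MCAR mechanism)
        b0, b1 = 0.5, 1.2                   # current substantive-model estimates

        def impute_one(y_i):
            # Target: p(x) p(y_i | x); proposal: p(x) = N(0, 1).
            # Since p(y_i | x) <= 1, accepting with that probability is valid.
            while True:
                xp = rng.normal(0, 1)
                p1 = 1 / (1 + np.exp(-(b0 + b1 * xp)))
                if rng.random() < (p1 if y_i == 1 else 1 - p1):
                    return xp

        x_imp = x.copy()
        x_imp[miss] = [impute_one(y_i) for y_i in y[miss]]
        print(x_imp[miss][:5])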

  14. Missing people, migrants, identification and human rights.

    PubMed

    Nuzzolese, E

    2012-11-01

    The increasing volume and complexity of migratory flows has led to a range of problems such as human rights issues, public health, disease and border control, and also the regulatory processes. As a result of war or internal conflicts, missing person cases and their management have to be regarded as a worldwide issue. On the other hand, even in peacetime, the issue of a missing person is still relevant. In 2007 the Italian Ministry of Interior nominated an extraordinary commissar in order to analyse and assess the total number of unidentified recovered bodies and verify the extent of the phenomenon of missing persons, reported as 24,912 people in Italy (updated 31 December 2011). Of these, 15,632 persons are of foreign nationality and are still missing. The census of the unidentified bodies revealed a total of 832 cases recovered in Italy since the year 1974. These bodies/human remains received a regular autopsy and were buried as 'corpses without a name'. In Italy judicial autopsy is performed to establish cause of death and identity, but odontology and dental radiology are rarely employed in identification cases. Nevertheless, odontologists can substantiate the identification through the 'biological profile', providing further information that can narrow the search to a smaller number of missing individuals even when no ante mortem dental data are available. The forensic dental community should put greater emphasis on the role of forensic odontology as a tool for humanitarian action on behalf of unidentified individuals and best practice in human identification. PMID:23221266

  15. Minimal unitary (covariant) scattering theory

    SciTech Connect

    Lindesay, J.V.; Markevich, A.

    1983-06-01

    In the minimal three particle equations developed by Lindesay, the two body input amplitude was an on-shell relativistic generalization of the non-relativistic scattering model characterized by a single mass parameter μ which in the two body (m + m) system looks like an s-channel bound state (μ < 2m) or virtual state (μ > 2m). Using this driving term in covariant Faddeev equations generates a rich covariant and unitary three particle dynamics. However, the simplest way of writing the relativistic generalization of the Faddeev equations can take the on-shell Mandelstam parameter s = 4(q² + m²), in terms of which the two particle input is expressed, to negative values in the range of integration required by the dynamics. This problem was met in the original treatment by multiplying the two particle input amplitude by Θ(s). This paper provides what we hope to be a more direct way of meeting the problem.

  16. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
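
    The Monte Carlo branch of that computation can be sketched directly: sample the relative position at closest approach from the combined covariance and count the fraction of draws inside the combined hard-body sphere. All numbers below are illustrative, not ESC values:

        import numpy as np

        rng = np.random.default_rng(7)
        miss = np.array([25.0, 80.0, -10.0])        # relative position at TCA, m
        C1 = np.diag([150.0, 6000.0, 120.0])        # object 1 covariance, m^2
        C2 = np.diag([250.0, 4000.0, 180.0])        # object 2 covariance, m^2
        C = C1 + C2                                 # combined relative covariance
        R = 20.0                                    # combined hard-body radius, m

        draws = rng.multivariate_normal(miss, C, size=2_000_000)
        pc = np.mean(np.linalg.norm(draws, axis=1) < R)
        print(f"Monte Carlo collision probability: {pc:.2e}")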

  17. Realistic Covariance Prediction For the Earth Science Constellations

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellations (ESC) include collision risk assessment between members of the constellations and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed via Monte Carlo techniques as well as by numerically integrating relative probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by NASA Goddard's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the ESC satellites: Aqua, Aura, and Terra.

  18. Covariant jump conditions in electromagnetism

    NASA Astrophysics Data System (ADS)

    Itin, Yakov

    2012-02-01

    A generally covariant four-dimensional representation of Maxwell's electrodynamics in a generic material medium can be achieved straightforwardly in the metric-free formulation of electromagnetism. In this setup, the electromagnetic phenomena are described by two tensor fields, which satisfy Maxwell's equations. A generic tensorial constitutive relation between these fields is an independent ingredient of the theory. By use of different constitutive relations (local and non-local, linear and non-linear, etc.), a wide area of applications can be covered. In the current paper, we present the jump conditions for the fields and for the energy-momentum tensor on an arbitrarily moving surface between two media. From the differential and integral Maxwell equations, we derive the covariant boundary conditions, which are independent of any metric and connection. These conditions include the covariantly defined surface current and are applicable to an arbitrarily moving smooth curved boundary surface. As an application of the presented jump formulas, we derive a Lorentzian type metric as a condition for existence of the wave front in isotropic media. This result holds for ordinary materials as well as for metamaterials with negative material constants.

  19. The Impact of Missing Data on Sample Reliability Estimates: Implications for Reliability Reporting Practices

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2004-01-01

    A method for incorporating maximum likelihood (ML) estimation into reliability analyses with item-level missing data is outlined. An ML estimate of the covariance matrix is first obtained using the expectation maximization (EM) algorithm, and coefficient alpha is subsequently computed using standard formulae. A simulation study demonstrated that…
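
    The second step is mechanical once the ML covariance matrix exists: coefficient alpha is alpha = k/(k-1) * (1 - trace(Sigma) / sum(Sigma)). The sketch below applies that formula; with item-level missingness, Sigma would be the EM-based ML estimate rather than the complete-data covariance used here for brevity:

        import numpy as np

        def coefficient_alpha(Sigma):
            """Cronbach's alpha from an item covariance matrix."""
            k = Sigma.shape[0]
            return k / (k - 1) * (1 - np.trace(Sigma) / Sigma.sum())

        rng = np.random.default_rng(0)
        true_score = rng.normal(0, 1, (200, 1))
        items = true_score + rng.normal(0, 1, (200, 6))   # six parallel-ish items
        Sigma = np.cov(items, rowvar=False)   # would be the EM/ML estimate with holes
        print(round(coefficient_alpha(Sigma), 3))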

  20. Comparison of Modern Methods for Analyzing Repeated Measures Data with Missing Values

    ERIC Educational Resources Information Center

    Vallejo, G.; Fernandez, M. P.; Livacic-Rojas, P. E.; Tuero-Herrero, E.

    2011-01-01

    Missing data are a pervasive problem in many psychological applications in the real world. In this article we study the impact of dropout on the operational characteristics of several approaches that can be easily implemented with commercially available software. These approaches include the covariance pattern model based on an unstructured…

  1. Examination of various roles for covariance matrices in the development, evaluation, and application of nuclear data

    SciTech Connect

    Smith, D.L.

    1988-01-01

    The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs.

  2. Estimated Environmental Exposures for MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Pippin, Gary; Kinard, William H.

    2008-01-01

    Describes the estimated environmental exposures for MISSE-3 and MISSE-4. These test beds, attached to the outside of the International Space Station, were planned for 3 years of exposure. This was changed to 1 year after MISSE-1 and -2 were in space for 4 years. MISSE-3 and -4 operate in a low Earth orbit space environment, which exposes them to a variety of assaults including atomic oxygen, ultraviolet radiation, particulate radiation, thermal cycling, and meteoroid/space debris impact, as well as contamination associated with proximity to an active space station. Measurements and determinations of atomic oxygen fluences, solar UV exposure levels, molecular contamination levels, and particulate radiation are included.

  3. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and the Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  4. Connecting Math and Motion: A Covariational Approach

    NASA Astrophysics Data System (ADS)

    Culbertson, Robert J.; Thompson, A. S.

    2006-12-01

    We define covariational reasoning as the ability to correlate changes in two connected variables. For example, the ability to describe the height of fluid in an odd-shaped vessel as a function of fluid volume requires covariational reasoning skills. Covariational reasoning ability is an essential resource for gaining a deep understanding of the physics of motion. We have developed an approach for teaching physical science to in-service math and science high school teachers that emphasizes covariational reasoning. Several examples of covariation and results from a small cohort of local teachers will be presented.

  5. Recurrence Analysis of Eddy Covariance Fluxes

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael

    2015-04-01

    The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable for obtaining reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.

  6. Covariance Structure Models for Gene Expression Microarray Data

    ERIC Educational Resources Information Center

    Xie, Jun; Bentler, Peter M.

    2003-01-01

    Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…

  7. Identifying Heat Waves in Florida: Considerations of Missing Weather Data

    PubMed Central

    Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.

    2015-01-01

    Background: Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives: To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods: In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results: Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions: Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198
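
    A minimal version of the imputation-then-threshold step, reduced to a single station and purely temporal interpolation (dates, missingness rate and the percentile used for "extreme heat" are all illustrative; the paper recommends spatio-temporal models over this simple approach):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        days = pd.date_range("2012-06-01", "2012-08-31", freq="D")
        tmax = pd.Series(33 + 3 * rng.standard_normal(len(days)), index=days)
        tmax[rng.random(len(days)) < 0.15] = np.nan   # 15% of days unreported

        filled = tmax.interpolate(method="time")      # temporal imputation
        threshold = filled.quantile(0.95)             # extreme-heat threshold
        print(f"threshold {threshold:.1f}, "
              f"{(filled > threshold).sum()} extreme-heat days")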

  8. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  9. Data Covariances from R-Matrix Analyses of Light Nuclei

    SciTech Connect

    Hale, G.M.; Paris, M.W.

    2015-01-15

    After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (⁵He) and with many resonances (¹³C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.

  10. Data Covariances from R-Matrix Analyses of Light Nuclei

    NASA Astrophysics Data System (ADS)

    Hale, G. M.; Paris, M. W.

    2015-01-01

    After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (5He) and with many resonances (13C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
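
    The first-order propagation referred to in both records is the sandwich rule C_sigma = J C_p J^T, with J the sensitivity matrix of cross sections to parameters. A generic sketch with finite-difference sensitivities on a stand-in resonance shape (EDA itself computes these derivatives analytically):

        import numpy as np

        def xs_model(p, E):
            # Stand-in single-resonance cross-section shape, not EDA's R-matrix.
            return p[0] / ((E - p[1]) ** 2 + p[2] ** 2)

        E = np.linspace(0.5, 3.0, 40)
        p = np.array([1.0, 1.5, 0.2])             # fitted parameters
        Cp = np.diag([1e-4, 1e-6, 1e-6])          # parameter covariance from the fit

        # Finite-difference sensitivities J[i, j] = d sigma(E_i) / d p_j.
        J = np.empty((E.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (xs_model(p + dp, E) - xs_model(p - dp, E)) / (2 * dp[j])

        C_xs = J @ Cp @ J.T                   # first-order cross-section covariance
        print(np.sqrt(np.diag(C_xs))[:5] / xs_model(p, E)[:5])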

  11. What Darwin missed

    NASA Astrophysics Data System (ADS)

    Campbell, A. K.

    2003-07-01

    Throughout his life, Fred Hoyle had a keen interest in evolution. He argued that natural selection by small, random change, as conceived by Charles Darwin and Alfred Russel Wallace, could not explain either the origin of life or the origin of a new protein. The idea of natural selection, Hoyle told us, wasn't even Darwin's original idea in the first place. Here, in honour of Hoyle's analysis, I propose a solution to Hoyle's dilemma. His solution was life from space - panspermia. But the real key to understanding natural selection is `molecular biodiversity'. This explains the things Darwin missed - the origin of species and the origin of extinction. It is also a beautiful example of the mystery disease that afflicted Darwin for over 40 years, for which we now have an answer.

  12. Commonly missed orthopedic problems.

    PubMed

    Ballas, M T; Tytko, J; Mannarino, F

    1998-01-15

    When not diagnosed early and managed appropriately, common musculoskeletal injuries may result in long-term disabling conditions. Anterior cruciate ligament tears are some of the most common knee ligament injuries. Slipped capital femoral epiphysis may present with little or no hip pain, and subtle or absent physical and radiographic findings. Femoral neck stress fractures, if left untreated, may result in avascular necrosis, refractures and pseudoarthrosis. A delay in diagnosis of scaphoid fractures may cause early wrist arthrosis if nonunion results. Ulnar collateral ligament tears are a frequently overlooked injury in skiers. The diagnosis of Achilles tendon rupture is missed as often as 25 percent of the time. Posterior tibial tendon tears may result in fixed bony planus if diagnosis is delayed, necessitating hindfoot fusion rather than simple soft tissue repair. Family physicians should be familiar with the initial assessment of these conditions and, when appropriate, refer patients promptly to an orthopedic surgeon. PMID:9456991

  13. The Impact of Nonignorable Missing Data on the Inference of Regression Coefficients.

    ERIC Educational Resources Information Center

    Min, Kyung-Seok; Frank, Kenneth A.

    Various statistical methods have been available to deal with missing data problems, but the difficulty is that they are based on somewhat restrictive assumptions that missing patterns are known or can be modeled with auxiliary information. This paper treats the presence of missing cases from the viewpoint that generalization as a sample does not…

  14. The Concept of Missing Incidents in Persons with Dementia

    PubMed Central

    Rowe, Meredeth; Houston, Amy; Molinari, Victor; Bulat, Tatjana; Bowen, Mary Elizabeth; Spring, Heather; Mutolo, Sandra; McKenzie, Barbara

    2015-01-01

    Behavioral symptoms of dementia often present the greatest challenge for informal caregivers. One behavior that is a constant concern for caregivers is the person with dementia leaving a designated area such that their whereabouts become unknown to the caregiver, termed a missing incident. Based on an extensive literature review and published findings of their own research, members of the International Consortium on Wandering and Missing Incidents constructed a preliminary missing incidents model. Examining the evidence base, specific factors within each category of the model were further described, reviewed and modified until consensus was reached regarding the final model. In particular, the model begins to explain the variety of antecedents that are related to missing incidents. The model presented in this paper is designed to be heuristic and may be used to stimulate discussion and the development of effective preventative and response strategies for missing incidents among persons with dementia. PMID:27417817

  15. Parameter inference with estimated covariance matrices

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heavens, Alan F.

    2016-02-01

    When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
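
    Up to normalization, the resulting likelihood is L proportional to [1 + chi^2/(N-1)]^(-N/2) when the covariance was estimated from N simulations, recovering the Gaussian as N grows. A sketch of the two kernels (normalization constants omitted; consult the paper for the exact prefactor):

        import numpy as np

        def log_like_gaussian(chi2):
            return -0.5 * chi2

        def log_like_sellentin_heavens(chi2, n_sims):
            # Gaussian likelihood marginalized over the true covariance given
            # its estimate from n_sims simulations: heavier, t-like tails.
            return -0.5 * n_sims * np.log1p(chi2 / (n_sims - 1))

        chi2 = np.array([1.0, 10.0, 30.0, 50.0])
        for n in (50, 500):
            diff = log_like_sellentin_heavens(chi2, n) - log_like_gaussian(chi2)
            print(f"N={n}: log-likelihood difference {np.round(diff, 2)}")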

  16. FILIF Measurements of HCHO Vertical Gradients and Flux via Eddy Covariance during BEACHON-ROCS 2010

    NASA Astrophysics Data System (ADS)

    Digangi, J. P.; Boyle, E.; Henry, S. B.; Keutsch, F. N.; Beachon-Rocs Science Team

    2010-12-01

    Models of HOx chemistry in rural (low NOx) environments can drastically underpredict OH concentrations compared to measurements. In addition, models of OH reactivity based on modeled VOC emissions also underpredict OH reactivity. The combination of these facts implies a significant misunderstanding of HOx chemistry in rural environments. Formaldehyde (HCHO) is one of the most ubiquitous VOC oxidation products and therefore is an important tracer of VOC oxidation. Formaldehyde may be formed via the fast oxidation of biogenic VOCs (BVOCs), such as isoprene and terpenes emitted from forests, giving a measure of any potential missing VOCs as a cause of the inconsistency in OH reactivity. Also, as the loss pathways of HCHO are well understood, HCHO concentrations can provide further information about OH concentrations. As a result, measurements of HCHO gradients and fluxes in pristine forests can provide valuable insight into this rural HOx chemistry. We present the first reported measurements of HCHO flux via eddy covariance, as well as HCHO concentrations and gradients as observed by the Madison FIber Laser-Induced Fluorescence (FILIF) Instrument during the BEACHON-ROCS 2010 campaign in a rural coniferous forest northwest of Colorado Springs, CO. Midday upward HCHO fluxes as high as 150 μg/m2/hr were observed. These results will be discussed in the context of rapid in-canopy BVOC oxidation and the uncertainties in the HOx budget inside forest canopies.
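
    The flux itself is the covariance of the fluctuating vertical wind and scalar concentration over an averaging window, F = mean(w' c'). A minimal sketch on synthetic 10 Hz data (all magnitudes illustrative, not BEACHON-ROCS values):

        import numpy as np

        rng = np.random.default_rng(0)
        fs, minutes = 10, 30
        n = fs * 60 * minutes
        w = rng.normal(0.0, 0.3, n)                  # vertical wind, m/s
        c = 4.0 + 0.8 * w + rng.normal(0, 0.2, n)    # HCHO concentration, ug/m^3

        w_prime = w - w.mean()                       # Reynolds decomposition
        c_prime = c - c.mean()
        flux = (w_prime * c_prime).mean()            # ug m^-2 s^-1
        print(f"HCHO flux ~ {flux * 3600:.0f} ug/m2/hr (upward if positive)")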

  17. Patient Portals as a Means of Information and Communication Technology Support to Patient-Centric Care Coordination – the Missing Evidence and the Challenges of Evaluation

    PubMed Central

    Georgiou, Andrew; Hyppönen, Hannele; Ammenwerth, Elske; de Keizer, Nicolette; Magrabi, Farah; Scott, Philip

    2015-01-01

    Summary. Objectives: To review the potential contribution of Information and Communication Technology (ICT) to enable patient-centric and coordinated care, and in particular to explore the role of patient portals as a developing ICT tool, to assess the available evidence, and to describe the evaluation challenges. Methods: Reviews of IMIA, EFMI, and other initiatives, together with literature reviews. Results: We present the progression from care coordination to care integration, and from patient-centric to person-centric approaches. We describe the different roles of ICT as an enabler of the effective presentation of information as and when needed. We focus on the patient’s role as a co-producer of health as well as the focus and purpose of care. We discuss the need for changing organisational processes as well as the current mixed evidence regarding patient portals as a logical tool, and the reasons for this dichotomy, together with the evaluation principles supported by theoretical frameworks so as to yield robust evidence. Conclusions: There is expressed commitment to coordinated care and to putting the patient in the centre. However, to achieve this, new interactive patient portals will be needed to enable peer communication by all stakeholders including patients and professionals. Few portals capable of this exist to date. The evaluation of these portals as enablers of system change, rather than as simple windows into electronic records, is at an early stage and novel evaluation approaches are needed. PMID:26123909

  18. Kepler's missing planets

    NASA Astrophysics Data System (ADS)

    Steffen, Jason H.

    2013-08-01

    We investigate the distributions of the orbital period ratios of adjacent planets in high-multiplicity Kepler systems (four or more planets) and low-multiplicity systems (two planets). Modelling the low-multiplicity sample as essentially equivalent to the high-multiplicity sample, but with unobserved intermediate planets, we find some evidence for an excess of planet pairs between the 2:1 and 3:1 mean-motion resonances in the low-multiplicity sample. This possible excess may be the result of strong dynamical interactions near these or other resonances or it may be a byproduct of other evolutionary events or processes such as planetary collisions. Three-planet systems show a significant excess of planets near the 2:1 mean-motion resonance that is not as prominent in either of the other samples. This observation may imply a correlation between strong dynamical interactions and observed planet number - perhaps a relationship between resonance pairs and the inclinations or orbital periods of additional planets. The period ratio distributions can also be used to identify targets to search for missing planets in the each of the samples, the presence or absence of which would have strong implications for planet formation and dynamical evolution models.

  19. Efficient retrieval of landscape Hessian: forced optimal covariance adaptive learning.

    PubMed

    Shir, Ofer M; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel

    2014-06-01

    Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (dimension ≳30) with large Hessian condition numbers (≳10^{4}). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes. PMID:25019911
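
    The inverse relationship at the heart of FOCAL is easiest to see on a quadratic landscape: points distributed proportionally to exp(-f(x)) with f = (1/2) x^T H x are Gaussian with covariance inv(H), so inverting a covariance adapted near the optimum recovers the Hessian. A toy check (direct sampling stands in for the CMA-ES adaptation loop):

        import numpy as np

        rng = np.random.default_rng(0)
        H = np.array([[4.0, 1.0],        # true landscape Hessian at the optimum
                      [1.0, 2.0]])

        # Density ~ exp(-0.5 x^T H x) is exactly N(0, inv(H)); CMA-ES learns a
        # search covariance of this shape when concentrated near the optimum.
        samples = rng.multivariate_normal(np.zeros(2), np.linalg.inv(H), size=20000)
        C_learned = np.cov(samples, rowvar=False)
        print(np.round(np.linalg.inv(C_learned), 2))   # close to H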

  20. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1×10⁻⁶. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for
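    The first pre-filter question can be sketched by brute force: integrate the relative-position Gaussian over the combined hard-body disk and maximize over miss distance. The covariance values and hard-body radius below are illustrative assumptions; operational tools use more refined encounter-plane formulations:

```python
import numpy as np

def collision_probability(d, R, sx, sz):
    """Integrate a zero-mean 2D Gaussian (diagonal covariance sx^2, sz^2)
    over a disk of radius R centred at (d, 0) in the encounter plane."""
    # Simple grid quadrature over the disk; adequate for illustration.
    u = np.linspace(-R, R, 401)
    X, Z = np.meshgrid(d + u, u)
    inside = (X - d) ** 2 + Z ** 2 <= R ** 2
    pdf = np.exp(-0.5 * ((X / sx) ** 2 + (Z / sz) ** 2)) / (2 * np.pi * sx * sz)
    cell = (u[1] - u[0]) ** 2
    return float((pdf * inside).sum() * cell)

# Hypothetical numbers: 10 m combined radius, 200 m x 1000 m covariance ellipse.
R, sx, sz = 10.0, 200.0, 1000.0
miss = np.linspace(0.0, 2000.0, 81)
pc = [collision_probability(d, R, sx, sz) for d in miss]
print(f"max Pc = {max(pc):.2e} at d = {miss[int(np.argmax(pc))]:.0f} m")
# If the maximum Pc stays below the tolerance (e.g. 1e-6), the pair can be
# screened out of conjunction analysis entirely.
```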

  1. Some further results on incorporating risk factor information in assessing the dependence between paired failure times arising from case-control family studies: an application to prostate cancer.

    PubMed

    Hsu, Li; Prentice, Ross L; Stanford, Janet L

    2002-03-30

    In a typical case-control family study, detailed risk factor information is often collected on cases and controls, but not on their relatives for reasons of cost and logistical difficulty in locating the relatives. The impact of missing risk factor information for relatives on estimation of the strength of dependence between the disease risk of pairs of relatives is largely unknown. In this paper, we extend our earlier work on estimating the dependence of ages at onset between paired relatives from case-control family data to include covariates on cases and controls, and possibly relatives. Using population-based case-control families as our basic data structure, we study the effect of missing covariates for relatives and/or cases and controls on the bias of certain dependence parameter estimators via a simulation study. Finally we illustrate various analyses using a case-control family study of early onset prostate cancer. PMID:11870822

  2. The impact of aging on gray matter structural covariance networks.

    PubMed

    Montembeault, Maxime; Joubert, Sven; Doyon, Julien; Carrier, Julie; Gagnon, Jean-François; Monchi, Oury; Lungu, Ovidiu; Belleville, Sylvie; Brambati, Simona Maria

    2012-11-01

    Previous anatomical volumetric studies have shown that healthy aging is associated with gray matter tissue loss in specific cerebral regions. However, these studies may have potentially missed critical elements of age-related brain changes, which largely exist within interrelationships among brain regions. This magnetic resonance imaging research aims to assess the effects of aging on the organization of gray matter structural covariance networks. Here, we used voxel-based morphometry on high-definition brain scans to compare the patterns of gray matter structural covariance networks that sustain different sensorimotor and high-order cognitive functions among young (n=88, mean age=23.5±3.1 years, female/male=55/33) and older (n=88, mean age=67.3±5.9 years, female/male=55/33) participants. This approach relies on the assumption that functionally correlated brain regions show correlations in gray matter volume as a result of mutually trophic influences or common experience-related plasticity. We found reduced structural association in older adults compared with younger adults, specifically in high-order cognitive networks. Major differences were observed in the structural covariance networks that subserve the following: a) the language-related semantic network, b) the executive control network, and c) the default-mode network. Moreover, these cognitive functions are typically altered in the older population. Our results indicate that healthy aging alters the structural organization of cognitive networks, shifting from a more distributed (in young adulthood) to a more localized topological organization in older individuals. PMID:22776455

  3. Quality Quantification of Evaluated Cross Section Covariances

    SciTech Connect

    Varet, S.; Dossantos-Uzarralde, P.

    2015-01-15

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the ⁸⁵Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
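    For zero-mean Gaussians, the Kullback-Leibler distance between two covariance matrices has a closed form, which is the quantity such a criterion builds on (the paper estimates it via bootstrap precisely because the true matrix is unknown). A minimal sketch with made-up matrices:

```python
import numpy as np

def gaussian_kl(S0, S1):
    """Kullback-Leibler distance KL(N(0,S0) || N(0,S1)) between zero-mean
    Gaussians, a standard way to compare covariance matrices."""
    k = S0.shape[0]
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(np.linalg.inv(S1) @ S0) - k + logdet1 - logdet0)

# Hypothetical "true" and estimated covariance matrices.
S_true = np.array([[4.0, 1.2], [1.2, 2.0]])
S_est = np.array([[3.5, 0.9], [0.9, 2.4]])
print(gaussian_kl(S_est, S_true))  # zero iff the two matrices coincide
```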

  4. REGRESSION METHODS FOR DATA WITH INCOMPLETE COVARIATES

    EPA Science Inventory

    Modern statistical methods in chronic disease epidemiology allow simultaneous regression of disease status on several covariates. These methods permit examination of the effects of one covariate while controlling for those of others that may be causally related to the disease. However…

  5. Particle emission from covariant phase space

    SciTech Connect

    Bambah, B.A.

    1992-12-01

    Using Lorentz-covariant sources, we calculate the multiplicity distribution of n pair-correlated particles emerging from a Lorentz-covariant phase-space volume. We use the Kim-Wigner formalism and identify these sources as the squeezed states of a relativistic harmonic oscillator. Applications of this to multiplicity distributions in particle physics are discussed.

  6. Group Theory of Covariant Harmonic Oscillators

    ERIC Educational Resources Information Center

    Kim, Y. S.; Noz, Marilyn E.

    1978-01-01

    A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…

  7. Bispectrum covariance in the flat-sky limit

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Shi, X.; Schneider, P.

    2009-12-01

    Aims. To probe cosmological fields beyond the Gaussian level, three-point statistics can be used, all of which are related to the bispectrum. Hence, measurements of CMB anisotropies, galaxy clustering, and weak gravitational lensing alike have to rely upon an accurate theoretical background concerning the bispectrum and its noise properties. If only small portions of the sky are considered, it is often desirable to perform the analysis in the flat-sky limit. We aim at a formal, detailed derivation of the bispectrum covariance in the flat-sky approximation, focusing on a pure two-dimensional Fourier-plane approach. Methods: We define an unbiased estimator of the bispectrum, which takes the average over the overlap of annuli in Fourier space, and compute its full covariance. The outcome of our formalism is compared to the flat-sky spherical harmonic approximation in terms of the covariance, the behavior under parity transformations, and the information content. We introduce a geometrical interpretation of the averaging process in the estimator, thus providing an intuitive understanding. Results: Contrary to previous work, we find a difference of a factor of two between the covariances of the Fourier-plane and the spherical harmonic approach. We argue that this discrepancy can be explained by the differing behavior with respect to parity. However, in an exemplary analysis it is demonstrated that the Fisher information of both formalisms agrees to high accuracy. Via the geometrical interpretation we are able to link the normalization in the bispectrum estimator to the area enclosed by the triangle configuration under consideration, as well as to the Wigner symbol, which leads to convenient approximation formulae for the covariances of both approaches.

  8. Fully Bayesian inference under ignorable missingness in the presence of auxiliary covariates

    PubMed Central

    Daniels, M.J.; Wang, C.; Marcus, B.H.

    2014-01-01

    In order to make a missing at random (MAR) or ignorability assumption realistic, auxiliary covariates are often required. However, the auxiliary covariates are not desired in the model for inference. Typical multiple imputation approaches do not assume that the imputation model marginalizes to the inference model. This has been termed ‘uncongenial’ (Meng, 1994). In order to make the two models congenial (or compatible), we would rather not assume a parametric model for the marginal distribution of the auxiliary covariates, but we typically do not have enough data to estimate the joint distribution well non-parametrically. In addition, when the imputation model uses a non-linear link function (e.g., the logistic link for a binary response), the marginalization over the auxiliary covariates to derive the inference model typically results in a difficult-to-interpret form for the effect of covariates. In this article, we propose a fully Bayesian approach to ensure that the models are compatible for incomplete longitudinal data by embedding an interpretable inference model within an imputation model, which also addresses the two complications described above. We evaluate the approach via simulations and implement it on a recent clinical trial. PMID:24571539

  9. Effect modification by time-varying covariates.

    PubMed

    Robins, James M; Hernán, Miguel A; Rotnitzky, Andrea

    2007-11-01

    Marginal structural models (MSMs) allow estimation of effect modification by baseline covariates, but they are less useful for estimating effect modification by evolving time-varying covariates. Rather, structural nested models (SNMs) were specifically designed to estimate effect modification by time-varying covariates. In their paper, Petersen et al. (Am J Epidemiol 2007;166:985-993) describe history-adjusted MSMs as a generalized form of MSM and argue that history-adjusted MSMs allow a researcher to easily estimate effect modification by time-varying covariates. However, history-adjusted MSMs can result in logically incompatible parameter estimates and hence in contradictory substantive conclusions. Here the authors propose a more restrictive definition of history-adjusted MSMs than the one provided by Petersen et al. and compare the advantages and disadvantages of using history-adjusted MSMs, as opposed to SNMs, to examine effect modification by time-dependent covariates. PMID:17875581

  10. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.

  11. Covariance matrices for use in criticality safety predictability studies

    SciTech Connect

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for {sup 235}U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the {sup 235}U resonance parameters.

  12. Treatment decisions based on scalar and functional baseline covariates.

    PubMed

    Ciarleglio, Adam; Petkova, Eva; Ogden, R Todd; Tarpey, Thaddeus

    2015-12-01

    The amount and complexity of patient-level data being collected in randomized-controlled trials offer both opportunities and challenges for developing personalized rules for assigning treatment for a given disease or ailment. For example, trials examining treatments for major depressive disorder are not only collecting typical baseline data such as age, gender, or scores on various tests, but also data that measure the structure and function of the brain such as images from magnetic resonance imaging (MRI), functional MRI (fMRI), or electroencephalography (EEG). These latter types of data have an inherent structure and may be considered as functional data. We propose an approach that uses baseline covariates, both scalars and functions, to aid in the selection of an optimal treatment. In addition to providing information on which treatment should be selected for a new patient, the estimated regime has the potential to provide insight into the relationship between treatment response and the set of baseline covariates. Our approach can be viewed as an extension of "advantage learning" to include both scalar and functional covariates. We describe our method and how to implement it using existing software. Empirical performance of our method is evaluated with simulated data in a variety of settings and also applied to data arising from a study of patients with major depressive disorder from whom baseline scalar covariates as well as functional data from EEG are available. PMID:26111145

  13. A Simulation-Based Comparison of Covariate Adjustment Methods for the Analysis of Randomized Controlled Trials

    PubMed Central

    Chaussé, Pierre; Liu, Jin; Luta, George

    2016-01-01

    Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870
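    As a rough illustration of the recommended primary analysis (not the authors' simulation code), ANCOVA with heteroscedasticity-robust standard errors takes a few lines in Python's statsmodels; the data-generating model below is an arbitrary assumption:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated trial: outcome depends on a baseline covariate and treatment,
# with heteroscedastic noise (variance grows with the covariate).
n = 200
baseline = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
y = 1.0 + 0.5 * treat + 0.8 * baseline + rng.normal(scale=1 + np.abs(baseline))
df = pd.DataFrame({"y": y, "treat": treat, "baseline": baseline})

# ANCOVA with heteroscedasticity-robust (HC3) standard errors.
fit = smf.ols("y ~ treat + baseline", data=df).fit(cov_type="HC3")
print(fit.params["treat"], fit.bse["treat"])
```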

  14. Kettlewell's Missing Evidence.

    ERIC Educational Resources Information Center

    Allchin, Douglas Kellogg

    2002-01-01

    The standard textbook account of Kettlewell and the peppered moths omits significant information. Suggests that this case can be used to reflect on the role of simplification in science teaching. (Author/MM)

  15. Modelling categorical covariates in Bayesian disease mapping by partition structures.

    PubMed

    Giudici, P; Knorr-Held, L; Rasser, G

    We consider the problem of mapping the risk from a disease using a series of regional counts of observed and expected cases, and information on potential risk factors. To analyse this problem from a Bayesian viewpoint, we propose a methodology which extends a spatial partition model by including categorical covariate information. Such an extension allows detection of clusters in the residual variation, reflecting further, possibly unobserved, covariates. The methodology is implemented by means of reversible jump Markov chain Monte Carlo sampling. An application is presented in order to illustrate and compare our proposed extensions with a purely spatial partition model. Here we analyse a well-known data set on lip cancer incidence in Scotland. PMID:10960873

  16. Modeling missing data in knowledge space theory.

    PubMed

    de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio

    2015-12-01

    Missing data are a well known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provide adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data. PMID:26651988

  17. A Comet's Missing Light

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-05-01

    On 28 November 2013, comet C/2012 S1, better known as comet ISON, should have passed within two solar radii of the Sun's surface as it reached perihelion in its orbit. But instead of shining in extreme ultraviolet (EUV) wavelengths as it grazed the solar surface, the comet was never detected by EUV instruments. What happened to comet ISON? Missing Emission: When a sungrazing comet passes through the solar corona, it leaves behind a trail of molecules evaporated from its surface. Some of these molecules emit EUV light, which can be detected by instruments on telescopes like the space-based Solar Dynamics Observatory (SDO). Comet ISON, a comet that arrived from deep space and was predicted to graze the Sun's corona in November 2013, was expected to cause EUV emission during its close passage. But analysis of the data from multiple telescopes that tracked ISON in EUV, including SDO, reveals no sign of it at perihelion. In a recent study, Paul Bryans and Dean Pesnell, scientists from NCAR's High Altitude Observatory and NASA Goddard Space Flight Center, try to determine why ISON didn't display this expected emission. Comparing ISON and Lovejoy: In December 2011, another comet dipped into the Sun's corona: comet Lovejoy. This image, showing the orbit Lovejoy took around the Sun, is a composite of SDO images of the pre- and post-perihelion phases of the orbit. The dashed part of the curve represents where Lovejoy passed out of view behind the Sun. [Bryans & Pesnell 2016] This is not the first time we've watched a sungrazing comet with EUV-detecting telescopes: comet Lovejoy passed similarly close to the Sun in December 2011. But when Lovejoy grazed the solar corona, it emitted brightly in EUV. So why didn't ISON? Bryans and Pesnell argue that there are two possibilities: the coronal conditions experienced by the two comets were not similar, or the two comets themselves were not similar. To establish which factor is the most relevant, the authors first demonstrate that both

  18. Missing gene identification using functional coherence scores

    PubMed Central

    Chitale, Meghana; Khan, Ishita K.; Kihara, Daisuke

    2016-01-01

    Reconstructing metabolic and signaling pathways is an effective way of interpreting a genome sequence. A challenge in a pathway reconstruction is that often genes in a pathway cannot be easily found, reflecting current imperfect information of the target organism. In this work, we developed a new method for finding missing genes, which integrates multiple features, including gene expression, phylogenetic profile, and function association scores. Particularly, for considering function association between candidate genes and neighboring proteins to the target missing gene in the network, we used Co-occurrence Association Score (CAS) and PubMed Association Score (PAS), which are designed for capturing functional coherence of proteins. We showed that adding CAS and PAS substantially improve the accuracy of identifying missing genes in the yeast enzyme-enzyme network compared to the cases when only the conventional features, gene expression, phylogenetic profile, were used. Finally, it was also demonstrated that the accuracy improves by considering indirect neighbors to the target enzyme position in the network using a proper network-topology-based weighting scheme. PMID:27552989

  19. Depression and literacy are important factors for missed appointments.

    PubMed

    Miller-Matero, Lisa Renee; Clark, Kalin Burkhardt; Brescacin, Carly; Dubaybo, Hala; Willens, David E

    2016-09-01

    Multiple variables are related to missed clinic appointments. However, the prevalence of missed appointments is still high, suggesting other factors may play a role. The purpose of this study was to investigate the relationship between missed appointments and multiple variables simultaneously across a health care system, including patient demographics, psychiatric symptoms, cognitive functioning and literacy status. Chart reviews were conducted on 147 consecutive patients who were seen by a primary care psychologist over a six-month period and completed measures to determine levels of depression, anxiety, sleep, cognitive functioning and health literacy. Demographic information and rates of missed appointments were also collected from charts. The average rate of missed appointments was 15.38%. In univariate analyses, factors related to higher rates of missed appointments included younger age (p = .03), lower income (p = .05), probable depression (p = .05), sleep difficulty (p = .05) and limited reading ability (p = .003). There were trends for a higher rate of missed appointments for patients identifying as black (p = .06), government insurance (p = .06) and limited math ability (p = .06). In a multivariate model, probable depression (p = .02) and limited reading ability (p = .003) were the only independent predictors. Depression and literacy status may be the most important factors associated with missed appointments. Implications are discussed, including regular screening for depression and literacy status, as well as interventions that can be utilized to help improve the rate of missed appointments. PMID:26695719

  20. Missing: Students' Global Outlook

    ERIC Educational Resources Information Center

    Alemu, Daniel S.

    2010-01-01

    While schools are focusing excessively on meeting accountability standards and improving test scores, important facets of schooling--such as preparing students for the global marketplace--are being inadvertently overlooked. Without deliberate informal observation by teachers and school administrators, detecting and addressing students'…

  1. Missed opportunities in child healthcare

    PubMed Central

    Jonker, Linda

    2014-01-01

    Background Various policies in health, such as Integrated Management of Childhood Illnesses, were introduced to enhance integrated service delivery in child healthcare. During clinical practice the researcher observed that integrated services may not be rendered. Objectives This article describes the experiences of mothers that utilised comprehensive child health services in the Cape Metropolitan area of South Africa. Services included treatment for diseases; preventative interventions such as immunisation; and promotive interventions, such as improvement in nutrition and promotion of breastfeeding. Method A qualitative, descriptive phenomenological approach was applied to explore the experiences and perceptions of mothers and/or carers utilising child healthcare services. Thirty percent of the clinics were selected purposively from the total population. A convenience purposive non-probability sampling method was applied to select 17 mothers who met the criteria and gave written consent. Interviews were conducted and recorded digitally using an interview guide. The data analysis was done using Tesch's eight step model. Results Findings of the study indicated varied experiences. Not all mothers received information about the Road to Health book or card. According to the mothers, integrated child healthcare services were not practised. The consequences were missed opportunities in immunisation, provision of vitamin A, absence of growth monitoring, feeding assessment and provision of nutritional advice. Conclusion There is a need for simple interventions such as oral rehydration, early recognition and treatment of diseases, immunisation, growth monitoring and appropriate nutrition advice. These services were not offered diligently. Such interventions could contribute to reducing the incidence of child morbidity and mortality. PMID:26245404

  2. Determination of Resonance Parameters and their Covariances from Neutron Induced Reaction Cross Section Data

    SciTech Connect

    Schillebeeckx, P.; Becker, B.; Danon, Y.; Guber, K.; Harada, H.; Heyse, J.; Junghans, A.R.; Kopecky, S.; Massimi, C.; Moxon, M.C.; Otuka, N.; Sirakov, I.; Volev, K.

    2012-12-15

    Cross section data in the resolved and unresolved resonance region are represented by nuclear reaction formalisms using parameters which are determined by fitting them to experimental data. Therefore, the quality of evaluated cross sections in the resonance region strongly depends on the experimental data used in the adjustment process and an assessment of the experimental covariance data is of primary importance in determining the accuracy of evaluated cross section data. In this contribution, uncertainty components of experimental observables resulting from total and reaction cross section experiments are quantified by identifying the metrological parameters involved in the measurement, data reduction and analysis process. In addition, different methods that can be applied to propagate the covariance of the experimental observables (i.e. transmission and reaction yields) to the covariance of the resonance parameters are discussed and compared. The methods being discussed are: conventional uncertainty propagation, Monte Carlo sampling and marginalization. It is demonstrated that the final covariance matrix of the resonance parameters not only strongly depends on the type of experimental observables used in the adjustment process, the experimental conditions and the characteristics of the resonance structure, but also on the method that is used to propagate the covariances. Finally, a special data reduction concept and format is presented, which offers the possibility to store the full covariance information of experimental data in the EXFOR library and provides the information required to perform a full covariance evaluation.
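    Of the propagation methods compared, Monte Carlo sampling is the easiest to sketch: draw parameter sets from their covariance, evaluate the cross-section model on each draw, and take the covariance of the resulting curves. The Breit-Wigner-like stand-in model and all numbers below are illustrative assumptions, not an R-matrix calculation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical resonance parameters (mean vector) and their covariance.
theta = np.array([1.0, 0.05])          # e.g. resonance energy and width
cov_theta = np.array([[1e-4, 2e-6],
                      [2e-6, 1e-6]])

def model(p, E):
    """Stand-in Lorentzian shape; real work would use an R-matrix code."""
    E0, G = p
    return G**2 / ((E - E0) ** 2 + G**2)

E_grid = np.linspace(0.9, 1.1, 50)

# Monte Carlo propagation: sample parameters, evaluate the model, and take
# the covariance of the sampled curves over the energy grid.
draws = rng.multivariate_normal(theta, cov_theta, size=5000)
curves = np.array([model(p, E_grid) for p in draws])
cov_sigma = np.cov(curves.T)  # 50 x 50 covariance of the cross section
print(cov_sigma.shape, np.sqrt(np.diag(cov_sigma)).max())
```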

  3. Epigenetic Contribution to Covariance Between Relatives

    PubMed Central

    Tal, Omri; Kisdi, Eva; Jablonka, Eva

    2010-01-01

    Recent research has pointed to the ubiquity and abundance of between-generation epigenetic inheritance. This research has implications for assessing disease risk and the responses to ecological stresses and also for understanding evolutionary dynamics. An important step toward a general evaluation of these implications is the identification and estimation of the amount of heritable, epigenetic variation in populations. While methods for modeling the phenotypic heritable variance contributed by culture have already been developed, there are no comparable methods for nonbehavioral epigenetic inheritance systems. By introducing a model that takes epigenetic transmissibility (the probability of transmission of ancestral phenotypes) and environmental induction into account, we provide novel expressions for covariances between relatives. We have combined a classical quantitative genetics approach with information about the number of opportunities for epigenetic reset between generations and assumptions about environmental induction to estimate the heritable epigenetic variance and epigenetic transmissibility for both asexual and sexual populations. This assists us in the identification of phenotypes and populations in which epigenetic transmission occurs and enables a preliminary quantification of their transmissibility, which could then be followed by genomewide association and QTL studies. PMID:20100941

  4. Missed nursing care: a qualitative study.

    PubMed

    Kalisch, Beatrice J

    2006-01-01

    The purpose of this study was to determine nursing care regularly missed on medical-surgical units and reasons for missed care. Nine elements of regularly missed nursing care (ambulation, turning, delayed or missed feedings, patient teaching, discharge planning, emotional support, hygiene, intake and output documentation, and surveillance) and 7 themes relative to the reasons for missing this care were reported by nursing staff. PMID:16985399

  5. Covariation bias in panic-prone individuals.

    PubMed

    Pauli, P; Montoya, P; Martz, G E

    1996-11-01

    Covariation estimates between fear-relevant (FR; emergency situations) or fear-irrelevant (FI; mushrooms and nudes) stimuli and an aversive outcome (electrical shock) were examined in 10 high-fear (panic-prone) and 10 low-fear respondents. When the relation between slide category and outcome was random (illusory correlation), only high-fear participants markedly overestimated the contingency between FR slides and shocks. However, when there was a high contingency of shocks following FR stimuli (83%) and a low contingency of shocks following FI stimuli (17%), the group difference vanished. Reversal of contingencies back to random induced a covariation bias for FR slides in high- and low-fear respondents. Results indicate that panic-prone respondents show a covariation bias for FR stimuli and that the experience of a high contingency between FR slides and aversive outcomes may foster such a covariation bias even in low-fear respondents. PMID:8952200

  6. Mean backscattering properties of random radar targets - A polarimetric covariance matrix concept

    NASA Astrophysics Data System (ADS)

    Ziegler, V.; Lueneburg, E.; Schroth, A.

    A polarimetric covariance matrix concept which describes the polarimetric backscattering features of reciprocal random radar targets is presented. The polarization dependence of second-order radar observables can be obtained by unitary similarity transformations of the covariance matrix. Invariant target parameters, such as the minimum and maximum eigenvalues or the eigenvalue difference of the covariance matrix, are introduced, providing information on the randomness of a target and the polarimetric features of the radar observables. An analytical formulation of the problem of optimal polarizations for the mean copolar and crosspolar power return is derived. As a result, the operational computation of optimal polarizations within large data sets becomes feasible.
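    A small sketch of the invariants mentioned: eigenvalues are unchanged by unitary (polarization-basis) similarity transformations, so the minimum and maximum eigenvalues and their difference can be read off any Hermitian polarimetric covariance matrix. The matrix below is a made-up example:

```python
import numpy as np

# Hypothetical 3x3 polarimetric covariance matrix (Hermitian positive
# semi-definite, e.g. formed from a lexicographic scattering vector).
C = np.array([[2.0 + 0j, 0.3 - 0.1j, 0.1 + 0.2j],
              [0.3 + 0.1j, 1.0 + 0j, 0.05 - 0.05j],
              [0.1 - 0.2j, 0.05 + 0.05j, 0.5 + 0j]])

# Eigenvalues are invariant under unitary basis transformations.
w = np.linalg.eigvalsh(C)  # ascending order
print("min/max eigenvalue:", w[0], w[-1])
print("eigenvalue spread (an indicator of target randomness):", w[-1] - w[0])
```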

  7. Some thoughts on positive definiteness in the consideration of nuclear data covariance matrices

    SciTech Connect

    Geraldo, L.P.; Smith, D.L.

    1988-01-01

    Some basic mathematical features of covariance matrices are reviewed, particularly as they relate to the property of positive definiteness. Physical implications of positive definiteness are also discussed. Consideration is given to an examination of the origins of non-positive definite matrices, to procedures which encourage the generation of positive definite matrices and to the testing of covariance matrices for positive definiteness. Attention is also given to certain problems associated with the construction of covariance matrices using information which is obtained from evaluated data files recorded in the ENDF format. Examples are provided to illustrate key points pertaining to each of the topic areas covered.
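    Two of the practical points raised, testing a matrix for positive definiteness and repairing one that fails, can be sketched with standard devices (a Cholesky test and eigenvalue clipping); these are generic remedies, not the paper's specific procedures:

```python
import numpy as np

def is_positive_definite(C):
    """Test positive definiteness via Cholesky, the cheap standard check."""
    try:
        np.linalg.cholesky(C)
        return True
    except np.linalg.LinAlgError:
        return False

def repair(C, eps=1e-10):
    """Clip negative eigenvalues to restore positive (semi-)definiteness,
    one simple remedy for matrices assembled from rounded file data."""
    C = 0.5 * (C + C.T)                # enforce symmetry first
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.clip(w, eps, None)) @ V.T

# A correlation-like matrix that rounding has pushed indefinite.
C = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
print(is_positive_definite(C), is_positive_definite(repair(C)))
```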

  8. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    NASA Astrophysics Data System (ADS)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because covariance descriptors lie on a non-Euclidean manifold, kernel sparse coding theory is used to address this. We verify the efficacy of the proposed algorithm in terms of the confusion matrices on real images consisting of seven categories of infrared vehicle targets.
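    A common construction of such a region covariance descriptor computes the covariance of per-pixel features over the target patch; the exact feature set in the paper may differ, so the [x, y, I, |Ix|, |Iy|] choice below is an assumption:

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor from per-pixel features [x, y, I, |Ix|, |Iy|],
    a common construction for covariance-based target recognition."""
    H, W = patch.shape
    ys, xs = np.mgrid[0:H, 0:W]
    Iy, Ix = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(Ix).ravel(), np.abs(Iy).ravel()])
    return np.cov(F)  # 5 x 5 symmetric positive semi-definite descriptor

# Hypothetical infrared patch; real inputs would be detected target chips.
patch = np.random.default_rng(3).random((32, 32))
C = region_covariance(patch)
print(C.shape)
```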

  9. Phase-covariant quantum cloning of qudits

    SciTech Connect

    Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin

    2003-02-01

    We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between input state and single qudit reduced density operator of output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.

  10. Noncommutative Gauge Theory with Covariant Star Product

    SciTech Connect

    Zet, G.

    2010-08-04

    We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.

  11. Covariant action for type IIB supergravity

    NASA Astrophysics Data System (ADS)

    Sen, Ashoke

    2016-07-01

    Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.

  12. Covariate analysis of bivariate survival data

    SciTech Connect

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  13. Autofocusing searches in jets plus missing energy

    SciTech Connect

    Englert, Christoph; Plehn, Tilman; Schichtel, Peter; Schumann, Steffen

    2011-05-01

    Jets plus missing transverse energy is one of the main search channels for new physics at the LHC. A major limitation lies in our understanding of QCD backgrounds. Using jet merging, we can describe the number of jets in typical background channels in terms of a staircase scaling, including theory uncertainties. The scaling parameter depends on the particles in the final state and on cuts applied. Measuring the staircase scaling will allow us to also predict the effective mass for standard model backgrounds. Based on both observables, we propose an analysis strategy avoiding model-specific cuts, which returns information about the color charge and the mass scale of the underlying new physics.

  14. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
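    The underlying premise, that a fast-decaying eigenvalue spectrum makes a low-rank representation appropriate, can be checked directly on any covariance matrix. A sketch with a synthetic matrix (not the paper's dynamical systems):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical analysis-error covariance with a fast-decaying spectrum.
V, _ = np.linalg.qr(rng.normal(size=(100, 100)))
eigvals = 1.0 / (np.arange(1, 101) ** 2)      # a few dominant modes
P = (V * eigvals) @ V.T

# Rank-k approximation from the leading eigenpairs.
k = 5
w, U = np.linalg.eigh(P)
w, U = w[::-1], U[:, ::-1]                     # sort descending
P_k = (U[:, :k] * w[:k]) @ U[:, :k].T

explained = w[:k].sum() / w.sum()
rel_err = np.linalg.norm(P - P_k) / np.linalg.norm(P)
print(f"variance captured by rank {k}: {explained:.3f}, rel. error {rel_err:.3f}")
```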

  15. Estimated Environmental Exposures for MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Normand, Eugene; Finckenor, Miriam

    2008-01-01

    Both modeling techniques and a variety of measurements and observations were used to characterize the environmental conditions experienced by the specimens flown on the MISSE-3 (Materials International Space Station Experiment) and MISSE-4 space flight experiments. On August 3, 2006, astronauts Jeff Williams and Thomas Reiter attached MISSE-3 and -4 to the Quest airlock on ISS, where these experiments were exposed to atomic oxygen (AO), ultraviolet (UV) radiation, particulate radiation, thermal cycling, meteoroid/space debris impact, and the induced environment of an active space station. They had been flown to ISS during the July 2006 STS-121 mission. The two suitcases were oriented so that one side faced the ram direction and one side remained shielded from the atomic oxygen. On August 18, 2007, astronauts Clay Anderson and Dave Williams retrieved MISSE-3 and -4 and returned them to Earth at the end of the STS-118 mission. Quantitative values are provided when possible for selected environmental factors. A meteoroid/debris impact survey was performed prior to de-integration at Langley Research Center. AO fluences were calculated based on mass loss and thickness loss of thin polymeric films of known AO reactivity. Radiation was measured with thermoluminescent detectors. Visual inspections under ambient and "black-light" conditions at NASA LaRC, together with optical measurements on selected specimens, were the basis for the initial contamination level assessment.

  16. Cross-Section Covariance Data Processing with the AMPX Module PUFF-IV

    SciTech Connect

    Wiarda, Dorothea; Leal, Luiz C; Dunn, Michael E

    2011-01-01

    The ENDF community is endeavoring to release an updated version of the ENDF/B-VII library (ENDF/B-VII.1). In the new release several new evaluations containing covariance information have been added, as the community strives to add covariance information for use in programs like the TSUNAMI (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation) sequence of SCALE (Ref 1). The ENDF/B formatted files are processed into libraries to be used in transport calculations using the AMPX code system (Ref 2) or the NJOY code system (Ref 3). Both codes contain modules to process covariance matrices: PUFF-IV for AMPX and ERRORR in the case of NJOY. While the cross section processing capability between the two code systems has been widely compared, the same is not true for the covariance processing. This paper compares the results for the two codes using the pre-release version of ENDF/B-VII.1.

  17. Covariance Modifications to Subspace Bases

    SciTech Connect

    Harris, D B

    2008-11-19

    Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVD). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also describes SVD updates for a particular scenario of interest in seismic array signal processing. The particular application involves updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d ≥ 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors or a sensor network. The template design process entails constructing a data matrix whose columns contain the

  18. Implementation of optimal phase-covariant cloning machines

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2007-07-15

    The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for universal cloning. The present article first analyzes different innovative schemes to implement the 1→3 PQCM. The method is then generalized to any 1→M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally, different experimental schemes based either on linear or nonlinear methods and valid for single photon polarization encoded qubits are discussed.

  19. Construction and use of gene expression covariation matrix

    PubMed Central

    Hennetin, Jérôme; Pehkonen, Petri; Bellis, Michel

    2009-01-01

    … Conclusion: This new method, applied to four different large data sets, has allowed us to construct distinct covariation matrices with similar properties. We have also developed a technique to translate these covariation networks into graphical 3D representations and found that the local assignation of the probe sets was conserved across the four chip set models used, which encompass three different species (humans, mice, and rats). The application of adapted clustering methods succeeded in delineating six conserved functional regions that we characterized using Gene Ontology information. PMID:19594909

  20. Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Anderson, D. J.; Amen, J. L.

    2007-12-01

    … received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on theory of the method (including state of methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non-traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.
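    For readers new to the method, its computational core is simply the covariance of vertical-wind and scalar fluctuations over an averaging window. A toy sketch with synthetic 10 Hz data (real processing adds despiking, coordinate rotation, and the corrections such guidelines cover):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 30-minute record at 10 Hz: vertical wind w (m/s) and a scalar
# concentration c (e.g. CO2, mmol/m^3), with correlated fluctuations.
n = 30 * 60 * 10
w = rng.normal(0, 0.3, n)
c = 15.0 + 0.8 * w + rng.normal(0, 0.5, n)

# Core of the eddy covariance method: the flux is the covariance of the
# fluctuations about the (Reynolds) means, F = mean(w' * c').
w_p = w - w.mean()
c_p = c - c.mean()
flux = np.mean(w_p * c_p)
print(f"flux = {flux:.3f} (units of w times c)")
```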

  1. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis that assumes the covariate acts linearly on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect is approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in the Cox model: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and the significance level was set at 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean square error than the other methods of selecting the smoothing parameter. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809
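    A rough Python analogue of the spline-in-Cox idea (the study works in R with penalized splines; here an unpenalized natural cubic spline basis from patsy, fit with statsmodels' PHReg, only illustrates the mechanics, and the simulated data are an assumption):

```python
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

rng = np.random.default_rng(6)

# Simulated survival data with a quadratic covariate effect on the log hazard.
n = 500
x = rng.uniform(-2, 2, n)
hazard = np.exp(0.5 * x**2)
time = rng.exponential(1.0 / hazard)
event = (time < 3.0).astype(int)           # administrative censoring at t = 3
time = np.minimum(time, 3.0)

# Natural cubic spline basis for x (patsy's 'cr'), then a Cox fit.
X = dmatrix("cr(x, df=4) - 1", {"x": x}, return_type="dataframe")
fit = sm.PHReg(time, X, status=event).fit()
print(fit.summary())
```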

  2. The Board's missing link.

    PubMed

    Montgomery, Cynthia A; Kaufman, Rhonda

    2003-03-01

    If a dam springs several leaks, there are various ways to respond. One could assiduously plug the holes, for instance. Or one could correct the underlying weaknesses, a more sensible approach. When it comes to corporate governance, for too long we have relied on the first approach. But the causes of many governance problems lie well below the surface--specifically, in critical relationships that are not structured to support the players involved. In other words, the very foundation of the system is flawed. And unless we correct the structural problems, surface changes are unlikely to have a lasting impact. When shareholders, management, and the board of directors work together as a system, they provide a powerful set of checks and balances. But the relationship between shareholders and directors is fraught with weaknesses, undermining the entire system's equilibrium. As the authors explain, the exchange of information between these two players is poor. Directors, though elected by shareholders to serve as their agents, aren't individually accountable to the investors. And shareholders--for a variety of reasons--have failed to exert much influence over boards. In the end, directors are left with the Herculean task of faithfully representing shareholders whose preferences are unclear, and shareholders have little say about who represents them and few mechanisms through which to create change. The authors suggest several ways to improve the relationship between shareholders and directors: Increase board accountability by recording individual directors' votes on key corporate resolutions; separate the positions of chairman and CEO; reinvigorate shareholders; and give boards funding to pay for outside experts who can provide perspective on crucial issues. PMID:12632807

  3. Monitoring: The missing piece

    SciTech Connect

    Bjorkland, Ronald

    2013-11-15

    The U.S. National Environmental Policy Act (NEPA) of 1969 ushered in an era of more robust attention to environmental impacts resulting from larger scale federal projects. The number of other countries that have adopted NEPA's framework is evidence of the appeal of this type of environmental legislation. Mandates to review environmental impacts, identify alternatives, and provide mitigation plans before commencement of the project are at the heart of NEPA. Such project reviews have resulted in the development of a vast number of reports and large volumes of project-specific data that potentially can be used to better understand the components and processes of the natural environment and provide guidance for improved and efficient environmental protection. However, the environmental assessment (EA) or the more robust and intensive environmental impact statement (EIS) that are required for most major projects are more often than not developed to satisfy the procedural aspects of the NEPA legislation, while failing to provide the needed guidance for improved decision-making. While NEPA legislation recommends monitoring of project activities, this activity is not mandated, and in those situations where it has been incorporated, the monitoring showed that the EIS was inaccurate in direction and/or magnitude of the impact. Many reviews of NEPA have suggested that monitoring all project phases, from design through decommissioning, should be incorporated. Information gathered through a well-developed monitoring program can be managed in databases and benefit not only the specific project but would provide guidance on how to better design and implement future activities intended to protect and enhance the natural environment. -- Highlights: • NEPA statutes created a profound environmental protection legislative framework. • Contrary to intent, NEPA does not provide for definitive project monitoring. • Robust project monitoring is essential for enhanced

  4. A hybrid imputation approach for microarray missing value estimation

    PubMed Central

    2015-01-01

    Background Missing data is an inevitable phenomenon in gene expression microarray experiments, due to instrument failure or human error, and it has a negative impact on the performance of downstream analysis. Most existing approaches suffer from this prevalent problem. Imputation is one of the frequently used methods for processing missing data, and many developments have been achieved in research on estimating missing values. The challenging task is how to improve imputation accuracy for data with a large missing rate. Methods In this paper, inspired by the idea of collaborative training, we propose a novel hybrid imputation method, called Recursive Mutual Imputation (RMI). Specifically, RMI exploits global correlation information and local structure in the data, captured by two popular methods, Bayesian Principal Component Analysis (BPCA) and Local Least Squares (LLS), respectively. The mutual strategy is implemented by sharing the estimated data sequences at each recursive step. Meanwhile, we order the imputation sequence based on the number of missing entries in the target gene. Furthermore, a weight-based integration method is utilized in the final assembling step. Results We evaluate RMI against three state-of-the-art algorithms (BPCA, LLS, and Iterated Local Least Squares imputation (ItrLLS)) on four publicly available microarray datasets. Experimental results clearly demonstrate that RMI significantly outperforms the comparative methods in terms of Normalized Root Mean Square Error (NRMSE), especially for datasets with large missing rates and fewer complete genes. Conclusions Our proposed hybrid imputation approach incorporates both global and local information of microarray genes, achieving lower NRMSE values than any single approach alone. Moreover, this study highlights the need to consider the imputation sequence of missing entries in imputation methods. PMID:26330180
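    Only the blending step of RMI is easy to sketch without the paper's BPCA and LLS implementations. Below, two off-the-shelf scikit-learn imputers stand in for the global and local components, with an equal-weight blend; the recursion and missing-count ordering of RMI are omitted:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer

rng = np.random.default_rng(7)

# Toy low-rank expression matrix with 10% of entries missing at random.
U = rng.normal(size=(100, 5))
V = rng.normal(size=(5, 20))
X = U @ V + 0.1 * rng.normal(size=(100, 20))
mask = rng.random(X.shape) < 0.10
X_obs = np.where(mask, np.nan, X)

# Stand-ins for the paper's BPCA (global correlation) and LLS (local
# structure): a model-based iterative imputer and a KNN imputer.
g = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_obs)
l = KNNImputer(n_neighbors=5).fit_transform(X_obs)

# Weight-based integration of the two estimates (equal weights here).
est = 0.5 * g + 0.5 * l

nrmse = np.sqrt(np.mean((est[mask] - X[mask]) ** 2)) / X[mask].std()
print(f"NRMSE on masked entries: {nrmse:.3f}")
```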

  5. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had the most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779
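
    As a concrete illustration of the focal-patch covariates tested above, the sketch below computes the proportion of a given land-cover class within a circular buffer around each camera-trap location on a classified raster. It is a minimal stand-in for a GIS workflow; the function name, arguments, and the row/column point encoding are our own assumptions.

    ```python
    import numpy as np

    def focal_proportion(landcover, points, radius_m, cell_size_m, target_class):
        """Proportion of `target_class` cells within `radius_m` of each point.
        landcover : 2-D integer raster of land-cover classes
        points    : (n, 2) integer (row, col) raster indices of camera traps
        """
        r = int(round(radius_m / cell_size_m))
        yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
        disk = yy**2 + xx**2 <= r**2             # circular focal window
        out = np.empty(len(points))
        for n, (i, j) in enumerate(points):
            i0, i1 = max(i - r, 0), min(i + r + 1, landcover.shape[0])
            j0, j1 = max(j - r, 0), min(j + r + 1, landcover.shape[1])
            win = landcover[i0:i1, j0:j1]        # clip window at raster edges
            m = disk[i0 - (i - r):i1 - (i - r), j0 - (j - r):j1 - (j - r)]
            out[n] = np.mean(win[m] == target_class)
        return out

    # e.g. a 50-m focal patch on a 5-m resolution land-cover map:
    # cover_50m = focal_proportion(lc, pts, radius_m=50, cell_size_m=5,
    #                              target_class=3)
    ```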

  6. Spacetime states and covariant quantum theory

    NASA Astrophysics Data System (ADS)

    Reisenberger, Michael; Rovelli, Carlo

    2002-06-01

    In its usual presentation, classical mechanics appears to give time a very special role. But it is well known that mechanics can be formulated so as to treat the time variable on the same footing as the other variables in the extended configuration space. Such covariant formulations are natural for relativistic gravitational systems, where general covariance conflicts with the notion of a preferred physical-time variable. The standard presentation of quantum mechanics, in turn, again gives time a very special role, raising well-known difficulties for quantum gravity. Is there a covariant form of (canonical) quantum mechanics? We observe that the preferred role of time in quantum theory is the consequence of an idealization: that measurements are instantaneous. Canonical quantum theory can be given a covariant form by dropping this idealization. States prepared by noninstantaneous measurements are described by "spacetime smeared states." The theory can be formulated in terms of these states, without making any reference to a special time variable. The quantum dynamics is expressed in terms of the propagator, an object covariantly defined on the extended configuration space.
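
    To make the construction concrete, here is a minimal sketch in our own notation (a simplification, not the authors' full formalism): a state smeared over the extended configuration space (x, t), with overlaps of such states expressed through the propagator alone.

    ```latex
    % Sketch (our notation): a spacetime-smeared state and the propagator
    % on the extended configuration space.
    \[
      |\Psi_f\rangle = \int dt\,dx\; f(x,t)\, e^{iHt}|x\rangle,
      \qquad
      W(x,t;\,x',t') = \langle x|\, e^{-iH(t-t')} \,|x'\rangle,
    \]
    \[
      \langle \Psi_g | \Psi_f \rangle
      = \int dt\,dx\,dt'\,dx'\;\overline{g(x,t)}\; W(x,t;\,x',t')\; f(x',t').
    \]
    ```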

  7. Linkage analysis of anorexia nervosa incorporating behavioral covariates.

    PubMed

    Devlin, Bernie; Bacanu, Silviu-Alin; Klump, Kelly L; Bulik, Cynthia M; Fichter, Manfred M; Halmi, Katherine A; Kaplan, Allan S; Strober, Michael; Treasure, Janet; Woodside, D Blake; Berrettini, Wade H; Kaye, Walter H

    2002-03-15

    Eating disorders, such as anorexia nervosa (AN) and bulimia nervosa (BN), have genetic and environmental underpinnings. To explore genetic contributions to AN, we measured psychiatric, personality and temperament phenotypes of individuals diagnosed with eating disorders from 196 multiplex families, all accessed through an AN proband, and genotyped a battery of 387 short tandem repeat (STR) markers distributed across the genome. On these data we performed a multipoint affected sibling pair (ASP) linkage analysis using a novel method that incorporates covariates. By exploring seven attributes thought to typify individuals with eating disorders, we identified two variables, drive-for-thinness and obsessionality, which delimit populations among the ASPs. For both of these traits, or covariates, there was a cluster of ASPs with high and concordant values for these traits, in keeping with our expectations for individuals with AN, and other clusters of ASPs that did not meet those expectations. When we incorporated these covariates into the ASP linkage analysis, both jointly and separately, we found several regions of suggestive linkage: one close to genome-wide significance on chromosome 1 (at 210 cM, D1S1660; LOD = 3.46, P = 0.00003), another on chromosome 2 (at 114 cM, D2S1790; LOD = 2.22, P = 0.00070) and a third on chromosome 13 (at 26 cM, D13S894; LOD = 2.50, P = 0.00035). By comparing our results to those obtained with more standard linkage methods, we find that the covariates convey substantial information for the linkage analysis. PMID:11912184
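
    For readers unfamiliar with covariate-based ASP linkage, the sketch below shows one standard way to fold a pair-level covariate into an ASP LOD score, via a conditional-logistic tilt of the null IBD-sharing probabilities. This is in the spirit of, but not necessarily identical to, the method used above; the parameterization and function name are our own.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def asp_lod_with_covariate(f, x):
        """Covariate ASP LOD sketch.
        f : (n_pairs, 3) posterior IBD-sharing probabilities (0, 1, 2 alleles)
            for each affected sib pair at the test locus
        x : (n_pairs,) covariate value per pair (e.g. mean drive-for-thinness)
        """
        null = np.array([0.25, 0.5, 0.25])          # Mendelian sharing under H0
        loglik_null = np.sum(np.log(f @ null))

        def negloglik(theta):
            a1, a2, b1, b2 = theta
            # covariate-dependent log-linear tilt of the null probabilities
            tilt = np.column_stack(
                [np.zeros_like(x), a1 + b1 * x, a2 + b2 * x])
            z = null * np.exp(tilt)
            z /= z.sum(axis=1, keepdims=True)
            return -np.sum(np.log(np.sum(f * z, axis=1)))

        fit = minimize(negloglik, np.zeros(4), method="Nelder-Mead")
        return (-fit.fun - loglik_null) / np.log(10.0)   # LOD score
    ```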

  8. FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.

    SciTech Connect

    HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.

    2006-06-05

    We describe the implementation of the KALMAN code in the EMPIRE system and present the first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 Gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on a combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from model-generated covariances, which tend to show strong positive long-range correlations, and from those determined solely from experimental data, which result in nearly diagonal matrices. We have studied the shapes of the correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).
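
    As background, the KALMAN approach rests on the standard generalized-least-squares (Kalman filter) update of model parameters against experimental data. A minimal numpy sketch follows; it is our own simplified formulation, not the KALMAN/EMPIRE code itself.

    ```python
    import numpy as np

    def gls_update(p, C, y, f_p, G, V):
        """One GLS/Kalman update of nuclear-model parameters.
        p   : prior parameter vector            (n,)
        C   : prior parameter covariance        (n, n)
        y   : experimental data                 (m,)
        f_p : model prediction at p             (m,)
        G   : sensitivity matrix df/dp          (m, n)
        V   : experimental covariance           (m, m)
        """
        S = G @ C @ G.T + V                     # innovation covariance
        K = C @ G.T @ np.linalg.inv(S)          # gain
        p_post = p + K @ (y - f_p)              # updated parameters
        C_post = C - K @ G @ C                  # updated parameter covariance
        return p_post, C_post

    # Cross-section covariances then follow by propagating C_post through the
    # sensitivities of each reaction channel: Cov_xs = S_xs @ C_post @ S_xs.T
    ```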

  9. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1977-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
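
    A minimal numpy sketch of the weighted Gram-Schmidt step behind the U-D time update (our illustrative reconstruction of the textbook algorithm, not the authors' code): given W and nonnegative weights Dbar, it returns unit upper triangular U and diagonal d with U diag(d) U^T = W diag(Dbar) W^T; the time update applies it to W = [Phi U, G] with weights (d_prev, q).

    ```python
    import numpy as np

    def mwgs_ud(W, dbar):
        """Modified weighted Gram-Schmidt: factor W @ diag(dbar) @ W.T as
        U @ diag(d) @ U.T with U unit upper triangular and d diagonal."""
        n = W.shape[0]
        U = np.eye(n)
        d = np.zeros(n)
        w = W.astype(float).copy()
        for j in range(n - 1, -1, -1):
            v = dbar * w[j]
            d[j] = w[j] @ v                  # weighted norm of row j
            for i in range(j):
                U[i, j] = (w[i] @ v) / d[j]  # weighted projection coefficient
                w[i] -= U[i, j] * w[j]       # orthogonalize remaining rows
        return U, d

    # Time update of P = U diag(d) U.T under x' = Phi x + G u, Cov(u) = diag(q):
    # U_new, d_new = mwgs_ud(np.hstack([Phi @ U, G]), np.concatenate([d, q]))
    ```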

  10. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1975-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.