Sequential BART for imputation of missing covariates.
Xu, Dandan; Daniels, Michael J; Winterstein, Almut G
2016-07-01
To conduct comparative effectiveness research using electronic health records (EHR), many covariates are typically needed to adjust for selection and confounding biases. Unfortunately, it is typical to have missingness in these covariates. Just using cases with complete covariates will result in considerable efficiency losses and likely bias. Here, we consider the covariates missing at random with missing data mechanism either depending on the response or not. Standard methods for multiple imputation can either fail to capture nonlinear relationships or suffer from the incompatibility and uncongeniality issues. We explore a flexible Bayesian nonparametric approach to impute the missing covariates, which involves factoring the joint distribution of the covariates with missingness into a set of sequential conditionals and applying Bayesian additive regression trees to model each of these univariate conditionals. Using data augmentation, the posterior for each conditional can be sampled simultaneously. We provide details on the computational algorithm and make comparisons to other methods, including parametric sequential imputation and two versions of multiple imputation by chained equations. We illustrate the proposed approach on EHR data from an affiliated tertiary care institution to examine factors related to hyperglycemia. PMID:26980459
Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.
Li, Shanshan; Ning, Yang
2015-09-01
Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes, when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitation of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves where the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator which utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that the estimator remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. PMID:25891918
Gebregziabher, Mulugeta; Langholz, Bryan
2010-01-01
Summary In individually matched case–control studies, when some covariates are incomplete, an analysis based on the complete data may result in a large loss of information both in the missing and completely observed variables. This usually results in a bias and loss of efficiency. In this article, we propose a new method for handling the problem of missing covariate data based on a missing-data-induced intensity approach when the missingness mechanism does not depend on case–control status and show that this leads to a generalization of the missing indicator method. We derive the asymptotic properties of the estimates from the proposed method and, using an extensive simulation study, assess the finite sample performance in terms of bias, efficiency, and 95% confidence coverage under several missing data scenarios. We also make comparisons with complete-case analysis (CCA) and some missing data methods that have been proposed previously. Our results indicate that, under the assumption of predictable missingness, the suggested method provides valid estimation of parameters, is more efficient than CCA, and is competitive with other, more complex methods of analysis. A case–control study of multiple myeloma risk and a polymorphism in the receptor Inter-Leukin-6 (IL-6-α) is used to illustrate our findings. PMID:19751251
Diagnostic Measures for the Cox Regression Model with Missing Covariates
Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Ming-Hui
2015-01-01
Summary This paper investigates diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for the Cox regression model. Our diagnostics include case-deletion measures, conditional martingale residuals, and score residuals. The Q-distance is proposed to examine the effects of deleting individual observations on the estimates of finite-dimensional and infinite-dimensional parameters. Conditional martingale residuals are used to construct goodness of fit statistics for testing possible misspecification of the model assumptions. A resampling method is developed to approximate the p-values of the goodness of fit statistics. Simulation studies are conducted to evaluate our methods, and a real data set is analyzed to illustrate their use. PMID:26903666
Quartagno, M; Carpenter, J R
2016-07-30
Recently, multiple imputation has been proposed as a tool for individual patient data meta-analysis with sporadically missing observations, and it has been suggested that within-study imputation is usually preferable. However, such within study imputation cannot handle variables that are completely missing within studies. Further, if some of the contributing studies are relatively small, it may be appropriate to share information across studies when imputing. In this paper, we develop and evaluate a joint modelling approach to multiple imputation of individual patient data in meta-analysis, with an across-study probability distribution for the study specific covariance matrices. This retains the flexibility to allow for between-study heterogeneity when imputing while allowing (i) sharing information on the covariance matrix across studies when this is appropriate, and (ii) imputing variables that are wholly missing from studies. Simulation results show both equivalent performance to the within-study imputation approach where this is valid, and good results in more general, practically relevant, scenarios with studies of very different sizes, non-negligible between-study heterogeneity and wholly missing variables. We illustrate our approach using data from an individual patient data meta-analysis of hypertension trials. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26681666
Comparison of Two Approaches for Handling Missing Covariates in Logistic Regression
ERIC Educational Resources Information Center
Peng, Chao-Ying Joanne; Zhu, Jin
2008-01-01
For the past 25 years, methodological advances have been made in missing data treatment. Most published work has focused on missing data in dependent variables under various conditions. The present study seeks to fill the void by comparing two approaches for handling missing data in categorical covariates in logistic regression: the…
Music Information Services System (MISS).
ERIC Educational Resources Information Center
Rao, Paladugu V.
Music Information Services System (MISS) was developed at the Eastern Illinois University Library to manage the sound recording collection. Operating in a batch mode, MISS keeps track of the inventory of sound recordings, generates necessary catalogs to facilitate the use of the sound recordings, and provides specialized bibliographies of sound…
Schwabe, Inga; Boomsma, Dorret I; Zeeuw, Eveline L de; Berg, Stéphanie M van den
2016-07-01
The often-used ACE model which decomposes phenotypic variance into additive genetic (A), common-environmental (C) and unique-environmental (E) parts can be extended to include covariates. Collection of these variables however often leads to a large amount of missing data, for example when self-reports (e.g. questionnaires) are not fully completed. The usual approach to handle missing covariate data in twin research results in reduced power to detect statistical effects, as only phenotypic and covariate data of individual twins with complete data can be used. Here we present a full information approach to handle missing covariate data that makes it possible to use all available data. A simulation study shows that, independent of missingness scenario, number of covariates or amount of missingness, the full information approach is more powerful than the usual approach. To illustrate the new method, we applied it to test scores on a Dutch national school achievement test (Eindtoets Basisonderwijs) in the final grade of primary school of 990 twin pairs. The effects of school-aggregated measures (e.g. school denomination, pedagogical philosophy, school size) and the effect of the sex of a twin on these test scores were tested. None of the covariates had a significant effect on individual differences in test scores. PMID:26687147
Clustered data analysis under miscategorized ordinal outcomes and missing covariates.
Roy, Surupa; Rana, Subrata; Das, Kalyan
2016-08-15
The primary objective in this article is to look into the analysis of clustered ordinal model where complete information on one or more covariates cease to occur. In addition, we also focus on the analysis of miscategorized data that occur in many situations as outcomes are often classified into a category that does not truly reflect its actual state. A general model structure is assumed to accommodate the information that is obtained via surrogate variables. The theoretical motivation actually developed while encountering an orthodontic data to investigate the effects of age, sex and food habit on the extent of plaque deposit. The model we propose is quite flexible and is capable of tackling those additional noises like miscategorization and missingness, which occur in the data most frequently. A new two-step approach has been proposed to estimate the parameters of model framed. A rigorous simulation study has also been carried out to justify the validity of the model taken up for analysis. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26215983
A Bayesian proportional hazards regression model with non-ignorably missing time-varying covariates
Bradshaw, Patrick T.; Ibrahim, Joseph G.; Gammon, Marilie D.
2010-01-01
Missing covariate data is common in observational studies of time to an event, especially when covariates are repeatedly measured over time. Failure to account for the missing data can lead to bias or loss of efficiency, especially when the data are non-ignorably missing. Previous work has focused on the case of fixed covariates rather than those that are repeatedly measured over the follow-up period, so here we present a selection model that allows for proportional hazards regression with time-varying covariates when some covariates may be non-ignorably missing. We develop a fully Bayesian model and obtain posterior estimates of the parameters via the Gibbs sampler in WinBUGS. We illustrate our model with an analysis of post-diagnosis weight change and survival after breast cancer diagnosis in the Long Island Breast Cancer Study Project (LIBCSP) follow-up study. Our results indicate that post-diagnosis weight gain is associated with lower all-cause and breast cancer specific survival among women diagnosed with new primary breast cancer. Our sensitivity analysis showed only slight differences between models with different assumptions on the missing data mechanism yet the complete case analysis yielded markedly different results. PMID:20960582
ERIC Educational Resources Information Center
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander
2016-06-01
Multiple imputation (MI) has become one of the main procedures used to treat missing data, but the guidelines from the methodological literature are not easily transferred to multilevel research. For models including random slopes, proper MI can be difficult, especially when the covariate values are partially missing. In the present article, we discuss applications of MI in multilevel random-coefficient models, theoretical challenges posed by slope variation, and the current limitations of standard MI software. Our findings from three simulation studies suggest that (a) MI is able to recover most parameters, but is currently not well suited to capture slope variation entirely when covariate values are missing; (b) MI offers reasonable estimates for most parameters, even in smaller samples or when its assumptions are not met; and PMID:25939979
Imputation of missing covariate values in epigenome-wide analysis of DNA methylation data
Wu, Chong; Demerath, Ellen W.; Pankow, James S.; Bressler, Jan; Fornage, Myriam; Grove, Megan L.; Chen, Wei; Guan, Weihua
2016-01-01
ABSTRACT DNA methylation is a widely studied epigenetic mechanism and alterations in methylation patterns may be involved in the development of common diseases. Unlike inherited changes in genetic sequence, variation in site-specific methylation varies by tissue, developmental stage, and disease status, and may be impacted by aging and exposure to environmental factors, such as diet or smoking. These non-genetic factors are typically included in epigenome-wide association studies (EWAS) because they may be confounding factors to the association between methylation and disease. However, missing values in these variables can lead to reduced sample size and decrease the statistical power of EWAS. We propose a site selection and multiple imputation (MI) method to impute missing covariate values and to perform association tests in EWAS. Then, we compare this method to an alternative projection-based method. Through simulations, we show that the MI-based method is slightly conservative, but provides consistent estimates for effect size. We also illustrate these methods with data from the Atherosclerosis Risk in Communities (ARIC) study to carry out an EWAS between methylation levels and smoking status, in which missing cell type compositions and white blood cell counts are imputed. PMID:26890800
Huang, Yangxin; Dagne, Getachew
2012-09-01
It is a common practice to analyze complex longitudinal data using semiparametric nonlinear mixed-effects (SNLME) models with a normal distribution. Normality assumption of model errors may unrealistically obscure important features of subject variations. To partially explain between- and within-subject variations, covariates are usually introduced in such models, but some covariates may often be measured with substantial errors. Moreover, the responses may be missing and the missingness may be nonignorable. Inferential procedures can be complicated dramatically when data with skewness, missing values, and measurement error are observed. In the literature, there has been considerable interest in accommodating either skewness, incompleteness or covariate measurement error in such models, but there has been relatively little study concerning all three features simultaneously. In this article, our objective is to address the simultaneous impact of skewness, missingness, and covariate measurement error by jointly modeling the response and covariate processes based on a flexible Bayesian SNLME model. The method is illustrated using a real AIDS data set to compare potential models with various scenarios and different distribution specifications. PMID:22150787
Erler, Nicole S; Rizopoulos, Dimitris; Rosmalen, Joost van; Jaddoe, Vincent W V; Franco, Oscar H; Lesaffre, Emmanuel M E H
2016-07-30
Incomplete data are generally a challenge to the analysis of most large studies. The current gold standard to account for missing data is multiple imputation, and more specifically multiple imputation with chained equations (MICE). Numerous studies have been conducted to illustrate the performance of MICE for missing covariate data. The results show that the method works well in various situations. However, less is known about its performance in more complex models, specifically when the outcome is multivariate as in longitudinal studies. In current practice, the multivariate nature of the longitudinal outcome is often neglected in the imputation procedure, or only the baseline outcome is used to impute missing covariates. In this work, we evaluate the performance of MICE using different strategies to include a longitudinal outcome into the imputation models and compare it with a fully Bayesian approach that jointly imputes missing values and estimates the parameters of the longitudinal model. Results from simulation and a real data example show that MICE requires the analyst to correctly specify which components of the longitudinal process need to be included in the imputation models in order to obtain unbiased results. The full Bayesian approach, on the other hand, does not require the analyst to explicitly specify how the longitudinal outcome enters the imputation models. It performed well under different scenarios. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27042954
19 CFR 201.3a - Missing children information.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 3 2012-04-01 2012-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...
19 CFR 201.3a - Missing children information.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...
19 CFR 201.3a - Missing children information.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...
19 CFR 201.3a - Missing children information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...
19 CFR 201.3a - Missing children information.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 3 2011-04-01 2011-04-01 false Missing children information. 201.3a Section 201... Miscellaneous § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...
Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian
2015-01-01
In longitudinal studies it is often of interest to investigate how a repeatedly measured marker in time is associated with a time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of model errors in longitudinal model is a routine assumption, but it may be unrealistically obscuring important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates such as CD4 cell count may be often measured with substantial errors. Moreover, the responses may encounter nonignorable missing. Statistical analysis may be complicated dramatically based on longitudinal-survival joint models where longitudinal data with skewness, missing values, and measurement errors are observed. In this article, we relax the distributional assumptions for the longitudinal models using skewed (parametric) distribution and unspecified (nonparametric) distribution placed by a Dirichlet process prior, and address the simultaneous influence of skewness, missingness, covariate measurement error, and time-to-event process by jointly modeling three components (response process with missing values, covariate process with measurement errors, and time-to-event process) linked through the random-effects that characterize the underlying individual-specific longitudinal processes in Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound in comparison with potential models with various scenarios and different distributional specifications. PMID:24905593
2012-01-01
Background Multiple imputation is often used for missing data. When a model contains as covariates more than one function of a variable, it is not obvious how best to impute missing values in these covariates. Consider a regression with outcome Y and covariates X and X2. In 'passive imputation' a value X* is imputed for X and then X2 is imputed as (X*)2. A recent proposal is to treat X2 as 'just another variable' (JAV) and impute X and X2 under multivariate normality. Methods We use simulation to investigate the performance of three methods that can easily be implemented in standard software: 1) linear regression of X on Y to impute X then passive imputation of X2; 2) the same regression but with predictive mean matching (PMM); and 3) JAV. We also investigate the performance of analogous methods when the analysis involves an interaction, and study the theoretical properties of JAV. The application of the methods when complete or incomplete confounders are also present is illustrated using data from the EPIC Study. Results JAV gives consistent estimation when the analysis is linear regression with a quadratic or interaction term and X is missing completely at random. When X is missing at random, JAV may be biased, but this bias is generally less than for passive imputation and PMM. Coverage for JAV was usually good when bias was small. However, in some scenarios with a more pronounced quadratic effect, bias was large and coverage poor. When the analysis was logistic regression, JAV's performance was sometimes very poor. PMM generally improved on passive imputation, in terms of bias and coverage, but did not eliminate the bias. Conclusions Given the current state of available software, JAV is the best of a set of imperfect imputation methods for linear regression with a quadratic or interaction effect, but should not be used for logistic regression. PMID:22489953
Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S
2015-01-01
Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002 ; Graham, 2009 ; Enders, 2010 ). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249
MISSE in the Materials and Processes Technical Information System (MAPTIS )
NASA Technical Reports Server (NTRS)
Burns, DeWitt; Finckenor, Miria; Henrie, Ben
2013-01-01
Materials International Space Station Experiment (MISSE) data is now being collected and distributed through the Materials and Processes Technical Information System (MAPTIS) at Marshall Space Flight Center in Huntsville, Alabama. MISSE data has been instrumental in many programs and continues to be an important source of data for the space community. To facilitate great access to the MISSE data the International Space Station (ISS) program office and MAPTIS are working to gather this data into a central location. The MISSE database contains information about materials, samples, and flights along with pictures, pdfs, excel files, word documents, and other files types. Major capabilities of the system are: access control, browsing, searching, reports, and record comparison. The search capabilities will search within any searchable files so even if the desired meta-data has not been associated data can still be retrieved. Other functionality will continue to be added to the MISSE database as the Athena Platform is expanded
Information Gaps: The Missing Links to Learning.
ERIC Educational Resources Information Center
Adams, Carl R.
Communication takes place when a speaker conveys new information to the listener. In second language teaching, information gaps motivate students to use and learn the target language in order to obtain information. The resulting interactive language use may develop affective bonds among the students. A variety of classroom techniques are available…
Funding information technology: a missed market.
Rux, P
1998-10-01
Information technology is driving business and industry into the future. This is the essence of reengineering, process innovation, downsizing, etc. Non-profits, schools, libraries, etc. need to follow or risk efficiency. However, to get their fair share of information technology, they need to help with funding. PMID:10187237
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2012-01-01
Fraction of missing information [lambda][subscript j] is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…
38 CFR 1.705 - Restrictions on use of missing children information.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...
38 CFR 1.705 - Restrictions on use of missing children information.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...
38 CFR 1.705 - Restrictions on use of missing children information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...
38 CFR 1.705 - Restrictions on use of missing children information.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...
38 CFR 1.705 - Restrictions on use of missing children information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Restrictions on use of missing children information. 1.705 Section 1.705 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... § 1.705 Restrictions on use of missing children information. Missing children pictures...
Informed conditioning on clinical covariates increases power in case-control association studies.
Zaitlen, Noah; Lindström, Sara; Pasaniuc, Bogdan; Cornelis, Marilyn; Genovese, Giulio; Pollack, Samuela; Barton, Anne; Bickeböller, Heike; Bowden, Donald W; Eyre, Steve; Freedman, Barry I; Friedman, David J; Field, John K; Groop, Leif; Haugen, Aage; Heinrich, Joachim; Henderson, Brian E; Hicks, Pamela J; Hocking, Lynne J; Kolonel, Laurence N; Landi, Maria Teresa; Langefeld, Carl D; Le Marchand, Loic; Meister, Michael; Morgan, Ann W; Raji, Olaide Y; Risch, Angela; Rosenberger, Albert; Scherf, David; Steer, Sophia; Walshaw, Martin; Waters, Kevin M; Wilson, Anthony G; Wordsworth, Paul; Zienolddiny, Shanbeh; Tchetgen, Eric Tchetgen; Haiman, Christopher; Hunter, David J; Plenge, Robert M; Worthington, Jane; Christiani, David C; Schaumberg, Debra A; Chasman, Daniel I; Altshuler, David; Voight, Benjamin; Kraft, Peter; Patterson, Nick; Price, Alkes L
2012-01-01
Genetic case-control association studies often include data on clinical covariates, such as body mass index (BMI), smoking status, or age, that may modify the underlying genetic risk of case or control samples. For example, in type 2 diabetes, odds ratios for established variants estimated from low-BMI cases are larger than those estimated from high-BMI cases. An unanswered question is how to use this information to maximize statistical power in case-control studies that ascertain individuals on the basis of phenotype (case-control ascertainment) or phenotype and clinical covariates (case-control-covariate ascertainment). While current approaches improve power in studies with random ascertainment, they often lose power under case-control ascertainment and fail to capture available power increases under case-control-covariate ascertainment. We show that an informed conditioning approach, based on the liability threshold model with parameters informed by external epidemiological information, fully accounts for disease prevalence and non-random ascertainment of phenotype as well as covariates and provides a substantial increase in power while maintaining a properly controlled false-positive rate. Our method outperforms standard case-control association tests with or without covariates, tests of gene x covariate interaction, and previously proposed tests for dealing with covariates in ascertained data, with especially large improvements in the case of case-control-covariate ascertainment. We investigate empirical case-control studies of type 2 diabetes, prostate cancer, lung cancer, breast cancer, rheumatoid arthritis, age-related macular degeneration, and end-stage kidney disease over a total of 89,726 samples. In these datasets, informed conditioning outperforms logistic regression for 115 of the 157 known associated variants investigated (P-value = 1 × 10(-9)). The improvement varied across diseases with a 16% median increase in χ(2) test statistics and a
Estimating Missing Features to Improve Multimedia Information Retrieval
Bagherjeiran, A; Love, N S; Kamath, C
2006-09-28
Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
Electronic pharmacopoeia: a missed opportunity for safe opioid prescribing information?
Lapoint, Jeff; Perrone, Jeanmarie; Nelson, Lewis S
2014-03-01
Errors in prescribing of dangerous medications, such as extended release or long acting (ER/LA) opioid forlmulations, remain an important cause of patient harm. Prescribing errors often relate to the failure to note warnings regarding contraindications and drug interactions. Many prescribers utilize electronic pharmacopoeia (EP) to improve medication ordering. The purpose of this study is to assess the ability of commonly used apps to provide accurate safety information about the boxed warning for ER/LA opioids. We evaluated a convenience sample of six popular EP apps available for the iPhone and an online reference for the presence of relevant safety warnings. We accessed the dosing information for each of six ER/LA medications and assessed for the presence of an easily identifiable indication that a boxed warning was present, even if the warning itself was not provided. The prominence of precautionary drug information presented to the user was assessed for each app. Provided information was classified based on the presence of the warning in the ordering pathway, located separately but within the prescribers view, or available in a separate screen of the drug information but non-highlighted. Each program provided a consistent level of warning information for each of the six ER/LA medications. Only 2/7 programs placed a warning in line with dosing information (level 1); 3/7 programs offered level 2 warning and 1/7 offered level 3 warning. One program made no mention of a boxed warning. Most EP apps isolate important safety warnings, and this represents a missed opportunity to improve prescribing practices. PMID:24081616
Mazza, Gina L.; Enders, Craig K.; Ruehlman, Linda S.
2015-01-01
Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe an FIML approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249
Jaffa, Miran A.; Jaffa, Ayad A; Lipsitz, Stuart R.
2015-01-01
A new statistical model is proposed to estimate population and individual slopes that are adjusted for covariates and informative right censoring. Individual slopes are assumed to have a mean that depends on the population slope for the covariates. The number of observations for each individual is modeled as a truncated discrete distribution with mean dependent on the individual subjects' slopes. Our simulation study results indicated that the associated bias and mean squared errors for the proposed model were comparable to those associated with the model that only adjusts for informative right censoring. The proposed model was illustrated using renal transplant dataset to estimate population slopes for covariates that could impact the outcome of renal function following renal transplantation. PMID:25729124
2014-01-01
Background Several methods are available for the detection of covarying positions from a multiple sequence alignment (MSA). If the MSA contains a large number of sequences, information about the proximities between residues derived from covariation maps can be sufficient to predict a protein fold. However, in many cases the structure is already known, and information on the covarying positions can be valuable to understand the protein mechanism and dynamic properties. Results In this study we have sought to determine whether a multivariate (multidimensional) extension of traditional mutual information (MI) can be an additional tool to study covariation. The performance of two multidimensional MI (mdMI) methods, designed to remove the effect of ternary/quaternary interdependencies, was tested with a set of 9 MSAs each containing <400 sequences, and was shown to be comparable to that of the newest methods based on maximum entropy/pseudolikelyhood statistical models of protein sequences. However, while all the methods tested detected a similar number of covarying pairs among the residues separated by < 8 Å in the reference X-ray structures, there was on average less than 65% overlap between the top scoring pairs detected by methods that are based on different principles. Conclusions Given the large variety of structure and evolutionary history of different proteins it is possible that a single best method to detect covariation in all proteins does not exist, and that for each protein family the best information can be derived by merging/comparing results obtained with different methods. This approach may be particularly valuable in those cases in which the size of the MSA is small or the quality of the alignment is low, leading to significant differences in the pairs detected by different methods. PMID:24886131
NASA Astrophysics Data System (ADS)
Horvath, Alexander; Murböck, Michael; Pail, Roland; Horwath, Martin
2016-04-01
Aiming for an as accurate as possible estimation of mass trends in Antarctica or other regions, based on global GRACE gravity field solutions, calls for best possible post processing strategies. Decorrelation filters employing static covariance information have already been developed in the past (e.g. DDK filter series by Jürgen Kusche 2007 & 2009), but covariance information for a decade long recent time series was (except for the ITG-GRACE2010 series) not publicly available since the publication of the ITSG temporal gravity field model in October 2014. With this work we aim to use this time series with its evolving correlation structures due to changing mission configuration (e.g. orbital height) and instrument characteristics over time. Proper reduction of correlated errors is a crucial step towards trend estimation. For this purpose we analyzed the existing series of DDK filters based on static or simplified assumptions on the correlation structure of spherical harmonic coefficients and target signals. To analyze the potential gain using month to month full covariance information we have tested the impact of certain simplifications (e.g. the ones applied for the DDK filters) with respect to the full covariance information in a closed loop simulator. Based on the outcome of the simulated results we computed new time variable decorrelation (VADER) filters using full error covariance information and investigated the impact on basin scale mass change estimates in the Antarctic region. The work presented includes a comprehensive assessment of the filter performance, accompanied by an intercomparison of the mass change estimates based on the VADER filter solutions against the ones obtained from DDK, Swenson & Wahr type and other filters as well as independently derived results from e.g. radar altimetry.
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-01-01
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-01-01
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980
Three schemes of remote information concentration based on ancilla-free phase-covariant telecloning
NASA Astrophysics Data System (ADS)
Bai, Ming-qiang; Peng, Jia-Yin; Mo, Zhi-Wen
2014-05-01
In this paper, remote information concentration is investigated which is the reverse process of the optimal asymmetric economical phase-covariant telecloning (OAEPCT). The OAEPCT is different from the reverse process of optimal universal telecloning. It is shown that the quantum information via OAEPCT procedure can be remotely concentrated back to a single qubit with a certain probability via several quantum channels. In these schemes, we adopt Bell measurement to measure the joint systems and use projected measurement and positive operator-valued measure to recover the original quantum state. The results shows non-maximally entangled quantum resource can be applied to information concentration.
a Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information
NASA Astrophysics Data System (ADS)
Lian, Shizhong; Chen, Jiangping; Luo, Minghai
2016-06-01
Water information cannot be accurately extracted using TM images because true information is lost in some images because of blocking clouds and missing data stripes, thereby water information cannot be accurately extracted. Water is continuously distributed in natural conditions; thus, this paper proposed a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction of TM images with missing information. Different disturbing information of clouds and missing data stripes are simulated. Water information is extracted using global histogram matching, local histogram matching, and the probability-based statistical method in the simulated images. Experiments show that smaller Areal Error and higher Boundary Recall can be obtained using this method compared with the conventional methods.
Information Literacy: The Missing Link in Early Childhood Education
ERIC Educational Resources Information Center
Heider, Kelly L.
2009-01-01
The rapid growth of information over the last 30 or 40 years has made it impossible for educators to prepare students for the future without teaching them how to be effective information managers. The American Library Association refers to those students who manage information effectively as "information literate." Information literacy instruction…
Pan, Qiyuan; Wei, Rong
2016-01-01
In his 1987 classic book on multiple imputation (MI), Rubin used the fraction of missing information, γ, to define the relative efficiency (RE) of MI as RE = (1 + γ/m)−1/2, where m is the number of imputations, leading to the conclusion that a small m (≤5) would be sufficient for MI. However, evidence has been accumulating that many more imputations are needed. Why would the apparently sufficient m deduced from the RE be actually too small? The answer may lie with γ. In this research, γ was determined at the fractions of missing data (δ) of 4%, 10%, 20%, and 29% using the 2012 Physician Workflow Mail Survey of the National Ambulatory Medical Care Survey (NAMCS). The γ values were strikingly small, ranging in the order of 10−6 to 0.01. As δ increased, γ usually increased but sometimes decreased. How the data were analysed had the dominating effects on γ, overshadowing the effect of δ. The results suggest that it is impossible to predict γ using δ and that it may not be appropriate to use the γ-based RE to determine sufficient m.
Open Informational Ecosystems: The Missing Link for Sharing Educational Resources
ERIC Educational Resources Information Center
Kerres, Michael; Heinen, Richard
2015-01-01
Open educational resources are not available "as such". Their provision relies on a technological infrastructure of related services that can be described as an informational ecosystem. A closed informational ecosystem keeps educational resources within its boundary. An open informational ecosystem relies on the concurrence of…
Exchanging Missing Information in Tasks: Old and New Interpretations
ERIC Educational Resources Information Center
Jenks, Christopher Joseph
2009-01-01
Information gap tasks have played a key role in applied linguistics (Pica, 2005). For example, extensive research has been conducted using information gap tasks to elicit second language data. Yet, despite their prominent role in research and pedagogy, there is still much to be investigated with regard to what information gap tasks offer research…
Poenitz, W.P.; Peelle, R.W.
1986-11-17
A straightforward derivation is presented for the covariance matrix of evaluated cross sections based on the covariance matrix of the experimental data and propagation through nuclear model parameters. 10 refs.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc. PMID:27176912
Modeling Achievement Trajectories when Attrition Is Informative
ERIC Educational Resources Information Center
Feldman, Betsy J.; Rabe-Hesketh, Sophia
2012-01-01
In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…
The Effect of "Missing" Information on Children's Retention of Fast-Mapped Labels
ERIC Educational Resources Information Center
Wilkinson, Krista M.; Mazzitelli, Kim
2003-01-01
This paper explores "fast mapping", one of several processes that have been proposed to be involved in the rapid vocabulary expansion observed in the preschool years. An adaptation of a receptive word matching task examined how well children retained a just-mapped relation between word and referent when some information was later missing.…
The Effect of "Missing" Information on Children's Retention of Fast-Mapped Labels.
ERIC Educational Resources Information Center
Wilkinson, Krista M.; Mazzitelli, Kim
2003-01-01
Explores "fast mapping," one of several processes that have been proposed to be involved in the rapid vocabulary expansion observed in the preschool years. An adaptation of a receptive word matching task examined how well children retained just-mapped relation between word and referent when some information was later missing. (Author/VWL)
Relying on Your Own Best Judgment: Imputing Values to Missing Information in Decision Making.
ERIC Educational Resources Information Center
Johnson, Richard D.; And Others
Processes involved in making estimates of the value of missing information that could help in a decision making process were studied. Hypothetical purchases of ground beef were selected for the study as such purchases have the desirable property of quantifying both the price and quality. A total of 150 students at the University of Iowa rated the…
Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random
ERIC Educational Resources Information Center
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David
2013-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…
Individual Information-Centered Approach for Handling Physical Activity Missing Data
ERIC Educational Resources Information Center
Kang, Minsoo; Rowe, David A.; Barreira, Tiago V.; Robinson, Terrance S.; Mahar, Matthew T.
2009-01-01
The purpose of this study was to validate individual information (II)-centered methods for handling missing data, using data samples of 118 middle-aged adults and 91 older adults equipped with Yamax SW-200 pedometers and Actigraph accelerometers for 7 days. We used a semisimulation approach to create six data sets: three physical activity outcome…
NASA Astrophysics Data System (ADS)
Zhang, Daxiang; Zhang, Chuanrong; Li, Weidong; Cromley, Robert; Hanink, Dean; Civco, Daniel; Travis, David
2014-01-01
Although removing the pixels covered by contrails and their shadows and restoring the missing information at the locations in remotely sensed imagery are important to understand contrails' effects on climate change, there are no such studies in the current literature. This study investigates the restoration of the missing information of the pixels caused by contrails in multispectral remotely sensed Landsat 5 TM imagery using a cokriging approach. Interpolation results and several validation methods show that it is practical to use the cokriging approach to restore the contrail-covered pixels in the multispectral remotely sensed imagery. Compared to ordinary kriging, the results are improved by taking advantage of both the spatial information in the original imagery and information from the secondary imagery.
Code of Federal Regulations, 2014 CFR
2014-04-01
... the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...
Code of Federal Regulations, 2013 CFR
2013-04-01
... the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...
Code of Federal Regulations, 2012 CFR
2012-04-01
... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...
Code of Federal Regulations, 2010 CFR
2010-04-01
... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...
ICON: 3D reconstruction with 'missing-information' restoration in biological electron tomography.
Deng, Yuchen; Chen, Yu; Zhang, Yan; Wang, Shengliu; Zhang, Fa; Sun, Fei
2016-07-01
Electron tomography (ET) plays an important role in revealing biological structures, ranging from macromolecular to subcellular scale. Due to limited tilt angles, ET reconstruction always suffers from the 'missing wedge' artifacts, thus severely weakens the further biological interpretation. In this work, we developed an algorithm called Iterative Compressed-sensing Optimized Non-uniform fast Fourier transform reconstruction (ICON) based on the theory of compressed-sensing and the assumption of sparsity of biological specimens. ICON can significantly restore the missing information in comparison with other reconstruction algorithms. More importantly, we used the leave-one-out method to verify the validity of restored information for both simulated and experimental data. The significant improvement in sub-tomogram averaging by ICON indicates its great potential in the future application of high-resolution structural determination of macromolecules in situ. PMID:27079261
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory.SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
NASA Astrophysics Data System (ADS)
Bernardini, A. E.; Bertolami, O.
2013-07-01
In this work we examine the effect of phase-space noncommutativity on some typically quantum properties such as quantum beating, quantum information, and decoherence. To exemplify these issues we consider the two-dimensional noncommutative quantum harmonic oscillator whose component behavior we monitor in time. This procedure allows us to determine how the noncommutative parameters are related to the missing information quantified by the linear quantum entropy and by the mutual information between the relevant Hilbert space coordinates. Particular questions concerning the thermodynamic limit of some relevant properties are also discussed in order to evidence the effects of noncommutativity. Finally, through an analogy with the Zeeman effect, we identify how some aspects of the axial symmetry of the problem suggest the possibility of decoupling the noncommutative quantum perturbations from unperturbed commutative well-known solutions.
Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models
ERIC Educational Resources Information Center
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent
2015-01-01
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…
ERIC Educational Resources Information Center
Riley, Bernice
This script, with music, lyrics and dialog, was written especially for youngsters to inform them of the potential dangers of various drugs. The author, who teaches in an elementary school in Harlem, New York, offers Miss Heroin as her answer to the expressed opinion that most drug and alcohol information available is either too simplified and…
You, Jinhong; Zhou, Haibo
2009-01-01
We consider statistical inference on a regression model in which some covariables are measured with errors together with an auxiliary variable. The proposed estimation for the regression coefficients is based on some estimating equations. This new method alleates some drawbacks of previously proposed estimations. This includes the requirment of undersmoothing the regressor functions over the auxiliary variable, the restriction on other covariables which can be observed exactly, among others. The large sample properties of the proposed estimator are established. We further propose a jackknife estimation, which consists of deleting one estimating equation (instead of one obervation) at a time. We show that the jackknife estimator of the regression coefficients and the estimating equations based estimator are asymptotically equivalent. Simulations show that the jackknife estimator has smaller biases when sample size is small or moderate. In addition, the jackknife estimation can also provide a consistent estimator of the asymptotic covariance matrix, which is robust to the heteroscedasticity. We illustrate these methods by applying them to a real data set from marketing science. PMID:22199460
NASA Astrophysics Data System (ADS)
Kisil, Vladimir V.
2011-03-01
Dedicated to the memory of Cora Sadosky The paper develops theory of covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable or vacuum vectors which are not admissible. Covariant transform extends an applicability of the popular wavelets construction to classic examples like the Hardy space H2, Banach spaces, covariant functional calculus and many others.
Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia
2015-04-01
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by the Matèrn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges which have limited the kriging process. So, at least for these data sets, weighted least squares showed to be the most appropriate estimation method for variogram fitting. The kriged maps show clearly the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained show some guidelines for sewage monitoring if a geostatistical analysis of the data is in mind. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion. PMID:25345922
Lian, Lian; de los Campos, Gustavo
2015-01-01
The Finlay–Wilkinson regression (FW) is a popular method among plant breeders to describe genotype by environment interaction. The standard implementation is a two-step procedure that uses environment (sample) means as covariates in a within-line ordinary least squares (OLS) regression. This procedure can be suboptimal for at least four reasons: (1) in the first step environmental means are typically estimated without considering genetic-by-environment interactions, (2) in the second step uncertainty about the environmental means is ignored, (3) estimation is performed regarding lines and environment as fixed effects, and (4) the procedure does not incorporate genetic (either pedigree-derived or marker-derived) relationships. Su et al. proposed to address these problems using a Bayesian method that allows simultaneous estimation of environmental and genotype parameters, and allows incorporation of pedigree information. In this article we: (1) extend the model presented by Su et al. to allow integration of genomic information [e.g., single nucleotide polymorphism (SNP)] and covariance between environments, (2) present an R package (FW) that implements these methods, and (3) illustrate the use of the package using examples based on real data. The FW R package implements both the two-step OLS method and a full Bayesian approach for Finlay–Wilkinson regression with a very simple interface. Using a real wheat data set we demonstrate that the prediction accuracy of the Bayesian approach is consistently higher than the one achieved by the two-step OLS method. PMID:26715095
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Beckie, Roger Daniel
2014-05-01
Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems involving exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For some applications, directly involving the ratio between precipitation and some other quantity, lack of complete information can result in poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors given by the lack of data in specific time windows. Our analyses included both a classical chronologically-pairing approach between rainfall stations and a probability-based approached, which accounted for the probability of exceedence of rain depths measured at two or multiple stations. Our analyses proved that it is not clear a priori which method delivers the best methodology. Rather, this selection should be based considering the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is to discuss the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which have an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were parametric Log-Normal, Generalized Pareto, Generalized Extreme and Gamma distributions. The methods were
Nakamura-Pereira, Marcos; Mendes-Silva, Wallace; Dias, Marcos Augusto Bastos; Reichenheim, Michael E; Lobato, Gustavo
2013-07-01
This study aimed to investigate the performance of the Hospital Information System of the Brazilian Unified National Health System (SIH-SUS) in identifying cases of maternal near miss in a hospital in Rio de Janeiro, Brazil, in 2008. Cases were identified by reviewing medical records of pregnant and postpartum women admitted to the hospital. The search for potential near miss events in the SIH-SUS database relied on a list of procedures and codes from the International Classification of Diseases, 10th revision (ICD-10) that were consistent with this diagnosis. The patient chart review identified 27 cases, while 70 potential occurrences of near miss were detected in the SIH-SUS database. However, only 5 of 70 were "true cases" of near miss according to the chart review, which corresponds to a sensitivity of 18.5% (95%CI: 6.3-38.1), specificity of 94.3% (95%CI: 92.8-95.6), area under the ROC of 0.56 (95%CI: 0.48-0.63), and positive predictive value of 10.1% (IC95%: 4.7-20.3). These findings suggest that SIH-SUS does not appear appropriate for monitoring maternal near miss. PMID:23843001
Serebruany, Victor; Tomek, Ales
2016-07-15
PEGASUS trial reported reduction of composite primary endpoint after conventional 180mg/daily ticagrelor (CT), and lower 120mg/daily dose ticagrelor (LT) at expense of extra bleeding. Following approval of CT and LT for long-term secondary prevention indication, recent FDA review verified some bleeding outcomes in PEGASUS. To compare the risks after CT and LT against placebo by seven TIMI scale variables, and 9 bleeding categories considered as serious adverse events (SAE) in light of PEGASUS drug discontinuation rates (DDR). The DDR in all PEGASUS arms was high reaching astronomical 32% for CT. The distribution of some outcomes (TIMI major, trauma, epistaxis, iron deficiency, hemoptysis, and anemia) was reasonable. However, the TIMI minor events were heavily underreported when compared to similar trials. Other bleedings (intracranial, spontaneous, hematuria, and gastrointestinal) appear sporadic, lacking expected dose-dependent impact of CT and LT. Few SAE outcomes (fatal, ecchymosis, hematoma, bruises, bleeding) paradoxically reported more bleeding after LT than after CT. Many bleeding outcomes were probably missed in PEGASUS potentially due to massive non-compliance, information censoring, or both. The FDA must improve reporting of trial outcomes especially in the sponsor-controlled environment when DDR and incomplete follow-up rates are high. PMID:27128533
Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed.
Ehinger, Krista A; Allen, Kala; Wolfe, Jeremy M
2016-05-01
Previous work has shown that human observers discount or neglect cast shadows in natural and artificial scenes across a range of visual tasks. This is a reasonable strategy for a visual system designed to recognize objects under a range of lighting conditions, since cast shadows are not intrinsic properties of the scene-they look different (or disappear entirely) under different lighting conditions. However, cast shadows can convey useful information about the three-dimensional shapes of objects and their spatial relations. In this study, we investigated how well people detect changes to cast shadows, presented in natural scenes in a change blindness paradigm, and whether shadow changes that imply the movement or disappearance of an object are more easily noticed than shadow changes that imply a change in lighting. In Experiment 1, a critical object's shadow was removed, rotated to another direction, or shifted down to suggest that the object was floating. All of these shadow changes were noticed less often than changes to physical objects or surfaces in the scene, and there was no difference in the detection rates for the three types of changes. In Experiment 2, the shadows of visible or occluded objects were removed from the scenes. Although removing the cast shadow of an occluded object could be seen as an object deletion, both types of shadow changes were noticed less often than deletions of the visible, physical objects in the scene. These results show that even informative shadow changes are missed, suggesting that cast shadows are discounted fairly early in the processing of natural scenes. PMID:26846753
Estimating model and observation error covariance information for land data assimilation systems
Technology Transfer Automated Retrieval System (TEKTRAN)
In order to operate efficiently, data assimilation systems require accurate assumptions concerning the statistical magnitude and cross-correlation structure of error in model forecasts and assimilated observations. Such information is seldom available for the operational implementation of land data ...
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Predicting New Hampshire Indoor Radon Concentrations from geologic information and other covariates
Apte, M.G.; Price, P.N.; Nero, A.V.; Revzan, K.L.
1998-05-01
Generalized geologic province information and data on house construction were used to predict indoor radon concentrations in New Hampshire (NH). A mixed-effects regression model was used to predict the geometric mean (GM) short-term radon concentrations in 259 NH towns. Bayesian methods were used to avoid over-fitting and to minimize the effects of small sample variation within towns. Data from a random survey of short-term radon measurements, individual residence building characteristics, along with geologic unit information, and average surface radium concentration by town, were variables used in the model. Predicted town GM short-term indoor radon concentrations for detached houses with usable basements range from 34 Bq/m{sup 3} (1 pCi/l) to 558 Bq/m{sup 3} (15 pCi/l), with uncertainties of about 30%. A geologic province consisting of glacial deposits and marine sediments, was associated with significantly elevated radon levels, after adjustment for radium concentration, and building type. Validation and interpretation of results are discussed.
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. PMID:25707010
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, PC †, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object. The two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum PC. If error covariance information for only one of the two objects was available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
Predicting top-L missing links with node and link clustering information in large-scale networks
NASA Astrophysics Data System (ADS)
Wu, Zhihao; Lin, Youfang; Wan, Huaiyu; Jamil, Waleed
2016-08-01
Networks are mathematical structures that are universally used to describe a large variety of complex systems, such as social, biological, and technological systems. The prediction of missing links in incomplete complex networks aims to estimate the likelihood of the existence of a link between a pair of nodes. Various topological features of networks have been applied to develop link prediction methods. However, the exploration of features of links is still limited. In this paper, we demonstrate the power of node and link clustering information in predicting top -L missing links. In the existing literature, link prediction algorithms have only been tested on small-scale and middle-scale networks. The network scale factor has not attracted the same level of attention. In our experiments, we test the proposed method on three groups of networks. For small-scale networks, since the structures are not very complex, advanced methods cannot perform significantly better than classical methods. For middle-scale networks, the proposed index, combining both node and link clustering information, starts to demonstrate its advantages. In many networks, combining both node and link clustering information can improve the link prediction accuracy a great deal. Large-scale networks with more than 100 000 links have rarely been tested previously. Our experiments on three large-scale networks show that local clustering information based methods outperform other methods, and link clustering information can further improve the accuracy of node clustering information based methods, in particular for networks with a broad distribution of the link clustering coefficient.
Liu, Haigang; Xu, Zijian; Zhang, Xiangzhi; Wu, Yanqing; Guo, Zhi; Tai, Renzhong
2013-04-10
In coherent diffractive imaging (CDI) experiments, a beamstop (BS) is commonly used to extend the exposure time of the charge-coupled detector and obtain high-angle diffraction signals. However, the negative effect of a large BS is also evident, causing low-frequency signals to be missed and making CDI reconstruction unstable or causing it to fail. We performed a systematic simulation investigation of the effects of BSs on the quality of reconstructed images from both plane-wave and ptychographic CDI (PCDI). For the same imaging quality, we found that ptychography can tolerate BSs that are at least 20 times larger than those for plane-wave CDI. For PCDI, a larger overlap ratio and a smaller illumination spot can significantly increase the imaging robustness to the negative influence of BSs. Our results provide guidelines for the usage of BSs in CDI, especially in PCDI experiments, which can help to further improve the spatial resolution of PCDI. PMID:23670772
NASA Astrophysics Data System (ADS)
Kempf, A.; Chatwin-Davies, A.; Martin, R. T. W.
2013-02-01
While a natural ultraviolet cutoff, presumably at the Planck length, is widely assumed to exist in nature, it is nontrivial to implement a minimum length scale covariantly. This is because the presence of a fixed minimum length needs to be reconciled with the ability of Lorentz transformations to contract lengths. In this paper, we implement a fully covariant Planck scale cutoff by cutting off the spectrum of the d'Alembertian. In this scenario, consistent with Lorentz contractions, wavelengths that are arbitrarily smaller than the Planck length continue to exist. However, the dynamics of modes of wavelengths that are significantly smaller than the Planck length possess a very small bandwidth. This has the effect of freezing the dynamics of such modes. While both wavelengths and bandwidths are frame dependent, Lorentz contraction and time dilation conspire to make the freezing of modes of trans-Planckian wavelengths covariant. In particular, we show that this ultraviolet cutoff can be implemented covariantly also in curved spacetimes. We focus on Friedmann Robertson Walker spacetimes and their much-discussed trans-Planckian question: The physical wavelength of each comoving mode was smaller than the Planck scale at sufficiently early times. What was the mode's dynamics then? Here, we show that in the presence of the covariant UV cutoff, the dynamical bandwidth of a comoving mode is essentially zero up until its physical wavelength starts exceeding the Planck length. In particular, we show that under general assumptions, the number of dynamical degrees of freedom of each comoving mode all the way up to some arbitrary finite time is actually finite. Our results also open the way to calculating the impact of this natural UV cutoff on inflationary predictions for the cosmic microwave background.
ERIC Educational Resources Information Center
Acock, Alan C.
2005-01-01
Less than optimum strategies for missing values can produce biased estimates, distorted statistical power, and invalid conclusions. After reviewing traditional approaches (listwise, pairwise, and mean substitution), selected alternatives are covered including single imputation, multiple imputation, and full information maximum likelihood…
Graber-Naidich, Anna; Malone, Kathleen E.; Hsu, Li
2011-01-01
Case-control family data are now widely used to examine the role of gene-environment interactions in the etiology of complex diseases. In these types of studies, exposure levels are obtained retrospectively and, frequently, information on most risk factors of interest is available on the probands but not on their relatives. In this work we consider correlated failure time data arising from population-based case-control family studies with missing genotypes of relatives. We present a new method for estimating the age-dependent marginalized hazard function. The proposed technique has two major advantages: (1) it is based on the pseudo full likelihood function rather than a pseudo composite likelihood function, which usually suffers from substantial efficiency loss; (2) the cumulative baseline hazard function is estimated using a two-stage estimator instead of an iterative process. We assess the performance of the proposed methodology with simulation studies, and illustrate its utility on a real data example. PMID:21153764
Blinder, Victoria S
2014-01-01
A phase III study comparing capecitabine monotherapy to combination treatment with capecitabine and sunitinib in patients with metastatic breast cancer failed to demonstrate a benefit in terms of progression-free or overall survival. Both regimens were reasonably well tolerated with some differences noted in the specific toxicity profiles. However, the study failed to incorporate an assessment of patient-reported outcomes (PROs) such as self-reported pain, quality of life, or employment outcomes. This is a missed opportunity. If more clinical trials included such measures, they would provide valuable information to patients and clinicians choosing from a wide array of available and otherwise similarly effective systemic therapies for metastatic breast cancer. PMID:25841482
Slide Presentations as Speech Suppressors: When and Why Learners Miss Oral Information
ERIC Educational Resources Information Center
Wecker, Christof
2012-01-01
The objective of this study was to test whether information presented on slides during presentations is retained at the expense of information presented only orally, and to investigate part of the conditions under which this effect occurs, and how it can be avoided. Such an effect could be expected and explained either as a kind of redundancy…
NASA Astrophysics Data System (ADS)
2008-08-01
The Institute of Physics is seeking short summaries of geophysics topics to support school teachers, in a move aimed at boosting the teaching and awareness of geophysics in schools. The UK government Department of Innovation, Universities, Science and Skills is seeking information on geo-engineering as a case study within its major enquiry into engineering. The field described by the DUISS is broad and covers areas in which geophysicists may be working and in a position to supply useful information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 23 Highways 1 2014-04-01 2014-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 23 Highways 1 2012-04-01 2012-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 23 Highways 1 2013-04-01 2013-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997) B Appendix B to Part 1240 Highways NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION AND FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF...
van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd
2016-05-01
Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. PMID:27198876
ERIC Educational Resources Information Center
Tryon, Warren W.
2009-01-01
The first recommendation Kazdin made for advancing the psychotherapy research knowledge base, improving patient care, and reducing the gulf between research and practice was to study the mechanisms of therapeutic change. He noted, "The study of mechanisms of change has received the least attention even though understanding mechanisms may well be…
Lemaître, Sophie; Collier, Francis; Hulin, Vincent
2009-12-20
The aim of the study is to evaluate the utility of the website http://www.g-oubliemapilule.com/ that contains the recommendations of the French Haute Autorité de santé in case of oral contraceptive pill missing. This epidemiologic prospective study was conducted using an online questionnaire available at http://www.g-oubliemapilule.com/. The results emphasize the poor quality of information provided by the physicians. 40% of the physicians don't provide information about what to do in case of oral contraceptive pill missing during the first medical visit for oral contraceptive prescription and the physicians don't inquire about oral contraceptive pill missing during the follow-up in 3/4 of the cases. Furthermore, when women find information about what to do in case of oral contraceptive pill missing, a majority of them won't follow the advice provided even if it is fully understood. 60% of the women who should use the condom during the 7 days following the oral contraceptive pill missing don't use it and 86% of the women who should use the emergency contraceptive pill don't use it. The reason mostly invoked (1/3 of the cases) to support that behaviour is the assumption that the risk of pregnancy is too low. The results help to understand the gap between theoretical efficacy (Pearl Index: 0.3%) and real efficacy (8%) of the oral contraceptive pill. Finally, the website http://www.g-oubliemapilule.com/ is a useful well understood additional tool but can't replace the medical follow-up. PMID:20085215
OPAC Missing Record Retrieval.
ERIC Educational Resources Information Center
Johnson, Karl E.
1996-01-01
When the Higher Education Library Information Network of Rhode Island transferred members' bibliographic data into a shared online public access catalog (OPAC), 10% of the University of Rhode Island's monograph records were missing. This article describes the consortium's attempts to retrieve records from the database and the effectiveness of…
A class of covariate-dependent spatiotemporal covariance functions
Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.
2014-01-01
In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
'Miss Frances', 'Miss Gail' and 'Miss Sandra' Crapemyrtles
Technology Transfer Automated Retrieval System (TEKTRAN)
The Agricultural Research Service, United States Department of Agriculture, announces the release to nurserymen of three new crapemyrtle cultivars named 'Miss Gail', 'Miss Frances', and 'Miss Sandra'. ‘Miss Gail’ resulted from a cross-pollination between ‘Catawba’ as the female parent and ‘Arapaho’ ...
Help for Finding Missing Children.
ERIC Educational Resources Information Center
McCormick, Kathleen
1984-01-01
Efforts to locate missing children have expanded from a federal law allowing for entry of information into an F.B.I. computer system to companion bills before Congress for establishing a national missing child clearinghouse and a Justice Department center to help in conducting searches. Private organizations are also involved. (KS)
2011-01-01
Background Nowadays, more and more clinical scales consisting in responses given by the patients to some items (Patient Reported Outcomes - PRO), are validated with models based on Item Response Theory, and more specifically, with a Rasch model. In the validation sample, presence of missing data is frequent. The aim of this paper is to compare sixteen methods for handling the missing data (mainly based on simple imputation) in the context of psychometric validation of PRO by a Rasch model. The main indexes used for validation by a Rasch model are compared. Methods A simulation study was performed allowing to consider several cases, notably the possibility for the missing values to be informative or not and the rate of missing data. Results Several imputations methods produce bias on psychometrical indexes (generally, the imputation methods artificially improve the psychometric qualities of the scale). In particular, this is the case with the method based on the Personal Mean Score (PMS) which is the most commonly used imputation method in practice. Conclusions Several imputation methods should be avoided, in particular PMS imputation. From a general point of view, it is important to use an imputation method that considers both the ability of the patient (measured for example by his/her score), and the difficulty of the item (measured for example by its rate of favourable responses). Another recommendation is to always consider the addition of a random process in the imputation method, because such a process allows reducing the bias. Last, the analysis realized without imputation of the missing data (available case analyses) is an interesting alternative to the simple imputation in this context. PMID:21756330
NASA Technical Reports Server (NTRS)
Hepner, T. E.; Meyers, J. F. (Inventor)
1985-01-01
A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
Bartolucci, Francesco; Farcomeni, Alessio
2015-03-01
Mixed latent Markov (MLM) models represent an important tool of analysis of longitudinal data when response variables are affected by time-fixed and time-varying unobserved heterogeneity, in which the latter is accounted for by a hidden Markov chain. In order to avoid bias when using a model of this type in the presence of informative drop-out, we propose an event-history (EH) extension of the latent Markov approach that may be used with multivariate longitudinal data, in which one or more outcomes of a different nature are observed at each time occasion. The EH component of the resulting model is referred to the interval-censored drop-out, and bias in MLM modeling is avoided by correlated random effects, included in the different model components, which follow common latent distributions. In order to perform maximum likelihood estimation of the proposed model by the expectation-maximization algorithm, we extend the usual forward-backward recursions of Baum and Welch. The algorithm has the same complexity as the one adopted in cases of non-informative drop-out. We illustrate the proposed approach through simulations and an application based on data coming from a medical study about primary biliary cirrhosis in which there are two outcomes of interest, one continuous and the other binary. PMID:25227970
Galilean covariant harmonic oscillator
NASA Technical Reports Server (NTRS)
Horzela, Andrzej; Kapuscik, Edward
1993-01-01
A Galilean covariant approach to classical mechanics of a single particle is described. Within the proposed formalism, all non-covariant force laws defining acting forces which become to be defined covariantly by some differential equations are rejected. Such an approach leads out of the standard classical mechanics and gives an example of non-Newtonian mechanics. It is shown that the exactly solvable linear system of differential equations defining forces contains the Galilean covariant description of harmonic oscillator as its particular case. Additionally, it is demonstrated that in Galilean covariant classical mechanics the validity of the second Newton law of dynamics implies the Hooke law and vice versa. It is shown that the kinetic and total energies transform differently with respect to the Galilean transformations.
Schenkel, Flávio S; Schaeffer, Lawrence R; Boettcher, Paul J
2002-01-01
Bayesian (via Gibbs sampling) and empirical BLUP (EBLUP) estimation of fixed effects and breeding values were compared by simulation. Combinations of two simulation models (with or without effect of contemporary group (CG)), three selection schemes (random, phenotypic and BLUP selection), two levels of heritability (0.20 and 0.50) and two levels of pedigree information (0% and 15% randomly missing) were considered. Populations consisted of 450 animals spread over six discrete generations. An infinitesimal additive genetic animal model was assumed while simulating data. EBLUP and Bayesian estimates of CG effects and breeding values were, in all situations, essentially the same with respect to Spearman's rank correlation between true and estimated values. Bias and mean square error (MSE) of EBLUP and Bayesian estimates of CG effects and breeding values showed the same pattern over the range of simulated scenarios. Methods were not biased by phenotypic and BLUP selection when pedigree information was complete, albeit MSE of estimated breeding values increased for situations where CG effects were present. Estimation of breeding values by Bayesian and EBLUP was similarly affected by joint effect of phenotypic or BLUP selection and randomly missing pedigree information. For both methods, bias and MSE of estimated breeding values and CG effects substantially increased across generations. PMID:11929624
Covariant mutually unbiased bases
NASA Astrophysics Data System (ADS)
Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro
2016-06-01
The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which are the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2r covariance can still be achieved by restricting to proper subgroups of the symplectic group, that constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object. The two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information for only one of the two objects was available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless as to the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known without any corresponding state error covariance information. The various usual methods of finding a maximum P (sub c) do
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J; Royle, J Andrew; Adler, Peter H; Urbanek, Richard P; Barzen, Jeb A
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
A hierarchical nest survival model integrating incomplete temporally varying covariates.
Converse, Sarah J; Royle, J Andrew; Adler, Peter H; Urbanek, Richard P; Barzen, Jeb A
2013-11-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
The covariate-adjusted frequency plot.
Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K
2016-04-01
Count data arise in numerous fields of interest. Analysis of these data frequently require distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included into the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter. PMID:23376964
Addressing spectroscopic quality of covariant density functional theory
NASA Astrophysics Data System (ADS)
Afanasjev, A. V.
2015-03-01
The spectroscopic quality of covariant density functional theory has been accessed by analyzing the accuracy and theoretical uncertainties in the description of spectroscopic observables. Such analysis is first presented for the energies of the single-particle states in spherical and deformed nuclei. It is also shown that the inclusion of particle-vibration coupling improves the description of the energies of predominantly single-particle states in medium and heavy-mass spherical nuclei. However, the remaining differences between theory and experiment clearly indicate missing physics and missing terms in covariant energy density functionals. The uncertainties in the predictions of the position of two-neutron drip line sensitively depend on the uncertainties in the prediction of the energies of the single-particle states. On the other hand, many spectroscopic observables in well deformed nuclei at ground state and finite spin only weakly depend on the choice of covariant energy density functional.
Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates
ERIC Educational Resources Information Center
Lockwood, J. R.; McCaffrey, Daniel F.
2015-01-01
Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
NASA Astrophysics Data System (ADS)
Frasinski, Leszek J.
2016-08-01
Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
Missing persons-missing data: the need to collect antemortem dental records of missing persons.
Blau, Soren; Hill, Anthony; Briggs, Christopher A; Cordner, Stephen M
2006-03-01
incorporated into the National Coroners Information System (NCIS) managed, on behalf of Australia's Coroners, by the Victorian Institute of Forensic Medicine. The existence of the NCIS would ensure operational collaboration in the implementation of the system and cost savings to Australian policing agencies involved in missing person inquiries. The implementation of such a database would facilitate timely and efficient reconciliation of clinical and postmortem dental records and have subsequent social and financial benefits. PMID:16566776
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26888093
Misunderstanding analysis of covariance.
Miller, G A; Chapman, J P
2001-02-01
Despite numerous technical treatments in many venues, analysis of covariance (ANCOVA) remains a widely misused approach to dealing with substantive group differences on potential covariates, particularly in psychopathology research. Published articles reach unfounded conclusions, and some statistics texts neglect the issue. The problem with ANCOVA in such cases is reviewed. In many cases, there is no means of achieving the superficially appealing goal of "correcting" or "controlling for" real group differences on a potential covariate. In hopes of curtailing misuse of ANCOVA and promoting appropriate use, a nontechnical discussion is provided, emphasizing a substantive confound rarely articulated in textbooks and other general presentations, to complement the mathematical critiques already available. Some alternatives are discussed for contexts in which ANCOVA is inappropriate or questionable. PMID:11261398
Reconciling Covariances with Reliable Orbital Uncertainty
NASA Astrophysics Data System (ADS)
Folcik, Z.; Lue, A.; Vatsky, J.
2011-09-01
There is a common suspicion that formal covariances do not represent a realistic measure of orbital uncertainties. By devising metrics for measuring the representations of orbit error, we assess under what circumstances such lore is justified as well as the root cause of the discrepancy between the mathematics of orbital uncertainty and its practical implementation. We offer a scheme by which formal covariances may be adapted to be an accurate measure of orbital uncertainties and show how that adaptation performs against both simulated and real space-object data. We also apply these covariance adaptation methods to the process of observation association using many simulated and real data test cases. We demonstrate that covariance-informed observation association can be reliable, even in the case when only two tracks are available. Satellite breakup and collision event catalog maintenance could benefit from the automation made possible with these association methods.
NASA Astrophysics Data System (ADS)
Bourget, Antoine; Troost, Jan
2016-03-01
We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N = (4 , 4) supersymmetry in two dimensions. For seed target spaces K3 and T 4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into solve-for'' and consider'' parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and textita priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the variance sandpile'' and the sensitivity mosaic,'' and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2008-01-01
We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator s anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
78 FR 55123 - Submission for Review: We Need Information About Your Missing Payment, RI 38-31
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-09
... may also be reported to OPM by a telephone call. Analysis Agency: Retirement Operations, Retirement... information on those who are to respond, including through the use of appropriate automated,...
Code of Federal Regulations, 2011 CFR
2011-04-01
... in the Railroad Retirement Board's in-house publications. 364.3 Section 364.3 Employees' Benefits RAILROAD RETIREMENT BOARD INTERNAL ADMINISTRATION, POLICY AND PROCEDURES USE OF PENALTY MAIL TO ASSIST IN... the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about...
Using Analysis of Covariance (ANCOVA) with Fallible Covariates
ERIC Educational Resources Information Center
Culpepper, Steven Andrew; Aguinis, Herman
2011-01-01
Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…
What Is Missing in Counseling Research? Reporting Missing Data
ERIC Educational Resources Information Center
Sterner, William R.
2011-01-01
Missing data have long been problematic in quantitative research. Despite the statistical and methodological advances made over the past 3 decades, counseling researchers fail to provide adequate information on this phenomenon. Interpreting the complex statistical procedures and esoteric language seems to be a contributing factor. An overview of…
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
Covariant deformed oscillator algebras
NASA Technical Reports Server (NTRS)
Quesne, Christiane
1995-01-01
The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU(sub q)(n) x SU(sub q)(m)-covariant q-bosonic algebra is discussed in some detail.
Partial covariate adjusted regression
Şentürk, Damla; Nguyen, Danh V.
2008-01-01
Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed (Şentürk and Müller, 2005). The available data has been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set. PMID:20126296
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G.
2012-01-01
Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly as frequentist methods for non-full rank data. PMID:24551316
NASA Astrophysics Data System (ADS)
Heiker, Andrea; Kutterer, Hansjörg
2010-05-01
The Earth rotation variability is redundantly described by the combination of Earth rotation parameters (polar motion and length of day), geophysical excitation functions and second degree gravity field coefficients. There exist some publications regarding the comparison of the Earth rotation parameters and excitation functions. However, most authors do not make use of the redundancy. In addition, existing covariances between the input parameters are not considered. As shown in previous publications we use the redundancy for the independent mutual validation of the Earth rotation parameters, excitation functions and second degree gravity field coefficients based on an extended Gauss-Markov model and least-squares adjustment. The work regarding the mutual validation is performed within the project P9 "Combined analysis and validation of Earth rotation models and observations" of the research Unit FOR 584 ("Earth rotation and global dynamic processes") which is funded by the German Research Unit (DFG); see also abstract "Combined Analysis and Validation of Earth Rotation Models and Observations". The adjustment model is determined at first by the joint functional relations between the parameters and second by the stochastic model of the input data. A variance-covariance component estimation is included in the adjustment model. The functional model is based on the linearized Euler-Liouville equation. The construction of an appropriate stochastic model is prevented in practice by insufficient knowledge on variances and covariances. However, some numerical results derived from arbitrarily chosen stochastic models indicate that the stochastic model may be crucial for a correct estimation. The missing information is approximated by analyzing the input data. Synthetic variance-covariance matrices are constructed by considering empirical auto- and cross-correlation functions. The influence of neglected covariances is quantified and discussed by comparing the results derived
Impact of the 235U Covariance Data in Benchmark Calculations
Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.
Earth Observing System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing (against a 60% minimum passing threshold) the GOF tests-a quite satisfactory and useful result.
Observed Score Linear Equating with Covariates
ERIC Educational Resources Information Center
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Covariance Analysis of Gamma Ray Spectra
Trainham, R.; Tinsley, J.
2013-01-01
The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.
Covariance analysis of gamma ray spectra
Trainham, R.; Tinsley, J.
2013-01-15
The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.
Covariant magnetic connection hypersurfaces
NASA Astrophysics Data System (ADS)
Pegoraro, F.
2016-04-01
> In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved `magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when .
OD Covariance in Conjunction Assessment: Introduction and Issues
NASA Technical Reports Server (NTRS)
Hejduk, M. D.; Duncan, M.
2015-01-01
Primary and secondary covariances combined and projected into conjunction plane (plane perpendicular to relative velocity vector at TCA) Primary placed on x-axis at (miss distance, 0) and represented by circle of radius equal to sum of both spacecraft circumscribing radiiZ-axis perpendicular to x-axis in conjunction plane Pc is portion of combined error ellipsoid that falls within the hard-body radius circle
Cobain, Mark R; Newson, Rachel S
2014-01-01
Background Web-based health applications, such as self-assessment tools, can aid in the early detection and prevention of diseases. However, there are concerns as to whether such tools actually reach users with elevated disease risk (where prevention efforts are still viable), and whether inaccurate or missing information on risk factors may lead to incorrect evaluations. Objective This study aimed to evaluate (1) evaluate whether a Web-based cardiovascular disease (CVD) risk communication tool (Heart Age tool) was reaching users at risk of developing CVD, (2) the impact of awareness of total cholesterol (TC), HDL-cholesterol (HDL-C), and systolic blood pressure (SBP) values on the risk estimates, and (3) the key predictors of awareness and reporting of physiological risk factors. Methods Heart Age is a tool available via a free open access website. Data from 2,744,091 first-time users aged 21-80 years with no prior heart disease were collected from 13 countries in 2009-2011. Users self-reported demographic and CVD risk factor information. Based on these data, an individual’s 10-year CVD risk was calculated according to Framingham CVD risk models and translated into a Heart Age. This is the age for which the individual’s reported CVD risk would be considered “normal”. Depending on the availability of known TC, HDL-C, and SBP values, different algorithms were applied. The impact of awareness of TC, HDL-C, and SBP values on Heart Age was determined using a subsample that had complete risk factor information. Results Heart Age users (N=2,744,091) were mostly in their 20s (22.76%) and 40s (23.99%), female (56.03%), had multiple (mean 2.9, SD 1.4) risk factors, and a Heart Age exceeding their chronological age (mean 4.00, SD 6.43 years). The proportion of users unaware of their TC, HDL-C, or SBP values was high (77.47%, 93.03%, and 46.55% respectively). Lacking awareness of physiological risk factor values led to overestimation of Heart Age by an average 2
Covariance-enhanced discriminant analysis
XU, PEIRONG; ZHU, JI; ZHU, LIXING; LI, YI
2016-01-01
Summary Linear discriminant analysis has been widely used to characterize or separate multiple classes via linear combinations of features. However, the high dimensionality of features from modern biological experiments defies traditional discriminant analysis techniques. Possible interfeature correlations present additional challenges and are often underused in modelling. In this paper, by incorporating possible interfeature correlations, we propose a covariance-enhanced discriminant analysis method that simultaneously and consistently selects informative features and identifies the corresponding discriminable classes. Under mild regularity conditions, we show that the method can achieve consistent parameter estimation and model selection, and can attain an asymptotically optimal misclassification rate. Extensive simulations have verified the utility of the method, which we apply to a renal transplantation trial.
Stardust Navigation Covariance Analysis
NASA Astrophysics Data System (ADS)
Menon, Premkumar R.
2000-01-01
The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by the Jet Propulsion Laboratory (JPL). This paper will describe the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and it's trajectory will be provided first, followed by a discussion of the covariance procedure and models. Predicted accuracy's will be examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter Stardust will return to Earth when the Sample Return Capsule (SRC) will separate and land at the Utah Test Site (UTTR) on January 15, 2006. The spacecraft will however be deflected off into a heliocentric orbit. The mission is divided into three phases for the covariance analysis. They are 1) Launch to EGA, 2) EGA to Wild-2 encounter and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a-priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers
COVARIANCE ASSISTED SCREENING AND ESTIMATION
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-01-01
Consider a linear model Y = X β + z, where X = Xn,p and z ~ N(0, In). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X′X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model. PMID:25541567
Covariance Spectroscopy Applied to Nuclear Radiation Detection
Trainham, R., Tinsley, J., Keegan, R., Quam, W.
2011-09-01
Covariance spectroscopy is a method of processing second order moments of data to obtain information that is usually absent from average spectra. In nuclear radiation detection it represents a generalization of nuclear coincidence techniques. Correlations and fluctuations in data encode valuable information about radiation sources, transport media, and detection systems. Gaining access to the extra information can help to untangle complicated spectra, uncover overlapping peaks, accelerate source identification, and even sense directionality. Correlations existing at the source level are particularly valuable since many radioactive isotopes emit correlated gammas and neutrons. Correlations also arise from interactions within detector systems, and from scattering in the environment. In particular, correlations from Compton scattering and pair production within a detector array can be usefully exploited in scenarios where direct measurement of source correlations would be unfeasible. We present a covariance analysis of a few experimental data sets to illustrate the utility of the concept.
Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials
The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOEÆs Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity / uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release).[Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10(-5{super}) eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for (7{super})Li, (232{super})Th, (233,235,238{super})U and (239{super})Pu as well as (223,224,225,226{super})Ra, while (nat{super})Zn is replaced by (64,66,67,68,70{super})Zn
Invariance of covariances arises out of noise
NASA Astrophysics Data System (ADS)
Grytskyy, D.; Tetzlaff, T.; Diesmann, M.; Helias, M.
2013-01-01
Correlated neural activity is a known feature of the brain [1] and evidence increases that it is closely linked to information processing [2]. The temporal shape of covariances has early been related to synaptic interactions and to common input shared by pairs of neurons [3]. Recent theoretical work explains the small magnitude of covariances in inhibition dominated recurrent networks by active decorrelation [4, 5, 6]. For binary neurons the mean-field approach takes random fluctuations into account to accurately predict the average activity in such networks [7] and expressions for covariances follow from a master equation [8], both briefly reviewed here for completeness. In our recent work we have shown how to map different network models, including binary networks, onto linear dynamics [9]. Binary neurons with a strong non-linear Heaviside gain function are inaccessible to the classical treatment [8]. Here we show how random fluctuations generated by the network effectively linearize the system and implement a self-regulating mechanism, that renders population-averaged covariances independent of the interaction strength and keeps the system away from instability.
The incredible shrinking covariance estimator
NASA Astrophysics Data System (ADS)
Theiler, James
2012-05-01
Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
Covariant Electrodynamics in Vacuum
NASA Astrophysics Data System (ADS)
Wilhelm, H. E.
1990-05-01
The generalized Galilei covariant Maxwell equations and their EM field transformations are applied to the vacuum electrodynamics of a charged particle moving with an arbitrary velocity v in an inertial frame with EM carrier (ether) of velocity w. In accordance with the Galilean relativity principle, all velocities have absolute meaning (relative to the ether frame with isotropic light propagation), and the relative velocity of two bodies is defined by the linear relation uG = v1 - v2. It is shown that the electric equipotential surfaces of a charged particle are compressed in the direction parallel to its relative velocity v - w (mechanism for physical length contraction of bodies). The magnetic field H(r, t) excited in the ether by a charge e moving uniformly with velocity v is related to its electric field E(r, t) by the equation H=ɛ0(v - w)xE/[ 1 +w • (t>- w)/c20], which shows that (i) a magnetic field is excited only if the charge moves relative to the ether, and (ii) the magnetic field is weak if v - w is not comparable to the velocity of light c0 . It is remarkable that a charged particle can excite EM shock waves in the ether if |i> - w > c0. This condition is realizable for anti-parallel charge and ether velocities if |v-w| > c0- | w|, i.e., even if |v| is subluminal. The possibility of this Cerenkov effect in the ether is discussed for terrestrial and galactic situations
... majority of patients with clefts will require full orthodontic treatment, especially if the cleft has passed through ... later replacement of the missing lateral incisor. During orthodontic treatment, an artificial tooth may be attached to ...
Linear covariance analysis for gimbaled pointing systems
NASA Astrophysics Data System (ADS)
Christensen, Randall S.
Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems where the information needs are different from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important to gimbaled pointing systems where not just the variance but also the spectrum of the pointing error impacts the performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitory with Monte Carlo analysis are used to gain further insight into the system. These include the creation
Dracup, Kathleen
2002-06-01
The uniqueness of nursing research is derived from the philosophical view of the individual as a biopsychosocial being. Nurse scientists are prepared to illuminate the linkages among the biophysiological, psychological, and social domains, and this study is much enhanced by the increasing availability of valid and reliable biomarkers. Researchers need to develop expertise in the use of biomarkers and secure appropriate funding for their use. Missing links may be missing no longer. PMID:12122766
Missing Drivers with Dementia: Antecedents and Recovery
Rowe, Meredeth A.; Greenblum, Catherine A.; Boltz, Marie; Galvin, James E.
2013-01-01
OBJECTIVES To determine the circumstance in which persons with dementia become lost while driving, how missing drivers are found, and how Silver Alert notificationsare instrumental in those discoveries. DESIGN A retrospective, descriptive study. SETTING Retrospective record review. PARTICIPANTS Conducted using 156 records from the Florida Silver Alert program for the time period October, 2008 through May 2010. These alerts were issued in Florida for a missing driver with dementia. MEASUREMENTS Information derived from the reports on characteristics of the missing driver, antecedents to missing event and discovery of a missing driver. RESULTS and CONCLUSION The majority of missing drivers were males, with ages ranging from 58’94, who were being cared for by a spouse. Most drivers became lost on routine, caregiver-sanctioned trips to usual locations. Only 15% were in the act of driving when found with most being found in or near a parked car and the large majority were found by law enforcement officers. Only 40% were found in the county they went missing and 10% were found in a different state. Silver Alert notifications were most effective for law enforcement; citizen alerts resulted in a few discoveries. There was a 5% mortality rate in the study population with those living alone more likely to be found dead than alive. An additional 15% were found in dangerous situations such as stopped on railroad tracks. Thirty-two percent had documented driving or dangerous errors such as, driving thewrong way or into secluded areas, or walking in or near roadways. PMID:23134069
Sensitivity of missing values in classification tree for large sample
NASA Astrophysics Data System (ADS)
Hasan, Norsida; Adam, Mohd Bakri; Mustapha, Norwati; Abu Bakar, Mohd Rizam
2012-05-01
Missing values either in predictor or in response variables are a very common problem in statistics and data mining. Cases with missing values are often ignored which results in loss of information and possible bias. The objectives of our research were to investigate the sensitivity of missing data in classification tree model for large sample. Data were obtained from one of the high level educational institutions in Malaysia. Students' background data were randomly eliminated and classification tree was used to predict students degree classification. The results showed that for large sample, the structure of the classification tree was sensitive to missing values especially for sample contains more than ten percent missing values.
Covariant Closed String Coherent States
Hindmarsh, Mark; Skliros, Dimitri
2011-02-25
We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops.
Covariant closed string coherent states.
Hindmarsh, Mark; Skliros, Dimitri
2011-02-25
We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops. PMID:21405564
Covariance tracking: architecture optimizations for embedded systems
NASA Astrophysics Data System (ADS)
Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan
2014-12-01
Covariance matching techniques have recently grown in interest due to their good performances for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, it can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review on different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on two dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15 and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
Development of covariance capabilities in EMPIRE code
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
Development of Covariance Capabilities in EMPIRE Code
Herman, M. Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-12-15
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
RNA sequence analysis using covariance models.
Eddy, S R; Durbin, R
1994-01-01
We describe a general approach to several RNA sequence analysis problems using probabilistic models that flexibly describe the secondary structure and primary sequence consensus of an RNA sequence family. We call these models 'covariance models'. A covariance model of tRNA sequences is an extremely sensitive and discriminative tool for searching for additional tRNAs and tRNA-related sequences in sequence databases. A model can be built automatically from an existing sequence alignment. We also describe an algorithm for learning a model and hence a consensus secondary structure from initially unaligned example sequences and no prior structural information. Models trained on unaligned tRNA examples correctly predict tRNA secondary structure and produce high-quality multiple alignments. The approach may be applied to any family of small RNA sequences. Images PMID:8029015
Covariance Matrix Evaluations for Independent Mass Fission Yields
NASA Astrophysics Data System (ADS)
Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.; Sumini, M.
2015-01-01
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of 235U(nth, f) and 239Pu(nth, f) reactions.
Covariance Matrix Evaluations for Independent Mass Fission Yields
Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of {sup 235}U(n{sub th}, f) and {sup 239}Pu(n{sub th}, f) reactions.
Restoration of HST images with missing data
NASA Technical Reports Server (NTRS)
Adorf, Hans-Martin
1992-01-01
Missing data are a fairly common problem when restoring Hubble Space Telescope observations of extended sources. On Wide Field and Planetary Camera images cosmic ray hits and CCD hot spots are the prevalent causes of data losses, whereas on Faint Object Camera images data are lossed due to reseaux marks, blemishes, areas of saturation and the omnipresent frame edges. This contribution discusses a technique for 'filling in' missing data by statistical inference using information from the surrounding pixels. The major gain consists in minimizing adverse spill-over effects to the restoration in areas neighboring those where data are missing. When the mask delineating the support of 'missing data' is made dynamic, cosmic ray hits, etc. can be detected on the fly during restoration.
Shrinkage estimators for covariance matrices.
Daniels, M J; Kass, R E
2001-12-01
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically
Automatic Classification of Variable Stars in Catalogs with Missing Data
NASA Astrophysics Data System (ADS)
Pichara, Karim; Protopapas, Pavlos
2013-11-01
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA
Pichara, Karim; Protopapas, Pavlos
2013-11-10
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
NASA Astrophysics Data System (ADS)
Hough, S. E.; Martin, S.
2013-12-01
The occurrence of three earthquakes with Mw greater than 8.8, and six earthquakes larger than Mw8.5, since 2004 has raised interest in the long-term rate of great earthquakes. Past studies have focused on rates since 1900, which roughly marks the start of the instrumental era. Yet substantial information is available for earthquakes prior to 1900. A re-examination of the catalog of global historical earthquakes reveals a paucity of Mw ≥ 8.5 events during the 18th and 19th centuries compared to the rate during the instrumental era (Hough, 2013, JGR), suggesting that the magnitudes of some documented historical earthquakes have been underestimated, with approximately half of all Mw≥8.5 earthquakes missing or underestimated in the 19th century. Very large (Mw≥8.5) magnitudes have traditionally been estimated for historical earthquakes only from tsunami observations given a tautological assumption that all such earthquakes generate significant tsunamis. Magnitudes would therefore tend to be underestimated for deep megathrust earthquakes that generated relatively small tsunamis, deep earthquakes within continental collision zones, earthquakes that produced tsunamis that were not documented, outer rise events, and strike-slip earthquakes such as the 11 April 2012 Sumatra event. We further show that, where magnitudes of historical earthquakes are estimated from earthquake intensities using the Bakun and Wentworth (1997, BSSA) method, magnitudes of great earthquakes can be significantly underestimated. Candidate 'missing' great 19th century earthquakes include the 1843 Lesser Antilles earthquake, which recent studies suggest was significantly larger than initial estimates (Feuillet et al., 2012, JGR; Hough, 2013), and an 1841 Kamchatka event, for which Mw9 was estimated by Gusev and Shumilina (2004, Izv. Phys. Solid Ear.). We consider cumulative moment release rates during the 19th century compared to that during the 20th and 21st centuries, using both the Hough
Missed opportunities in crystallography.
Dauter, Zbigniew; Jaskolski, Mariusz
2014-09-01
Scrutinized from the perspective of time, the giants in the history of crystallography more than once missed a nearly obvious chance to make another great discovery, or went in the wrong direction. This review analyzes such missed opportunities focusing on macromolecular crystallographers (using Perutz, Pauling, Franklin as examples), although cases of particular historical (Kepler), methodological (Laue, Patterson) or structural (Pauling, Ramachandran) relevance are also described. Linus Pauling, in particular, is presented several times in different circumstances, as a man of vision, oversight, or even blindness. His example underscores the simple truth that also in science incessant creativity is inevitably connected with some probability of fault. PMID:24814223
Partial covariance mapping techniques at FELs
NASA Astrophysics Data System (ADS)
Frasinski, Leszek
2014-05-01
The development of free-electron lasers (FELs) is driven by the desire to access the structure and chemical dynamics of biomolecules with atomic resolution. Short, intense FEL pulses have the potential to record x-ray diffraction images before the molecular structure is destroyed by radiation damage. However, even during the shortest, few-femtosecond pulses currently available, there are some significant changes induced by massive ionisation and onset of Coulomb explosion. To interpret the diffraction images it is vital to gain insight into the electronic and nuclear dynamics during multiple core and valence ionisations that compete with Auger cascades. This paper focuses on a technique that is capable to probe these processes. The covariance mapping technique is well suited to the high intensity and low repetition rate of FEL pulses. While the multitude of charges ejected at each pulse overwhelm conventional coincidence methods, an improved technique of partial covariance mapping can cope with hundreds of photoelectrons or photoions detected at each FEL shot. The technique, however, often reveals spurious, uninteresting correlations that spoil the maps. This work will discuss the strengths and limitations of various forms of covariance mapping techniques. Quantitative information extracted from the maps will be linked to theoretical modelling of ionisation and fragmentation paths. Special attention will be given to critical experimental parameters, such as counting rate, FEL intensity fluctuations, vacuum impurities or detector efficiency and nonlinearities. Methods of assessing and optimising signal-to-noise ratio will be described. Emphasis will be put on possible future developments such as multidimensional covariance mapping, compensation for various experimental instabilities and improvements in the detector response. This work has been supported the EPSRC, UK (grants EP/F021232/1 and EP/I032517/1).
ERIC Educational Resources Information Center
Hawley, Richard A.
1995-01-01
Suggests that a way out of the current malaise of American education may be to locate educational excellence in accessible American fiction. Discusses Frances Gray Patton's "Good Morning, Miss Dove," in which the central character is an elementary school geography teacher. (RS)
Mujahid, Mahasin S.; Janz, Nancy K.; Hawley, Sarah T.; Griggs, Jennifer J.; Hamilton, Ann S.; Katz, Steven J.
2016-01-01
Work loss is a potential adverse consequence of cancer. There is limited research on patterns and correlates of paid work after diagnosis of breast cancer, especially among ethnic minorities. Women with non-metastatic breast cancer diagnosed from June 2005 to May 2006 who reported to the Los Angeles County SEER registry were identified and asked to complete the survey after initial treatment (median time from diagnosis = 8.9 months). Latina and African American women were over-sampled. Analyses were restricted to women working at the time of diagnosis, <65 years of age, and who had complete covariate information (N = 589). The outcome of the study was missed paid work (≤ month, >1 month, stopped all together). Approximately 44, 24, and 32% of women missed ≤1 month, >1 month, or stopped working, respectively. African Americans and Latinas were more likely to stop working when compared with Whites [OR for stop working vs. missed ≤1 month: 3.0, 3.4, (P < 0.001), respectively]. Women receiving mastectomy and those receiving chemotherapy were also more likely to stop working, independent of sociodemographic and treatment factors [ORs for stopped working vs. missed ≤1 month: 4.2, P < 0.001; 7.9, P < 0.001, respectively]. Not having a flexible work schedule available through work was detrimental to working [ORs for stopped working 18.9, P < 0.001 after adjusting for sociodemographic and treatment factors]. Many women stop working altogether after a diagnosis of breast cancer, particularly if they are racial/ethnic minorities, receive chemotherapy, or those who are employed in an unsupportive work settings. Health care providers need to be aware of these adverse consequences of breast cancer diagnosis and initial treatment. PMID:19360466
Hirarchical Bayesian Spatio-Temporal Interpolation including Covariates
NASA Astrophysics Data System (ADS)
Hussain, Ijaz; Mohsin, Muhammad; Spoeck, Gunter; Pilz, Juergen
2010-05-01
The space-time interpolation of precipitation has significant contribution to river control,reservoir operations, forestry interest and flash flood watches etc. The changes in environmental covariates and spatial covariates make space-time estimation of precipitation a challenging task. In our earlier paper [1], we used transformed hirarchical Bayesian sapce-time interpolation method for predicting the amount of precipiation. In present paper, we modified the [2] method to include covarites which varaies with respect to space-time. The proposed method is applied to estimating space-time monthly precipitation in the monsoon periods during 1974 - 2000. The 27-years monthly average data of precipitation, temperature, humidity and wind speed are obtained from 51 monitoring stations in Pakistan. The average monthly precipitation is used response variable and temperature, humidity and wind speed are used as time varying covariates. Moreovere the spatial covarites elevation, latitude and longitude of same monitoring stations are also included. The cross-validation method is used to compare the results of transformed hierarchical Bayesian spatio-temporal interpolation with and without including environmental and spatial covariates. The software of [3] is modified to incorprate enviornmental covariates and spatil covarites. It is observed that the transformed hierarchical Bayesian method including covarites provides more accuracy than the transformed hierarchical Bayesian method without including covarites. Moreover, the five potential monitoring cites are selected based on maximum entropy sampaling design approach. References [1] I.Hussain, J.Pilz,G. Spoeck and H.L.Yu. Spatio-Temporal Interpolation of Precipitation during Monsoon Periods in Pakistan. submitted in Advances in water Resources,2009. [2] N.D. Le, W. Sun, and J.V. Zidek, Bayesian multivariate spatial interpolation with data missing by design. Journal of the Royal Statistical Society. Series B (Methodological
Are Eddy Covariance series stationary?
Technology Transfer Automated Retrieval System (TEKTRAN)
Spectral analysis via a discrete Fourier transform is used often to examine eddy covariance series for cycles (eddies) of interest. Generally the analysis is performed on hourly or half-hourly data sets collected at 10 or 20 Hz. Each original series is often assumed to be stationary. Also automated ...
Gaussian covariance matrices for anisotropic galaxy clustering measurements
NASA Astrophysics Data System (ADS)
Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio
2016-04-01
Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
New capabilities for processing covariance data in resonance region
Wiarda, D.; Dunn, M. E.; Greene, N. M.; Larson, N. M.; Leal, L. C.
2006-07-01
The AMPX [1] code system is a modular system of FORTRAN computer programs that relate to nuclear analysis with a primary emphasis on tasks associated with the production and use of multi group and continuous energy cross sections. The module PUFF-III within this code system handles the creation of multi group covariance data from ENDF information. The resulting covariances are saved in COVERX format [2]. We recently expanded the capabilities of PUFF-III to include full handling of covariance data in the resonance region (resolved as well as unresolved). The new program handles all resonance covariance formats in File 32 except for the long-range covariance sub sections. The new program has been named PUFF-IV. To our knowledge, PUFF-IV is the first processing code that can address both the new ENDF format for resolved resonance parameters and the new ENDF 'compact' covariance format. The existing code base was rewritten in Fortran 90 to allow for a more modular design. Results are identical between the new and old versions within rounding errors, where applicable. Automatic test cases have been added to ensure that consistent results are generated across computer systems. (authors)
Realization of the optimal phase-covariant quantum cloning machine
Sciarrino, Fabio; De Martini, Francesco
2005-12-15
In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable 'fidelity'. We present a general scheme which realizes the 1{yields}3 phase covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization encoded qubits, the first ever realized with photons, is reported.
40 CFR 98.85 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) or the annual organic carbon content of raw materials are missing, facilities must undertake a new... each missing value of monthly raw material consumption the substitute data value must be the best available estimate of the monthly raw material consumption based on information used for accounting...
Seaman, Shaun R; White, Ian R; Carpenter, James R
2015-01-01
Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. PMID:24525487
Missing people, migrants, identification and human rights.
Nuzzolese, E
2012-11-01
The increasing volume and complexities of migratory flow has led to a range of problems such as human rights issues, public health, disease and border control, and also the regulatory processes. As result of war or internal conflicts missing person cases and management have to be regarded as a worldwide issue. On the other hand, even in peace, the issue of a missing person is still relevant. In 2007 the Italian Ministry of Interior nominated an extraordinary commissar in order to analyse and assess the total number of unidentified recovered bodies and verify the extent of the phenomena of missing persons, reported as 24,912 people in Italy (updated 31 December 2011). Of these 15,632 persons are of foreigner nationalities and are still missing. The census of the unidentified bodies revealed a total of 832 cases recovered in Italy since the year 1974. These bodies/human remains received a regular autopsy and were buried as 'corpse without name". In Italy judicial autopsy is performed to establish cause of death and identity, but odontology and dental radiology is rarely employed in identification cases. Nevertheless, odontologists can substantiate the identification through the 'biological profile' providing further information that can narrow the search to a smaller number of missing individuals even when no ante mortem dental data are available. The forensic dental community should put greater emphasis on the role of the forensic odontology as a tool for humanitarian action of unidentified individuals and best practise in human identification. PMID:23221266
Minimal unitary (covariant) scattering theory
Lindesay, J.V.; Markevich, A.
1983-06-01
In the minimal three particle equations developed by Lindesay the two body input amplitude was an on shell relativistic generalization of the non-relativistic scattering model characterized by a single mass parameter ..mu.. which in the two body (m + m) system looks like an s-channel bound state (..mu.. < 2m) or virtual state (..mu.. > 2m). Using this driving term in covariant Faddeev equations generates a rich covariant and unitary three particle dynamics. However, the simplest way of writing the relativisitic generalization of the Faddeev equations can take the on shell Mandelstam parameter s = 4(q/sup 2/ + m/sup 2/), in terms of which the two particle input is expressed, to negative values in the range of integration required by the dynamics. This problem was met in the original treatment by multiplying the two particle input amplitude by THETA(s). This paper provides what we hope to be a more direct way of meeting the problem.
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
Realistic Covariance Prediction For the Earth Science Constellations
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellations (ESC) include collision risk assessment between members of the constellations and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed via Monte Carlo techniques as well as numerically integrating relative probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by NASA Goddard's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the ESC satellites: Aqua, Aura, and Terra
Covariant jump conditions in electromagnetism
NASA Astrophysics Data System (ADS)
Itin, Yakov
2012-02-01
A generally covariant four-dimensional representation of Maxwell's electrodynamics in a generic material medium can be achieved straightforwardly in the metric-free formulation of electromagnetism. In this setup, the electromagnetic phenomena are described by two tensor fields, which satisfy Maxwell's equations. A generic tensorial constitutive relation between these fields is an independent ingredient of the theory. By use of different constitutive relations (local and non-local, linear and non-linear, etc.), a wide area of applications can be covered. In the current paper, we present the jump conditions for the fields and for the energy-momentum tensor on an arbitrarily moving surface between two media. From the differential and integral Maxwell equations, we derive the covariant boundary conditions, which are independent of any metric and connection. These conditions include the covariantly defined surface current and are applicable to an arbitrarily moving smooth curved boundary surface. As an application of the presented jump formulas, we derive a Lorentzian type metric as a condition for existence of the wave front in isotropic media. This result holds for ordinary materials as well as for metamaterials with negative material constants.
ERIC Educational Resources Information Center
Enders, Craig K.
2004-01-01
A method for incorporating maximum likelihood (ML) estimation into reliability analyses with item-level missing data is outlined. An ML estimate of the covariance matrix is first obtained using the expectation maximization (EM) algorithm, and coefficient alpha is subsequently computed using standard formulae. A simulation study demonstrated that…
Comparison of Modern Methods for Analyzing Repeated Measures Data with Missing Values
ERIC Educational Resources Information Center
Vallejo, G.; Fernandez, M. P.; Livacic-Rojas, P. E.; Tuero-Herrero, E.
2011-01-01
Missing data are a pervasive problem in many psychological applications in the real world. In this article we study the impact of dropout on the operational characteristics of several approaches that can be easily implemented with commercially available software. These approaches include the covariance pattern model based on an unstructured…
Smith, D.L.
1988-01-01
The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs.
Estimated Environmental Exposures for MISSE-3 and MISSE-4
NASA Technical Reports Server (NTRS)
Finckenor, Miria M.; Pippin, Gary; Kinard, William H.
2008-01-01
Describes the estimated environmental exposure for MISSE-2 and MISSE-4. These test beds, attached to the outside of the International Space Station, were planned for 3 years of exposure. This was changed to 1 year after MISSE-1 and -2 were in space for 4 years. MISSE-3 and -4 operate in a low Earth orbit space environment, which exposes them to a variety of assaults including atomic oxygen, ultraviolet radiation, particulate radiation, thermal cycling, and meteoroid/space debris impact, as well as contamination associated with proximity to an active space station. Measurements and determinations of atomic oxygen fluences, solar UV exposure levels, molecular contamination levels, and particulate radiation are included.
Connecting Math and Motion: A Covariational Approach
NASA Astrophysics Data System (ADS)
Culbertson, Robert J.; Thompson, A. S.
2006-12-01
We define covariational reasoning as the ability to correlate changes in two connected variables. For example, the ability to describe the height of fluid in an odd-shaped vessel as a function of fluid volume requires covariational reasoning skills. Covariational reasoning ability is an essential resource for gaining a deep understanding of the physics of motion. We have developed an approach for teaching physical science to in-service math and science high school teachers that emphasizes covariational reasoning. Several examples of covariation and results from a small cohort of local teachers will be presented.
Covariance Evaluation Methodology for Neutron Cross Sections
Herman,M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, b.; Songzoni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
Recurrence Analysis of Eddy Covariance Fluxes
NASA Astrophysics Data System (ADS)
Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael
2015-04-01
The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by three different questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable to detect dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial for both improving carbon flux models as well for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions is affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
Covariance Structure Models for Gene Expression Microarray Data
ERIC Educational Resources Information Center
Xie, Jun; Bentler, Peter M.
2003-01-01
Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…
Identifying Heat Waves in Florida: Considerations of Missing Weather Data
Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.
2015-01-01
Background Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198
Phase-covariant quantum benchmarks
Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.
2009-05-15
We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
Data Covariances from R-Matrix Analyses of Light Nuclei
Hale, G.M. Paris, M.W.
2015-01-15
After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances ({sup 5}He) and with many resonances ({sup 13}C ). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
Data Covariances from R-Matrix Analyses of Light Nuclei
NASA Astrophysics Data System (ADS)
Hale, G. M.; Paris, M. W.
2015-01-01
After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (5He) and with many resonances (13C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
NASA Astrophysics Data System (ADS)
Campbell, A. K.
2003-07-01
Throughout his life, Fred Hoyle had a keen interest in evolution. He argued that natural selection by small, random change, as conceived by Charles Darwin and Alfred Russel Wallace, could not explain either the origin of life or the origin of a new protein. The idea of natural selection, Hoyle told us, wasn't even Darwin's original idea in the first place. Here, in honour of Hoyle's analysis, I propose a solution to Hoyle's dilemma. His solution was life from space - panspermia. But the real key to understanding natural selection is `molecular biodiversity'. This explains the things Darwin missed - the origin of species and the origin of extinction. It is also a beautiful example of the mystery disease that afflicted Darwin for over 40 years, for which we now have an answer.
Commonly missed orthopedic problems.
Ballas, M T; Tytko, J; Mannarino, F
1998-01-15
When not diagnosed early and managed appropriately, common musculoskeletal injuries may result in long-term disabling conditions. Anterior cruciate ligament tears are some of the most common knee ligament injuries. Slipped capital femoral epiphysis may present with little or no hip pain, and subtle or absent physical and radiographic findings. Femoral neck stress fractures, if left untreated, may result in avascular necrosis, refractures and pseudoarthrosis. A delay in diagnosis of scaphoid fractures may cause early wrist arthrosis if nonunion results. Ulnar collateral ligament tears are a frequently overlooked injury in skiers. The diagnosis of Achilles tendon rupture is missed as often as 25 percent of the time. Posterior tibial tendon tears may result in fixed bony planus if diagnosis is delayed, necessitating hindfoot fusion rather than simple soft tissue repair. Family physicians should be familiar with the initial assessment of these conditions and, when appropriate, refer patients promptly to an orthopedic surgeon. PMID:9456991
The Impact of Nonignorable Missing Data on the Inference of Regression Coefficients.
ERIC Educational Resources Information Center
Min, Kyung-Seok; Frank, Kenneth A.
Various statistical methods have been available to deal with missing data problems, but the difficulty is that they are based on somewhat restrictive assumptions that missing patterns are known or can be modeled with auxiliary information. This paper treats the presence of missing cases from the viewpoint that generalization as a sample does not…
The Concept of Missing Incidents in Persons with Dementia
Rowe, Meredeth; Houston, Amy; Molinari, Victor; Bulat, Tatjana; Bowen, Mary Elizabeth; Spring, Heather; Mutolo, Sandra; McKenzie, Barbara
2015-01-01
Behavioral symptoms of dementia often present the greatest challenge for informal caregivers. One behavior, that is a constant concern for caregivers, is the person with dementia leaving a designated area such that their whereabouts become unknown to the caregiver or a missing incident. Based on an extensive literature review and published findings of their own research, members of the International Consortium on Wandering and Missing Incidents constructed a preliminary missing incidents model. Examining the evidence base, specific factors within each category of the model were further described, reviewed and modified until consensus was reached regarding the final model. The model begins to explain in particular the variety of antecedents that are related to missing incidents. The model presented in this paper is designed to be heuristic and may be used to stimulate discussion and the development of effective preventative and response strategies for missing incidents among persons with dementia. PMID:27417817
Parameter inference with estimated covariance matrices
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2016-02-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
FILIF Measurements of HCHO Vertical Gradients and Flux via Eddy Covariance during BEACHON-ROCS 2010
NASA Astrophysics Data System (ADS)
Digangi, J. P.; Boyle, E.; Henry, S. B.; Keutsch, F. N.; Beachon-Rocs Science Team
2010-12-01
Models of HOx chemistry in rural (low NOx) environments can drastically underpredict OH concentrations compared to measurements. In addition, models of OH reactivity based on modeled VOC emissions also underpredict OH reactivity. The combination of these facts implies a significant misunderstanding of HOx chemistry in rural environments. Formaldehyde (HCHO) is one of the most ubiquitous VOC oxidation products and therefore is an important tracer of VOC oxidation. Formaldehyde may be formed via the fast oxidation of biogenic VOCs (BVOCs), such as isoprene and terpenes emitted from forests, giving a measure of any potential missing VOCs as a cause of the inconsistency in OH reactivity. Also, as the loss pathways of HCHO are well understood, HCHO concentrations can provide further information about OH concentrations. As a result, measurements of HCHO gradients and fluxes in pristine forests can provide valuable insight into this rural HOx chemistry. We present the first reported measurements of HCHO flux via eddy covariance, as well as HCHO concentrations and gradients as observed by the Madison FIber Laser-Induced Fluorescence (FILIF) Instrument during the BEACHON-ROCS 2010 campaign in a rural coniferous forest northwest of Colorado Springs, CO. Midday upward HCHO fluxes as high as 150 μg/m2/hr were observed. These results will be discussed in the context of rapid in-canopy BVOC oxidation and the uncertainties in the HOx budget inside forest canopies.
Georgiou, Andrew; Hyppönen, Hannele; Ammenwerth, Elske; de Keizer, Nicolette; Magrabi, Farah; Scott, Philip
2015-01-01
Summary Objectives To review the potential contribution of Information and Communication Technology (ICT) to enable patient-centric and coordinated care, and in particular to explore the role of patient portals as a developing ICT tool, to assess the available evidence, and to describe the evaluation challenges. Methods Reviews of IMIA, EFMI, and other initiatives, together with literature reviews. Results We present the progression from care coordination to care integration, and from patient-centric to person-centric approaches. We describe the different roles of ICT as an enabler of the effective presentation of information as and when needed. We focus on the patient’s role as a co-producer of health as well as the focus and purpose of care. We discuss the need for changing organisational processes as well as the current mixed evidence regarding patient portals as a logical tool, and the reasons for this dichotomy, together with the evaluation principles supported by theoretical frameworks so as to yield robust evidence. Conclusions There is expressed commitment to coordinated care and to putting the patient in the centre. However to achieve this, new interactive patient portals will be needed to enable peer communication by all stakeholders including patients and professionals. Few portals capable of this exist to date. The evaluation of these portals as enablers of system change, rather than as simple windows into electronic records, is at an early stage and novel evaluation approaches are needed. PMID:26123909
NASA Astrophysics Data System (ADS)
Steffen, Jason H.
2013-08-01
We investigate the distributions of the orbital period ratios of adjacent planets in high-multiplicity Kepler systems (four or more planets) and low-multiplicity systems (two planets). Modelling the low-multiplicity sample as essentially equivalent to the high-multiplicity sample, but with unobserved intermediate planets, we find some evidence for an excess of planet pairs between the 2:1 and 3:1 mean-motion resonances in the low-multiplicity sample. This possible excess may be the result of strong dynamical interactions near these or other resonances or it may be a byproduct of other evolutionary events or processes such as planetary collisions. Three-planet systems show a significant excess of planets near the 2:1 mean-motion resonance that is not as prominent in either of the other samples. This observation may imply a correlation between strong dynamical interactions and observed planet number - perhaps a relationship between resonance pairs and the inclinations or orbital periods of additional planets. The period ratio distributions can also be used to identify targets to search for missing planets in the each of the samples, the presence or absence of which would have strong implications for planet formation and dynamical evolution models.
Efficient retrieval of landscape Hessian: forced optimal covariance adaptive learning.
Shir, Ofer M; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel
2014-06-01
Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30) with large Hessian condition numbers (≳10^{4}). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes. PMID:25019911
Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis
NASA Astrophysics Data System (ADS)
George, E., Chan, K.
2012-09-01
Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1x10-6. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for
Hsu, Li; Prentice, Ross L; Stanford, Janet L
2002-03-30
In a typical case-control family study, detailed risk factor information is often collected on cases and controls, but not on their relatives for reasons of cost and logistical difficulty in locating the relatives. The impact of missing risk factor information for relatives on estimation of the strength of dependence between the disease risk of pairs of relatives is largely unknown. In this paper, we extend our earlier work on estimating the dependence of ages at onset between paired relatives from case-control family data to include covariates on cases and controls, and possibly relatives. Using population-based case-control families as our basic data structure, we study the effect of missing covariates for relatives and/or cases and controls on the bias of certain dependence parameter estimators via a simulation study. Finally we illustrate various analyses using a case-control family study of early onset prostate cancer. PMID:11870822
The impact of aging on gray matter structural covariance networks.
Montembeault, Maxime; Joubert, Sven; Doyon, Julien; Carrier, Julie; Gagnon, Jean-François; Monchi, Oury; Lungu, Ovidiu; Belleville, Sylvie; Brambati, Simona Maria
2012-11-01
Previous anatomical volumetric studies have shown that healthy aging is associated with gray matter tissue loss in specific cerebral regions. However, these studies may have potentially missed critical elements of age-related brain changes, which largely exist within interrelationships among brain regions. This magnetic resonance imaging research aims to assess the effects of aging on the organization of gray matter structural covariance networks. Here, we used voxel-based morphometry on high-definition brain scans to compare the patterns of gray matter structural covariance networks that sustain different sensorimotor and high-order cognitive functions among young (n=88, mean age=23.5±3.1 years, female/male=55/33) and older (n=88, mean age=67.3±5.9 years, female/male=55/33) participants. This approach relies on the assumption that functionally correlated brain regions show correlations in gray matter volume as a result of mutually trophic influences or common experience-related plasticity. We found reduced structural association in older adults compared with younger adults, specifically in high-order cognitive networks. Major differences were observed in the structural covariance networks that subserve the following: a) the language-related semantic network, b) the executive control network, and c) the default-mode network. Moreover, these cognitive functions are typically altered in the older population. Our results indicate that healthy aging alters the structural organization of cognitive networks, shifting from a more distributed (in young adulthood) to a more localized topological organization in older individuals. PMID:22776455
Particle emission from covariant phase space
Bambah, B.A. )
1992-12-01
Using Lorentz-covariant sources, we calculate the multiplicity distribution of {ital n} pair correlated particles emerging from a Lorentz-covariant phase-space volume. We use the Kim-Wigner formalism and identify these sources as the squeezed states of a relativistic harmonic oscillator. The applications of this to multiplicity distributions in particle physics is discussed.
Group Theory of Covariant Harmonic Oscillators
ERIC Educational Resources Information Center
Kim, Y. S.; Noz, Marilyn E.
1978-01-01
A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…
Quality Quantification of Evaluated Cross Section Covariances
Varet, S.; Dossantos-Uzarralde, P.
2015-01-15
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the {sup 85}Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
REGRESSION METHODS FOR DATA WITH INCOMPLETE COVARIATES
Modern statistical methods in chronic disease epidemiology allow simultaneous regression of disease status on several covariates. hese methods permit examination of the effects of one covariate while controlling for those of others that may be causally related to the disease. owe...
Bispectrum covariance in the flat-sky limit
NASA Astrophysics Data System (ADS)
Joachimi, B.; Shi, X.; Schneider, P.
2009-12-01
Aims. To probe cosmological fields beyond the Gaussian level, three-point statistics can be used, all of which are related to the bispectrum. Hence, measurements of CMB anisotropies, galaxy clustering, and weak gravitational lensing alike have to rely upon an accurate theoretical background concerning the bispectrum and its noise properties. If only small portions of the sky are considered, it is often desirable to perform the analysis in the flat-sky limit. We aim at a formal, detailed derivation of the bispectrum covariance in the flat-sky approximation, focusing on a pure two-dimensional Fourier-plane approach. Methods: We define an unbiased estimator of the bispectrum, which takes the average over the overlap of annuli in Fourier space, and compute its full covariance. The outcome of our formalism is compared to the flat-sky spherical harmonic approximation in terms of the covariance, the behavior under parity transformations, and the information content. We introduce a geometrical interpretation of the averaging process in the estimator, thus providing an intuitive understanding. Results: Contrary to foregoing work, we find a difference by a factor of two between the covariances of the Fourier-plane and the spherical harmonic approach. We argue that this discrepancy can be explained by the differing behavior with respect to parity. However, in an exemplary analysis it is demonstrated that the Fisher information of both formalisms agrees to high accuracy. Via the geometrical interpretation we are able to link the normalization in the bispectrum estimator to the area enclosed by the triangle configuration at consideration as well as to the Wigner symbol, which leads to convenient approximation formulae for the covariances of both approaches.
Fully Bayesian inference under ignorable missingness in the presence of auxiliary covariates
Daniels, M.J.; Wang, C.; Marcus, B.H.
2014-01-01
In order to make a missing at random (MAR) or ignorability assumption realistic, auxiliary covariates are often required. However, the auxiliary covariates are not desired in the model for inference. Typical multiple imputation approaches do not assume that the imputation model marginalizes to the inference model. This has been termed ‘uncongenial’ (Meng, 1994). In order to make the two models congenial (or compatible), we would rather not assume a parametric model for the marginal distribution of the auxiliary covariates, but we typically do not have enough data to estimate the joint distribution well non-parametrically. In addition, when the imputation model uses a non-linear link function (e.g., the logistic link for a binary response), the marginalization over the auxiliary covariates to derive the inference model typically results in a difficult to interpret form for effect of covariates. In this article, we propose a fully Bayesian approach to ensure that the models are compatible for incomplete longitudinal data by embedding an interpretable inference model within an imputation model and that also addresses the two complications described above. We evaluate the approach via simulations and implement it on a recent clinical trial. PMID:24571539
Effect modification by time-varying covariates.
Robins, James M; Hernán, Miguel A; Rotnitzky, Andrea
2007-11-01
Marginal structural models (MSMs) allow estimation of effect modification by baseline covariates, but they are less useful for estimating effect modification by evolving time-varying covariates. Rather, structural nested models (SNMs) were specifically designed to estimate effect modification by time-varying covariates. In their paper, Petersen et al. (Am J Epidemiol 2007;166:985-993) describe history-adjusted MSMs as a generalized form of MSM and argue that history-adjusted MSMs allow a researcher to easily estimate effect modification by time-varying covariates. However, history-adjusted MSMs can result in logically incompatible parameter estimates and hence in contradictory substantive conclusions. Here the authors propose a more restrictive definition of history-adjusted MSMs than the one provided by Petersen et al. and compare the advantages and disadvantages of using history-adjusted MSMs, as opposed to SNMs, to examine effect modification by time-dependent covariates. PMID:17875581
Adjoints and Low-rank Covariance Representation
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.
2000-01-01
Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
Covariance matrices for use in criticality safety predictability studies
Derrien, H.; Larson, N.M.; Leal, L.C.
1997-09-01
Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for {sup 235}U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the {sup 235}U resonance parameters.
Treatment decisions based on scalar and functional baseline covariates.
Ciarleglio, Adam; Petkova, Eva; Ogden, R Todd; Tarpey, Thaddeus
2015-12-01
The amount and complexity of patient-level data being collected in randomized-controlled trials offer both opportunities and challenges for developing personalized rules for assigning treatment for a given disease or ailment. For example, trials examining treatments for major depressive disorder are not only collecting typical baseline data such as age, gender, or scores on various tests, but also data that measure the structure and function of the brain such as images from magnetic resonance imaging (MRI), functional MRI (fMRI), or electroencephalography (EEG). These latter types of data have an inherent structure and may be considered as functional data. We propose an approach that uses baseline covariates, both scalars and functions, to aid in the selection of an optimal treatment. In addition to providing information on which treatment should be selected for a new patient, the estimated regime has the potential to provide insight into the relationship between treatment response and the set of baseline covariates. Our approach can be viewed as an extension of "advantage learning" to include both scalar and functional covariates. We describe our method and how to implement it using existing software. Empirical performance of our method is evaluated with simulated data in a variety of settings and also applied to data arising from a study of patients with major depressive disorder from whom baseline scalar covariates as well as functional data from EEG are available. PMID:26111145
Chaussé, Pierre; Liu, Jin; Luta, George
2016-01-01
Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870
Kettlewell's Missing Evidence.
ERIC Educational Resources Information Center
Allchin, Douglas Kellogg
2002-01-01
The standard textbook account of Kettlewell and the peppered moths omits significant information. Suggests that this case can be used to reflect on the role of simplification in science teaching. (Author/MM)
Modelling categorical covariates in Bayesian disease mapping by partition structures.
Giudici, P; Knorr-Held, L; Rasser, G
We consider the problem of mapping the risk from a disease using a series of regional counts of observed and expected cases, and information on potential risk factors. To analyse this problem from a Bayesian viewpoint, we propose a methodology which extends a spatial partition model by including categorical covariate information. Such an extension allows detection of clusters in the residual variation, reflecting further, possibly unobserved, covariates. The methodology is implemented by means of reversible jump Markov chain Monte Carlo sampling. An application is presented in order to illustrate and compare our proposed extensions with a purely spatial partition model. Here we analyse a well-known data set on lip cancer incidence in Scotland. PMID:10960873
Modeling missing data in knowledge space theory.
de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio
2015-12-01
Missing data are a well known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provide adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data. PMID:26651988
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-05-01
On 28 November 2013, comet C/2012 S1 better known as comet ISON should have passed within two solar radii of the Suns surface as it reached perihelion in its orbit. But instead of shining in extreme ultraviolet (EUV) wavelengths as it grazed the solar surface, the comet was never detected by EUV instruments. What happened to comet ISON?Missing EmissionWhen a sungrazing comet passes through the solar corona, it leaves behind a trail of molecules evaporated from its surface. Some of these molecules emit EUV light, which can be detected by instruments on telescopes like the space-based Solar Dynamics Observatory (SDO).Comet ISON, a comet that arrived from deep space and was predicted to graze the Suns corona in November 2013, was expected to cause EUV emission during its close passage. But analysis of the data from multiple telescopes that tracked ISON in EUV including SDO reveals no sign of it at perihelion.In a recent study, Paul Bryans and DeanPesnell, scientists from NCARs High Altitude Observatory and NASA Goddard Space Flight Center, try to determine why ISON didnt display this expected emission.Comparing ISON and LovejoyIn December 2011, another comet dipped into the Suns corona: comet Lovejoy. This image, showingthe orbit Lovejoy took around the Sun, is a composite of SDO images of the pre- and post-perihelion phases of the orbit. Click for a closer look! The dashed part of the curve represents where Lovejoy passed out of view behind the Sun. [Bryans Pesnell 2016]This is not the first time weve watched a sungrazing comet with EUV-detecting telescopes: Comet Lovejoy passed similarly close to the Sun in December 2011. But when Lovejoy grazed the solar corona, it emitted brightly in EUV. So why didnt ISON? Bryans and Pesnell argue that there are two possibilities:the coronal conditions experienced by the two comets were not similar, orthe two comets themselves were not similar.To establish which factor is the most relevant, the authors first demonstrate that both
Missing gene identification using functional coherence scores
Chitale, Meghana; Khan, Ishita K.; Kihara, Daisuke
2016-01-01
Reconstructing metabolic and signaling pathways is an effective way of interpreting a genome sequence. A challenge in a pathway reconstruction is that often genes in a pathway cannot be easily found, reflecting current imperfect information of the target organism. In this work, we developed a new method for finding missing genes, which integrates multiple features, including gene expression, phylogenetic profile, and function association scores. Particularly, for considering function association between candidate genes and neighboring proteins to the target missing gene in the network, we used Co-occurrence Association Score (CAS) and PubMed Association Score (PAS), which are designed for capturing functional coherence of proteins. We showed that adding CAS and PAS substantially improve the accuracy of identifying missing genes in the yeast enzyme-enzyme network compared to the cases when only the conventional features, gene expression, phylogenetic profile, were used. Finally, it was also demonstrated that the accuracy improves by considering indirect neighbors to the target enzyme position in the network using a proper network-topology-based weighting scheme. PMID:27552989
Depression and literacy are important factors for missed appointments.
Miller-Matero, Lisa Renee; Clark, Kalin Burkhardt; Brescacin, Carly; Dubaybo, Hala; Willens, David E
2016-09-01
Multiple variables are related to missed clinic appointments. However, the prevalence of missed appointments is still high suggesting other factors may play a role. The purpose of this study was to investigate the relationship between missed appointments and multiple variables simultaneously across a health care system, including patient demographics, psychiatric symptoms, cognitive functioning and literacy status. Chart reviews were conducted on 147 consecutive patients who were seen by a primary care psychologist over a six month period and completed measures to determine levels of depression, anxiety, sleep, cognitive functioning and health literacy. Demographic information and rates of missed appointments were also collected from charts. The average rate of missed appointments was 15.38%. In univariate analyses, factors related to higher rates of missed appointments included younger age (p = .03), lower income (p = .05), probable depression (p = .05), sleep difficulty (p = .05) and limited reading ability (p = .003). There were trends for a higher rate of missed appointments for patients identifying as black (p = .06), government insurance (p = .06) and limited math ability (p = .06). In a multivariate model, probable depression (p = .02) and limited reading ability (p = .003) were the only independent predictors. Depression and literacy status may be the most important factors associated with missed appointments. Implications are discussed including regular screening for depression and literacy status as well as interventions that can be utilized to help improve the rate of missed appointments. PMID:26695719
Missing: Students' Global Outlook
ERIC Educational Resources Information Center
Alemu, Daniel S.
2010-01-01
While schools are focusing excessively on meeting accountability standards and improving test scores, important facets of schooling--such as preparing students for the global marketplace--are being inadvertently overlooked. Without deliberate informal observation by teachers and school administrators, detecting and addressing students'…
Missed opportunities in child healthcare
Jonker, Linda
2014-01-01
Background Various policies in health, such as Integrated Management of Childhood Illnesses, were introduced to enhance integrated service delivery in child healthcare. During clinical practice the researcher observed that integrated services may not be rendered. Objectives This article describes the experiences of mothers that utilised comprehensive child health services in the Cape Metropolitan area of South Africa. Services included treatment for diseases; preventative interventions such as immunisation; and promotive interventions, such as improvement in nutrition and promotion of breastfeeding. Method A qualitative, descriptive phenomenological approach was applied to explore the experiences and perceptions of mothers and/or carers utilising child healthcare services. Thirty percent of the clinics were selected purposively from the total population. A convenience purposive non-probability sampling method was applied to select 17 mothers who met the criteria and gave written consent. Interviews were conducted and recorded digitally using an interview guide. The data analysis was done using Tesch's eight step model. Results Findings of the study indicated varied experiences. Not all mothers received information about the Road to Health book or card. According to the mothers, integrated child healthcare services were not practised. The consequences were missed opportunities in immunisation, provision of vitamin A, absence of growth monitoring, feeding assessment and provision of nutritional advice. Conclusion There is a need for simple interventions such as oral rehydration, early recognition and treatment of diseases, immunisation, growth monitoring and appropriate nutrition advice. These services were not offered diligently. Such interventions could contribute to reducing the incidence of child morbidity and mortality. PMID:26245404
Schillebeeckx, P.; Becker, B.; Danon, Y.; Guber, K.; Harada, H.; Heyse, J.; Junghans, A.R.; Kopecky, S.; Massimi, C.; Moxon, M.C.; Otuka, N.; Sirakov, I.; Volev, K.
2012-12-15
Cross section data in the resolved and unresolved resonance region are represented by nuclear reaction formalisms using parameters which are determined by fitting them to experimental data. Therefore, the quality of evaluated cross sections in the resonance region strongly depends on the experimental data used in the adjustment process and an assessment of the experimental covariance data is of primary importance in determining the accuracy of evaluated cross section data. In this contribution, uncertainty components of experimental observables resulting from total and reaction cross section experiments are quantified by identifying the metrological parameters involved in the measurement, data reduction and analysis process. In addition, different methods that can be applied to propagate the covariance of the experimental observables (i.e. transmission and reaction yields) to the covariance of the resonance parameters are discussed and compared. The methods being discussed are: conventional uncertainty propagation, Monte Carlo sampling and marginalization. It is demonstrated that the final covariance matrix of the resonance parameters not only strongly depends on the type of experimental observables used in the adjustment process, the experimental conditions and the characteristics of the resonance structure, but also on the method that is used to propagate the covariances. Finally, a special data reduction concept and format is presented, which offers the possibility to store the full covariance information of experimental data in the EXFOR library and provides the information required to perform a full covariance evaluation.
Epigenetic Contribution to Covariance Between Relatives
Tal, Omri; Kisdi, Eva; Jablonka, Eva
2010-01-01
Recent research has pointed to the ubiquity and abundance of between-generation epigenetic inheritance. This research has implications for assessing disease risk and the responses to ecological stresses and also for understanding evolutionary dynamics. An important step toward a general evaluation of these implications is the identification and estimation of the amount of heritable, epigenetic variation in populations. While methods for modeling the phenotypic heritable variance contributed by culture have already been developed, there are no comparable methods for nonbehavioral epigenetic inheritance systems. By introducing a model that takes epigenetic transmissibility (the probability of transmission of ancestral phenotypes) and environmental induction into account, we provide novel expressions for covariances between relatives. We have combined a classical quantitative genetics approach with information about the number of opportunities for epigenetic reset between generations and assumptions about environmental induction to estimate the heritable epigenetic variance and epigenetic transmissibility for both asexual and sexual populations. This assists us in the identification of phenotypes and populations in which epigenetic transmission occurs and enables a preliminary quantification of their transmissibility, which could then be followed by genomewide association and QTL studies. PMID:20100941
Missed nursing care: a qualitative study.
Kalisch, Beatrice J
2006-01-01
The purpose of this study was to determine nursing care regularly missed on medical-surgical units and reasons for missed care. Nine elements of regularly missed nursing care (ambulation, turning, delayed or missed feedings, patient teaching, discharge planning, emotional support, hygiene, intake and output documentation, and surveillance) and 7 themes relative to the reasons for missing this care were reported by nursing staff. PMID:16985399
Covariation bias in panic-prone individuals.
Pauli, P; Montoya, P; Martz, G E
1996-11-01
Covariation estimates between fear-relevant (FR; emergency situations) or fear-irrelevant (FI; mushrooms and nudes) stimuli and an aversive outcome (electrical shock) were examined in 10 high-fear (panic-prone) and 10 low-fear respondents. When the relation between slide category and outcome was random (illusory correlation), only high-fear participants markedly overestimated the contingency between FR slides and shocks. However, when there was a high contingency of shocks following FR stimuli (83%) and a low contingency of shocks following FI stimuli (17%), the group difference vanished. Reversal of contingencies back to random induced a covariation bias for FR slides in high- and low-fear respondents. Results indicate that panic-prone respondents show a covariation bias for FR stimuli and that the experience of a high contingency between FR slides and aversive outcomes may foster such a covariation bias even in low-fear respondents. PMID:8952200
Some thoughts on positive definiteness in the consideration of nuclear data covariance matrices
Geraldo, L.P.; Smith, D.L.
1988-01-01
Some basic mathematical features of covariance matrices are reviewed, particularly as they relate to the property of positive difiniteness. Physical implications of positive definiteness are also discussed. Consideration is given to an examination of the origins of non-positive definite matrices, to procedures which encourage the generation of positive definite matrices and to the testing of covariance matrices for positive definiteness. Attention is also given to certain problems associated with the construction of covariance matrices using information which is obtained from evaluated data files recorded in the ENDF format. Examples are provided to illustrate key points pertaining to each of the topic areas covered.
NASA Astrophysics Data System (ADS)
Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping
2016-05-01
Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using covariance descriptor is proposed. First, covariance descriptor combining gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, due to the reason that covariance descriptor lies in non-Euclidean manifold, kernel sparse coding theory is used to solve this problem. We verify the efficacy of the proposed algorithm in terms of the confusion matrices on the real images consisting of seven categories of infrared vehicle targets.
Mean backscattering properties of random radar targets - A polarimetric covariance matrix concept
NASA Astrophysics Data System (ADS)
Ziegler, V.; Lueneburg, E.; Schroth, A.
A polarimetric covariance matrix concept which describes the polarimetric backscattering features of reciprocal random radar targets is presented. The polarization dependence of second-order radar observables can be obtained by unitary similarity transformations of the covariance matrix. Invariant target parameters, such as the minimum and maximum eigenvalues or the eigenvalue difference of the covariance matrix, are introduced, providing information on the randomness of a target and the polarimetric features of the radar observables. An analytical formulation of the problem of optimal polarizations for the mean copolar and crosspolar power return is derived. As a result, the operational computation of optimal polarizations within large data sets becomes feasible.
Noncommutative Gauge Theory with Covariant Star Product
Zet, G.
2010-08-04
We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.
Covariant action for type IIB supergravity
NASA Astrophysics Data System (ADS)
Sen, Ashoke
2016-07-01
Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.
Covariate analysis of bivariate survival data
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Phase-covariant quantum cloning of qudits
Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin
2003-02-01
We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant module. A cloning unitary transformation is proposed. After optimizing the fidelity between input state and single qudit reduced density operator of output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.
Autofocusing searches in jets plus missing energy
Englert, Christoph; Plehn, Tilman; Schichtel, Peter; Schumann, Steffen
2011-05-01
Jets plus missing transverse energy is one of the main search channels for new physics at the LHC. A major limitation lies in our understanding of QCD backgrounds. Using jet merging, we can describe the number of jets in typical background channels in terms of a staircase scaling, including theory uncertainties. The scaling parameter depends on the particles in the final state and on cuts applied. Measuring the staircase scaling will allow us to also predict the effective mass for standard model backgrounds. Based on both observables, we propose an analysis strategy avoiding model-specific cuts, which returns information about the color charge and the mass scale of the underlying new physics.
Estimated Environmental Exposures for MISSE-3 and MISSE-4
NASA Technical Reports Server (NTRS)
Pippin, Gary; Normand, Eugene; Finckenor, Miria
2008-01-01
Both modeling techniques and a variety of measurements and observations were used to characterize the environmental conditions experienced by the specimens flown on the MISSE-3 (Materials International Space Station Experiment) and MISSE-4 space flight experiments. On August 3, 2006, astronauts Jeff Williams and Thomas Reiter attached MISSE-3 and -4 to the Quest airlock on ISS, where these experiments were exposed to atomic oxygen (AO), ultraviolet (UV) radiation, particulate radiation, thermal cycling, meteoroid/space debris impact, and the induced environment of an active space station. They had been flown to ISS during the July 2006 STS-121 mission. The two suitcases were oriented so that one side faced the ram direction and one side remained shielded from the atomic oxygen. On August 18,2007, astronauts Clay Anderson and Dave Williams retrieved MISSE-3 and-4 and returned them to Earth at the end of the STS-118 mission. Quantitative values are provided when possible for selected environmental factors. A meteoroid/debris impact survey was performed prior to de-integration at Langley Research Center. AO fluences were calculated based on mass loss and thickness loss of thin polymeric films of known AO reactivity. Radiation was measured with thermoluminescent detectors. Visual inspections under ambient and "black-light" at NASA LaRC, together with optical measurements on selected specimens, were the basis for the initial contamination level assessment.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
Cross-Section Covariance Data Processing with the AMPX Module PUFF-IV
Wiarda, Dorothea; Leal, Luiz C; Dunn, Michael E
2011-01-01
The ENDF community is endeavoring to release an updated version of the ENDF/B-VII library (ENDF/B-VII.1). In the new release several new evaluations containing covariance information have been added, as the community strives to add covariance information for use in programs like the TSUNAMI (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation) sequence of SCALE (Ref 1). The ENDF/B formatted files are processed into libraries to be used in transport calculations using the AMPX code system (Ref 2) or the NJOY code system (Ref 3). Both codes contain modules to process covariance matrices: PUFF-IV for AMPX and ERRORR in the case of NJOY. While the cross section processing capability between the two code systems has been widely compared, the same is not true for the covariance processing. This paper compares the results for the two codes using the pre-release version of ENDF/B-VII.1.
Covariance Modifications to Subspace Bases
Harris, D B
2008-11-19
Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value (SVD) decompositions. Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990, Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also describes the SVD updates to a particular scenario of interest in seismic array signal processing. The particular application involve updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d {ge} 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors or a sensor network. The template design process entails constructing a data matrix whose columns contain the
Implementation of optimal phase-covariant cloning machines
Sciarrino, Fabio; De Martini, Francesco
2007-07-15
The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for the universal cloning. The present article first analyzes different innovative schemes to implement the 1{yields}3 PQCM. The method is then generalized to any 1{yields}M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally different experimental schemes based either on linear or nonlinear methods and valid for single photon polarization encoded qubits are discussed.
Construction and use of gene expression covariation matrix
Hennetin, Jérôme; Pehkonen, Petri; Bellis, Michel
2009-01-01
. Conclusion This new method, applied to four different large data sets, has allowed us to construct distinct covariation matrices with similar properties. We have also developed a technique to translate these covariation networks into graphical 3D representations and found that the local assignation of the probe sets was conserved across the four chip set models used which encompass three different species (humans, mice, and rats). The application of adapted clustering methods succeeded in delineating six conserved functional regions that we characterized using Gene Ontology information. PMID:19594909
Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow
NASA Astrophysics Data System (ADS)
Burba, G. G.; Anderson, D. J.; Amen, J. L.
2007-12-01
received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on theory of the method (including state of methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non- traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.
Montgomery, Cynthia A; Kaufman, Rhonda
2003-03-01
If a dam springs several leaks, there are various ways to respond. One could assiduously plug the holes, for instance. Or one could correct the underlying weaknesses, a more sensible approach. When it comes to corporate governance, for too long we have relied on the first approach. But the causes of many governance problems lie well below the surface--specifically, in critical relationships that are not structured to support the players involved. In other words, the very foundation of the system is flawed. And unless we correct the structural problems, surface changes are unlikely to have a lasting impact. When shareholders, management, and the board of directors work together as a system, they provide a powerful set of checks and balances. But the relationship between shareholders and directors is fraught with weaknesses, undermining the entire system's equilibrium. As the authors explain, the exchange of information between these two players is poor. Directors, though elected by shareholders to serve as their agents, aren't individually accountable to the investors. And shareholders--for a variety of reasons--have failed to exert much influence over boards. In the end, directors are left with the Herculean task of faithfully representing shareholders whose preferences are unclear, and shareholders have little say about who represents them and few mechanisms through which to create change. The authors suggest several ways to improve the relationship between shareholders and directors: Increase board accountability by recording individual directors' votes on key corporate resolutions; separate the positions of chairman and CEO; reinvigorate shareholders; and give boards funding to pay for outside experts who can provide perspective on crucial issues. PMID:12632807
Bjorkland, Ronald
2013-11-15
The U.S. National Environmental Policy Act (NEPA) of 1969 heralded in an era of more robust attention to environmental impacts resulting from larger scale federal projects. The number of other countries that have adopted NEPA's framework is evidence of the appeal of this type of environmental legislation. Mandates to review environmental impacts, identify alternatives, and provide mitigation plans before commencement of the project are at the heart of NEPA. Such project reviews have resulted in the development of a vast number of reports and large volumes of project-specific data that potentially can be used to better understand the components and processes of the natural environment and provide guidance for improved and efficient environmental protection. However, the environmental assessment (EA) or the more robust and intensive environmental impact statement (EIS) that are required for most major projects more frequently than not are developed to satisfy the procedural aspects of the NEPA legislation while they fail to provide the needed guidance for improved decision-making. While NEPA legislation recommends monitoring of project activities, this activity is not mandated, and in those situations where it has been incorporated, the monitoring showed that the EIS was inaccurate in direction and/or magnitude of the impact. Many reviews of NEPA have suggested that monitoring all project phases, from the design through the decommissioning, should be incorporated. Information gathered though a well-developed monitoring program can be managed in databases and benefit not only the specific project but would provide guidance how to better design and implement future activities designed to protect and enhance the natural environment. -- Highlights: • NEPA statutes created profound environmental protection legislative framework. • Contrary to intent, NEPA does not provide for definitive project monitoring. • Robust project monitoring is essential for enhanced
Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models
Roshani, Daem; Ghaderi, Ebrahim
2016-01-01
Background and Objective: Cox model is a popular model in survival analysis, which assumes linearity of the covariate on the log hazard function, While continuous covariates can affect the hazard through more complicated nonlinear functional forms and therefore, Cox models with continuous covariates are prone to misspecification due to not fitting the correct functional form for continuous covariates. In this study, a smooth nonlinear covariate effect would be approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effect in the Cox models: penalized splines, restricted cubic splines and natural splines. Akaike information criterion (AIC) and degrees of freedom were used to smoothing parameter selection in penalized splines model. The ability of nonparametric methods was evaluated to recover the true functional form of linear, quadratic and nonlinear functions, using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and significant levels were considered 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean square error compared to others to selection of smoothed parameter. The same result was obtained with real data. Conclusion: Penalized spline smoothing method, with AIC to smoothing parameter selection, was more accurate in evaluate of relation between covariate and log hazard function than other methods. PMID:27041809
A hybrid imputation approach for microarray missing value estimation
2015-01-01
Background Missing data is an inevitable phenomenon in gene expression microarray experiments due to instrument failure or human error. It has a negative impact on performance of downstream analysis. Technically, most existing approaches suffer from this prevalent problem. Imputation is one of the frequently used methods for processing missing data. Actually many developments have been achieved in the research on estimating missing values. The challenging task is how to improve imputation accuracy for data with a large missing rate. Methods In this paper, induced by the thought of collaborative training, we propose a novel hybrid imputation method, called Recursive Mutual Imputation (RMI). Specifically, RMI exploits global correlation information and local structure in the data, captured by two popular methods, Bayesian Principal Component Analysis (BPCA) and Local Least Squares (LLS), respectively. Mutual strategy is implemented by sharing the estimated data sequences at each recursive process. Meanwhile, we consider the imputation sequence based on the number of missing entries in the target gene. Furthermore, a weight based integrated method is utilized in the final assembling step. Results We evaluate RMI with three state-of-art algorithms (BPCA, LLS, Iterated Local Least Squares imputation (ItrLLS)) on four publicly available microarray datasets. Experimental results clearly demonstrate that RMI significantly outperforms comparative methods in terms of Normalized Root Mean Square Error (NRMSE), especially for datasets with large missing rates and less complete genes. Conclusions It is noted that our proposed hybrid imputation approach incorporates both global and local information of microarray genes, which achieves lower NRMSE values against to any single approach only. Besides, this study highlights the need for considering the imputing sequence of missing entries for imputation methods. PMID:26330180
Defining habitat covariates in camera-trap based occupancy studies
Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas
2015-01-01
In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remote sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779
Spacetime states and covariant quantum theory
NASA Astrophysics Data System (ADS)
Reisenberger, Michael; Rovelli, Carlo
2002-06-01
In its usual presentation, classical mechanics appears to give time a very special role. But it is well known that mechanics can be formulated so as to treat the time variable on the same footing as the other variables in the extended configuration space. Such covariant formulations are natural for relativistic gravitational systems, where general covariance conflicts with the notion of a preferred physical-time variable. The standard presentation of quantum mechanics, in turn, again gives time a very special role, raising well known difficulties for quantum gravity. Is there a covariant form of (canonical) quantum mechanics? We observe that the preferred role of time in quantum theory is the consequence of an idealization: that measurements are instantaneous. Canonical quantum theory can be given a covariant form by dropping this idealization. States prepared by noninstantaneous measurements are described by ``spacetime smeared states.'' The theory can be formulated in terms of these states, without making any reference to a special time variable. The quantum dynamics is expressed in terms of the propagator, an object covariantly defined on the extended configuration space.
Linkage analysis of anorexia nervosa incorporating behavioral covariates.
Devlin, Bernie; Bacanu, Silviu-Alin; Klump, Kelly L; Bulik, Cynthia M; Fichter, Manfred M; Halmi, Katherine A; Kaplan, Allan S; Strober, Michael; Treasure, Janet; Woodside, D Blake; Berrettini, Wade H; Kaye, Walter H
2002-03-15
Eating disorders, such as anorexia nervosa (AN) and bulimia nervosa (BN), have genetic and environmental underpinnings. To explore genetic contributions to AN, we measured psychiatric, personality and temperament phenotypes of individuals diagnosed with eating disorders from 196 multiplex families, all accessed through an AN proband, as well as genotyping a battery of 387 short tandem repeat (STR) markers distributed across the genome. On these data we performed a multipoint affected sibling pair (ASP) linkage analysis using a novel method that incorporates covariates. By exploring seven attributes thought to typify individuals with eating disorders, we identified two variables, drive-for-thinness and obsessionality, which delimit populations among the ASPs. For both of these traits, or covariates, there were a cluster of ASPs who have high and concordant values for these traits, in keeping with our expectations for individuals with AN, and other clusters of ASPs who did not meet those expectations. When we incorporated these covariates into the ASP linkage analysis, both jointly and separately, we found several regions of suggestive linkage: one close to genome-wide significance on chromosome 1 (at 210 cM, D1S1660; LOD = 3.46, P = 0.00003), another on chromosome 2 (at 114 cM, D2S1790; LOD = 2.22, P = 0.00070) and a third region on chromosome 13 (at 26 cM, D13S894; LOD = 2.50, P = 0.00035). By comparing our results to those implemented using more standard linkage methods, we find the covariates convey substantial information for the linkage analysis. PMID:11912184
FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.
HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.
2006-06-05
We describe implementation of the KALMAN code in the EMPIRE system and present first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 Gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from the model-generated covariances that tend to show strong positive long-range correlations and those determined solely from experimental data that result in nearly diagonal matrices. We have studied shapes of correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1977-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UD(transpose of U), where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increase mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1975-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UDU/T/, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
Ocean data assimilation with background error covariance derived from OGCM outputs
NASA Astrophysics Data System (ADS)
Fu, Weiwei; Zhou, Guangqing; Wang, Huijun
2004-04-01
The background error covariance plays an important role in modern data assimilation and analysis systems by determining the spatial spreading of information in the data. A novel method based on model output is proposed to estimate background error covariance for use in Optimum Interpolation. At every model level, anisotropic correlation scales are obtained that give a more detailed description of the spatial correlation structure. Furthermore, the impact of the background field itself is included in the background error covariance. The methodology of the estimation is presented and the structure of the covariance is examined. The results of 20-year assimilation experiments are compared with observations from TOGA-TAO (The Tropical Ocean-Global Atmosphere-Tropical Atmosphere Ocean) array and other analysis data.
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
Background. Univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was proposing four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC) on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard models coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that MMC approach was the most accurate procedure compared to EC, CC, and ZC procedures. The precision ranking of the four approaches according to all above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights advantages of MGLS meta-analysis on UM approach. The results suggested the use of MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients. PMID:26413142
Parametric number covariance in quantum chaotic spectra
NASA Astrophysics Data System (ADS)
Vinayak, Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Covariant Spectator Theory and Hadron Structure
NASA Astrophysics Data System (ADS)
Peña, M. T.; Leitão, Sofia; Biernat, Elmar P.; Stadler, Alfred; Ribeiro, J. E.; Gross, Franz
2016-06-01
We present the first results of a study on meson spectroscopy using a covariant formalism based on the Covariant Spectator Theory. Our approach is derived directly in Minkowski space and it approximates the Bethe-Salpeter equation by taking effectively into account the contributions from both ladder and crossed ladder diagrams in the q{bar{q}} interaction kernel. A general Lorentz structure of the kernel is tested and chiral constraints on the kernel are discussed. Results for the pion form factor are also presented.
A violation of the covariant entropy bound?
NASA Astrophysics Data System (ADS)
Masoumi, Ali; Mathur, Samir D.
2015-04-01
Several arguments suggest that the entropy density at high energy density ρ should be given by the expression s =K √{ρ /G } , where K is a constant of order unity. On the other hand the covariant entropy bound requires that the entropy on a light sheet be bounded by A /4 G , where A is the area of the boundary of the sheet. We find that in a suitably chosen cosmological geometry, the above expression for s violates the covariant entropy bound. We consider different possible explanations for this fact, in particular, the possibility that entropy bounds should be defined in terms of volumes of regions rather than areas of surfaces.
Sparse Multivariate Regression With Covariance Estimation
Rothman, Adam J.; Levina, Elizaveta; Zhu, Ji
2014-01-01
We propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. This method, which we call multivariate regression with covariance estimation (MRCE), involves penalized likelihood with simultaneous estimation of the regression coefficients and the covariance structure. An efficient optimization algorithm and a fast approximation are developed for computing MRCE. Using simulation studies, we show that the proposed method outperforms relevant competitors when the responses are highly correlated. We also apply the new method to a finance example on predicting asset returns. An R-package containing this dataset and code for computing MRCE and its approximation are available online. PMID:24963268
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated. PMID:27078354
Methods for Mediation Analysis with Missing Data
ERIC Educational Resources Information Center
Zhang, Zhiyong; Wang, Lijuan
2013-01-01
Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including list wise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…
Characteristics of HIV patients who missed their scheduled appointments
Nagata, Delsa; Gutierrez, Eliana Battaggia
2016-01-01
ABSTRACT OBJECTIVE To analyze whether sociodemographic characteristics, consultations and care in special services are associated with scheduled infectious diseases appointments missed by people living with HIV. METHODS This cross-sectional and analytical study included 3,075 people living with HIV who had at least one scheduled appointment with an infectologist at a specialized health unit in 2007. A secondary data base from the Hospital Management & Information System was used. The outcome variable was missing a scheduled medical appointment. The independent variables were sex, age, appointments in specialized and available disciplines, hospitalizations at the Central Institute of the Clinical Hospital at the Faculdade de Medicina of the Universidade de São Paulo, antiretroviral treatment and change of infectologist. Crude and multiple association analysis were performed among the variables, with a statistical significance of p ≤ 0.05. RESULTS More than a third (38.9%) of the patients missed at least one of their scheduled infectious diseases appointments; 70.0% of the patients were male. The rate of missed appointments was 13.9%, albeit with no observed association between sex and absences. Age was inversely associated to missed appointment. Not undertaking anti-retroviral treatment, having unscheduled infectious diseases consultations or social services care and being hospitalized at the Central Institute were directly associated to missed appointments. CONCLUSIONS The Hospital Management & Information System proved to be a useful tool for developing indicators related to the quality of health care of people living with HIV. Other informational systems, which are often developed for administrative purposes, can also be useful for local and regional management and for evaluating the quality of care provided for patients living with HIV. PMID:26786472
Missed Opportunities: But a New Century Is Starting.
ERIC Educational Resources Information Center
Corn, Anne L.
1999-01-01
This article describes critical events that have shaped gifted education, including: closing of one-room schoolhouses, the industrial revolution, the space race, the civil right movement, legislation for special education, growth in technology and information services, educational research, and advocacy. Missed opportunities and future…
What's Missing? Anti-Racist Sex Education!
ERIC Educational Resources Information Center
Whitten, Amanda; Sethna, Christabelle
2014-01-01
Contemporary sexual health curricula in Canada include information about sexual diversity and queer identities, but what remains missing is any explicit discussion of anti-racist sex education. Although there exists federal and provincial support for multiculturalism and anti-racism in schools, contemporary Canadian sex education omits crucial…
Missing Data and Multiple Imputation: An Unbiased Approach
NASA Technical Reports Server (NTRS)
Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.
2014-01-01
The default method of dealing with missing data in statistical analyses is to only use the complete observations (complete case analysis), which can lead to unexpected bias when data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI) which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine statistical distributions for each piece of missing data. Then multiple datasets are created by randomly drawing on the distributions for each piece of missing data. Since MI is efficient, only a limited number, usually less than 20, of imputed datasets are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and then results are combined to get overall estimates of effect. A simulation study will be demonstrated to show the results of using the default complete case analysis, and MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there may be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches. Each unit increase in CO2 (mmHg) resulted in a doubling in the odds of reported headaches.
A Simulation Study of Missing Data with Multiple Missing X's
ERIC Educational Resources Information Center
Rubright, Jonathan D.; Nandakumar, Ratna; Glutting, Joseph J.
2014-01-01
When exploring missing data techniques in a realistic scenario, the current literature is limited: most studies only consider consequences with data missing on a single variable. This simulation study compares the relative bias of two commonly used missing data techniques when data are missing on more than one variable. Factors varied include type…
Nuclear Forensics Analysis with Missing and Uncertain Data
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
2015-10-05
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
20 CFR 364.4 - Placement of missing children posters in Board field offices.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Placement of missing children posters in... CHILDREN § 364.4 Placement of missing children posters in Board field offices. (a) Poster content. The... information about that child, which may include a photograph of the child, that will appear on the poster....
20 CFR 364.4 - Placement of missing children posters in Board field offices.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Placement of missing children posters in... CHILDREN § 364.4 Placement of missing children posters in Board field offices. (a) Poster content. The... information about that child, which may include a photograph of the child, that will appear on the poster....
A Primer for Handling Missing Values in the Analysis of Education and Training Data
ERIC Educational Resources Information Center
Gemici, Sinan; Bednarz, Alice; Lim, Patrick
2012-01-01
Quantitative research in vocational education and training (VET) is routinely affected by missing or incomplete information. However, the handling of missing data in published VET research is often sub-optimal, leading to a real risk of generating results that can range from being slightly biased to being plain wrong. Given that the growing…
ERIC Educational Resources Information Center
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…
20 CFR 364.4 - Placement of missing children posters in Board field offices.
Code of Federal Regulations, 2010 CFR
2010-04-01
... National Center for Missing and Exploited Children shall select the missing child and the pertinent information about that child, which may include a photograph of the child, that will appear on the poster. The Board will develop a standard format for these posters. (b) Transmission of posters to field...
Covariates of Sesame Street Viewing by Preschoolers.
ERIC Educational Resources Information Center
Spaner, Steven D.
A study was made of nine covariates as to their discriminating power between preschoolers who watch Sesame Street regularly and preschoolers who do not watch Sesame Street, Surveyed were 372 3-4 year old children on 9 variables. The nine variables were: race, socioeconomic status, number of siblings, child's birth order, maternal age, maternal…
Covariant Photon Quantization in the SME
NASA Astrophysics Data System (ADS)
Colladay, D.
2014-01-01
The Gupta-Bleuler quantization procedure is applied to the SME photon sector. A direct application of the method to the massless case fails due to an unavoidable incompleteness in the polarization states. A mass term can be included into the photon lagrangian to rescue the quantization procedure and maintain covariance.
Economical phase-covariant cloning of qudits
Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara
2005-04-01
We derive the optimal N{yields}M phase-covariant quantum cloning for equatorial states in dimension d with M=kd+N, k integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an 'economical' cloning machine, which works without ancilla.
Gauge field theory of covariant strings
NASA Astrophysics Data System (ADS)
Kaku, Michio
1986-03-01
We present a gauge covariant second-quantized field theory of strings which is explicitly invariant under the gauge transformations generated by the Virasoro algebra. Unlike the old field theory strings [1] this new formulation is Lorentz covariant as well as gauge covariant under the continuous group Diff( S1) and its central extension. We derive the free action: L=Φ(X) †P[i∂ τ-(L 0-1)]PΦ(X) , in the same way that Feynman derived the Schrödinger equation from the path integral formalism. The action is manifestly invariant under the gauge transformation δΦ(X)= limit∑n=1∞ɛ -nL -nΦ(X) , where P is a projection operator which annihilates spurious states. We give three distinct formulations of this operator P to all orders, the first based on extracting the operator from the functional formulation of the Nambu-Goto action, and the second and third based on inverting the Shapovalov matrix on a Verma module. This gauge covariant formulation can be easily extended to the Green-Schwarz superstring [2,3]. One element application of these methods is to re-express the old Neveu-Schwarz-Ramond model as a field theory which is manifestly invariant under space-time supersymmetric transformations.
Nuclear moments in covariant density functional theory
NASA Astrophysics Data System (ADS)
Meng, J.; Zhao, P. W.; Zhang, S. Q.; Hu, J. N.; Li, J.
2014-05-01
Recent progresses on microscopic and self-consistent description of the nuclear moments in covariant density functional theory based on a point-coupling interaction are briefly reviewed. In particular, the electric quadrupole moments of Cd isotopes and the magnetic moments of Pb isotopes are discussed.
Hawking fluxes, back reaction and covariant anomalies
NASA Astrophysics Data System (ADS)
Kulkarni, Shailesh
2008-11-01
Starting from the chiral covariant effective action approach of Banerjee and Kulkarni (2008 Phys. Lett. B 659 827), we provide a derivation of the Hawking radiation from a charged black hole in the presence of gravitational back reaction. The modified expressions for charge and energy flux, due to the effect of one-loop back reaction are obtained.
A Covariance NMR Toolbox for MATLAB and OCTAVE
NASA Astrophysics Data System (ADS)
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE.
A covariance NMR toolbox for MATLAB and OCTAVE.
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. PMID:21215669
Genomic Variants Revealed by Invariably Missing Genotypes in Nelore Cattle
da Silva, Joaquim Manoel; Giachetto, Poliana Fernanda; da Silva, Luiz Otávio Campos; Cintra, Leandro Carrijo; Paiva, Samuel Rezende; Caetano, Alexandre Rodrigues; Yamagishi, Michel Eduardo Beleza
2015-01-01
High density genotyping panels have been used in a wide range of applications. From population genetics to genome-wide association studies, this technology still offers the lowest cost and the most consistent solution for generating SNP data. However, in spite of the application, part of the generated data is always discarded from final datasets based on quality control criteria used to remove unreliable markers. Some discarded data consists of markers that failed to generate genotypes, labeled as missing genotypes. A subset of missing genotypes that occur in the whole population under study may be caused by technical issues but can also be explained by the presence of genomic variations that are in the vicinity of the assayed SNP and that prevent genotyping probes from annealing. The latter case may contain relevant information because these missing genotypes might be used to identify population-specific genomic variants. In order to assess which case is more prevalent, we used Illumina HD Bovine chip genotypes from 1,709 Nelore (Bos indicus) samples. We found 3,200 missing genotypes among the whole population. NGS re-sequencing data from 8 sires were used to verify the presence of genomic variations within their flanking regions in 81.56% of these missing genotypes. Furthermore, we discovered 3,300 novel SNPs/Indels, 31% of which are located in genes that may affect traits of importance for the genetic improvement of cattle production. PMID:26305794
Patient-reported missed nursing care correlated with adverse events.
Kalisch, Beatrice J; Xie, Boqin; Dabney, Beverly Waller
2014-01-01
The aim of this study was to determine the extent and type of missed nursing care as reported by patients and the association with patient-reported adverse outcomes. A total of 729 inpatients on 20 units in 2 acute care hospitals were surveyed. The MISSCARE Survey-Patient was used to collect patient reports of missed care. Patients reported more missed nursing care in the domain of basic care (2.29 ± 1.06) than in communication (1.69 ± 0.71) and in time to respond (1.52 ± 0.64). The 5 most frequently reported elements of missed nursing care were the following: (a) mouth care (50.3%), (b) ambulation (41.3%), (c) getting out of bed into a chair (38.8%), (d) providing information about tests/procedures (27%), and (e) bathing (26.4%). Patients who reported skin breakdown/pressure ulcers, medication errors, new infections, IVs running dry, IVs infiltrating, and other problems during the current hospitalization reported significantly more overall missed nursing care. PMID:24006031
Covariance modeling in geodetic applications of collocation
NASA Astrophysics Data System (ADS)
Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko
2014-05-01
Collocation method is widely applied in geodesy for estimating/interpolating gravity related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for getting reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions has been dealt with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized) where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on model covariance function in order to obtain the best fit on the corresponding empirical values. Further constraints are applied so to have coherence with model degree variances to prevent possible solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check for the impact that improved covariance modeling has on the collocation estimate.
Das, Kiranmoy; Daniels, Michael J
2014-03-01
Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline. PMID:24400941
Predicting missing links via correlation between nodes
NASA Astrophysics Data System (ADS)
Liao, Hao; Zeng, An; Zhang, Yi-Cheng
2015-10-01
As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of an existing link between two nodes based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed so far. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
What we can learn about interiors of Mars with "MISS"
NASA Astrophysics Data System (ADS)
Gudkova, T.; Lognonné, P.; Zharkov, V. N.; Raevskiy, S.; Soloviev, V.
2013-09-01
Interior structure models of Mars are based on geochemical knowledge, experimental data on the behavior of material at high pressure and high temperature, and information on gravitational field of the planet. But there are not yet enough data to constrain the velocity and density distributions well. The experiment MISS (Mars Interior Structure by Seismology) can provide some information on subsurface structure and average global structure of the planet.
Shortreed, Susan M; Forbes, Andrew B
2010-02-20
Missing data are common in longitudinal studies and can occur in the exposure interest. There has been little work assessing the impact of missing data in marginal structural models (MSMs), which are used to estimate the effect of an exposure history on an outcome when time-dependent confounding is present. We design a series of simulations based on the Framingham Heart Study data set to investigate the impact of missing data in the primary exposure of interest in a complex, realistic setting. We use a standard application of MSMs to estimate the causal odds ratio of a specific activity history on outcome. We report and discuss the results of four missing data methods, under seven possible missing data structures, including scenarios in which an unmeasured variable predicts missing information. In all missing data structures, we found that a complete case analysis, where all subjects with missing exposure data are removed from the analysis, provided the least bias. An analysis that censored individuals at the first occasion of missing exposure and includes a censorship model as well as a propensity model when creating the inverse probability weights also performed well. The presence of an unmeasured predictor of missing data only slightly increased bias, except in the situation such that the exposure had a large impact on missing data and the unmeasured variable had a large impact on missing data and outcome. A discussion of the results is provided using causal diagrams, showing the usefulness of drawing such diagrams before conducting an analysis. PMID:20025082
Construction of Covariance Functions with Variable Length Fields
NASA Technical Reports Server (NTRS)
Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven
2005-01-01
This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These Covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere to stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level powerlaw covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single and multi-level covariance functions. A multi-level powerlaw covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than in the baseline GEOS-4 system.
Direct Neutron Capture Calculations with Covariant Density Functional Theory Inputs
NASA Astrophysics Data System (ADS)
Zhang, Shi-Sheng; Peng, Jin-Peng; Smith, Michael S.; Arbanas, Goran; Kozub, Ray L.
2014-09-01
Predictions of direct neutron capture are of vital importance for simulations of nucleosynthesis in supernovae, merging neutron stars, and other astrophysical environments. We calculate the direct capture cross sections for E1 transitions using nuclear structure information from a covariant density functional theory as input for the FRESCO coupled-channels reaction code. We find good agreement of our predictions with experimental cross section data on the double closed-shell targets 16O, 48Ca, and 90Zr, and the exotic nucleus 36S. Extensions of the technique for unstable nuclei and for large-scale calculations will be discussed. Predictions of direct neutron capture are of vital importance for simulations of nucleosynthesis in supernovae, merging neutron stars, and other astrophysical environments. We calculate the direct capture cross sections for E1 transitions using nuclear structure information from a covariant density functional theory as input for the FRESCO coupled-channels reaction code. We find good agreement of our predictions with experimental cross section data on the double closed-shell targets 16O, 48Ca, and 90Zr, and the exotic nucleus 36S. Extensions of the technique for unstable nuclei and for large-scale calculations will be discussed. Supported by the U.S. Dept. of Energy, Office of Nuclear Physics.
A Covariance Generation Methodology for Fission Product Yields
NASA Astrophysics Data System (ADS)
Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.
2016-03-01
Recent safety and economical concerns for modern nuclear reactor applications have fed an outstanding interest in basic nuclear data evaluation improvement and completion. It has been immediately clear that the accuracy of our predictive simulation models was strongly affected by our knowledge on input data. Therefore strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since in modern nuclear data banks (such as JEFF-3.1.1 and ENDF/BVII.1) no correlations for fission yields are given, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered thanks to a convolution of different models, representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, theWahl systematics for charge distribution and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239 and Pu-241.
NASA Astrophysics Data System (ADS)
Liu, F.; Zhu, A.; Zhang, G.; Geng, X.; Fraser, W.; Zhao, Y.
2011-12-01
Information on soil spatial variation is critical for environmental modelling. Based on soil-landscape relationship theory, easily observed soil forming environmental factors such as landform and vegetation are frequently utilized to infer soil variation which is difficult to measure. In low relief areas such as plains, however, this would be problematic due to the inability of easily obtained environmental information in reflecting soil variation. How to develop new environmental covariates for digital soil mapping under these situations remains a challenge. This paper presents an approach to developing new environmental covariates and applying them to soil texture mapping over such areas. For the development of the covariates, temporal responses of the land surface to a rainfall event (dynamic feedbacks) were captured daily from MODIS (Moderate Resolution Imaging Spectroradiometer) images over a short period after a major rain event. Then, a set of environmental covariates was constructed from land surface dynamic feedbacks using feature extraction techniques including two dimensional discrete wavelet analysis and principle component analysis. In order to apply the covariates to map soil texture, we derived environmental classes and their fuzzy membership distributions from the covariates using fuzzy c-means clustering. Typical soil texture values of the environmental classes were then obtained through a spatial overlay between the membership distributions and a dataset of soil sampling points. Based on the membership distributions and typical soil texture values of the environmental classes, spatial variation of soil texture was predicted through a linearly weighted averaging function. The approach was applied to develop new environmental covariates and then use them to produce soil texture maps in a low relief area located in south Manitoba, Canada. A total of 51 field soil sample sites were used to evaluate the developed environmental covariates. The results
Covariance and the hierarchy of frame bundles
NASA Technical Reports Server (NTRS)
Estabrook, Frank B.
1987-01-01
This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.
On covariance structure in noisy, big data
NASA Astrophysics Data System (ADS)
Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.
2013-09-01
Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
Covariant quantum mechanics applied to noncommutative geometry
NASA Astrophysics Data System (ADS)
Astuti, Valerio
2015-08-01
We here report a result obtained in collaboration with Giovanni Amelino-Camelia, first shown in the paper [1]. Applying the manifestly covariant formalism of quantum mechanics to the much studied Snyder spacetime [2] we show how it is trivial in every physical observables, this meaning that every measure in this spacetime gives the same results that would be obtained in the flat Minkowski spacetime.
Economical phase-covariant cloning with multiclones
NASA Astrophysics Data System (ADS)
Zhang, Wen-Hai; Ye, Liu
2009-09-01
This paper presents a very simple method to derive the explicit transformations of the optimal economical 1 to M phase-covariant cloning. The fidelity of clones reaches the theoretic bound [D'Ariano G M and Macchiavello C 2003 Phys. Rev. A 67 042306]. The derived transformations cover the previous contributions [Delgado Y, Lamata L et al., 2007 Phys. Rev. Lett. 98 150502] in which M must be odd.
Covariance expressions for eigenvalue and eigenvector problems
NASA Astrophysics Data System (ADS)
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite, differencing and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
Part Marking and Identification Materials on MISSE
NASA Technical Reports Server (NTRS)
Finckenor, Miria M.; Roxby, Donald L.
2008-01-01
Many different spacecraft materials were flown as part of the Materials on International Space Station Experiment (MISSE), including several materials used in part marking and identification. The experiment contained Data Matrix symbols applied using laser bonding, vacuum arc vapor deposition, gas assisted laser etch, chemical etch, mechanical dot peening, laser shot peening, and laser induced surface improvement. The effects of ultraviolet radiation on nickel acetate seal versus hot water seal on sulfuric acid anodized aluminum are discussed. These samples were exposed on the International Space Station to the low Earth orbital environment of atomic oxygen, ultraviolet radiation, thermal cycling, and hard vacuum, though atomic oxygen exposure was very limited for some samples. Results from the one-year exposure on MISSE-3 and MISSE-4 are compared to those from MISSE-1 and MISSE-2, which were exposed for four years. Part marking and identification materials on the current MISSE -6 experiment are also discussed.
Are all biases missing data problems?
Howe, Chanelle J.; Cain, Lauren E.; Hogan, Joseph W.
2015-01-01
Estimating causal effects is a frequent goal of epidemiologic studies. Traditionally, there have been three established systematic threats to consistent estimation of causal effects. These three threats are bias due to confounders, selection, and measurement error. Confounding, selection, and measurement bias have typically been characterized as distinct types of biases. However, each of these biases can also be characterized as missing data problems that can be addressed with missing data solutions. Here we describe how the aforementioned systematic threats arise from missing data as well as review methods and their related assumptions for reducing each bias type. We also link the assumptions made by the reviewed methods to the missing completely at random (MCAR) and missing at random (MAR) assumptions made in the missing data framework that allow for valid inferences to be made based on the observed, incomplete data. PMID:26576336
Using Covariance Analysis to Assess Pointing Performance
NASA Technical Reports Server (NTRS)
Bayard, David; Kang, Bryan
2009-01-01
A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA s Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
Shrinkage covariance matrix approach for microarray data
NASA Astrophysics Data System (ADS)
Karjanto, Suryaefiza; Aripin, Rasimah
2013-04-01
Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate for this purpose. One such technique is the Hotelling's T2 statistic which is a multivariate test statistic for comparing means between two groups. It requires that the number of observations (n) exceeds the number of genes (p) in the set but in microarray studies it is common that n < p. This leads to a biased estimate of the covariance matrix. In this study, the Hotelling's T2 statistic with the shrinkage approach is proposed to estimate the covariance matrix for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests using a widely analysed diabetes data set as illustrations. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
ANALYSIS OF COVARIANCE WITH SPATIALLY CORRELATED SECONDARY VARIABLES
Technology Transfer Automated Retrieval System (TEKTRAN)
Data sets which contain measurements on a spatially referenced response and covariate are analyzed using either co-kriging or spatial analysis of covariance. While co-kriging accounts for the correlation structure of the covariate, it is purely a predictive tool. Alternatively, spatial analysis of c...
Covariate Selection in Propensity Scores Using Outcome Proxies
ERIC Educational Resources Information Center
Kelcey, Ben
2011-01-01
This study examined the practical problem of covariate selection in propensity scores (PSs) given a predetermined set of covariates. Because the bias reduction capacity of a confounding covariate is proportional to the concurrent relationships it has with the outcome and treatment, particular focus is set on how we might approximate…
Eliciting Systematic Rule Use in Covariation Judgment [the Early Years].
ERIC Educational Resources Information Center
Shaklee, Harriet; Paszek, Donald
Related research suggests that children may show some simple understanding of event covariations by the early elementary school years. The present experiments use a rule analysis methodology to investigate covariation judgments of children in this age range. In Experiment 1, children in second, third and fourth grade judged covariations on 12…
Covariant Perturbation Expansion of Off-Diagonal Heat Kernel
NASA Astrophysics Data System (ADS)
Gou, Yu-Zi; Li, Wen-Du; Zhang, Ping; Dai, Wu-Sheng
2016-07-01
Covariant perturbation expansion is an important method in quantum field theory. In this paper an expansion up to arbitrary order for off-diagonal heat kernels in flat space based on the covariant perturbation expansion is given. In literature, only diagonal heat kernels are calculated based on the covariant perturbation expansion.
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments
ERIC Educational Resources Information Center
Barker, Lynne A.; Andrade, Jackie
2006-01-01
In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…
Covariant balance laws in continua with microstructure
NASA Astrophysics Data System (ADS)
Yavari, Arash; Marsden, Jerrold E.
2009-02-01
The purpose of this paper is to extend the Green-Naghdi-Rivlin balance of energy method to continua with microstructure. The key idea is to replace the group of Galilean transformations with the group of diffeomorphisms of the ambient space. A key advantage is that one obtains in a natural way all the needed balance laws on both the macro and micro levels along with two Doyle-Erickson formulas. We model a structured continuum as a triplet of Riemannian manifolds: a material manifold, the ambient space manifold of material particles and a director field manifold. The Green-Naghdi-Rivlin theorem and its extensions for structured continua are critically reviewed. We show that when the ambient space is Euclidean and when the microstructure manifold is the tangent space of the ambient space manifold, postulating a single balance of energy law and its invariance under time-dependent isometries of the ambient space, one obtains conservation of mass, balances of linear and angular momenta but not a separate balance of linear momentum. We develop a covariant elasticity theory for structured continua by postulating that energy balance is invariant under time-dependent spatial diffeomorphisms of the ambient space, which in this case is the product of two Riemannian manifolds. We then introduce two types of constrained continua in which microstructure manifold is linked to the reference and ambient space manifolds. In the case when at every material point, the microstructure manifold is the tangent space of the ambient space manifold at the image of the material point, we show that the assumption of covariance leads to balances of linear and angular momenta with contributions from both forces and micro-forces along with two Doyle-Ericksen formulas. We show that generalized covariance leads to two balances of linear momentum and a single coupled balance of angular momentum. Using this theory, we covariantly obtain the balance laws for two specific examples, namely elastic
ERIC Educational Resources Information Center
Sweller, Naomi
2015-01-01
Individuals with autism have difficulty generalising information from one situation to another, a process that requires the learning of categories and concepts. Category information may be learned through: (1) classifying items into categories, or (2) predicting missing features of category items. Predicting missing features has to this point been…
Quantum energy inequalities and local covariance II: categorical formulation
NASA Astrophysics Data System (ADS)
Fewster, Christopher J.
2007-11-01
We formulate quantum energy inequalities (QEIs) in the framework of locally covariant quantum field theory developed by Brunetti, Fredenhagen and Verch, which is based on notions taken from category theory. This leads to a new viewpoint on the QEIs, and also to the identification of a new structural property of locally covariant quantum field theory, which we call local physical equivalence. Covariant formulations of the numerical range and spectrum of locally covariant fields are given and investigated, and a new algebra of fields is identified, in which fields are treated independently of their realisation on particular spacetimes and manifestly covariant versions of the functional calculus may be formulated.
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating, both, the system and observation noise covariances of nonlinear dynamics, that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, to avoid an expensive computational cost in inverting error covariance matrices of product of innovation processes of different lags when the number of observations becomes large. When we use only product of innovation processes up to one-lag, the computational cost is indeed comparable to a recently proposed method by Berry–Sauer's. However, our method is more flexible since it allows for using information from product of innovation processes of more than one-lag. Extensive numerical comparisons between the proposed method and both the original Belanger's and Berry–Sauer's schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs and 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger's scheme on low-dimensional problems and has a wider range of more accurate estimates compared to Berry–Sauer's method on L-96 example.
Eddy covariance for quantifying trace gas fluxes from soils
NASA Astrophysics Data System (ADS)
Eugster, W.; Merbold, L.
2015-02-01
Soils are highly complex physical and biological systems, and hence measuring soil gas exchange fluxes with high accuracy and adequate spatial representativity remains a challenge. A technique which has become increasingly popular is the eddy covariance (EC) method. This method takes advantage of the fact that surface fluxes are mixed into the near-surface atmosphere via turbulence. As a consequence, measurements with an EC system can be done at some distance above the surface, providing accurate and spatially integrated flux density estimates. In this paper we provide a basic overview targeting scientists who are not familiar with the EC method. This review gives examples of successful deployments from a wide variety of ecosystems. The primary focus is on the three major greenhouse gases: carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). Several limitations to the application of EC systems exist, requiring a careful experimental design, which we discuss in detail. Thereby we group these experiments into two main classes: (1) manipulative experiments, and (2) survey-type experiments. Recommendations and examples of successful studies using various approaches are given, including the combination of EC flux measurements with online measurements of stable isotopes. We conclude that EC should not be considered a substitute to traditional (e.g., chamber based) flux measurements but instead an addition to them. The greatest strength of EC measurements in soil science are (1) their uninterrupted continuous measurement of gas concentrations and fluxes that can also capture short-term bursts of fluxes that easily could be missed by other methods and (2) the spatial integration covering the ecosystem scale (several square meters to hectares), thereby integrating over small-scale heterogeneity in the soil.
Eddy covariance for quantifying trace gas fluxes from soils
NASA Astrophysics Data System (ADS)
Eugster, W.; Merbold, L.
2014-10-01
Soils are highly complex physical and biological systems, and hence measuring soil gas exchange fluxes with high accuracy and adequate spatial representativity remains a challenge. A technique which has become increasingly popular is the eddy covariance (EC) method. This method takes advantage of the fact that surface fluxes are mixed into the near-surface atmosphere via turbulence. As a consequence, measurement with an EC system can be done at some distance above the surface, providing accurate and spatially integrated flux density estimates. In this paper we provide a basic overview targeting at scientists who are not familiar with the EC method. This reviews gives examples of successful deployments from a wide variety of ecosystems. The primary focus is on the three major greenhouse gases carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). Several limitations to the application of EC systems exist, requiring a careful experimental design, which we discuss in detail. Thereby we group these experiments into two main classes: (1) manipulative experiments, and (2) survey-type experiments. Recommendations and examples of successful studies using various approaches, including the combination of EC flux measurements with online measurements of stable isotopes are given. We conclude that EC should not be considered a substitution of traditional flux measurements, but an addition to the latter. The greatest strength of EC measurements in soil science are (1) their uninterrupted continuous measurement of gas concentrations and fluxes that also can capture short-term bursts of fluxes that easily could be missed by other methods; and (2) the spatial integration covering the ecosystem scale (several m2 to ha), thereby integrating over small-scale heterogeneity in the soil.
Realization of a universal and phase-covariant quantum cloning machine in separate cavities
Fang Baolong; Song Qingming; Ye Liu
2011-04-15
We present a scheme to realize a special quantum cloning machine in separate cavities. The quantum cloning machine can copy the quantum information from a photon pulse to two distant atoms. Choosing the different parameters, the method can perform optimal symmetric (asymmetric) universal quantum cloning and optimal symmetric (asymmetric) phase-covariant cloning.
Student versus Faculty Perceptions of Missing Class.
ERIC Educational Resources Information Center
Sleigh, Merry J.; Ritzer, Darren R.; Casey, Michael B.
2002-01-01
Examines and compares student and faculty attitudes towards students missing classes and class attendance. Surveys undergraduate students (n=231) in lower and upper level psychology courses and psychology faculty. Reports that students found more reasons acceptable for missing classes and that the amount of in-class material on the examinations…
Modeling Nonignorable Missing Data in Speeded Tests
ERIC Educational Resources Information Center
Glas, Cees A. W.; Pimentel, Jonald L.
2008-01-01
In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models: one for the observed response data and…
Methods for Handling Missing Secondary Respondent Data
ERIC Educational Resources Information Center
Young, Rebekah; Johnson, David
2013-01-01
Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…
Infilling missing hydrological data - methods and consequences
NASA Astrophysics Data System (ADS)
Bardossy, A.; Pegram, G. G.
2013-12-01
Hydrological observations are often incomplete - equipment malfunction, transmission errors and other technical problems lead to unwanted gaps in observation time series. Furthermore, due to financial and organizational problems, many observation networks are in continuous decline. As an ameliorating stratagem, short time gaps can be filled using information from other locations. The statistics of abandoned stations provide useful information for the process of extending records. In this contribution the authors present different methods for infilling gaps using: - nearest neighbours - simple and multiple linear regression - black box methods (fuzzy and neural nets) - Expectation Maximization - Copula based estimation The methods are used at different time scales for infilling precipitation from daily through pentads and months to years. The copula based estimation provides not only an estimator for the expected value, but also a probability distribution for each of the missing values. Thus the method can be used for conditional simulation of realizations. Observed precipitation data from the Cape region in South Africa are used to illustrate the intercomparison of the methodologies. The consequences of using [or not using] infilling and data extension are illustrated using a hydrological modelling example from South-West Germany.
Covariant constraints in ghost free massive gravity
Deffayet, C.; Mourad, J.; Zahariade, G. E-mail: mourad@apc.univ-paris7.fr
2013-01-01
We show that the reformulation of the de Rham-Gabadadze-Tolley massive gravity theory using vielbeins leads to a very simple and covariant way to count constraints, and hence degrees of freedom. Our method singles out a subset of theories, in the de Rham-Gabadadze-Tolley family, where an extra constraint, needed to eliminate the Boulware Deser ghost, is easily seen to appear. As a side result, we also introduce a new method, different from the Stuckelberg trick, to extract kinetic terms for the polarizations propagating in addition to those of the massless graviton.
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2012-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Covariant harmonic oscillators and coupled harmonic oscillators
NASA Technical Reports Server (NTRS)
Han, Daesoo; Kim, Young S.; Noz, Marilyn E.
1995-01-01
It is shown that the system of two coupled harmonic oscillators shares the basic symmetry properties with the covariant harmonic oscillator formalism which provides a concise description of the basic features of relativistic hadronic features observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.
Inferring Meta-covariates in Classification
NASA Astrophysics Data System (ADS)
Harris, Keith; McMillan, Lisa; Girolami, Mark
This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
Cosmology of a Covariant Galileon Field
NASA Astrophysics Data System (ADS)
de Felice, Antonio; Tsujikawa, Shinji
2010-09-01
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Missing observations in multiyear rotation sampling designs
NASA Technical Reports Server (NTRS)
Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)
1982-01-01
Because Multiyear estimation of at-harvest stratum crop proportions is more efficient than single year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case when a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principle recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two year design and then, based on the observed two year design after segment losses have been taken into account, choose the best possible three year design having the observed two year parent design.
NASA Astrophysics Data System (ADS)
Shu, Huisheng; Zhang, Sijing; Shen, Bo; Liu, Yurong
2016-07-01
This paper is concerned with the problem of simultaneous input and state estimation for a class of linear discrete-time systems with missing measurements and correlated noises. The missing measurements occur in a random way and are governed by a series of mutually independent random variables obeying a certain Bernoulli distribution. The process and measurement noises under consideration are correlated at the same time instant. Our attention is focused on the design of recursive estimators for both input and state such that, for all missing measurements and correlated noises, the estimators are unbiased and the estimation error covariances are minimized. This objective is achieved using direct algebraic operation and the design algorithm for the desired estimators is given. Finally, an illustrative example is presented to demonstrate the effectiveness of the proposed design scheme.
Semiparametric Estimation of Treatment Effect in a Pretest–Posttest Study with Missing Data
Davidian, Marie; Tsiatis, Anastasios A.; Leon, Selene
2008-01-01
The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest–posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates. PMID:19081743
Semiparametric Estimation of Treatment Effect in a Pretest-Posttest Study with Missing Data.
Davidian, Marie; Tsiatis, Anastasios A; Leon, Selene
2005-08-01
The pretest-posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest-posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates. PMID:19081743
Do goldfish miss the fundamental?
NASA Astrophysics Data System (ADS)
Fay, Richard R.
2003-10-01
The perception of harmonic complexes was studied in goldfish using classical respiratory conditioning and a stimulus generalization paradigm. Groups of animals were initially conditioned to several harmonic complexes with a fundamental frequency (f0) of 100 Hz. ln some cases the f0 component was present, and in other cases, the f0 component was absent. After conditioning, animals were tested for generalization to novel harmonic complexes having different f0's, some with f0 present and some with f0 absent. Generalization gradients always peaked at 100 Hz, indicating that the pitch value of the conditioning complexes was consistent with the f0, whether or not f0 was present in the conditioning or test complexes. Thus, goldfish do not miss the fundmental with respect to a pitch-like perceptual dimension. However, generalization gradients tended to have different skirt slopes for the f0-present and f0-absent conditioning and test stimuli. This suggests that goldfish distinguish between f0 present/absent stimuli, probably on the basis of a timbre-like perceptual dimension. These and other results demonstrate that goldfish respond to complex sounds as if they possessed perceptual dimensions similar to pitch and timbre as defined for human and other vertebrate listeners. [Work supported by NIH/NIDCD.
Yao, Hui; Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin
2015-01-01
Summary Multivariate meta-regression models are commonly used in settings where the response variable is naturally multi-dimensional. Such settings are common in cardiovascular and diabetes studies where the goal is to study cholesterol levels once a certain medication is given. In this setting, the natural multivariate endpoint is Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). In this paper, we examine study level (aggregate) multivariate meta-data from 26 Merck sponsored double-blind, randomized, active or placebo-controlled clinical trials on adult patients with primary hypercholesterolemia. Our goal is to develop a methodology for carrying out Bayesian inference for multivariate meta-regression models with study level data when the within-study sample covariance matrix S for the multivariate response data is partially observed. Specifically, the proposed methodology is based on postulating a multivariate random effects regression model with an unknown within-study covariance matrix Σ in which we treat the within-study sample correlations as missing data, the standard deviations of the within-study sample covariance matrix S are assumed observed, and given Σ, S follows a Wishart distribution. Thus, we treat the off-diagonal elements of S as missing data, and these missing elements are sampled from the appropriate full conditional distribution in a Markov chain Monte Carlo (MCMC) sampling scheme via a novel transformation based on partial correlations. We further propose several structures (models) for Σ, which allow for borrowing strength across different treatment arms and trials. The proposed methodology is assessed using simulated as well as real data, and the results are shown to be quite promising. PMID:26257452
Comparative Analysis of Evapotranspiration Using Eddy Covariance
NASA Astrophysics Data System (ADS)
BAE, H.; Ji, H.; Lee, B.; Nam, K.; Jang, B.; Lee, C.; Jung, H.
2013-12-01
The eddy covariance method has been widely used to quantify evapotranspiration. However, independent measurements of energy components such as latent heat flux, sensible heat flux often lead to under-measurements, this is commonly known as a lack of closure of the surface energy balance. In response to this methodological problem, this study is addressed specifically to correction of the latent and heat sensible fluxes. The energy components observed in agricultural and grassland from January 2013 were measured using the eddy covariance method. As a result of the comparison of the available energy (Rn-G) with the sum of the latent and sensible heat fluxes, R-Squared values were 0.72 in the agricultural land, 0.78 in the grassland, indicating that the latent and sensible heat fluxes were under-measured. The obtained latent and sensible heat fluxes were then modified using the Bowen-ratio closure method. After this correction process, the values of the sum of the latent and sensible heat fluxes have increased by 39.7 percent in the agricultural land, 32.2 percent in the grassland respectively. Evapotranspiration will be calculated with both the unmodified and modified latent heat flux values, the results will be then thoroughly compared. The results will be finally verified by comparison with evapotranspiration obtained from energy balance based model.
Noisy covariance matrices and portfolio optimization
NASA Astrophysics Data System (ADS)
Pafka, S.; Kondor, I.
2002-05-01
According to recent findings [#!bouchaud!#,#!stanley!#], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [#!bouchaud!#], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [#!bouchaud!#,#!stanley!#] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
Estimating the power spectrum covariance matrix with fewer mock samples
NASA Astrophysics Data System (ADS)
Pearson, David W.; Samushia, Lado
2016-03-01
The covariance matrices of power-spectrum (P(k)) measurements from galaxy surveys are difficult to compute theoretically. The current best practice is to estimate covariance matrices by computing a sample covariance of a large number of mock catalogues. The next generation of galaxy surveys will require thousands of large volume mocks to determine the covariance matrices to desired accuracy. The errors in the inverse covariance matrix are larger and scale with the number of P(k) bins, making the problem even more acute. We develop a method of estimating covariance matrices using a theoretically justified, few-parameter model, calibrated with mock catalogues. Using a set of 600 BOSS DR11 mock catalogues, we show that a seven parameter model is sufficient to fit the covariance matrix of BOSS DR11 P(k) measurements. The covariance computed with this method is better than the sample covariance at any number of mocks and only ˜100 mocks are required for it to fully converge and the inverse covariance matrix converges at the same rate. This method should work equally well for the next generation of galaxy surveys, although a demand for higher accuracy may require adding extra parameters to the fitting function.
Unsupervised segmentation of polarimetric SAR data using the covariance matrix
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Chellappa, Rama; Dubois, Pascale C.
1992-01-01
A method for unsupervised segmentation of polarimetric synthetic aperture radar (SAR) data into classes of homogeneous microwave polarimetric backscatter characteristics is presented. Classes of polarimetric backscatter are selected on the basis of a multidimensional fuzzy clustering of the logarithm of the parameters composing the polarimetric covariance matrix. The clustering procedure uses both polarimetric amplitude and phase information, is adapted to the presence of image speckle, and does not require an arbitrary weighting of the different polarimetric channels; it also provides a partitioning of each data sample used for clustering into multiple clusters. Given the classes of polarimetric backscatter, the entire image is classified using a maximum a posteriori polarimetric classifier. Four-look polarimetric SAR complex data of lava flows and of sea ice acquired by the NASA/JPL airborne polarimetric radar (AIRSAR) are segmented using this technique. The results are discussed and compared with those obtained using supervised techniques.
A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu
2007-01-01
Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…
Toward a Mexican eddy covariance network for carbon cycle science
NASA Astrophysics Data System (ADS)
Vargas, Rodrigo; Yépez, Enrico A.
2011-09-01
First Annual MexFlux Principal Investigators Meeting; Hermosillo, Sonora, Mexico, 4-8 May 2011; The carbon cycle science community has organized a global network, called FLUXNET, to measure the exchange of energy, water, and carbon dioxide (CO2) between the ecosystems and the atmosphere using the eddy covariance technique. This network has provided unprecedented information for carbon cycle science and global climate change but is mostly represented by study sites in the United States and Europe. Thus, there is an important gap in measurements and understanding of ecosystem dynamics in other regions of the world that are seeing a rapid change in land use. Researchers met under the sponsorship of Red Temática de Ecosistemas and Consejo Nacional de Ciencia y Tecnologia (CONACYT) to discuss strategies to establish a Mexican eddy covariance network (MexFlux) by identifying researchers, study sites, and scientific goals. During the meeting, attendees noted that 10 study sites have been established in Mexico with more than 30 combined years of information. Study sites span from new sites installed during 2011 to others with 9 to 6 years of measurements. Sites with the longest span measurements are located in Baja California Sur (established by Walter Oechel in 2002) and Sonora (established by Christopher Watts in 2005); both are semiarid ecosystems. MexFlux sites represent a variety of ecosystem types, including Mediterranean and sarcocaulescent shrublands in Baja California; oak woodland, subtropical shrubland, tropical dry forest, and a grassland in Sonora; tropical dry forests in Jalisco and Yucatan; a managed grassland in San Luis Potosi; and a managed pine forest in Hidalgo. Sites are maintained with an individual researcher's funds from Mexican government agencies (e.g., CONACYT) and international collaborations, but no coordinated funding exists for a long-term program.
Acquiring observation error covariance information for land data assimilation systems
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent work has presented the initial application of adaptive filtering techniques to land surface data assimilation systems. Such techniques are motivated by our current lack of knowledge concerning the structure of large-scale error in either land surface modeling output or remotely-sensed estimat...
AFCI-2.0 Library of Neutron Cross Section Covariances
Herman, M.; Herman,M.; Oblozinsky,P.; Mattoon,C.; Pigni,M.; Hoblit,S.; Mughabghab,S.F.; Sonzogni,A.; Talou,P.; Chadwick,M.B.; Hale.G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G.
2011-06-26
Neutron cross section covariance library has been under development by BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) has been completed in October 2010 and made available to the users for comments. In the final 2.0 release, covariances for a few materials were updated, in particular new LANL evaluations for {sup 238,240}Pu and {sup 241}Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.
Covariates of intravenous paracetamol pharmacokinetics in adults
2014-01-01
Background Pharmacokinetic estimates for intravenous paracetamol in individual adult cohorts are different to a certain extent, and understanding the covariates of these differences may guide dose individualization. In order to assess covariate effects of intravenous paracetamol disposition in adults, pharmacokinetic data on discrete studies were pooled. Methods This pooled analysis was based on 7 studies, resulting in 2755 time-concentration observations in 189 adults (mean age 46 SD 23 years; weight 73 SD 13 kg) given intravenous paracetamol. The effects of size, age, pregnancy and other clinical settings (intensive care, high dependency, orthopaedic or abdominal surgery) on clearance and volume of distribution were explored using non-linear mixed effects models. Results Paracetamol disposition was best described using normal fat mass (NFM) with allometric scaling as a size descriptor. A three-compartment linear disposition model revealed that the population parameter estimates (between subject variability,%) were central volume (V1) 24.6 (55.5%) L/70 kg with peripheral volumes of distribution V2 23.1 (49.6%) L/70 kg and V3 30.6 (78.9%) L/70 kg. Clearance (CL) was 16.7 (24.6%) L/h/70 kg and inter-compartment clearances were Q2 67.3 (25.7%) L/h/70 kg and Q3 2.04 (71.3%) L/h/70 kg. Clearance and V2 decreased only slightly with age. Sex differences in clearance were minor and of no significance. Clearance, relative to median values, was increased during pregnancy (FPREG = 1.14) and decreased during abdominal surgery (FABDCL = 0.715). Patients undergoing orthopaedic surgery had a reduced V2 (FORTHOV = 0.649), while those in intensive care had increased V2 (FICV = 1.51). Conclusions Size and age are important covariates for paracetamol pharmacokinetics explaining approximately 40% of clearance and V2 variability. Dose individualization in adult subpopulations would achieve little benefit in the scenarios explored. PMID:25342929
Flux Partitioning by Isotopic Eddy Covariance
NASA Astrophysics Data System (ADS)
Wehr, R.; Munger, J. W.; Nelson, D. D.; McManus, J. B.; Zahniser, M. S.; Wofsy, S. C.; Saleska, S. R.
2011-12-01
Net ecosystem-atmosphere exchange of CO2 is routinely measured by eddy covariance at sites around the world, but studies of ecosystem processes are more interested in the gross photosynthetic and respiratory fluxes that comprise the net flux. The standard method of partitioning the net flux into these components has been to extrapolate nighttime respiration into daytime based on a relationship between nighttime respiration, temperature, and sometimes moisture. However, such relationships generally account for only a small portion of the variation in nighttime respiration, and the assumption that they can predict respiration throughout the day is dubious. A promising alternate method, known as isotopic flux partitioning, works by identifying the stable isotopic signatures of photosynthesis and respiration in the CO2 flux. We have used this method to partition the net flux at Harvard Forest, MA, based on eddy covariance measurements of the net 12CO2 and 13CO2 fluxes (as well as measurements of the sensible and latent heat fluxes and other meteorological variables). The CO2 isotopologues were measured at 4 Hz by an Aerodyne quantum cascade laser spectrometer with a δ13C precision of 0.4 % in 0.25 sec and 0.02 % in 100 sec. In the absence of such high-frequency, high-precision isotopic measurements, past attempts at isotopic flux partitioning have combined isotopic flask measurements with high-frequency (total) CO2 measurements to estimate the isoflux (the EC/flask approach). Others have used a conditional flask sampling approach called hyperbolic relaxed eddy accumulation (HREA). We 'sampled' our data according to each of these approaches, for comparison, and found disagreement in the calculated fluxes of ~10% for the EC/flask approach, and ~30% for HREA, at midday. To our knowledge, this is the first example of flux partitioning by isotopic eddy covariance. Wider use of this method, enabled by a new generation of laser spectrometers, promises to open a new window
Testing of NASA LaRC Materials under MISSE 6 and MISSE 7 Missions
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.
2009-01-01
The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment for several months. MISSE missions provide an opportunity for developing space qualifiable materials. Two lasers and a few optical components from NASA Langley Research Center (LaRC) were included in the MISSE 6 mission for long term exposure. MISSE 6 items were characterized and packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability due to launch conditions. MISSE 6 was transported to the international Space Station (ISS) via STS 123 on March 11. 2008. The astronauts successfully attached the PEC to external handrails of the ISS and opened the PEC for long term exposure to the space environment. The current plan is to bring the MISSE 6 PEC back to the Earth via STS 128 mission scheduled for launch in August 2009. Currently, preparations for launching the MISSE 7 mission are progressing. Laser and lidar components assembled on a flight-worthy platform are included from NASA LaRC. MISSE 7 launch is scheduled to be launched on STS 129 mission. This paper will briefly review recent efforts on MISSE 6 and MISSE 7 missions at NASA Langley Research Center (LaRC).
NASA Astrophysics Data System (ADS)
Skordis, Constantinos
2006-11-01
A relativistic theory of gravity has recently been proposed by Bekenstein, where gravity is mediated by a tensor, a vector, and a scalar field, thus called TeVeS. The theory aims at modifying gravity in such a way as to reproduce Milgrom’s modified Newtonian dynamics (MOND) in the weak field, nonrelativistic limit, which provides a framework to solve the missing mass problem in galaxies without invoking dark matter. In this paper I apply a covariant approach to formulate the cosmological equations for this theory, for both the background and linear perturbations. I derive the necessary perturbed equations for scalar, vector, and tensor modes without adhering to a particular gauge. Special gauges are considered in the appendixes.
Covariant entropy bound and loop quantum cosmology
Ashtekar, Abhay; Wilson-Ewing, Edward
2008-09-15
We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.
Covariant Lyapunov analysis of chaotic Kolmogorov flows.
Inubushi, Masanobu; Kobayashi, Miki U; Takehiro, Shin-ichi; Yamada, Michio
2012-01-01
Hyperbolicity is an important concept in dynamical system theory; however, we know little about the hyperbolicity of concrete physical systems including fluid motions governed by the Navier-Stokes equations. Here, we study numerically the hyperbolicity of the Navier-Stokes equation on a two-dimensional torus (Kolmogorov flows) using the method of covariant Lyapunov vectors developed by Ginelli et al. [Phys. Rev. Lett. 99, 130601 (2007)]. We calculate the angle between the local stable and unstable manifolds along an orbit of chaotic solution to evaluate the hyperbolicity. We find that the attractor of chaotic Kolmogorov flows is hyperbolic at small Reynolds numbers, but that smaller angles between the local stable and unstable manifolds are observed at larger Reynolds numbers, and the attractor appears to be nonhyperbolic at a certain Reynolds numbers. Also, we observed some relations between these hyperbolic properties and physical properties such as time correlation of the vorticity and the energy dissipation rate. PMID:22400681
Generation of phase-covariant quantum cloning
Karimipour, V.; Rezakhani, A.T.
2002-11-01
It is known that in phase-covariant quantum cloning, the equatorial states on the Bloch sphere can be cloned with a fidelity higher than the optimal bound established for universal quantum cloning. We generalize this concept to include other states on the Bloch sphere with a definite z component of spin. It is shown that once we know the z component, we can always clone a state with a fidelity higher than the universal value and that of equatorial states. We also make a detailed study of the entanglement properties of the output copies and show that the equatorial states are the only states that give rise to a separable density matrix for the outputs.
EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.
HERMAN,M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H. CHO, Y.-S.
2007-04-22
The EMPIRE code system is being extended to cover the resolved and unresolved resonance region employing proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Another directions of Empire expansion are uncertainties and correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method that has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large scale calculations and their impact on nuclear model parameters are discussed along with the exciting perspectives offered by the parallel supercomputing.
Covariant chronogeometry and extreme distances: Elementary particles
Segal, I. E.; Jakobsen, H. P.; Ørsted, B.; Paneitz, S. M.; Speh, B.
1981-01-01
We study a variant of elementary particle theory in which Minkowski space, M0, is replaced by a natural alternative, the unique four-dimensional manifold ¯M with comparable properties of causality and symmetry. Free particles are considered to be associated (i) with positive-energy representations in bundles of prescribed spin over ¯M of the group of causality-preserving transformations on ¯M (or its mass-conserving subgroup) and (ii) with corresponding wave equations. In this study these bundles, representations, and equations are detailed, and some of their basic features are developed in the cases of spins 0 and ½. Preliminaries to a general study are included; issues of covariance, unitarity, and positivity of the energy are treated; appropriate quantum numbers are indicated; and possible physical applications are discussed. PMID:16593075
Covariant generalization of cosmological perturbation theory
Enqvist, Kari; Hoegdahl, Janne; Nurmi, Sami; Vernizzi, Filippo
2007-01-15
We present an approach to cosmological perturbations based on a covariant perturbative expansion between two worldlines in the real inhomogeneous universe. As an application, at an arbitrary order we define an exact scalar quantity which describes the inhomogeneities in the number of e-folds on uniform density hypersurfaces and which is conserved on all scales for a barotropic ideal fluid. We derive a compact form for its conservation equation at all orders and assign it a simple physical interpretation. To make a comparison with the standard perturbation theory, we develop a method to construct gauge-invariant quantities in a coordinate system at arbitrary order, which we apply to derive the form of the nth order perturbation in the number of e-folds on uniform density hypersurfaces and its exact evolution equation. On large scales, this provides the gauge-invariant expression for the curvature perturbation on uniform density hypersurfaces and its evolution equation at any order.
Conformal killing tensors and covariant Hamiltonian dynamics
Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.
2014-12-15
A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.
Covariant non-commutative space-time
NASA Astrophysics Data System (ADS)
Heckman, Jonathan J.; Verlinde, Herman
2015-05-01
We introduce a covariant non-commutative deformation of 3 + 1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space-time isometries. The non-commutative algebra is defined on space-times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes active part in the non-commutative algebra, which for dS4 takes the form of so (5, 1), while for AdS4 it assembles into so (4, 2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
A covariance analysis algorithm for interconnected systems
NASA Technical Reports Server (NTRS)
Cheng, Victor H. L.; Curley, Robert D.; Lin, Ching-An
1987-01-01
A covariance analysis algorithm for propagation of signal statistics in arbitrarily interconnected nonlinear systems is presented which is applied to six-degree-of-freedom systems. The algorithm uses statistical linearization theory to linearize the nonlinear subsystems, and the resulting linearized subsystems are considered in the original interconnection framework for propagation of the signal statistics. Some nonlinearities commonly encountered in six-degree-of-freedom space-vehicle models are referred to in order to illustrate the limitations of this method, along with problems not encountered in standard deterministic simulation analysis. Moreover, the performance of the algorithm shall be numerically exhibited by comparing results using such techniques to Monte Carlo analysis results, both applied to a simple two-dimensional space-intercept problem.
A covariant treatment of cosmic parallax
Räsänen, Syksy
2014-03-01
The Gaia satellite will soon probe parallax on cosmological distances. Using the covariant formalism and considering the angle between a pair of sources, we find parallax for both spacelike and timelike separation between observation points. Our analysis includes both intrinsic parallax and parallax due to observer motion. We propose a consistency condition that tests the FRW metric using the parallax distance and the angular diameter distance. This test is purely kinematic and relies only on geometrical optics, it is independent of matter content and its relation to the spacetime geometry. We study perturbations around the FRW model, and find that they should be taken into account when analysing observations to determine the parallax distance.
Nuclear Forensics Analysis with Missing and Uncertain Data
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
2015-10-05
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained bymore » replacing missing information with constant values.« less
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\\Omega_m$-$\\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\\gtrsim 85\\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\\Sigma_8 \\sim \\sigma_8 \\Omega_m^{0.5}$ derived from internally estimated covariances is $\\sim 90\\%$ of the true uncertainty.
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in themore » $$\\Omega_m$$-$$\\sigma_8$$ plane as measured with internally estimated covariance matrices is on average $$\\gtrsim 85\\%$$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $$\\Sigma_8 \\sim \\sigma_8 \\Omega_m^{0.5}$$ derived from internally estimated covariances is $$\\sim 90\\%$$ of the true uncertainty.« less
A Product Partition Model With Regression on Covariates
Müller, Peter; Quintana, Fernando; Rosner, Gary L.
2011-01-01
We propose a probability model for random partitions in the presence of covariates. In other words, we develop a model-based clustering algorithm that exploits available covariates. The motivating application is predicting time to progression for patients in a breast cancer trial. We proceed by reporting a weighted average of the responses of clusters of earlier patients. The weights should be determined by the similarity of the new patient’s covariate with the covariates of patients in each cluster. We achieve the desired inference by defining a random partition model that includes a regression on covariates. Patients with similar covariates are a priori more likely to be clustered together. Posterior predictive inference in this model formalizes the desired prediction. We build on product partition models (PPM). We define an extension of the PPM to include a regression on covariates by including in the cohesion function a new factor that increases the probability of experimental units with similar covariates to be included in the same cluster. We discuss implementations suitable for any combination of continuous, categorical, count, and ordinal covariates. An implementation of the proposed model as R-package is available for download. PMID:21566678
How much do genetic covariances alter the rate of adaptation?
Agrawal, Aneil F.; Stinchcombe, John R.
2008-01-01
Genetically correlated traits do not evolve independently, and the covariances between traits affect the rate at which a population adapts to a specified selection regime. To measure the impact of genetic covariances on the rate of adaptation, we compare the rate fitness increases given the observed G matrix to the expected rate if all the covariances in the G matrix are set to zero. Using data from the literature, we estimate the effect of genetic covariances in real populations. We find no net tendency for covariances to constrain the rate of adaptation, though the quality and heterogeneity of the data limit the certainty of this result. There are some examples in which covariances strongly constrain the rate of adaptation but these are balanced by counter examples in which covariances facilitate the rate of adaptation; in many cases, covariances have little or no effect. We also discuss how our metric can be used to identify traits or suites of traits whose genetic covariances to other traits have a particularly large impact on the rate of adaptation. PMID:19129097
Subirà, Marta; Cano, Marta; de Wit, Stella J.; Alonso, Pino; Cardoner, Narcís; Hoexter, Marcelo Q.; Kwon, Jun Soo; Nakamae, Takashi; Lochner, Christine; Sato, João R.; Jung, Wi Hoon; Narumoto, Jin; Stein, Dan J.; Pujol, Jesus; Mataix-Cols, David; Veltman, Dick J.; Menchón, José M.; van den Heuvel, Odile A.; Soriano-Mas, Carles
2016-01-01
Background Frontostriatal and frontoamygdalar connectivity alterations in patients with obsessive–compulsive disorder (OCD) have been typically described in functional neuroimaging studies. However, structural covariance, or volumetric correlations across distant brain regions, also provides network-level information. Altered structural covariance has been described in patients with different psychiatric disorders, including OCD, but to our knowledge, alterations within frontostriatal and frontoamygdalar circuits have not been explored. Methods We performed a mega-analysis pooling structural MRI scans from the Obsessive–compulsive Brain Imaging Consortium and assessed whole-brain voxel-wise structural covariance of 4 striatal regions (dorsal and ventral caudate nucleus, and dorsal-caudal and ventral-rostral putamen) and 2 amygdalar nuclei (basolateral and centromedial-superficial). Images were preprocessed with the standard pipeline of voxel-based morphometry studies using Statistical Parametric Mapping software. Results Our analyses involved 329 patients with OCD and 316 healthy controls. Patients showed increased structural covariance between the left ventral-rostral putamen and the left inferior frontal gyrus/frontal operculum region. This finding had a significant interaction with age; the association held only in the subgroup of older participants. Patients with OCD also showed increased structural covariance between the right centromedial-superficial amygdala and the ventromedial prefrontal cortex. Limitations This was a cross-sectional study. Because this is a multisite data set analysis, participant recruitment and image acquisition were performed in different centres. Most patients were taking medication, and treatment protocols differed across centres. Conclusion Our results provide evidence for structural network–level alterations in patients with OCD involving 2 frontosubcortical circuits of relevance for the disorder and indicate that structural
MISSE 6-Testing Materials in Space
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S; Kinard, William H.
2008-01-01
The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment by placing them in space environment for several months. In this paper, a few materials and components from NASA Langley Research Center (LaRC) that have been flown on MISSE 6 mission will be discussed. These include laser and optical elements for photonic devices. The pre-characterized MISSE 6 materials were packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability due to launch conditions. Subsequently, the MISSE 6 PEC was transported by the STS-123 mission to International Space Station (ISS) on March 11, 2008. The astronauts successfully attached the PEC to external handrails and opened the PEC for long term exposure to the space environment.
ADHD More Often Missed in Minority Kids
... page: https://medlineplus.gov/news/fullstory_160571.html ADHD More Often Missed in Minority Kids Study found ... percentage of black children show the symptoms of attention-deficit/hyperactivity disorder (ADHD) than white kids, they are less likely ...
Missed Radiation Therapy and Cancer Recurrence
Patients who miss radiation therapy sessions during cancer treatment have an increased risk of their disease returning, even if they eventually complete their course of radiation treatment, according to a new study.
Discovery of a missing disease spreader
NASA Astrophysics Data System (ADS)
Maeno, Yoshiharu
2011-10-01
This study presents a method to discover an outbreak of an infectious disease in a region for which data are missing, but which is at work as a disease spreader. Node discovery for the spread of an infectious disease is defined as discriminating between the nodes which are neighboring to a missing disease spreader node, and the rest, given a dataset on the number of cases. The spread is described by stochastic differential equations. A perturbation theory quantifies the impact of the missing spreader on the moments of the number of cases. Statistical discriminators examine the mid-body or tail-ends of the probability density function, and search for the disturbance from the missing spreader. They are tested with computationally synthesized datasets, and applied to the SARS outbreak and flu pandemic.
Men Miss Out on Bone Loss Screening
... page: https://medlineplus.gov/news/fullstory_158810.html Men Miss Out on Bone Loss Screening Yet, millions ... THURSDAY, May 12, 2016 (HealthDay News) -- Unlike women, men at risk for osteoporosis don't get routinely ...
How Rutherford missed discovering quantum mechanical identity
NASA Astrophysics Data System (ADS)
Temmer, G. M.
1989-03-01
An interesting quirk in the energy dependence of alpha-particle scattering from helium caused Lord Rutherford to miss a major discovery—namely, the consequences of quantum mechanical identity—before their prediction by Mott a short time later.
Missing solution in a Cornell potential
NASA Astrophysics Data System (ADS)
Castro, L. B.; de Castro, A. S.
2013-11-01
Missing bound-state solutions for fermions in the background of a Cornell potential consisting of a mixed scalar-vector-pseudoscalar coupling is examined. Charge-conjugation operation, degeneracy and localization are discussed.
Clustering with Missing Values: No Imputation Required
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri
2004-01-01
Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
Men Miss Out on Bone Loss Screening
... nlm.nih.gov/medlineplus/news/fullstory_158810.html Men Miss Out on Bone Loss Screening Yet, millions ... THURSDAY, May 12, 2016 (HealthDay News) -- Unlike women, men at risk for osteoporosis don't get routinely ...
Diet History Questionnaire II: Missing & Error Codes
A missing code indicates that the respondent skipped a question when a response was required. An error character indicates that the respondent marked two or more responses to a question where only one answer was appropriate.
The missing international link for carbon control
Ferrey, Steven
2009-04-15
Ultimately, the challenge is not technological, nor even financial. The challenge is legal and regulatory, and the missing link is the institutional mechanism and model to steer and implement low-carbon power choices in developing countries. (author)
Some Activities of MISSE 6 Mission
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.
2009-01-01
The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment for several months. In this paper, a few laser and optical elements from NASA Langley Research Center (LaRC) that have been flown on MISSE 6 mission will be discussed. These items were characterized and packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability due to launch conditions. Subsequently, the MISSE 6 PEC was transported by the STS-123 mission to International Space Station (ISS) on March 11, 2008. The astronauts successfully attached the PEC to external handrails and opened the PEC for long term exposure to the space environment. The plan is to retrieve the MISSE 6 PEC by STS-128 mission in August 2009.
Analysis of longitudinal data from animals with missing values using SPSS.
Duricki, Denise A; Soleman, Sara; Moon, Lawrence D F
2016-06-01
Testing of therapies for disease or injury often involves the analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly when some data are missing), yet they are not used widely by preclinical researchers. Here we provide an easy-to-use protocol for the analysis of longitudinal data from animals, and we present a click-by-click guide for performing suitable analyses using the statistical package IBM SPSS Statistics software (SPSS). We guide readers through the analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. If a few data points are missing, as in this example data set (for example, because of animal dropout), repeated-measures analysis of covariance may fail to detect a treatment effect. An alternative analysis method, such as the use of linear models (with various covariance structures), and analysis using restricted maximum likelihood estimation (to include all available data) can be used to better detect treatment effects. This protocol takes 2 h to carry out. PMID:27196723
Missing Mass Measurement Using Kinematic Cusp
Kim, Ian-Woo
2010-02-10
We propose a new method for mass measurement of missing energy particle using cusp structure in the kinematic distribution. We consider a resonance particle decay into a pair of missing energy particles and a pair of visible particles and show invariant mass and angular distribution have non-smooth profiles. The cusp location only depends on mass parameters. Invariant mass and angular distribution are complementary in visibility of the cusp.
Winnicott and Lacan: a missed encounter?
Vanier, Alain
2012-04-01
Winnicott was able to say that Lacan's paper on the mirror stage "had certainly influenced" him, while Lacan argued that he found his object a in Winnicott's transitional object. By following the development of their personal relations, as well as of their theoretical discussions, it is possible to argue that this was a missed encounter--yet a happily missed one, since the misunderstandings of their theoretical exchanges allowed each of them to clarify concepts otherwise difficult to discern. PMID:22768481
Estimated Environmental Exposures for MISSE-7B
NASA Technical Reports Server (NTRS)
Finckenor, Miria M.; Moore, Chip; Norwood, Joseph K.; Henrie, Ben; DeGroh, Kim
2012-01-01
This paper details the 18-month environmental exposure for Materials International Space Station Experiment 7B (MISSE-7B) ram and wake sides. This includes atomic oxygen, ultraviolet radiation, particulate radiation, thermal cycling, meteoroid/space debris impacts, and observed contamination. Atomic oxygen fluence was determined by measured mass and thickness loss of polymers of known reactivity. Diodes sensitive to ultraviolet light actively measured solar radiation incident on the experiment. Comparisons to earlier MISSE flights are discussed.
Ergodicity test of the eddy-covariance technique
NASA Astrophysics Data System (ADS)
Chen, J.; Hu, Y.; Yu, Y.; Lü, S.
2015-09-01
The ergodic hypothesis is a basic hypothesis typically invoked in atmospheric surface layer (ASL) experiments. The ergodic theorem of stationary random processes is introduced to analyse and verify the ergodicity of atmospheric turbulence measured using the eddy-covariance technique with two sets of field observational data. The results show that the ergodicity of atmospheric turbulence in atmospheric boundary layer (ABL) is relative not only to the atmospheric stratification but also to the eddy scale of atmospheric turbulence. The eddies of atmospheric turbulence, of which the scale is smaller than the scale of the ABL (i.e. the spatial scale is less than 1000 m and temporal scale is shorter than 10 min), effectively satisfy the ergodic theorems. Under these restrictions, a finite time average can be used as a substitute for the ensemble average of atmospheric turbulence, whereas eddies that are larger than ABL scale dissatisfy the mean ergodic theorem. Consequently, when a finite time average is used to substitute for the ensemble average, the eddy-covariance technique incurs large errors due to the loss of low-frequency information associated with larger eddies. A multi-station observation is compared with a single-station observation, and then the scope that satisfies the ergodic theorem is extended from scales smaller than the ABL, approximately 1000 m to scales greater than about 2000 m. Therefore, substituting the finite time average for the ensemble average of atmospheric turbulence is more faithfully approximate the actual values. Regardless of vertical velocity or temperature, the variance of eddies at different scales follows Monin-Obukhov similarity theory (MOST) better if the ergodic theorem can be satisfied; if not it deviates from MOST. The exploration of ergodicity in atmospheric turbulence is doubtlessly helpful in understanding the issues in atmospheric turbulent observations and provides a theoretical basis for overcoming related difficulties.
Part Marking and Identification Materials' for MISSE
NASA Technical Reports Server (NTRS)
Roxby, Donald; Finckenor, Miria M.
2008-01-01
The Materials on International Space Station Experiment (MISSE) is being conducted with funding from NASA and the U.S. Department of Defense, in order to evaluate candidate materials and processes for flight hardware. MISSE modules include test specimens used to validate NASA technical standards for part markings exposed to harsh environments in low-Earth orbit and space, including: atomic oxygen, ultraviolet radiation, thermal vacuum cycling, and meteoroid and orbital debris impact. Marked test specimens are evaluated and then mounted in a passive experiment container (PEC) that is affixed to an exterior surface on the International Space Station (ISS). They are exposed to atomic oxygen and/or ultraviolet radiation for a year or more before being retrieved and reevaluated. Criteria include percent contrast, axial uniformity, print growth, error correction, and overall grade. MISSE 1 and 2 (2001-2005), MISSE 3 and 4 (2006-2007), and MISSE 5 (2005-2006) have been completed to date. Acceptable results were found for test specimens marked with Data Matrix(TradeMark) symbols by Intermec Inc. and Robotic Vision Systems Inc using: laser bonding, vacuum arc vapor deposition, gas assisted laser etch, chemical etch, mechanical dot peening, laser shot peening, laser etching, and laser induced surface improvement. MISSE 6 (2008-2009) is exposing specimens marked by DataLase(Registed TradeMark), Chemico technologies Inc., Intermec Inc., and tesa with laser-markable paint, nanocode tags, DataLase and tesa laser markings, and anodized metal labels.
MISSE 1 and 2 Tray Temperature Measurements
NASA Technical Reports Server (NTRS)
Harvey, Gale A.; Kinard, William H.
2006-01-01
The Materials International Space Station Experiment (MISSE 1 & 2) was deployed August 10,2001 and retrieved July 30,2005. This experiment is a co-operative endeavor by NASA-LaRC. NASA-GRC, NASA-MSFC, NASA-JSC, the Materials Laboratory at the Air Force Research Laboratory, and the Boeing Phantom Works. The objective of the experiment is to evaluate performance, stability, and long term survivability of materials and components planned for use by NASA and DOD on future LEO, synchronous orbit, and interplanetary space missions. Temperature is an important parameter in the evaluation of space environmental effects on materials. The MISSE 1 & 2 had autonomous temperature data loggers to measure the temperature of each of the four experiment trays. The MISSE tray-temperature data loggers have one external thermistor data channel, and a 12 bit digital converter. The MISSE experiment trays were exposed to the ISS space environment for nearly four times the nominal design lifetime for this experiment. Nevertheless, all of the data loggers provided useful temperature measurements of MISSE. The temperature measurement system has been discussed in a previous paper. This paper presents temperature measurements of MISSE payload experiment carriers (PECs) 1 and 2 experiment trays.
Alfred Stadler, Franz Gross
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a {phi}{sup 4}-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
Bryan, M.F.; Piepel, G.F.; Simpson, D.B.
1996-03-01
The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to transuranic and high-level radioactive waste in borosilicate class. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting class must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resultlng glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation. manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components or univariate situations, covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating uncertainty In values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimation of variance components; those based on the statistical method analysis of variance are discussed, as are the strengths and weaknesses of this approach.
Contextualized Network Analysis: Theory and Methods for Networks with Node Covariates
NASA Astrophysics Data System (ADS)
Binkiewicz, Norbert M.
Biological and social systems consist of myriad interacting units. The interactions can be intuitively represented in the form of a graph or network. Measurements of these graphs can reveal the underlying structure of these interactions, which provides insight into the systems that generated the graphs. Moreover, in applications such as neuroconnectomics, social networks, and genomics, graph data is accompanied by contextualizing measures on each node. We leverage these node covariates to help uncover latent communities, using a modification of spectral clustering. Statistical guarantees are provided under a joint mixture model called the node contextualized stochastic blockmodel, including a bound on the mis-clustering rate. For most simulated conditions, covariate assisted spectral clustering yields superior results relative to both regularized spectral clustering without node covariates and an adaptation of canonical correlation analysis. We apply covariate assisted spectral clustering to large brain graphs derived from diffusion MRI, using the node locations or neurological regions as covariates. In both cases, covariate assisted spectral clustering yields clusters that are easier to interpret neurologically. A low rank update algorithm is developed to reduce the computational cost of determining the tuning parameter for covariate assisted spectral clustering. As simulations demonstrate, the low rank update algorithm increases the speed of covariate assisted spectral clustering up to ten-fold, while practically matching the clustering performance of the standard algorithm. Graphs with node attributes are sometimes accompanied by ground truth labels that align closely with the latent communities in the graph. We consider the example of a mouse retina neuron network accompanied by the neuron spatial location and neuronal cell types. In this example, the neuronal cell type is considered a ground truth label. Current approaches for defining neuronal cell type vary
ERIC Educational Resources Information Center
Karabatsos, G.; Walker, S.G.
2010-01-01
Causal inference is central to educational research, where in data analysis the aim is to learn the causal effects of educational treatments on academic achievement, to evaluate educational policies and practice. Compared to a correlational analysis, a causal analysis enables policymakers to make more meaningful statements about the efficacy of…
NASA Astrophysics Data System (ADS)
Hirschi, M.; Seneviratne, S. I.
2009-04-01
In a previous paper, negative correlations between domain-averaged spring and autumn precipitation of the same year were found in two domains covering France and Central Europe for the period 1972-1990 (Hirschi et al. 2007). Here we further investigate this link and its temporal evolution over France during the 20th century and relate it to the atmospheric circulation. The link is analyzed using observational data sets of precipitation, mean sea level pressure and teleconnection patterns. Moreover, we analyze various global and regional climate models in terms of this phenomenon. The temporal evolution of the described link in precipitation over France is analyzed over the 20th century by means of a running correlation with a 30-year time window. The investigation of various observational precipitation data sets reveals a decreasing trend in the spring to autumn correlations, which become significantly negative in the second half of the last century. These negative correlations can be explained by significantly negative spring to autumn correlations in observed mean sea level pressure, and by the significantly negatively correlated spring East Atlantic and autumn Scandinavian teleconnection patterns. Except for the ERA-40 driven regional climate models from ENSEMBLES, the analyzed regional and global climate models, including IPCC AR4 simulations, do not capture this observed variability in precipitation. This is associated with a failure of most models in simulating the observed correlations between spring and autumn mean sea level pressure. References: Hirschi, M., S. I. Seneviratne, S. Hagemann, and C. Schär (2007). Analysis of seasonal terrestrial water storage variations in regional climate simulations over Europe. J. Geophys. Res., 112(D22109):doi:10.1029/2006JD008338.
Sartori, E.
1992-12-31
This paper presents a brief review of computer codes concerned with checking, plotting, processing and using of covariances of neutron cross-section data. It concentrates on those available from the computer code information centers of the United States and the OECD/Nuclear Energy Agency. Emphasis will be placed also on codes using covariances for specific applications such as uncertainty analysis, data adjustment and data consistency analysis. Recent evaluations contain neutron cross section covariance information for all isotopes of major importance for technological applications of nuclear energy. It is therefore important that the available software tools needed for taking advantage of this information are widely known as hey permit the determination of better safety margins and allow the optimization of more economic, I designs of nuclear energy systems.
A pure $S$-wave covariant model for the nucleon
Franz Gross; G. Ramalho; M.T. Pena
2008-01-01
Using the manifestly covariant spectator theory, and modeling the nucleon as a system of three constituent quarks with their own electromagnetic structure, we show that all four nucleon electromagnetic form factors can be very well described by a manifestly covariant nucleon wave function with zero orbital angular momentum.
Handling Correlations between Covariates and Random Slopes in Multilevel Models
ERIC Educational Resources Information Center
Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…
Covariation Is a Poor Measure of Molecular Coevolution.
Talavera, David; Lovell, Simon C; Whelan, Simon
2015-09-01
Recent developments in the analysis of amino acid covariation are leading to breakthroughs in protein structure prediction, protein design, and prediction of the interactome. It is assumed that observed patterns of covariation are caused by molecular coevolution, where substitutions at one site affect the evolutionary forces acting at neighboring sites. Our theoretical and empirical results cast doubt on this assumption. We demonstrate that the strongest coevolutionary signal is a decrease in evolutionary rate and that unfeasibly long times are required to produce coordinated substitutions. We find that covarying substitutions are mostly found on different branches of the phylogenetic tree, indicating that they are independent events that may or may not be attributable to coevolution. These observations undermine the hypothesis that molecular coevolution is the primary cause of the covariation signal. In contrast, we find that the pairs of residues with the strongest covariation signal tend to have low evolutionary rates, and that it is this low rate that gives rise to the covariation signal. Slowly evolving residue pairs are disproportionately located in the protein's core, which explains covariation methods' ability to detect pairs of residues that are close in three dimensions. These observations lead us to propose the "coevolution paradox": The strength of coevolution required to cause coordinated changes means the evolutionary rate is so low that such changes are highly unlikely to occur. As modern covariation methods may lead to breakthroughs in structural genomics, it is critical to recognize their biases and limitations. PMID:25944916
Application of covariant analytic mechanics to gravity with Dirac field
NASA Astrophysics Data System (ADS)
Nakajima, Satoshi
2016-03-01
We applied the covariant analytic mechanics with the differential forms to the Dirac field and the gravity with the Dirac field. The covariant analytic mechanics treats space and time on an equal footing regarding the differential forms as the basis variables. A significant feature of the covariant analytic mechanics is that the canonical equations, in addition to the Euler-Lagrange equation, are not only manifestly general coordinate covariant but also gauge covariant. Combining our study and the previous works (the scalar field, the abelian and non-abelian gauge fields and the gravity without the Dirac field), the applicability of the covariant analytic mechanics was checked for all fundamental fields. We studied both the first and second order formalism of the gravitational field coupled with matters including the Dirac field. It was suggested that gravitation theories including higher order curvatures cannot be treated by the second order formalism in the covariant analytic mechanics. In addition, we showed that the covariant analytic mechanics is equivalent to corrected De Donder-Weyl theory.
CHANGING THE SUPPORT OF A SPATIAL COVARIATE: A SIMULATION STUDY
Technology Transfer Automated Retrieval System (TEKTRAN)
Researchers are increasingly able to capture spatially referenced data on both a response and a covariate more frequently and in more detail. A combination of geostatisical models and analysis of covariance methods is used to analyze such data. However, basic questions regarding the effects of using...
Performance of internal covariance estimators for cosmic shear correlation functions
NASA Astrophysics Data System (ADS)
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2016-03-01
Data re-sampling methods such as delete-one jackknife, bootstrap or the sub-sample covariance are common tools for estimating the covariance of large-scale structure probes. We investigate different implementations of these methods in the context of cosmic shear two-point statistics. Using lognormal simulations of the convergence field and the corresponding shear field we generate mock catalogues of a known and realistic covariance. For a survey of {˜ } 5000 ° ^2 we find that jackknife, if implemented by deleting sub-volumes of galaxies, provides the most reliable covariance estimates. Bootstrap, in the common implementation of drawing sub-volumes of galaxies, strongly overestimates the statistical uncertainties. In a forecast for the complete 5-yr Dark Energy Survey, we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ωm-σ8 plane as measured with internally estimated covariance matrices is on average ≳85 per cent of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ _8 ˜ σ _8 Ω _m^{0.5} derived from internally estimated covariances is ˜90 per cent of the true uncertainty.
Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies
ERIC Educational Resources Information Center
Chen, Jianshen; Kaplan, David
2015-01-01
Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…
The Regression Trunk Approach to Discover Treatment Covariate Interaction
ERIC Educational Resources Information Center
Dusseldorp, Elise; Meulman, Jacqueline J.
2004-01-01
The regression trunk approach (RTA) is an integration of regression trees and multiple linear regression analysis. In this paper RTA is used to discover treatment covariate interactions, in the regression of one continuous variable on a treatment variable with "multiple" covariates. The performance of RTA is compared to the classical method of…
Conditional Covariance Theory and Detect for Polytomous Items
ERIC Educational Resources Information Center
Zhang, Jinming
2007-01-01
This paper extends the theory of conditional covariances to polytomous items. It has been proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, given an appropriately chosen composite is positive if, and only if, the two items…
Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling
ERIC Educational Resources Information Center
Lee, Taehun; Cai, Li
2012-01-01
Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…
Reconstruction of missing daily streamflow data using dynamic regression models
NASA Astrophysics Data System (ADS)
Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault
2015-12-01
River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short data-gaps in this information can cause extremely different analysis outputs. Therefore, reconstructing missing data of incomplete data sets is an important step regarding the performance of the environmental models, engineering, and research applications, thus it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access to only daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average models (ARIMA) called dynamic regression model. This model uses the linear relationship between neighbor and correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow data for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
NASA Astrophysics Data System (ADS)
Berge, Léonie; Litaize, Olivier; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Pénéliau, Yannick; Regnier, David
2016-02-01
As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) are available in the evaluated library JEFF-3.2, although present in ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They are of great influence on the final covariance matrix, and therefore, on the spectrum uncertainty band width. In addition to this covariance estimation work, we have also investigated the importance on a reactor calculation of the fission spectrum model choice. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particles emission during fission.
UDU/T/ covariance factorization for Kalman filtering
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1980-01-01
There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU(T), where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
Feature Selection with Missing Data
ERIC Educational Resources Information Center
Sarkar, Saurabh
2013-01-01
In the modern world information has become the new power. An increasing amount of efforts are being made to gather data, resources being allocated, time being invested and tools being developed. Data collection is no longer a myth; however, it remains a great challenge to create value out of the enormous data that is being collected. Data modeling…
Family Learning: The Missing Exemplar
ERIC Educational Resources Information Center
Dentzau, Michael W.
2013-01-01
As a supporter of informal and alternative learning environments for science learning I am pleased to add to the discussion generated by Adriana Briseno-Garzon's article, "More than science: family learning in a Mexican science museum". I am keenly aware of the value of active family involvement in education in general, and science education in…
ICT and Pedagogy: Opportunities Missed?
ERIC Educational Resources Information Center
Adams, Paul
2011-01-01
The pace of Information and Communications Technology (ICT) development necessitates radical and rapid change for education. Given the English prevalence for an economically determinist orientation for educational outcomes, it seems pertinent to ask how learning in relation to ICT is to be conceptualised. Accepting the view that education needs to…
ERIC Educational Resources Information Center
Weinberg, Francine
Publishers of high school composition textbooks gather information about the "book market" through outside statistical analyses or case studies and through their own interviews and polls. Recently, such studies and interviews have revealed significant differences in classroom practices. Consequently, publishers are faced with such questions as (1)…
Shin, Yongyun; Raudenbush, Stephen W
2013-01-01
This article extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth-step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children. PMID:24077621
Predicting the risk of toxic blooms of golden alga from cell abundance and environmental covariates
Patino, Reynaldo; VanLandeghem, Matthew M.; Denny, Shawn
2016-01-01
Golden alga (Prymnesium parvum) is a toxic haptophyte that has caused considerable ecological damage to marine and inland aquatic ecosystems worldwide. Studies focused primarily on laboratory cultures have indicated that toxicity is poorly correlated with the abundance of golden alga cells. This relationship, however, has not been rigorously evaluated in the field where environmental conditions are much different. The ability to predict toxicity using readily measured environmental variables and golden alga abundance would allow managers rapid assessments of ichthyotoxicity potential without laboratory bioassay confirmation, which requires additional resources to accomplish. To assess the potential utility of these relationships, several a priori models relating lethal levels of golden alga ichthyotoxicity to golden alga abundance and environmental covariates were constructed. Model parameters were estimated using archived data from four river basins in Texas and New Mexico (Colorado, Brazos, Red, Pecos). Model predictive ability was quantified using cross-validation, sensitivity, and specificity, and the relative ranking of environmental covariate models was determined by Akaike Information Criterion values and Akaike weights. Overall, abundance was a generally good predictor of ichthyotoxicity as cross validation of golden alga abundance-only models ranged from ∼ 80% to ∼ 90% (leave-one-out cross-validation). Environmental covariates improved predictions, especially the ability to predict lethally toxic events (i.e., increased sensitivity), and top-ranked environmental covariate models differed among the four basins. These associations may be useful for monitoring as well as understanding the abiotic factors that influence toxicity during blooms.
Bertram, Susan M; Fitzsimmons, Lauren P; McAuley, Emily M; Rundle, Howard D; Gorelick, Root
2012-01-01
The phenotypic variance–covariance matrix (P) describes the multivariate distribution of a population in phenotypic space, providing direct insight into the appropriateness of measured traits within the context of multicollinearity (i.e., do they describe any significant variance that is independent of other traits), and whether trait covariances restrict the combinations of phenotypes available to selection. Given the importance of P, it is therefore surprising that phenotypic covariances are seldom jointly analyzed and that the dimensionality of P has rarely been investigated in a rigorous statistical framework. Here, we used a repeated measures approach to quantify P separately for populations of four cricket species using seven acoustic signaling traits thought to enhance mate attraction. P was of full or almost full dimensionality in all four species, indicating that all traits conveyed some information that was independent of the other traits, and that phenotypic trait covariances do not constrain the combinations of signaling traits available to selection. P also differed significantly among species, although the dominant axis of phenotypic variation (pmax) was largely shared among three of the species (Acheta domesticus, Gryllus assimilis, G. texensis), but different in the fourth (G. veletis). In G. veletis and A. domesticus, but not G. assimilis and G. texensis, pmax was correlated with body size, while pmax was not correlated with residual mass (a condition measure) in any of the species. This study reveals the importance of jointly analyzing phenotypic traits. PMID:22408735
Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II
Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch.
2015-01-15
The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.
40 CFR 98.445 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... following missing data procedures: (a) A quarterly flow rate of CO2 received that is missing must be...) A quarterly CO2 concentration of a CO2 stream received that is missing must be estimated as follows... quantity of CO2 injected that is missing must be estimated using a representative quantity of CO2...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... according to the procedures in § 98.114(b) if data are missing. (b) For missing records of the monthly mass... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions...
40 CFR 75.31 - Initial missing data procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Initial missing data procedures. 75.31... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing data..., or O2 concentration data, and moisture data. For each hour of missing SO2 or CO2...
40 CFR 75.31 - Initial missing data procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Initial missing data procedures. 75.31... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing data..., or O2 concentration data, and moisture data. For each hour of missing SO2 or CO2...
40 CFR 75.31 - Initial missing data procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Initial missing data procedures. 75.31... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing data..., or O2 concentration data, and moisture data. For each hour of missing SO2 or CO2...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... according to the procedures in § 98.114(b) if data are missing. (b) For missing records of the monthly mass... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions...
Treatment of Missing Data in Workforce Education Research
ERIC Educational Resources Information Center
Gemici, Sinan; Rojewski, Jay W.; Lee, In Heok
2012-01-01
Most quantitative analyses in workforce education are affected by missing data. Traditional approaches to remedy missing data problems often result in reduced statistical power and biased parameter estimates due to systematic differences between missing and observed values. This article examines the treatment of missing data in pertinent…
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... procedures used for all such missing value estimates. (a) For each missing value of the weekly composite...
40 CFR 98.385 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. You must follow the procedures for estimating missing data in § 98... estimating missing data for petroleum products in § 98.395 also applies to coal-to-liquid products....
Correcting eddy-covariance flux underestimates over a grassland.
Twine, T. E.; Kustas, W. P.; Norman, J. M.; Cook, D. R.; Houser, P. R.; Meyers, T. P.; Prueger, J. H.; Starks, P. J.; Wesely, M. L.; Environmental Research; Univ. of Wisconsin at Madison; DOE; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administrationoratory
2000-06-08
Independent measurements of the major energy balance flux components are not often consistent with the principle of conservation of energy. This is referred to as a lack of closure of the surface energy balance. Most results in the literature have shown the sum of sensible and latent heat fluxes measured by eddy covariance to be less than the difference between net radiation and soil heat fluxes. This under-measurement of sensible and latent heat fluxes by eddy-covariance instruments has occurred in numerous field experiments and among many different manufacturers of instruments. Four eddy-covariance systems consisting of the same models of instruments were set up side-by-side during the Southern Great Plains 1997 Hydrology Experiment and all systems under-measured fluxes by similar amounts. One of these eddy-covariance systems was collocated with three other types of eddy-covariance systems at different sites; all of these systems under-measured the sensible and latent-heat fluxes. The net radiometers and soil heat flux plates used in conjunction with the eddy-covariance systems were calibrated independently and measurements of net radiation and soil heat flux showed little scatter for various sites. The 10% absolute uncertainty in available energy measurements was considerably smaller than the systematic closure problem in the surface energy budget, which varied from 10 to 30%. When available-energy measurement errors are known and modest, eddy-covariance measurements of sensible and latent heat fluxes should be adjusted for closure. Although the preferred method of energy balance closure is to maintain the Bowen-ratio, the method for obtaining closure appears to be less important than assuring that eddy-covariance measurements are consistent with conservation of energy. Based on numerous measurements over a sorghum canopy, carbon dioxide fluxes, which are measured by eddy covariance, are underestimated by the same factor as eddy covariance evaporation
A regulatory perspective on missing data in the aftermath of the NRC report.
LaVange, Lisa M; Permutt, Thomas
2016-07-30
The issuance of a report in 2010 by the National Research Council (NRC) of the National Academy of Sciences entitled 'The Prevention and Treatment of Missing Data in Clinical Trials,' commissioned by the US Food and Drug Administration, had an immediate impact on the way that statisticians and clinical researchers in both industry and regulatory agencies think about the missing data problem. We believe that there is currently great potential to improve study quality and interpretability-by reducing the amount of missing data through changes in trial design and conduct and by planning and conducting analyses that better account for the missing information. Here, we describe our view on some of the recommendations in the report and suggest ways in which these recommendations can be incorporated into new or ongoing clinical trials in order to improve their chance of success. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26677837
Spectral measurements of Terrestrial Mars Analogues: support for the ExoMars - Ma_Miss instrument
NASA Astrophysics Data System (ADS)
De Angelis, S.; De Sanctis, M. C.; Ammannito, E.; Di Iorio, T.; Carli, C.; Frigeri, A.; Capria, M. T.; Federico, C.; Boccaccini, A.; Capaccioni, F.; Giardino, M.; Cerroni, P.; Palomba, E.; Piccioni, G.
2013-09-01
The Ma_Miss (Mars Multispectral Imager for Subsurface Studies) instrument onboard of ExoMars 2018 mission to Mars will investigate the Martian subsoil down to a depth of 2 meters [1]. Ma_Miss is a miniaturized spectrometer, completely integrated within the drilling system of the ExoMars Pasteur rover; it will acquire spectra in the range 0.4-2.2μm, from the excavated borehole wall. The spectroscopic investigation of the subsurface materials will give us precious information about mineralogical, petrologic and geological processes, and will give insights about materials that have not been modified by surface processes such as erosion, weathering or oxidation. Spectroscopic measurements have been performed on Terrestrial Mars Analogues with the Ma_Miss laboratory model (breadboard). Moreover spectroscopic investigation of different sets of Terrestrial Mars Analogues is being carried on with different laboratory setups, as a support for the ExoMars-Ma_Miss instrument.
Super-sample covariance in simulations
NASA Astrophysics Data System (ADS)
Li, Yin; Hu, Wayne; Takada, Masahiro
2014-04-01
Using separate universe simulations, we accurately quantify super-sample covariance (SSC), the typically dominant sampling error for matter power spectrum estimators in a finite volume, which arises from the presence of super survey modes. By quantifying the power spectrum response to a background mode, this approach automatically captures the separate effects of beat coupling in the quasilinear regime, halo sample variance in the nonlinear regime and a new dilation effect which changes scales in the power spectrum coherently across the survey volume, including the baryon acoustic oscillation scale. It models these effects at typically the few percent level or better with a handful of small volume simulations for any survey geometry compared with directly using many thousands of survey volumes in a suite of large-volume simulations. The stochasticity of the response is sufficiently small that in the quasilinear regime, SSC can be alternately included by fitting the mean density in the volume with these fixed templates in parameter estimation. We also test the halo model prescription and find agreement typically at better than the 10% level for the response.
The Hopfield model revisited: covariance and quantization
NASA Astrophysics Data System (ADS)
Belgiorno, F.; Cacciatori, S. L.; Dalla Piazza, F.
2016-01-01
There are several possible applications of quantum electrodynamics in dielectric media which require a quantum description for the electromagnetic field interacting with matter fields. The associated quantum models can refer to macroscopic electromagnetic fields or, alternatively, to mesoscopic fields (polarization fields) describing an effective interaction between electromagnetic field and matter fields. We adopt the latter approach, and focus on the Hopfield model for the electromagnetic field in a dielectric dispersive medium in a framework in which space-time dependent mesoscopic parameters occur, like susceptibility, matter resonance frequency, and also coupling between electromagnetic field and polarization field. Our most direct goal is to describe in a phenomenological way a space-time varying dielectric perturbation induced by means of the Kerr effect in nonlinear dielectric media. This extension of the model is implemented by means of a Lorentz-invariant Lagrangian which, for constant microscopic parameters, and in the rest frame, coincides with the standard one. Moreover, we deduce a covariant scalar product and provide a canonical quantization scheme which takes into account the constraints implicit in the model. Examples of viable applications are indicated.
Relativistically Covariant Many-Body Perturbation Procedure
NASA Astrophysics Data System (ADS)
Lindgren, Ingvar; Salomonson, Sten; Hedendahl, Daniel
A covariant evolution operator (CEO) can be constructed, representing the time evolution of the relativistic wave unction or state vector. Like the nonrelativistic version, it contains (quasi-)singularities. The regular part is referred to as the Green’s operator (GO), which is the operator analogue of the Green’s function (GF). This operator, which is a field-theoretical concept, is closely related to the many-body wave operator and effective Hamiltonian, and it is the basic tool for our unified theory. The GO leads, when the perturbation is carried to all orders, to the Bethe-Salpeter equation (BSE) in the equal-time or effective-potential approximation. When relaxing the equal-time restriction, the procedure is fully compatible with the exact BSE. The calculations are performed in the photonic Fock space, where the number of photons is no longer constant. The procedure has been applied to helium-like ions, and the results agree well with S-matrix results in cases when comparison can be performed. In addition, evaluation of higher-order quantum-electrodynamical (QED) correlational effects has been performed, and the effects are found to be quite significant for light and medium-heavy ions.
Holographic bound in covariant loop quantum gravity
NASA Astrophysics Data System (ADS)
Tamaki, Takashi
2016-07-01
We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulas which relate physical quantities such as horizon area to the parameter characterizing holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulas. These results tell us that the holographic bound is satisfied in the large area limit and the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we also consider Bose-Einstein statistics and show that the above formulas are also useful in this case. By applying the formulas, we can understand intrinsic features of Bose-Einstein condensate which corresponds to the case when the horizon area almost consists of punctures in the ground state. When this phenomena occurs, the area is approximately constant against the parameter characterizing the temperature. When this phenomena is broken, the area shows rapid increase which suggests the phase transition from quantum to classical area.
Canonical quantization of Galilean covariant field theories
NASA Astrophysics Data System (ADS)
Santos, E. S.; de Montigny, M.; Khanna, F. C.
2005-11-01
The Galilean-invariant field theories are quantized by using the canonical method and the five-dimensional Lorentz-like covariant expressions of non-relativistic field equations. This method is motivated by the fact that the extended Galilei group in 3 + 1 dimensions is a subgroup of the inhomogeneous Lorentz group in 4 + 1 dimensions. First, we consider complex scalar fields, where the Schrödinger field follows from a reduction of the Klein-Gordon equation in the extended space. The underlying discrete symmetries are discussed, and we calculate the scattering cross-sections for the Coulomb interaction and for the self-interacting term λΦ4. Then, we turn to the Dirac equation, which, upon dimensional reduction, leads to the Lévy-Leblond equations. Like its relativistic analogue, the model allows for the existence of antiparticles. Scattering amplitudes and cross-sections are calculated for the Coulomb interaction, the electron-electron and the electron-positron scattering. These examples show that the so-called 'non-relativistic' approximations, obtained in low-velocity limits, must be treated with great care to be Galilei-invariant. The non-relativistic Proca field is discussed briefly.
Generalized Covariant Gyrokinetic Dynamics of Magnetoplasmas
Cremaschini, C.; Tessarotto, M.; Nicolini, P.; Beklemishev, A.
2008-12-31
A basic prerequisite for the investigation of relativistic astrophysical magnetoplasmas, occurring typically in the vicinity of massive stellar objects (black holes, neutron stars, active galactic nuclei, etc.), is the accurate description of single-particle covariant dynamics, based on gyrokinetic theory (Beklemishev et al., 1999-2005). Provided radiation-reaction effects are negligible, this is usually based on the assumption that both the space-time metric and the EM fields (in particular the magnetic field) are suitably prescribed and are considered independent of single-particle dynamics, while allowing for the possible presence of gravitational/EM perturbations driven by plasma collective interactions which may naturally arise in such systems. The purpose of this work is the formulation of a generalized gyrokinetic theory based on the synchronous variational principle recently pointed out (Tessarotto et al., 2007) which permits to satisfy exactly the physical realizability condition for the four-velocity. The theory here developed includes the treatment of nonlinear perturbations (gravitational and/or EM) characterized locally, i.e., in the rest frame of a test particle, by short wavelength and high frequency. Basic feature of the approach is to ensure the validity of the theory both for large and vanishing parallel electric field. It is shown that the correct treatment of EM perturbations occurring in the presence of an intense background magnetic field generally implies the appearance of appropriate four-velocity corrections, which are essential for the description of single-particle gyrokinetic dynamics.
Geometric median for missing rainfall data imputation
NASA Astrophysics Data System (ADS)
Burhanuddin, Siti Nur Zahrah Amin; Deni, Sayang Mohd; Ramli, Norazan Mohamed
2015-02-01
Missing data is a common problem faced by researchers in environmental studies. Environmental data, particularly, rainfall data are highly vulnerable to be missed, which is due to several reasons, such as malfunction instrument, incorrect measurements, and relocation of stations. Rainfall data are also affected by the presence of outliers due to the temporal and spatial variability of rainfall measurements. These problems may harm the quality of rainfall data and subsequently, produce inaccuracy in the results of analysis. Thus, this study is aimed to propose an imputation method that is robust towards the presence of outliers for treating the missing rainfall data. Geometric median was applied to estimate the missing values based on the available rainfall data from neighbouring stations. The method was compared with several conventional methods, such as normal ratio and inverse distance weighting methods, in order to evaluate its performance. Thirteen rainfall stations in Peninsular Malaysia were selected for the application of the imputation methods. The results indicated that the proposed method provided the most accurate estimation values compared to both conventional methods based on the least mean absolute error. The normal ratio was found to be the worst method in estimating the missing rainfall values.
MISSE PEACE Polymers Atomic Oxygen Erosion Results
NASA Technical Reports Server (NTRS)
deGroh, Kim, K.; Banks, Bruce A.; McCarthy, Catherine E.; Rucker, Rochelle N.; Roberts, Lily M.; Berger, Lauren A.
2006-01-01
Forty-one different polymer samples, collectively called the Polymer Erosion and Contamination Experiment (PEACE) Polymers, have been exposed to the low Earth orbit (LEO) environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of Materials International Space Station Experiment 2 (MISSE 2). The objective of the PEACE Polymers experiment was to determine the atomic oxygen erosion yield of a wide variety of polymeric materials after long term exposure to the space environment. The polymers range from those commonly used for spacecraft applications, such as Teflon (DuPont) FEP, to more recently developed polymers, such as high temperature polyimide PMR (polymerization of monomer reactants). Additional polymers were included to explore erosion yield dependence upon chemical composition. The MISSE PEACE Polymers experiment was flown in MISSE Passive Experiment Carrier 2 (PEC 2), tray 1, on the exterior of the ISS Quest Airlock and was exposed to atomic oxygen along with solar and charged particle radiation. MISSE 2 was successfully retrieved during a space walk on July 30, 2005, during Discovery s STS-114 Return to Flight mission. Details on the specific polymers flown, flight sample fabrication, pre-flight and post-flight characterization techniques, and atomic oxygen fluence calculations are discussed along with a summary of the atomic oxygen erosion yield results. The MISSE 2 PEACE Polymers experiment is unique because it has the widest variety of polymers flown in LEO for a long duration and provides extremely valuable erosion yield data for spacecraft design purposes.
Methods for handling missing data in palliative care research.
Fielding, S; Fayers, P M; Loge, J H; Jordhøy, M S; Kaasa, S
2006-12-01
Missing data is a common problem in palliative care research due to the special characteristics (deteriorating condition, fatigue and cachexia) of the population. Using data from a palliative study, we illustrate the problems that missing data can cause and show some approaches for dealing with it. Reasons for missing data and ways to deal with missing data (including complete case analysis, imputation and modelling procedures) are explored. Possible mechanisms behind the missing data are: missing completely at random, missing at random or missing not at random. In the example study, data are shown to be missing at random. Imputation of missing data is commonly used (including last value carried forward, regression procedures and simple mean). Imputation affects subsequent summary statistics and analyses, and can have a substantial impact on estimated group means and standard deviations. The choice of imputation method should be carried out with caution and the effects reported. PMID:17148533
Family learning: the missing exemplar
NASA Astrophysics Data System (ADS)
Dentzau, Michael W.
2013-06-01
As a supporter of informal and alternative learning environments for science learning I am pleased to add to the discussion generated by Adriana Briseño-Garzón's article, "More than science: family learning in a Mexican science museum". I am keenly aware of the value of active family involvement in education in general, and science education in particular, and the portrait provided from a Mexican science museum adds to the literature of informal education through a specific sociocultural lens. I add, however, that while acknowledging the powerful the role of family in Latin American culture, the issue transcends these confines and is instead a cross-cutting topic within education as a whole. I also discuss the ease at which in an effort to call attention to cultural differences one can, by the very act, unintentionally marginalize others.
Generation of covariance data among values from a single set of experiments
Smith, D.L.
1992-12-01
Modern nuclear data evaluation methods demand detailed uncertainty information for all input results to be considered. It can be shown from basic statistical principles that provision of a covariance matrix for a set of data provides the necessary information for its proper consideration in the context of other included experimental data and/or a priori representations of the physical parameters in question. This paper examines how an experimenter should go about preparing the covariance matrix for any single experimental data set he intends to report. The process involves detailed examination of the experimental procedures, identification of all error sources (both random and systematic); and consideration of any internal discrepancies. Some specific examples are given to illustrate the methods and principles involved.
Generation of covariance data among values from a single set of experiments
Smith, D.L.
1992-01-01
Modern nuclear data evaluation methods demand detailed uncertainty information for all input results to be considered. It can be shown from basic statistical principles that provision of a covariance matrix for a set of data provides the necessary information for its proper consideration in the context of other included experimental data and/or a priori representations of the physical parameters in question. This paper examines how an experimenter should go about preparing the covariance matrix for any single experimental data set he intends to report. The process involves detailed examination of the experimental procedures, identification of all error sources (both random and systematic); and consideration of any internal discrepancies. Some specific examples are given to illustrate the methods and principles involved.
Communication: Three-fold covariance imaging of laser-induced Coulomb explosions.
Pickering, James D; Amini, Kasra; Brouard, Mark; Burt, Michael; Bush, Ian J; Christensen, Lauge; Lauer, Alexandra; Nielsen, Jens H; Slater, Craig S; Stapelfeldt, Henrik
2016-04-28
We apply a three-fold covariance imaging method to analyse previously acquired data [C. S. Slater et al., Phys. Rev. A 89, 011401(R) (2014)] on the femtosecond laser-induced Coulomb explosion of spatially pre-aligned 3,5-dibromo-3',5'-difluoro-4'-cyanobiphenyl molecules. The data were acquired using the "Pixel Imaging Mass Spectrometry" camera. We show how three-fold covariance imaging of ionic photofragment recoil trajectories can be used to provide new information about the parent ion's molecular structure prior to its Coulomb explosion. In particular, we show how the analysis may be used to obtain information about molecular conformation and provide an alternative route for enantiomer determination. PMID:27131523
Communication: Three-fold covariance imaging of laser-induced Coulomb explosions
NASA Astrophysics Data System (ADS)
Pickering, James D.; Amini, Kasra; Brouard, Mark; Burt, Michael; Bush, Ian J.; Christensen, Lauge; Lauer, Alexandra; Nielsen, Jens H.; Slater, Craig S.; Stapelfeldt, Henrik
2016-04-01
We apply a three-fold covariance imaging method to analyse previously acquired data [C. S. Slater et al., Phys. Rev. A 89, 011401(R) (2014)] on the femtosecond laser-induced Coulomb explosion of spatially pre-aligned 3,5-dibromo-3',5'-difluoro-4'-cyanobiphenyl molecules. The data were acquired using the "Pixel Imaging Mass Spectrometry" camera. We show how three-fold covariance imaging of ionic photofragment recoil trajectories can be used to provide new information about the parent ion's molecular structure prior to its Coulomb explosion. In particular, we show how the analysis may be used to obtain information about molecular conformation and provide an alternative route for enantiomer determination.
A covariance analysis tool for assessing fundamental limits of SIM pointing performance
NASA Astrophysics Data System (ADS)
Bayard, David S.; Kang, Bryan H.
2007-09-01
This paper presents a performance analysis of the instrument pointing control system for NASA's Space Interferometer Mission (SIM). SIM has a complex pointing system that uses a fast steering mirror in combination with a multirate control architecture to blend feedforward information with feedback information. A pointing covariance analysis tool (PCAT) is developed specifically to analyze systems with such complexity. The development of PCAT as a mathematical tool for covariance analysis is outlined in the paper. PCAT is then applied to studying performance of SIM's science pointing system. The analysis reveals and clearly delineates a fundamental limit that exists for SIM pointing performance. The limit is especially stringent for dim star targets. Discussion of the nature of the performance limit is provided, and methods are suggested to potentially improve pointing performance.
A Covariance Analysis Tool for Assessing Fundamental Limits of SIM Pointing Performance
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.
2007-01-01
This paper presents a performance analysis of the instrument pointing control system for NASA's Space Interferometer Mission (SIM). SIM has a complex pointing system that uses a fast steering mirror in combination with a multirate control architecture to blend feed forward information with feedback information. A pointing covariance analysis tool (PCAT) is developed specifically to analyze systems with such complexity. The development of PCAT as a mathematical tool for covariance analysis is outlined in the paper. PCAT is then applied to studying performance of SIM's science pointing system. The analysis reveals and clearly delineates a fundamental limit that exists for SIM pointing performance. The limit is especially stringent for dim star targets. Discussion of the nature of the performance limit is provided, and methods are suggested to potentially improve pointing performance.
NASA Astrophysics Data System (ADS)
Hu, Jun; Wang, Zidong; Shen, Bo; Gao, Huijun
2013-04-01
This article is concerned with the recursive finite-horizon filtering problem for a class of nonlinear time-varying systems subject to multiplicative noises, missing measurements and quantisation effects. The missing measurements are modelled by a series of mutually independent random variables obeying Bernoulli distributions with possibly different occurrence probabilities. The quantisation phenomenon is described by using the logarithmic function and the multiplicative noises are considered to account for the stochastic disturbances on the system states. Attention is focused on the design of a recursive filter such that, for all multiplicative noises, missing measurements as well as quantisation effects, an upper bound for the filtering error covariance is guaranteed and such an upper bound is subsequently minimised by properly designing the filter parameters at each sampling instant. The desired filter parameters are obtained by solving two Riccati-like difference equations that are of a recursive form suitable for online applications. Finally, two simulation examples are exploited to demonstrate the effectiveness and applicability of the proposed filter design scheme.
Tensor based missing traffic data completion with spatial-temporal correlation
NASA Astrophysics Data System (ADS)
Ran, Bin; Tan, Huachun; Wu, Yuankai; Jin, Peter J.
2016-03-01
Missing and suspicious traffic data is a major problem for intelligent transportation system, which adversely affects a diverse variety of transportation applications. Several missing traffic data imputation methods had been proposed in the last decade. It is still an open problem of how to make full use of spatial information from upstream/downstream detectors to improve imputing performance. In this paper, a tensor based method considering the full spatial-temporal information of traffic flow, is proposed to fuse the traffic flow data from multiple detecting locations. The traffic flow data is reconstructed in a 4-way tensor pattern, and the low-n-rank tensor completion algorithm is applied to impute missing data. This novel approach not only fully utilizes the spatial information from neighboring locations, but also can impute missing data in different locations under a unified framework. Experiments demonstrate that the proposed method achieves a better imputation performance than the method without spatial information. The experimental results show that the proposed method can address the extreme case where the data of a long period of one or several weeks are completely missing.
Exploitation of Geometric Occlusion and Covariance Spectroscopy in a Gamma Sensor Array
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Mitchell, Stephen; Guss, Paul; Trainham, Clifford
2013-09-01
The National Security Technologies, LLC, Remote Sensing Laboratory has recently used an array of six small-footprint (1-inch diameter by 3-inch long) cylindrical crystals of thallium-doped sodium iodide scintillators to obtain angular information from discrete gamma ray–emitting point sources. Obtaining angular information in a near-field measurement for a field-deployed gamma sensor is a requirement for radiological emergency work. Three of the sensors sit at the vertices of a 2-inch isosceles triangle, while the other three sit on the circumference of a 3-inch-radius circle centered in this triangle. This configuration exploits occlusion of sensors, correlation from Compton scattering within a detector array, and covariance spectroscopy, a spectral coincidence technique. Careful placement and orientation of individual detectors with reference to other detectors in an array can provide improved angular resolution for determining the source position by occlusion mechanism. By evaluating the values of, and the uncertainties in, the photopeak areas, efficiencies, branching ratio, peak area correction factors, and the correlations between these quantities, one can determine the precise activity of a particular radioisotope from a mixture of radioisotopes that have overlapping photopeaks that are ordinarily hard to deconvolve. The spectral coincidence technique, often known as covariance spectroscopy, examines the correlations and fluctuations in data that contain valuable information about radiation sources, transport media, and detection systems. Covariance spectroscopy enhances radionuclide identification techniques, provides directional information, and makes weaker gamma-ray emission—normally undetectable by common spectroscopic analysis—detectable. A series of experimental results using the concept of covariance spectroscopy are presented.
Experimental realization of macroscopic coherence by phase-covariant cloning of a single photon
Nagali, Eleonora; De Angelis, Tiziano; De Martini, Francesco; Sciarrino, Fabio
2007-10-15
We investigate the multiphoton states generated by high-gain optical parametric amplification of a single injected photon, polarization encoded as a 'qubit'. The experiment configuration exploits the optimal phase-covariant cloning in the high gain regime. The interference fringe pattern showing the nonlocal transfer of coherence between the injected qubit and the mesoscopic amplified output field involving up to 4000 photons has been investigated. A probabilistic method to extract full information about the multiparticle output wave function has been implemented.
MISSE 5 Thin Films Space Exposure Experiment
NASA Technical Reports Server (NTRS)
Harvey, Gale A.; Kinard, William H.; Jones, James L.
2007-01-01
The Materials International Space Station Experiment (MISSE) is a set of space exposure experiments using the International Space Station (ISS) as the flight platform. MISSE 5 is a co-operative endeavor by NASA-LaRC, United Stated Naval Academy, Naval Center for Space Technology (NCST), NASA-GRC, NASA-MSFC, Boeing, AZ Technology, MURE, and Team Cooperative. The primary experiment is performance measurement and monitoring of high performance solar cells for U.S. Navy research and development. A secondary experiment is the telemetry of this data to ground stations. A third experiment is the measurement of low-Earth-orbit (LEO) low-Sun-exposure space effects on thin film materials. Thin films can provide extremely efficacious thermal control, designation, and propulsion functions in space to name a few applications. Solar ultraviolet radiation and atomic oxygen are major degradation mechanisms in LEO. This paper is an engineering report of the MISSE 5 thm films 13 months space exposure experiment.
Huang, Yangxin; Liang, Hua; Wu, Hulin
2008-10-15
In this paper, the mechanism-based ordinary differential equation (ODE) model and the flexible semiparametric regression model are employed to identify the significant covariates for antiretroviral response in AIDS clinical trials. We consider the treatment effect as a function of three factors (or covariates) including pharmacokinetics, drug adherence and susceptibility. Both clinical and simulated data examples are given to illustrate these two different kinds of modeling approaches. We found that the ODE model is more powerful to model the mechanism-based nonlinear relationship between treatment effects and virological response biomarkers. The ODE model is also better in identifying the significant factors for virological response, although it is slightly liberal and there is a trend to include more factors (or covariates) in the model. The semiparametric mixed-effects regression model is very flexible to fit the virological response data, but it is too liberal to identify correct factors for the virological response; sometimes it may miss the correct factors. The ODE model is also biologically justifiable and good for predictions and simulations for various biological scenarios. The limitations of the ODE models include the high cost of computation and the requirement of biological assumptions that sometimes may not be easy to validate. The methodologies reviewed in this paper are also generally applicable to studies of other viruses such as hepatitis B virus or hepatitis C virus. PMID:18407583
Scalable tensor factorizations with missing data.
Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2010-04-01
The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks|all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
Near-misses and future disaster preparedness.
Dillon, Robin L; Tinsley, Catherine H; Burns, William J
2014-10-01
Disasters garner attention when they occur, and organizations commonly extract valuable lessons from visible failures, adopting new behaviors in response. For example, the United States saw numerous security policy changes following the September 11 terrorist attacks and emergency management and shelter policy changes following Hurricane Katrina. But what about those events that occur that fall short of disaster? Research that examines prior hazard experience shows that this experience can be a mixed blessing. Prior experience can stimulate protective measures, but sometimes prior experience can deceive people into feeling an unwarranted sense of safety. This research focuses on how people interpret near-miss experiences. We demonstrate that when near-misses are interpreted as disasters that did not occur and thus provide the perception that the system is resilient to the hazard, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions. On the other hand, if near-misses can be recognized and interpreted as disasters that almost happened and thus provide the perception that the system is vulnerable to the hazard, this will counter the basic "near-miss" effect and encourage mitigation. In this article, we use these distinctions between resilient and vulnerable near-misses to examine how people come to define an event as either a resilient or vulnerable near-miss, as well as how this interpretation influences their perceptions of risk and their future preparedness behavior. Our contribution is in highlighting the critical role that people's interpretation of the prior experience has on their subsequent behavior and in measuring what shapes this interpretation. PMID:24773610
Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins
NASA Astrophysics Data System (ADS)
Tolwinski-Ward, S. E.; Wang, D.
2015-12-01
Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as: - Does the relationship between different categories of TCs differ statistically by basin? - Which climatic predictors have significant relationships with TC activity in each basin? - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability? - How can a portfolio of insured property be optimized across space to minimize risk? Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
Action recognition from video using feature covariance matrices.
Guo, Kai; Ishwar, Prakash; Konrad, Janusz
2013-06-01
We propose a general framework for fast and accurate recognition of actions in video using empirical covariance matrices of features. A dense set of spatio-temporal feature vectors are computed from video to provide a localized description of the action, and subsequently aggregated in an empirical covariance matrix to compactly represent the action. Two supervised learning methods for action recognition are developed using feature covariance matrices. Common to both methods is the transformation of the classification problem in the closed convex cone of covariance matrices into an equivalent problem in the vector space of symmetric matrices via the matrix logarithm. The first method applies nearest-neighbor classification using a suitable Riemannian metric for covariance matrices. The second method approximates the logarithm of a query covariance matrix by a sparse linear combination of the logarithms of training covariance matrices. The action label is then determined from the sparse coefficients. Both methods achieve state-of-the-art classification performance on several datasets, and are robust to action variability, viewpoint changes, and low object resolution. The proposed framework is conceptually simple and has low storage and computational requirements making it attractive for real-time implementation. PMID:23508265
Terahertz imaging with missing data analysis for metamaterials characterization
NASA Astrophysics Data System (ADS)
Sokolnikov, Andre
2012-05-01
Terahertz imaging proves advantageous for metamaterials characterization since the interaction of THz radiation with the metamaterials produces clear patterns of the material. Characteristic "finger prints" of the crystal structure help locating defects, dislocations, contamination, etc. TDS-THz spectroscopy is one of the tools to control metamaterials design and manufacturing. A computational technique is suggested that provides a reliable way of calculation of the metamaterials structure parameters, spotting defects. Based on missing data analysis, the applied signal processing facilitates a better quality image while compensating for partially absent information. Results are provided.
Preventing rubella: assessing missed opportunities for immunization.
Robertson, S E; Cochi, S L; Bunn, G A; Morse, D L; Preblud, S R
1987-01-01
Cases of rubella continue to occur among adults in the United States because 10-20 per cent of persons in this age group remain susceptible. To evaluate the potential preventability of these cases, we present a method for assessing missed opportunities for rubella immunization, based on immunization recommendations of the Immunization Practices Advisory Committee (ACIP) of the US Public Health Service (PHS). Immunization programs faced with limited resources can use analysis of missed opportunities to focus on those gaps in implementation contributing most to the remaining rubella cases. PMID:3631374
Structural covariance networks in the mouse brain.
Pagani, Marco; Bifone, Angelo; Gozzi, Alessandro
2016-04-01
The presence of networks of correlation between regional gray matter volume as measured across subjects in a group of individuals has been consistently described in several human studies, an approach termed structural covariance MRI (scMRI). Complementary to prevalent brain mapping modalities like functional and diffusion-weighted imaging, the approach can provide precious insights into the mutual influence of trophic and plastic processes in health and pathological states. To investigate whether analogous scMRI networks are present in lower mammal species amenable to genetic and experimental manipulation such as the laboratory mouse, we employed high resolution morphoanatomical MRI in a large cohort of genetically-homogeneous wild-type mice (C57Bl6/J) and mapped scMRI networks using a seed-based approach. We show that the mouse brain exhibits robust homotopic scMRI networks in both primary and associative cortices, a finding corroborated by independent component analyses of cortical volumes. Subcortical structures also showed highly symmetric inter-hemispheric correlations, with evidence of distributed antero-posterior networks in diencephalic regions of the thalamus and hypothalamus. Hierarchical cluster analysis revealed six identifiable clusters of cortical and sub-cortical regions corresponding to previously described neuroanatomical systems. Our work documents the presence of homotopic cortical and subcortical scMRI networks in the mouse brain, thus supporting the use of this species to investigate the elusive biological and neuroanatomical underpinnings of scMRI network development and its derangement in neuropathological states. The identification of scMRI networks in genetically homogeneous inbred mice is consistent with the emerging view of a key role of environmental factors in shaping these correlational networks. PMID:26802512
Covariant hyperbolization of force-free electrodynamics
NASA Astrophysics Data System (ADS)
Carrasco, F. L.; Reula, O. A.
2016-04-01
Force-free electrodynamics (FFE) is a nonlinear system of equations modeling the evolution of the electromagnetic field, in the presence of a magnetically dominated relativistic plasma. This configuration arises on several astrophysical scenarios which represent exciting laboratories to understand physics in extreme regimes. We show that this system, when restricted to the correct constraint submanifold, is symmetric hyperbolic. In numerical applications, it is not feasible to keep the system in that submanifold, and so it is necessary to analyze its structure first in the tangent space of that submanifold and then in a whole neighborhood of it. As has been shown [1], a direct (or naive) formulation of this system (in the whole tangent space) results in a weakly hyperbolic system of evolution equations for which well-posedness for the initial value formulation does not follow. Using the generalized symmetric hyperbolic formalism of Geroch [2], we introduce here a covariant hyperbolization for the FFE system. In fact, in analogy to the usual Maxwell case, a complete family of hyperbolizers is found, both for the restricted system on the constraint submanifold as well as for a suitably extended system defined in a whole neighborhood of it. A particular symmetrizer among the family is then used to write down the pertaining evolution equations, in a generic (3 +1 ) decomposition on a background spacetime. Interestingly, it turns out that for a particular choice of the lapse and shift functions of the foliation, our symmetrized system reduces to the one found in [1]. Finally, we analyze the characteristic structure of the resulting evolution system.
Inflation in general covariant theory of gravity
Huang, Yongqing; Wang, Anzhong; Wu, Qiang E-mail: anzhong_wang@baylor.edu
2012-10-01
In this paper, we study inflation in the framework of the nonrelativistic general covariant theory of the Hořava-Lifshitz gravity with the projectability condition and an arbitrary coupling constant λ. We find that the Friedmann-Robterson-Walker (FRW) universe is necessarily flat in such a setup. We work out explicitly the linear perturbations of the flat FRW universe without specifying to a particular gauge, and find that the perturbations are different from those obtained in general relativity, because of the presence of the high-order spatial derivative terms. Applying the general formulas to a single scalar field, we show that in the sub-horizon regions, the metric and scalar field are tightly coupled and have the same oscillating frequencies. In the super-horizon regions, the perturbations become adiabatic, and the comoving curvature perturbation is constant. We also calculate the power spectra and indices of both the scalar and tensor perturbations, and express them explicitly in terms of the slow roll parameters and the coupling constants of the high-order spatial derivative terms. In particular, we find that the perturbations, of both scalar and tensor, are almost scale-invariant, and, with some reasonable assumptions on the coupling coefficients, the spectrum index of the tensor perturbation is the same as that given in the minimum scenario in general relativity (GR), whereas the index for scalar perturbation in general depends on λ and is different from the standard GR value. The ratio of the scalar and tensor power spectra depends on the high-order spatial derivative terms, and can be different from that of GR significantly.
The importance of covariance in nuclear data uncertainty propagation studies
Benstead, J.
2012-07-01
A study has been undertaken to investigate what proportion of the uncertainty propagated through plutonium critical assembly calculations is due to the covariances between the fission cross section in different neutron energy groups. The uncertainties on k{sub eff} calculated show that the presence of covariances between the cross section in different neutron energy groups accounts for approximately 27-37% of the propagated uncertainty due to the plutonium fission cross section. This study also confirmed the validity of employing the sandwich equation, with associated sensitivity and covariance data, instead of a Monte Carlo sampling approach to calculating uncertainties for linearly varying systems. (authors)
Hawking radiation, covariant boundary conditions, and vacuum states
Banerjee, Rabin; Kulkarni, Shailesh
2009-04-15
The basic characteristics of the covariant chiral current
Estimation of the covariance matrix of macroscopic quantum states
NASA Astrophysics Data System (ADS)
Ruppert, László; Usenko, Vladyslav C.; Filip, Radim
2016-05-01
For systems analogous to a linear harmonic oscillator, the simplest way to characterize the state is by a covariance matrix containing the symmetrically ordered moments of operators analogous to position and momentum. We show that using Stokes-like detectors without direct access to either position or momentum, the estimation of the covariance matrix of a macroscopic signal is still possible using interference with a classical noisy and low-intensity reference. Such a detection technique will allow one to estimate macroscopic quantum states of electromagnetic radiation without a coherent high-intensity local oscillator. It can be directly applied to estimate the covariance matrix of macroscopically bright squeezed states of light.
Strategies for Handling Missing Data in Electronic Health Record Derived Data
Wells, Brian J.; Chagin, Kevin M.; Nowacki, Amy S.; Kattan, Michael W.
2013-01-01
Electronic health records (EHRs) present a wealth of data that are vital for improving patient-centered outcomes, although the data can present significant statistical challenges. In particular, EHR data contains substantial missing information that if left unaddressed could reduce the validity of conclusions drawn. Properly addressing the missing data issue in EHR data is complicated by the fact that it is sometimes difficult to differentiate between missing data and a negative value. For example, a patient without a documented history of heart failure may truly not have disease or the clinician may have simply not documented the condition. Approaches for reducing missing data in EHR systems come from multiple angles, including: increasing structured data documentation, reducing data input errors, and utilization of text parsing / natural language processing. This paper focuses on the analytical approaches for handling missing data, primarily multiple imputation. The broad range of variables available in typical EHR systems provide a wealth of information for mitigating potential biases caused by missing data. The probability of missing data may be linked to disease severity and healthcare utilization since unhealthier patients are more likely to have comorbidities and each interaction with the health care system provides an opportunity for documentation. Therefore, any imputation routine should include predictor variables that assess overall health status (e.g. Charlson Comorbidity Index) and healthcare utilization (e.g. number of encounters) even when these comorbidities and patient encounters are unrelated to the disease of interest. Linking the EHR data with other sources of information (e.g. National Death Index and census data) can also provide less biased variables for imputation. Additional methodological research with EHR data and improved epidemiological training of clinical investigators is warranted. PMID:25848578
Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R
2013-03-01
Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (N(e) ) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (N(e) ). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known N(e) and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating N(e) and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. PMID:23280157
Gebrehiwot, Yirgu; Tewolde, Birukkidus T
2014-10-01
The present study aimed to initiate facility based review of maternal deaths and near misses as part of the Ethiopian effort to reduce maternal mortality and achieve United Nations Millennium Development Goals 4 and 5. An in-depth review of all maternal deaths and near misses among women who visited 10 hospitals in four regions of Ethiopia was conducted between May 2011 and October 2012 as part of the FIGO LOGIC initiative. During the study period, a total of 2774 cases (206 deaths and 2568 near misses) were reviewed. The ratio of maternal deaths to near misses was 1:12 and the overall maternal death rate was 728 per 100 000 live births. Socioeconomic factors associated with maternal mortality included illiteracy 1672 (60.3%) and lack of employment outside the home 2098 (75.6%). In all, 1946 (70.2%) women arrived at hospital after they had developed serious complications owing to issues such as lack of transportation. Only 1223 (44.1%) women received prenatal follow-up and 157 (76.2%) deaths were attributed to direct obstetric causes. Based on the findings, facilities adopted a number of quality improvement measures such as providing 24-hour services, and making ambulances available. Integrating review of maternal deaths and near misses into regular practice provides accurate information on causes of maternal deaths and near misses and also improves quality of care in facilities. PMID:25261109
Mutually unbiased bases as minimal Clifford covariant 2-designs
NASA Astrophysics Data System (ADS)
Zhu, Huangjun
2015-06-01
Mutually unbiased bases (MUBs) are interesting for various reasons. The most attractive example of (a complete set of) MUBs is the one constructed by Ivanović as well as Wootters and Fields, which is referred to as the canonical MUB. Nevertheless, little is known about anything that is unique to this MUB. We show that the canonical MUB in any prime power dimension is uniquely determined by an extremal orbit of the (restricted) Clifford group except in dimension 3, in which case the orbit defines a special symmetric informationally complete measurement (SIC), known as the Hesse SIC. Here the extremal orbit is the orbit with the smallest number of pure states. Quite surprisingly, this characterization does not rely on any concept that is related to bases or unbiasedness. As a corollary, the canonical MUB is the unique minimal 2-design covariant with respect to the Clifford group except in dimension 3. In addition, these MUBs provide an infinite family of highly symmetric frames and positive-operator-valued measures (POVMs), which are of independent interest.
Covariant Spectator Theory of np scattering: Isoscalar interaction currents
Gross, Franz L.
2014-06-01
Using the Covariant Spectator Theory (CST), one boson exchange (OBE) models have been found that give precision fits to low energy $np$ scattering and the deuteron binding energy. The boson-nucleon vertices used in these models contain a momentum dependence that requires a new class of interaction currents for use with electromagnetic interactions. Current conservation requires that these new interaction currents satisfy a two-body Ward-Takahashi (WT), and using principals of {\\it simplicity\\/} and {\\it picture independence\\/}, these currents can be uniquely determined. The results lead to general formulae for a two-body current that can be expressed in terms of relativistic $np$ wave functions, ${\\it \\Psi}$, and two convenient truncated wave functions, ${\\it \\Psi}^{(2)}$ and $\\widehat {\\it \\Psi}$, which contain all of the information needed for the explicit evaluation of the contributions from the interaction current. These three wave functions can be calculated from the CST bound or scattering state equations (and their off-shell extrapolations). A companion paper uses this formalism to evaluate the deuteron magnetic moment.
Covariance analysis of symmetry energy observables from heavy ion collision
NASA Astrophysics Data System (ADS)
Zhang, Yingxun; Tsang, M. B.; Li, Zhuxia
2015-10-01
Using covariance analysis, we quantify the correlations between the interaction parameters in a transport model and the observables commonly used to extract information of the Equation of State of Asymmetric Nuclear Matter in experiments. By simulating 124Sn + 124Sn, 124Sn + 112Sn and 112Sn + 112Sn reactions at beam energies of 50 and 120 MeV per nucleon, we have identified that the nucleon effective mass splitting is most strongly correlated to the neutrons and protons yield ratios with high kinetic energy from central collisions especially at high incident energy. The best observable to determine the slope of the symmetry energy, L, at saturation density is the isospin diffusion observable even though the correlation is not very strong (∼0.7). Similar magnitude of correlation but opposite in sign exists for isospin diffusion and nucleon isoscalar effective mass. At 120 MeV/u, the effective mass splitting and the isoscalar effective mass also have opposite correlation for the double n / p and isoscaling p / p yield ratios. By combining data and simulations at different beam energies, it should be possible to place constraints on the slope of symmetry energy (L) and effective mass splitting with reasonable uncertainties.
Application of Covariance Data to Criticality Safety Data Validation
Broadhead, B.L.; Hopper, C.M.; Parks, C.V.
1999-11-13
The use of cross-section covariance data has long been a key part of traditional sensitivity and uncertainty analyses (S/U). This paper presents the application of S/U methodologies to the data validation tasks of a criticality safety computational study. The S/U methods presented are designed to provide a formal means of establishing the area (or range) of applicability for criticality safety data validation studies. The goal of this work is to develop parameters that can be used to formally determine the ''similarity'' of a benchmark experiment (or a set of benchmark experiments individually) and the application area that is to be validated. These parameters are termed D parameters, which represent the differences by energy group of S/U-generated sensitivity profiles, and ck parameters, which are the correlation coefficients, each of which gives information relative to the similarity between pairs of selected systems. The application of a Generalized Linear Least-Squares Methodology ( GLLSM) tool to criticality safety validation tasks is also described in this paper. These methods and guidelines are also applied to a sample validation for uranium systems with enrichments greater than 5 wt %.
Ocean spectral data assimilation without background error covariance matrix
NASA Astrophysics Data System (ADS)
Chu, Peter C.; Fan, Chenwu; Margolina, Tetyana
2016-08-01
Predetermination of background error covariance matrix B is challenging in existing ocean data assimilation schemes such as the optimal interpolation (OI). An optimal spectral decomposition (OSD) has been developed to overcome such difficulty without using the B matrix. The basis functions are eigenvectors of the horizontal Laplacian operator, pre-calculated on the base of ocean topography, and independent on any observational data and background fields. Minimization of analysis error variance is achieved by optimal selection of the spectral coefficients. Optimal mode truncation is dependent on the observational data and observational error variance and determined using the steep-descending method. Analytical 2D fields of large and small mesoscale eddies with white Gaussian noises inside a domain with four rigid and curved boundaries are used to demonstrate the capability of the OSD method. The overall error reduction using the OSD is evident in comparison to the OI scheme. Synoptic monthly gridded world ocean temperature, salinity, and absolute geostrophic velocity datasets produced with the OSD method and quality controlled by the NOAA National Centers for Environmental Information (NCEI) are also presented.
Jumbled genomes: missing Apicomplexan synteny.
DeBarry, Jeremy D; Kissinger, Jessica C
2011-10-01
Whole-genome comparisons provide insight into genome evolution by informing on gene repertoires, gene gains/losses, and genome organization. Most of our knowledge about eukaryotic genome evolution is derived from studies of multicellular model organisms. The eukaryotic phylum Apicomplexa contains obligate intracellular protist parasites responsible for a wide range of human and veterinary diseases (e.g., malaria, toxoplasmosis, and theileriosis). We have developed an in silico protein-encoding gene based pipeline to investigate synteny across 12 apicomplexan species from six genera. Genome rearrangement between lineages is extensive. Syntenic regions (conserved gene content and order) are rare between lineages and appear to be totally absent across the phylum, with no group of three genes found on the same chromosome and in the same order within 25 kb up- and downstream of any orthologous genes. Conserved synteny between major lineages is limited to small regions in Plasmodium and Theileria/Babesia species, and within these conserved regions, there are a number of proteins putatively targeted to organelles. The observed overall lack of synteny is surprising considering the divergence times and the apparent absence of transposable elements (TEs) within any of the species examined. TEs are ubiquitous in all other groups of eukaryotes studied to date and have been shown to be involved in genomic rearrangements. It appears that there are different criteria governing genome evolution within the Apicomplexa relative to other well-studied unicellular and multicellular eukaryotes. PMID:21504890
An Introduction to Modern Missing Data Analyses
ERIC Educational Resources Information Center
Baraldi, Amanda N.; Enders, Craig K.
2010-01-01
A great deal of recent methodological research has focused on two modern missing data analysis methods: maximum likelihood and multiple imputation. These approaches are advantageous to traditional techniques (e.g. deletion and mean imputation techniques) because they require less stringent assumptions and mitigate the pitfalls of traditional…
The Feeling Words Curriculum: The Missing Link.
ERIC Educational Resources Information Center
Maurer, Marvin
The Feeling Words Curriculum, a curriculum that integrates the cognitive and affective domains in one course of study, is described in this paper. The opening sections explain how "feeling words," key vocabulary terms, are used to provide the missing link from one person's life to another's. Stressing the importance of helping students to develop…
The Missing Link in ESL Teacher Training.
ERIC Educational Resources Information Center
Justen, Edward F.
1984-01-01
Many sincere, well-prepared, and technically qualified teachers of English as a second language (ESL) are awkward in class, stressed, and insecure, showing little excitement or energy. The missing element in training programs for ESL teachers is a good basic course in drama. It is an expression at both the visual and auditory levels, is a medium…
75 FR 53631 - Missing Parts Practice
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-01
... publications to the body of prior art, and by removing from the USPTO's workload those nonprovisional... are covered under OMB Control Number 0651-0032 Initial Patent Applications. Since the requests for... Patent and Trademark Office Missing Parts Practice ACTION: Proposed collection; comment request....
How Preschool Children Understand Missing Complement Subjects.
ERIC Educational Resources Information Center
Maratsos, Michael P.
Two studies investigated preschool children's comprehension of the missing subject of infinitival complement clauses. In the first study, use of a Surface Structure Minimal Distance principle of the type outlined by C. Chomsky was distinguished from use of a Semantic Role Principle. Preschoolers acted out sentences in which the use of the two…
The Missing Link: Research on Teacher Education
ERIC Educational Resources Information Center
Wiens, Peter D.
2012-01-01
Teacher education has recently come under attack for its perceived lack of efficacy in preparing teachers for classroom duty. A lack of comprehensive research in teacher education makes it difficult to understand the effects of teacher education programs on student learning. There is a missing link between what happens in teacher education…
Progress of Covariance Evaluation at the China Nuclear Data Center
Xu, R.; Zhang, Q.; Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B.; Tang, G.
2015-01-15
Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known Least-squares approach, a method based on the analysis of the sources of experimental uncertainties is especially introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and an example of n+{sup 90}Zr is given to illuminate the whole procedure. It is proven that the accuracy of measurements can be properly incorporated into the covariance and the long-standing small uncertainty problem can be avoided.
True covariance simulation of the EUVE update filter
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, R. R.
1989-01-01
A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.
AFCI-2.0 Neutron Cross Section Covariance Library
Herman, M.; Herman, M; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.
2011-03-01
The cross section covariance library has been under development by BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, international effort under WPEC Subgroup 26 produced BOLNA covariance library by putting together data, often preliminary, from various sources for most important materials for nuclear reactor technology. This was followed in 2007 by collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity, for virtually complete set of materials included in ENDF/B-VII.0. The present project is focusing on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under Global Nuclear Energy Partnership (GNEP) in 2008, which changed to Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA) version 2.0. The primary purpose of the library is to provide covariances for AFCI data adjustment project, which is focusing on the needs of fast advanced burner reactors. Responsibility of BNL was defined as developing covariances for structural materials and fission products, management of the library and coordination of the work; LANL responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices
Cai, Tony; Ma, Zongming; Wu, Yihong
2014-01-01
This paper considers a sparse spiked covariancematrix model in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace as well as the minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those for estimating other structured covariance matrices such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves the gap in a recent paper by Berthet and Rigollet [2] where the special case of rank one is considered. PMID:26257453
Missing radiographic data handling in randomized clinical trials in rheumatoid arthritis.
Huang, Xiaohong; Jiao, Lixia; Wei, Lynn; Quan, Hui; Teoh, Leah; Koch, Gary G
2013-01-01
In recent years, there has been increasing interest in compounds that have potential to slow down the structural joint damage in rheumatoid arthritis (RA) patients. Radiographs are instrumental in assessing structure damage in RA. Radiographic analyses results have become essential in establishing a "delay in structural progression" claim in newly developed agents for the treatment of RA. It is well known that the radiographic progression data generally follow a nonnormal distribution that is loaded with excessive zeros. A special concern about the radiographic data analyses is the handling of the seemingly high rate of missing values due to dropout or unreadable images. There are no uniform ways to handle missing radiographic data, and such data usually show considerable sensitivity to the imputation method chosen under the complexity of the nonnormal data and the unique missing mechanism. In this research, we proposed both an innovative multiple-imputation algorithm and a novel method called the mean rank imputation method under the nonparametric framework for sensitivity analyses. A simulation study was designed using rank analysis of covariance (ANCOVA) to extensively assess and compare the finite performance of these two new methods along with four other missing data handling methods previously used in the RA trials, namely, linear extrapolation, last observation carried-forward (LOCF), median quartile bin imputation, and median imputation under various settings. Our simulation results suggest that the multiple-imputation algorithm, providing an mITT analysis population, yields an inflated type I error and artificially good power. The proposed mean rank imputation method, following a true ITT principle, both is powerful and maintains type I error at the nominal level. PMID:24138441
Regression models for mixed discrete and continuous responses with potentially missing values.
Fitzmaurice, G M; Laird, N M
1997-03-01
In this paper a likelihood-based method for analyzing mixed discrete and continuous regression models is proposed. We focus on marginal regression models, that is, models in which the marginal expectation of the response vector is related to covariates by known link functions. The proposed model is based on an extension of the general location model of Olkin and Tate (1961, Annals of Mathematical Statistics 32, 448-465), and can accommodate missing responses. When there are no missing data, our particular choice of parameterization yields maximum likelihood estimates of the marginal mean parameters that are robust to misspecification of the association between the responses. This robustness property does not, in general, hold for the case of incomplete data. There are a number of potential benefits of a multivariate approach over separate analyses of the distinct responses. First, a multivariate analysis can exploit the correlation structure of the response vector to address intrinsically multivariate questions. Second, multivariate test statistics allow for control over the inflation of the type I error that results when separate analyses of the distinct responses are performed without accounting for multiple comparisons. Third, it is generally possible to obtain more precise parameter estimates by accounting for the association between the responses. Finally, separate analyses of the distinct responses may be difficult to interpret when there is nonresponse because different sets of individuals contribute to each analysis. Furthermore, separate analyses can introduce bias when the missing responses are missing at random (MAR). A multivariate analysis can circumvent both of these problems. The proposed methods are applied to two biomedical datasets. PMID:9147588
Nonlinear effects in the correlation of tracks and covariance propagation
NASA Astrophysics Data System (ADS)
Sabol, C.; Hill, K.; Alfriend, K.; Sukut, T.
2013-03-01
Even though there are methods for the nonlinear propagation of the covariance the propagation of the covariance in current operational programs is based on the state transition matrix of the 1st variational equations, thus it is a linear propagation. If the measurement errors are zero mean Gaussian, the orbit errors, statistically represented by the covariance, are Gaussian. When the orbit errors become too large they are no longer Gaussian and not represented by the covariance. One use of the covariance is the association of uncorrelated tracks (UCTs). A UCT is an object tracked by a space surveillance system that does not correlate to another object in the space object data base. For an object to be entered into the data base three or more tracks must be correlated. Associating UCTs is a major challenge for a space surveillance system since every object entered into the space object catalog begins as a UCT. It has been proved that if the orbit errors are Gaussian, the error ellipsoid represented by the covariance is the optimum association volume. When the time between tracks becomes large, hours or even days, the orbit errors can become large and are no longer Gaussian, and this has a negative effect on the association of UCTs. This paper further investigates the nonlinear effects on the accuracy of the covariance for use in correlation. The use of the best coordinate system and the unscented Kalman Filter (UKF) for providing a more accurate covariance are investigated along with assessing how these approaches would result in the ability to correlate tracks that are further separated in time.
Are the invariance principles really truly Lorentz covariant?
Arunasalam, V.
1994-02-01
It is shown that some sections of the invariance (or symmetry) principles such as the space reversal symmetry (or parity P) and time reversal symmetry T (of elementary particle and condensed matter physics, etc.) are not really truly Lorentz covariant. Indeed, I find that the Dirac-Wigner sense of Lorentz invariance is not in full compliance with the Einstein-Minkowski reguirements of the Lorentz covariance of all physical laws (i.e., the world space Mach principle).
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patients body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
Protein remote homology detection based on auto-cross covariance transformation.
Liu, Xuan; Zhao, Lijie; Dong, Qiwen
2011-08-01
Protein remote homology detection is a critical step toward annotating its structure and function. Supervised learning algorithms such as support vector machine are currently the most accurate methods. The position-specific score matrices (PSSMs) contain wealthy information about the evolutionary relationship of proteins. However, the PSSMs often have different lengths, which are difficult to be used by machine-learning methods. In this study, a simple, fast and powerful method is presented for protein remote homology detection, which combines support vector machine with auto-cross covariance transformation. The PSSMs are converted into a series of fixed-length vectors by auto-cross covariance transformation and these vectors are then input to a support vector machine classifier for remote homology detection. The sequence-order effects can be effectively captured by this scheme. Experiments are performed on well-established datasets, and the remote homology is simulated at the superfamily and the fold level, respectively. The results show that the proposed method, referred to as ACCRe, is comparable or even better than the state-of-the-art methods in terms of detection performance, and its time complexity is superior to those of other profile-based SVM methods. The auto-cross covariance transformation provides a novel way for the usage of evolutionary information, which can be widely used for protein-level studies. PMID:21664609
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
Covariance fitting of highly-correlated data in lattice QCD
NASA Astrophysics Data System (ADS)
Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong
2013-07-01
We address a frequently-asked question on the covariance fitting of highly-correlated data such as our B K data based on the SU(2) staggered chiral perturbation theory. Basically, the essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the market, such as the cut-off method, modified covariance matrix method, and Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting, but the rest work well. In our case of the B K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method work reasonably well in an engineering sense. However, interpreting the meaning of χ 2 is easier in the case of the ES method and the Bayesian method in a theoretical sense aesthetically. Hence, the ES method can be a useful alternative optional tool to check the systematic error caused by the covariance fitting procedure.
Summary of the Workshop on Neutron Cross Section Covariances
Smith, Donald L.
2008-12-15
A Workshop on Neutron Cross Section Covariances was held from June 24-27, 2008, in Port Jefferson, New York. This Workshop was organized by the National Nuclear Data Center, Brookhaven National Laboratory, to provide a forum for reporting on the status of the growing field of neutron cross section covariances for applications and for discussing future directions of the work in this field. The Workshop focused on the following four major topical areas: covariance methodology, recent covariance evaluations, covariance applications, and user perspectives. Attention was given to the entire spectrum of neutron cross section covariance concerns ranging from light nuclei to the actinides, and from the thermal energy region to 20 MeV. The papers presented at this conference explored topics ranging from fundamental nuclear physics concerns to very specific applications in advanced reactor design and nuclear criticality safety. This paper provides a summary of this workshop. Brief comments on the highlights of each Workshop contribution are provided. In addition, a perspective on the achievements and shortcomings of the Workshop as well as on the future direction of research in this field is offered.
The Performance Analysis Based on SAR Sample Covariance Matrix
Erten, Esra
2012-01-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, the statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second order statistics as the multi-channel covariance matrix fully describe the data. For practical situations however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follow the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown in different areas of high relevance regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix in terms of multi-channel SAR images is simplified for SAR community. Validation is performed against simulated data and examples of estimation and detection problems using the analytical expressions are as well given. PMID:22736976
Incorrect support and missing center tolerances of phasing algorithms
Huang, Xiaojing; Nelson, Johanna; Steinbrener, Jan; Kirz, Janos; Turner, Joshua J.; Jacobsen, Chris
2010-01-01
In x-ray diffraction microscopy, iterative algorithms retrieve reciprocal space phase information, and a real space image, from an object's coherent diffraction intensities through the use of a priori information such as a finite support constraint. In many experiments, the object's shape or support is not well known, and the diffraction pattern is incompletely measured. We describe here computer simulations to look at the effects of both of these possible errors when using several common reconstruction algorithms. Overly tight object supports prevent successful convergence; however, we show that this can often be recognized through pathological behavior of the phase retrieval transfermore » function. Dynamic range limitations often make it difficult to record the central speckles of the diffraction pattern. We show that this leads to increasing artifacts in the image when the number of missing central speckles exceeds about 10, and that the removal of unconstrained modes from the reconstructed image is helpful only when the number of missing central speckles is less than about 50. In conclusion, this simulation study helps in judging the reconstructability of experimentally recorded coherent diffraction patterns.« less
de los Campos, Gustavo; Gianola, Daniel
2007-01-01
Multivariate linear models are increasingly important in quantitative genetics. In high dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under Gaussian assumption, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model and the algorithm developed for its Bayesian implementation were used to describe five repeated records of milk yield in dairy cattle, and a one common FA model was compared with a standard multiple trait model. The Bayesian Information Criterion favored the FA model. PMID:17897592
Lai, Tze Leung; Shih, Mei-Chiung; Wong, Samuel P
2006-02-01
By combining Laplace's approximation and Monte Carlo methods to evaluate multiple integrals, this paper develops a new approach to estimation in nonlinear mixed effects models that are widely used in population pharmacokinetics and pharmacodynamics. Estimation here involves not only estimating the model parameters from Phase I and II studies but also using the fitted model to estimate the concentration versus time curve or the drug effects of a subject who has covariate information but sparse measurements. Because of its computational tractability, the proposed approach can model the covariate effects nonparametrically by using (i) regression splines or neural networks as basis functions and (ii) AIC or BIC for model selection. Its computational and statistical advantages are illustrated in simulation studies and in Phase I trials. PMID:16402288
Quantifying the Effect of Component Covariances in CMB Extraction from Multi-frequency Data
NASA Technical Reports Server (NTRS)
Phillips, Nicholas G.
2008-01-01
Linear combination methods provide a global method for component separation of multi-frequency data. We present such a method that allows for consideration of possible covariances between the desired cosmic microwave background signal and various foreground signals that are also present. We also recover information on the foregrounds including the number of foregrounds, their spectra and templates. In all this, the covariances, which we would only expect to vanish 'in the mean' are included as parameters expressing the fundamental uncertainty due to this type of cosmic variance. When we make the reasonable assumption that the CMB is Gaussian, we can compute both a mean recovered CMB map and also an RMS error map, The mean map coincides with WMAP's Internal Linear Combination map.
NASA Astrophysics Data System (ADS)
Trainham, R.; Tinsley, J.
2014-06-01
Energy asymmetry of inter-detector crosstalk from Compton scattering can be exploited to infer the direction to a gamma source. A covariance approach extracts the correlated crosstalk from data streams to estimate matched signals from Compton gammas split over two detectors. On a covariance map the signal appears as an asymmetric cross diagonal band with axes intercepts at the full photo-peak energy of the original gamma. The asymmetry of the crosstalk band can be processed to determine the direction to the radiation source. The technique does not require detector shadowing, masking, or coded apertures, thus sensitivity is not sacrificed to obtain the directional information. An angular precision of better than 1° of arc is possible, and processing of data streams can be done in real time with very modest computing hardware.
Trainham, R; Tinsley, J
2014-06-01
Energy asymmetry of inter-detector crosstalk from Compton scattering can be exploited to infer the direction to a gamma source. A covariance approach extracts the correlated crosstalk from data streams to estimate matched signals from Compton gammas split over two detectors. On a covariance map the signal appears as an asymmetric cross diagonal band with axes intercepts at the full photo-peak energy of the original gamma. The asymmetry of the crosstalk band can be processed to determine the direction to the radiation source. The technique does not require detector shadowing, masking, or coded apertures, thus sensitivity is not sacrificed to obtain the directional information. An angular precision of better than 1° of arc is possible, and processing of data streams can be done in real time with very modest computing hardware. PMID:24985816
Missing Black Holes Driven Out
NASA Astrophysics Data System (ADS)
2004-05-01
difficult task to find them. Because of the obscuring effect of dust, they cannot be found through standard (i.e. in the visible) methods of quasar selection. The only way to hope to drive them out of their bushes is to detect them through their hard X-rays which are able to penetrate through the torus. But this requires that astronomers can analyse and cross-correlate in a very efficient way the observations from several space- and ground-based observatories, which together span the entire range of wavelengths. Type-2 quasars can indeed be identified as the only objects appearing very red and at the same time emitting strongly in X-rays. Virtual Observatories ESO PR Photo 17/04 ESO PR Photo 17/04 [Preview - JPEG: 400 x 415 pix - 57k] [Normal - JPEG: 800 x 829 pix - 764k] [FullRes - JPEG: 1702 x 1764 pix - 2.0M] Caption: ESO PR Photo 17/04 shows the Astrophysical Virtual Observatory (AVO) visualisation interface; it interface is based on Centre de Donneés Astronomiques de Strasbourg's Aladin. AVO devises new methods for accessing and describing data in a way that allows images, spectra and information from different sources to work together seamlessly. This is where Virtual Observatories can play a decisive role. Major breakthroughs in telescope, detector, and computer technology now allow astronomical surveys to produce massive amounts of images, spectra, and catalogues. These datasets cover the sky at all wavelengths from gamma- and X-rays, optical, infrared, to radio waves. Virtual Observatories are an international, community-based initiative, to allow global electronic access to available astronomical data in a seamless and transparent way. The Astrophysical Virtual Observatory project (AVO) ([1]) is the effort in Virtual Observatories of the European astronomical community. Funded jointly by the European Commission and six participating European organisations, it is led by the European Southern Observatory (ESO). The AVO science team headed by Paolo Padovani (ST
Missed injuries in trauma patients: A literature review
Pfeifer, Roman; Pape, Hans-Christoph
2008-01-01
Background Overlooked injuries and delayed diagnoses are still common problems in the treatment of polytrauma patients. Therefore, ongoing documentation describing the incidence rates of missed injuries, clinically significant missed injuries, contributing factors and outcome is necessary to improve the quality of trauma care. This review summarizes the available literature on missed injuries, focusing on overlooked muscoloskeletal injuries. Methods Manuscripts dealing with missed injuries after trauma were reviewed. The following search modules were selected in PubMed: Missed injuries, Delayed diagnoses, Trauma, Musculoskeletal injuires. Three time periods were differentiated: (n = 2, 1980–1990), (n = 6, 1990–2000), and (n = 9, 2000-Present). Results We found a wide spread distribution of missed injuries and delayed diagnoses incidence rates (1.3% to 39%). Approximately 15 to 22.3% of patients with missed injuries had clinically significant missed injuries. Furthermore, we observed a decrease of missed pelvic and hip injuries within the last decade. Conclusion The lack of standardized studies using comparable definitions for missed injuries and clinically significant missed injuries call for further investigations, which are necessary to produce more reliable data. Furthermore, improvements in diagnostic techniques (e.g. the use of multi-slice CT) may lead to a decreased incidence of missed pelvic injuries. Finally, the standardized tertiary trauma survey is vitally important in the detection of clinically significant missed injuries and should be included in trauma care. PMID:18721480
NASA Astrophysics Data System (ADS)
Boudhina, Nissaf; Prévot, Laurent; Zitouna Chebbi, Rim; Mekki, Insaf; Jacob, Frédéric; Ben Mechlia, Netij; Masmoudi, Moncef
2015-04-01
Hilly watersheds are widespread throughout coastal areas around the Mediterranean Basin. They experience agricultural intensification since hilly topographies allow water-harvesting techniques that compensate for rainfall storage, water being a strong limiting factor for crop production. Their fragility is likely to increase with climate change and human pressure. Within semi-arid hilly watershed conditions, evapotranspiration (ETR) is a major term of both land surface energy and water balances. Several methods allow determining ETR, based either on direct measurements, or on estimations and forecast from weather and soil moisture data using simulation models. Among these methods, eddy covariance technique is based on high-frequency measurements of fluctuations of wind speed and air temperature / humidity, to directly determine the convective fluxes between land surface and atmosphere. In spite of experimental and instrumental progresses, datasets of eddy covariance measurements often experience large portions of missing data. The latter results from energy power failure, experimental maintenance, instrumental troubles such as krypton hygrometer malfunctioning because of air humidity, or quality assessment based filtering in relation to spatial homogeneity and temporal stationarity of turbulence within surface boundary layer. This last item is all the more important as hilly topography, when combined with strong winds, tends to increase turbulence within surface boundary layer. The main objective of this study is to establish gap-filling procedures to provide complete chronicles of eddy-covariance measurements of crop evapotranspiration (ETR) within a hilly agricultural watershed. We focus on the specific conditions induced by the combination of hilly topography and wind direction, by discriminating between upslope and downslope winds. The experiment was set for three field configurations within hilly conditions: two flux measurement stations (A, B) were installed
Nuclear Data Target Accuracies for Generation-IV Systems Based on the use of New Covariance Data
G. Palmiotti; M. Salvatores; M. Assawaroongruengchot; M. Herman; P. Oblozinsky; C. Mattoon
2010-04-01
A target accuracy assessment using new available covariance data, the AFCI 1.2 covariance data, has been carried out. At the same time, the more theoretical issue of taking into account correlation terms in target accuracy assessment studies has been deeply investigated. The impact of correlation terms is very significant in target accuracy assessment evaluation and can produce very stringent requirements on nuclear data. For this type of study a broader energy group structure should be used, in order to smooth out requirements and provide better feedback information to evaluators and cross section measurement experts. The main difference in results between using BOLNA or AFCI 1.2 covariance data are related to minor actinides, minor Pu isotopes, structural materials (in particular Fe56), and coolant isotopes (Na23) accuracy requirements.
Nuclear Data Target Accuracies for Generation-IV Systems Based on the Use of New Covariance Data
Palmiotti, G.; Herman, M.; Palmiotti,G.; Assawaroongruengchot,M.; Salvatores,M.; Herman,M.; Oblozinsky,P.; Mattoon,C.; Pigni,M.
2011-08-01
A target accuracy assessment using new available covariance data, the AFCI 1.2 covariance data, has been carried out. At the same time, the more theoretical issue of taking into account correlation terms in target accuracy assessment studies has been deeply investigated. The impact of correlation terms is very significant in target accuracy assessment evaluation and can produce very stringent requirements on nuclear data. For this type of study a broader energy group structure should be used, in order to smooth out requirements and provide better feedback information to evaluators and cross section measurement experts. The main difference in results between using BOLNA or AFCI 1.2 covariance data are related to minor actinides, minor Pu isotopes, structural materials (in particular Fe56), and coolant isotopes (Na23) accuracy requirements.
Fill in the Blanks: A Tale of Data Gone Missing.
Jupiter, Daniel C
2016-01-01
In studies, we often encounter patients for whom data is missing. More than a nuisance, such missing data can seriously impact our analyses. I discuss here some methods to handle these situations. PMID:26810125
40 CFR 98.45 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...
40 CFR 98.45 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...
40 CFR 98.45 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...
40 CFR 98.45 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...
40 CFR 98.45 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...
Missing Children: A Close Look at the Issue.
ERIC Educational Resources Information Center
Davidson, Howard
1986-01-01
Examines the statistical dilemma and the controversy over media exposure concerning the issue of missing (abducted) children. Also looks at the role of the National Center for Missing and Exploited Children in relation to this issue. (Author/BB)
New cyberinfrastructure for studying land-atmosphere interactions using eddy covariance techniques
NASA Astrophysics Data System (ADS)
Jaimes, A.; Salayandia, L.; Gallegos, I.; Gates, A. Q.; Tweedie, C.
2010-12-01
limitations on ecological instrumentation output that affect data uncertainty. The objective was to parameterize and capture scientific knowledge necessary to typify data quality associated with eddy covariance methods. The process was documented by developing workflow driven ontologies, which can be used to disseminate how the Eddy Covariance Data is being captured and processed at JER, and also to automate the capture of provenance meta-data. Ultimately, we hope to develop scalable Eddy Covariance data capturing systems that offer additional information about how the data was captured, which hopefully will result in data sets with a higher degree of re-usability.
Optical Coatings and Surfaces in Space: MISSE
NASA Technical Reports Server (NTRS)
Stewart, Alan F.; Finckenor, Miria M.
2006-01-01
The space environment presents some unique problems for optics. Components must be designed to survive variations in temperature, exposure to ultraviolet, particle radiation, atomic oxygen and contamination from the immediate environment. To determine the importance of these phenomena, a series of passive exposure experiments have been conducted which included, among others, the Long Duration Exposure Facility (LDEF, 1985- 1990), the Passive Optical Sample Assembly (POSA, 1996- 1997) and most recently, the Materials on the International Space Station Experiment (MISSE, 2001 - 2005). The MISSE program benefited greatly from past experience so that at the conclusion of this 4 year mission, samples which remained intact were in remarkable condition. This study will review data from different aspects of this experiment with emphasis on optical properties and performance.
Detrended fluctuation analysis with missing data
NASA Astrophysics Data System (ADS)
Løvsletten, Ola
2016-04-01
Detrended fluctuation analysis (DFA) has become a popular tool for studying the scaling behavior of time series in a wide range of scientific disciplines. Many geophysical time series contain "gaps", meaning that some observations of a regularly sampled time series are missing. We show how DFA can be modified to properly handle signals with missing data without the need for interpolation or re-sampling. A new result is presented which states that one can write the fluctuation function in terms of a weighted sum of variograms (also known as second-order structure functions). In the presence of gaps this new estimator is equal in expectation to the fluctuation function in the gap-free case. A small-sample Monte Carlo study, as well as theoretical argument, show the superiority of the proposed method against mean-filling, linear interpolation and resampling.
Mercieca-Bebber, Rebecca; Palmer, Michael J; Brundage, Michael; Stockler, Martin R; King, Madeleine T
2016-01-01
Objectives Patient-reported outcomes (PROs) provide important information about the impact of treatment from the patients' perspective. However, missing PRO data may compromise the interpretability and value of the findings. We aimed to report: (1) a non-technical summary of problems caused by missing PRO data; and (2) a systematic review by collating strategies to: (A) minimise rates of missing PRO data, and (B) facilitate transparent interpretation and reporting of missing PRO data in clinical research. Our systematic review does not address statistical handling of missing PRO data. Data sources MEDLINE and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases (inception to 31 March 2015), and citing articles and reference lists from relevant sources. Eligibility criteria English articles providing recommendations for reducing missing PRO data rates, or strategies to facilitate transparent interpretation and reporting of missing PRO data were included. Methods 2 reviewers independently screened articles against eligibility criteria. Discrepancies were resolved with the research team. Recommendations were extracted and coded according to framework synthesis. Results 117 sources (55% discussion papers, 26% original research) met the eligibility criteria. Design and methodological strategies for reducing rates of missing PRO data included: incorporating PRO-specific information into the protocol; carefully designing PRO assessment schedules and defining termination rules; minimising patient burden; appointing a PRO coordinator; PRO-specific training for staff; ensuring PRO studies are adequately resourced; and continuous quality assurance. Strategies for transparent interpretation and reporting of missing PRO data include utilising auxiliary data to inform analysis; transparently reporting baseline PRO scores, rates and reasons for missing data; and methods for handling missing PRO data. Conclusions The instance of missing PRO data and its
On Testability of Missing Data Mechanisms in Incomplete Data Sets
ERIC Educational Resources Information Center
Raykov, Tenko
2011-01-01
This article is concerned with the question of whether the missing data mechanism routinely referred to as missing completely at random (MCAR) is statistically examinable via a test for lack of distributional differences between groups with observed and missing data, and related consequences. A discussion is initially provided, from a formal logic…