Science.gov

Sample records for missing covariate information

  1. Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.

    PubMed

    Li, Shanshan; Ning, Yang

    2015-09-01

    Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitations of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves in which the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator that utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that it remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. PMID:25891918
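
    An illustrative sketch (not code from the paper) of the inverse-probability-weighting idea behind the first estimator: complete cases are weighted by the inverse of an estimated selection probability. The logistic selection model, the simulated data, and all names are assumptions for this toy example; the paper's estimator targets ROC curves rather than a simple mean.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=n)                    # fully observed covariate
      biomarker = x + rng.normal(size=n)        # biomarker, later partly missing
      # Missing at random: the selection probability depends on x only
      p_obs = 1 / (1 + np.exp(-(0.5 + x)))
      observed = rng.uniform(size=n) < p_obs

      # Estimate selection probabilities from the observation indicator
      sel = LogisticRegression().fit(x.reshape(-1, 1), observed)
      pi_hat = sel.predict_proba(x.reshape(-1, 1))[:, 1]

      # Weighted estimator: complete cases weighted by 1 / pi_hat
      ipw_mean = (np.sum(biomarker[observed] / pi_hat[observed])
                  / np.sum(1 / pi_hat[observed]))
      cc_mean = biomarker[observed].mean()      # naive complete-case estimate
      print(ipw_mean, cc_mean)                  # IPW corrects the selection bias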

  2. A Semiparametric Missing-Data-Induced Intensity Method for Missing Covariate Data in Individually Matched Case–Control Studies

    PubMed Central

    Gebregziabher, Mulugeta; Langholz, Bryan

    2010-01-01

    In individually matched case–control studies, when some covariates are incomplete, an analysis based on the complete data may result in a large loss of information, both in the missing and in the completely observed variables. This usually results in bias and a loss of efficiency. In this article, we propose a new method for handling the problem of missing covariate data based on a missing-data-induced intensity approach when the missingness mechanism does not depend on case–control status, and show that this leads to a generalization of the missing indicator method. We derive the asymptotic properties of the estimates from the proposed method and, using an extensive simulation study, assess the finite sample performance in terms of bias, efficiency, and 95% confidence coverage under several missing data scenarios. We also make comparisons with complete-case analysis (CCA) and some missing data methods that have been proposed previously. Our results indicate that, under the assumption of predictable missingness, the suggested method provides valid estimation of parameters, is more efficient than CCA, and is competitive with other, more complex methods of analysis. A case–control study of multiple myeloma risk and a polymorphism in the interleukin-6 receptor (IL-6-α) is used to illustrate our findings. PMID:19751251

  3. Testing for associations with missing high-dimensional categorical covariates.

    PubMed

    Schumi, Jennifer; DiRienzo, A Gregory; DeGruttola, Victor

    2008-01-01

    Understanding how long-term clinical outcomes relate to short-term response to therapy is an important topic of research with a variety of applications. In HIV, early measures of viral RNA levels are known to be a strong prognostic indicator of future viral load response. However, mutations observed in the high-dimensional viral genotype at an early time point may change this prognosis. Unfortunately, some subjects may not have a viral genetic sequence measured at the early time point, and the sequence may be missing for reasons related to the outcome. Complete-case analyses of missing data are generally biased when the assumption that data are missing completely at random is not met, and methods incorporating multiple imputation may not be well-suited for the analysis of high-dimensional data. We propose a semiparametric multiple testing approach to the problem of identifying associations between potentially missing high-dimensional covariates and response. Following the recent exposition by Tsiatis, unbiased nonparametric summary statistics are constructed by inversely weighting the complete cases according to the conditional probability of being observed, given data that is observed for each subject. Resulting summary statistics will be unbiased under the assumption of missing at random. We illustrate our approach through an application to data from a recent AIDS clinical trial, and demonstrate finite sample properties with simulations. PMID:20231909

  4. Diagnostic Measures for Generalized Linear Models with Missing Covariates

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Shi, Xiaoyan

    2009-01-01

    In this paper, we carry out an in-depth investigation of diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for generalized linear models. Our diagnostic measures include case-deletion measures and conditional residuals. We use the conditional residuals to construct goodness-of-fit statistics for testing possible misspecifications in model assumptions, including the sampling distribution. We develop specific strategies for incorporating missing data into goodness-of-fit statistics in order to increase the power of detecting model misspecification. A resampling method is proposed to approximate the p-value of the goodness-of-fit statistics. Simulation studies are conducted to evaluate our methods and a real data set is analysed to illustrate the use of our various diagnostic measures. PMID:20037674

  5. A Nonparametric Multiple Imputation Approach for Data with Missing Covariate Values with Application to Colorectal Adenoma Data

    PubMed Central

    Hsu, Chiu-Hsieh; Long, Qi; Li, Yisheng; Jacobs, Elizabeth

    2015-01-01

    A nearest neighbor-based multiple imputation approach is proposed to recover missing covariate information using predictive covariates while estimating the association between the outcome and the covariates. To conduct the imputation, two working models are fitted to define an imputing set. This approach is expected to be robust to the underlying distribution of the data. We show in simulation, and demonstrate on a colorectal adenoma data set, that compared to the complete case analysis and a modified inverse probability weighted method, the proposed approach can improve efficiency and reduce bias in a missing-at-random situation. PMID:24697618
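
    A minimal sketch of the nearest-neighbor multiple imputation idea described above, assuming one linear-regression working model for the covariate and one for the outcome; the variable names, distance in the two-score space, and pooling by simple averaging are illustrative assumptions.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      n = 500
      z = rng.normal(size=n)                        # fully observed predictor
      x = 0.8 * z + rng.normal(scale=0.6, size=n)   # covariate with missing values
      y = 1.0 + 0.5 * x + 0.3 * z + rng.normal(size=n)
      miss = rng.uniform(size=n) < 1 / (1 + np.exp(-z))  # missing at random given z
      cc = ~miss                                    # complete cases (donor pool)

      # Two working models define the imputing set: one predicts the missing
      # covariate, the other the outcome, both fitted on complete cases.
      Z = z.reshape(-1, 1)
      s1 = LinearRegression().fit(Z[cc], x[cc]).predict(Z)
      s2 = LinearRegression().fit(Z[cc], y[cc]).predict(Z)

      k, m, betas = 5, 10, []
      pool = np.where(cc)[0]
      for _ in range(m):
          x_imp = x.copy()
          for i in np.where(miss)[0]:
              d = (s1[pool] - s1[i]) ** 2 + (s2[pool] - s2[i]) ** 2
              donors = pool[np.argsort(d)[:k]]      # k nearest neighbors
              x_imp[i] = x[rng.choice(donors)]      # draw one donor's value
          X = np.column_stack([np.ones(n), x_imp, z])
          betas.append(np.linalg.lstsq(X, y, rcond=None)[0])
      print(np.mean(betas, axis=0))                 # pooled (intercept, b_x, b_z)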

  6. Comparison of Two Approaches for Handling Missing Covariates in Logistic Regression

    ERIC Educational Resources Information Center

    Peng, Chao-Ying Joanne; Zhu, Jin

    2008-01-01

    For the past 25 years, methodological advances have been made in missing data treatment. Most published work has focused on missing data in dependent variables under various conditions. The present study seeks to fill the void by comparing two approaches for handling missing data in categorical covariates in logistic regression: the…

  7. Music Information Services System (MISS).

    ERIC Educational Resources Information Center

    Rao, Paladugu V.

    Music Information Services System (MISS) was developed at the Eastern Illinois University Library to manage the sound recording collection. Operating in a batch mode, MISS keeps track of the inventory of sound recordings, generates necessary catalogs to facilitate the use of the sound recordings, and provides specialized bibliographies of sound…

  8. A New Approach to Handle Missing Covariate Data in Twin Research: With an Application to Educational Achievement Data.

    PubMed

    Schwabe, Inga; Boomsma, Dorret I; Zeeuw, Eveline L de; Berg, Stéphanie M van den

    2016-07-01

    The often-used ACE model, which decomposes phenotypic variance into additive genetic (A), common-environmental (C), and unique-environmental (E) parts, can be extended to include covariates. Collection of these variables, however, often leads to a large amount of missing data, for example when self-reports (e.g., questionnaires) are not fully completed. The usual approach to handling missing covariate data in twin research results in reduced power to detect statistical effects, as only the phenotypic and covariate data of individual twins with complete data can be used. Here we present a full information approach to handling missing covariate data that makes it possible to use all available data. A simulation study shows that, independent of the missingness scenario, the number of covariates, or the amount of missingness, the full information approach is more powerful than the usual approach. To illustrate the new method, we applied it to scores on a Dutch national school achievement test (Eindtoets Basisonderwijs) taken in the final grade of primary school by 990 twin pairs. The effects of school-aggregated measures (e.g., school denomination, pedagogical philosophy, school size) and of the sex of a twin on these test scores were tested. None of the covariates had a significant effect on individual differences in test scores.

  9. Merging multiple longitudinal studies with study-specific missing covariates: A joint estimating function approach.

    PubMed

    Wang, Fei; Song, Peter X-K; Wang, Lu

    2015-12-01

    Merging multiple datasets collected from studies with identical or similar scientific objectives is often undertaken in practice to increase statistical power. This article concerns the development of an effective statistical method for merging multiple longitudinal datasets subject to various heterogeneous characteristics, such as different follow-up schedules and study-specific missing covariates (e.g., covariates observed in some studies but missing in others). The presence of study-specific missing covariates presents a great methodological challenge in data merging and analysis. We propose a joint estimating function approach to address this challenge, in which a novel nonparametric estimating function constructed via spline-based sieve approximation is used to bridge estimating equations from studies with missing covariates to those with fully observed covariates. Under mild regularity conditions, we show that the proposed estimator is consistent and asymptotically normal. We evaluate the finite-sample performance of the proposed method through simulation studies. In comparison to the conventional multiple imputation approach, our method exhibits smaller estimation bias. We provide an illustrative data analysis using longitudinal cohorts collected in Mexico City to assess the effect of lead exposure on children's somatic growth.

  10. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  11. Cox regression with missing covariate data using a modified partial likelihood method.

    PubMed

    Martinussen, Torben; Holst, Klaus K; Scheike, Thomas H

    2016-10-01

    Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before carrying out the estimation of the parameter of interest. In this step one uses a Breslow-type estimator to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without having to assume anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance-covariance matrix that does not involve any choice of a perturbation parameter. Moderate sample size performance of the estimators is investigated via simulation and by application to a real data example. PMID:26493471
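
    For context, a small sketch of a Breslow-type estimator of the cumulative baseline hazard, the quantity profiled out in the method above; the toy data and the assumption of no tied event times are illustrative simplifications.

      import numpy as np

      def breslow_cumhaz(time, event, lp):
          """Breslow-type estimator of the cumulative baseline hazard, given
          linear predictors lp = X @ beta from a fitted Cox model (sketch;
          assumes no tied event times)."""
          order = np.argsort(time)
          time, event, risk = time[order], event[order], np.exp(lp[order])
          # Sum of exp(lp) over the risk set at each ordered time
          risk_set = np.cumsum(risk[::-1])[::-1]
          return time, np.cumsum(event / risk_set)

      rng = np.random.default_rng(2)
      t = rng.exponential(size=100)             # observed times
      d = rng.uniform(size=100) < 0.7           # event indicator
      lp = rng.normal(scale=0.3, size=100)      # stand-in linear predictor
      times, H0 = breslow_cumhaz(t, d, lp)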

  12. Imputation of missing covariate values in epigenome-wide analysis of DNA methylation data

    PubMed Central

    Wu, Chong; Demerath, Ellen W.; Pankow, James S.; Bressler, Jan; Fornage, Myriam; Grove, Megan L.; Chen, Wei; Guan, Weihua

    2016-01-01

    DNA methylation is a widely studied epigenetic mechanism, and alterations in methylation patterns may be involved in the development of common diseases. Unlike inherited changes in genetic sequence, site-specific methylation varies by tissue, developmental stage, and disease status, and may be impacted by aging and exposure to environmental factors, such as diet or smoking. These non-genetic factors are typically included in epigenome-wide association studies (EWAS) because they may be confounding factors for the association between methylation and disease. However, missing values in these variables can lead to reduced sample size and decrease the statistical power of EWAS. We propose a site selection and multiple imputation (MI) method to impute missing covariate values and to perform association tests in EWAS. We then compare this method to an alternative projection-based method. Through simulations, we show that the MI-based method is slightly conservative but provides consistent estimates of effect size. We also illustrate these methods with data from the Atherosclerosis Risk in Communities (ARIC) study to carry out an EWAS between methylation levels and smoking status, in which missing cell type compositions and white blood cell counts are imputed. PMID:26890800
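
    As background, a sketch of Rubin's rules, the standard pooling step behind a multiple imputation analysis like the one proposed here; the numeric inputs below are invented.

      import numpy as np

      def pool_rubin(estimates, variances):
          """Combine per-imputation estimates and variances (Rubin's rules)."""
          q = np.asarray(estimates, float)
          u = np.asarray(variances, float)
          m = len(q)
          qbar = q.mean()                        # pooled point estimate
          ubar = u.mean()                        # within-imputation variance
          b = q.var(ddof=1)                      # between-imputation variance
          total = ubar + (1 + 1 / m) * b         # total variance
          df = (m - 1) * (1 + ubar / ((1 + 1 / m) * b)) ** 2
          return qbar, total, df

      print(pool_rubin([0.21, 0.25, 0.19, 0.23, 0.22],
                       [0.004, 0.005, 0.004, 0.005, 0.004]))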

  13. Addressing missing covariates for the regression analysis of competing risks: Prognostic modelling for triaging patients diagnosed with prostate cancer.

    PubMed

    Escarela, Gabriel; Ruiz-de-Chavez, Juan; Castillo-Morales, Alberto

    2016-08-01

    Competing risks arise in medical research when subjects are exposed to various types or causes of death. Data from large cohort studies usually exhibit subsets of regressors that are missing for some study subjects. Furthermore, such studies often give rise to censored data. In this article, a carefully formulated likelihood-based technique for the regression analysis of right-censored competing risks data when two of the covariates are discrete and partially missing is developed. The approach envisaged here comprises two models: one describes the covariate effects on both long-term incidence and conditional latencies for each cause of death, whilst the other deals with the observation process by which the covariates are missing. The former is formulated with a well-established mixture model and the latter is characterised by copula-based bivariate probability functions for both the missing covariates and the missing data mechanism. The resulting formulation lends itself to the empirical assessment of non-ignorability by performing sensitivity analyses using models with and without a non-ignorable component. The methods are illustrated on a 20-year follow-up involving a prostate cancer cohort from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) program.

  14. MISS: A Metamodel of Information System Service

    NASA Astrophysics Data System (ADS)

    Arni-Bloch, Nicolas; Ralyté, Jolita

    Integration of different components that compose enterprise information systems (IS) represents a big challenge in the IS development. However, this integration is indispensable in order to avoid IS fragmentation and redundancy between different IS applications. In this work we apply service-oriented development principles to information systems. We define the concept of information system service (ISS) and propose a metamodel of ISS (MISS). We claim that it is not sufficient to consider an ISS as a black box and it is essential to include in the ISS specification the information about service structure, processes and rules shared with other services and thus to make the service transparent. Therefore we define the MISS using three informational spaces (static, dynamic and rule).

  15. Bayesian semiparametric nonlinear mixed-effects joint models for data with skewness, missing responses, and measurement errors in covariates.

    PubMed

    Huang, Yangxin; Dagne, Getachew

    2012-09-01

    It is common practice to analyze complex longitudinal data using semiparametric nonlinear mixed-effects (SNLME) models with a normal distribution. The normality assumption for model errors may unrealistically obscure important features of subject variations. To partially explain between- and within-subject variations, covariates are usually introduced in such models, but some covariates may be measured with substantial errors. Moreover, the responses may be missing, and the missingness may be nonignorable. Inferential procedures can be complicated dramatically when data with skewness, missing values, and measurement error are observed. In the literature, there has been considerable interest in accommodating either skewness, incompleteness, or covariate measurement error in such models, but relatively little work has addressed all three features simultaneously. In this article, our objective is to address the simultaneous impact of skewness, missingness, and covariate measurement error by jointly modeling the response and covariate processes via a flexible Bayesian SNLME model. The method is illustrated using a real AIDS data set to compare potential models under various scenarios and different distribution specifications.

  16. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  17. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  18. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  19. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  20. 19 CFR 201.3a - Missing children information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 201.3a Missing children information. (a) Pursuant to 39 U.S.C. 3220, penalty mail sent by the Commission may be used to assist in the location and recovery of missing children. This section...

  21. Jointly Modeling Event Time and Skewed-Longitudinal Data with Missing Response and Mismeasured Covariate for AIDS Studies.

    PubMed

    Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian

    2015-01-01

    In longitudinal studies it is often of interest to investigate how a repeatedly measured marker is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of model errors in the longitudinal model is a routine assumption, but it may unrealistically obscure important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates, such as CD4 cell count, may often be measured with substantial errors. Moreover, the responses may be subject to nonignorable missingness. Statistical analysis may be complicated dramatically in longitudinal-survival joint models when longitudinal data with skewness, missing values, and measurement errors are observed. In this article, we relax the distributional assumptions for the longitudinal models using a skewed (parametric) distribution and an unspecified (nonparametric) distribution with a Dirichlet process prior, and address the simultaneous influence of skewness, missingness, covariate measurement error, and the time-to-event process by jointly modeling three components (the response process with missing values, the covariate process with measurement errors, and the time-to-event process) linked through the random effects that characterize the underlying individual-specific longitudinal processes in a Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound, in comparison with potential models under various scenarios and different distributional specifications. PMID:24905593

  22. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249
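
    A toy demonstration (not the authors' simulation) of why proration can mislead when item means differ: averaging only the available items shifts the scale score for participants missing the high-mean item, even under MCAR deletion.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 10000
      # Three items with unequal means (the case proration handles poorly)
      items = rng.normal(loc=[1.0, 2.0, 3.0], scale=1.0, size=(n, 3))
      complete_score = items.mean(axis=1)          # scale score with full data

      # Delete item 3 completely at random for half the sample (MCAR)
      miss = rng.uniform(size=n) < 0.5
      obs = items.copy()
      obs[miss, 2] = np.nan

      prorated = np.nanmean(obs, axis=1)           # average the available items
      # The prorated mean drifts toward the mean of items 1-2 for incomplete cases
      print(complete_score.mean(), prorated.mean())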

  23. Adaptive Mechanisms for Treating Missing Information: A Simulation Study

    ERIC Educational Resources Information Center

    Garcia-Retamero, Rocio; Rieskamp, Jorg

    2008-01-01

    People often make inferences with incomplete information. Previous research has led to a mixed picture of how people treat missing information. To explain these results, the authors follow the Brunswikian perspective on human inference and hypothesize that the mechanism's accuracy for treating missing information depends on how it is distributed…

  24. Do people treat missing information adaptively when making inferences?

    PubMed

    Garcia-Retamero, Rocio; Rieskamp, Jörg

    2009-10-01

    When making inferences, people are often confronted with situations with incomplete information. Previous research has led to a mixed picture about how people react to missing information. Options include ignoring missing information, treating it as either positive or negative, using the average of past observations for replacement, or using the most frequent observation of the available information as a placeholder. The accuracy of these inference mechanisms depends on characteristics of the environment. When missing information is uniformly distributed, it is most accurate to treat it as the average, whereas when it is negatively correlated with the criterion to be judged, treating missing information as if it were negative is most accurate. Whether people treat missing information adaptively according to the environment was tested in two studies. The results show that participants were sensitive to how missing information was distributed in an environment and most frequently selected the mechanism that was most adaptive. From these results the authors conclude that reacting to missing information in different ways is an adaptive response to environmental characteristics.

  25. MISSE in the Materials and Processes Technical Information System (MAPTIS)

    NASA Technical Reports Server (NTRS)

    Burns, DeWitt; Finckenor, Miria; Henrie, Ben

    2013-01-01

    Materials International Space Station Experiment (MISSE) data is now being collected and distributed through the Materials and Processes Technical Information System (MAPTIS) at Marshall Space Flight Center in Huntsville, Alabama. MISSE data has been instrumental in many programs and continues to be an important source of data for the space community. To facilitate greater access to the MISSE data, the International Space Station (ISS) program office and MAPTIS are working to gather this data into a central location. The MISSE database contains information about materials, samples, and flights, along with pictures, PDFs, Excel files, Word documents, and other file types. Major capabilities of the system are: access control, browsing, searching, reports, and record comparison. The search capabilities will search within any searchable files, so data can still be retrieved even if the desired meta-data has not been associated. Other functionality will continue to be added to the MISSE database as the Athena Platform is expanded.

  26. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  27. On Added Information for ML Factor Analysis with Mean and Covariance Structures.

    ERIC Educational Resources Information Center

    Yung, Yiu-Fai; Bentler, Peter M.

    1999-01-01

    Using explicit formulas for the information matrix of maximum likelihood factor analysis under multivariate normal theory, gross and net information for estimating the parameters in a covariance structure gained by adding the associated mean structure are defined. (Author/SLD)

  28. Estimating Missing Features to Improve Multimedia Information Retrieval

    SciTech Connect

    Bagherjeiran, A; Love, N S; Kamath, C

    2006-09-28

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
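
    A rough sketch of the query-completion idea, assuming a toy database of caption and image feature vectors: the missing image part of a text-only query is estimated from the nearest captions before retrieval. All names, dimensions, and the averaging rule are assumptions, not the paper's estimator.

      import numpy as np

      rng = np.random.default_rng(4)
      n_db, d_text, d_img = 200, 16, 32
      db_text = rng.normal(size=(n_db, d_text))     # caption features
      db_img = rng.normal(size=(n_db, d_img))       # image features

      def complete_and_retrieve(q_text, k=5, top=10):
          """Fill in the missing image features of a text-only query by
          averaging the image features of the k nearest captions, then rank
          the database by cosine similarity on the completed query."""
          d = np.linalg.norm(db_text - q_text, axis=1)
          nn = np.argsort(d)[:k]
          q_img = db_img[nn].mean(axis=0)           # estimated missing features
          q = np.concatenate([q_text, q_img])
          full = np.concatenate([db_text, db_img], axis=1)
          sim = full @ q / (np.linalg.norm(full, axis=1) * np.linalg.norm(q))
          return np.argsort(-sim)[:top]

      hits = complete_and_retrieve(rng.normal(size=d_text))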

  29. Electronic pharmacopoeia: a missed opportunity for safe opioid prescribing information?

    PubMed

    Lapoint, Jeff; Perrone, Jeanmarie; Nelson, Lewis S

    2014-03-01

    Errors in prescribing of dangerous medications, such as extended release or long acting (ER/LA) opioid formulations, remain an important cause of patient harm. Prescribing errors often relate to the failure to note warnings regarding contraindications and drug interactions. Many prescribers utilize electronic pharmacopoeia (EP) to improve medication ordering. The purpose of this study is to assess the ability of commonly used apps to provide accurate safety information about the boxed warning for ER/LA opioids. We evaluated a convenience sample of six popular EP apps available for the iPhone and an online reference for the presence of relevant safety warnings. We accessed the dosing information for each of six ER/LA medications and assessed for the presence of an easily identifiable indication that a boxed warning was present, even if the warning itself was not provided. The prominence of precautionary drug information presented to the user was assessed for each app. Provided information was classified based on the presence of the warning in the ordering pathway, located separately but within the prescribers view, or available in a separate screen of the drug information but non-highlighted. Each program provided a consistent level of warning information for each of the six ER/LA medications. Only 2/7 programs placed a warning in line with dosing information (level 1); 3/7 programs offered level 2 warning and 1/7 offered level 3 warning. One program made no mention of a boxed warning. Most EP apps isolate important safety warnings, and this represents a missed opportunity to improve prescribing practices. PMID:24081616

  30. Informed Conditioning on Clinical Covariates Increases Power in Case-Control Association Studies

    PubMed Central

    Zaitlen, Noah; Lindström, Sara; Pasaniuc, Bogdan; Cornelis, Marilyn; Genovese, Giulio; Pollack, Samuela; Barton, Anne; Bickeböller, Heike; Bowden, Donald W.; Eyre, Steve; Freedman, Barry I.; Friedman, David J.; Field, John K.; Groop, Leif; Haugen, Aage; Heinrich, Joachim; Henderson, Brian E.; Hicks, Pamela J.; Hocking, Lynne J.; Kolonel, Laurence N.; Landi, Maria Teresa; Langefeld, Carl D.; Le Marchand, Loic; Meister, Michael; Morgan, Ann W.; Raji, Olaide Y.; Risch, Angela; Rosenberger, Albert; Scherf, David; Steer, Sophia; Walshaw, Martin; Waters, Kevin M.; Wilson, Anthony G.; Wordsworth, Paul; Zienolddiny, Shanbeh; Tchetgen, Eric Tchetgen; Haiman, Christopher; Hunter, David J.; Plenge, Robert M.; Worthington, Jane; Christiani, David C.; Schaumberg, Debra A.; Chasman, Daniel I.; Altshuler, David; Voight, Benjamin; Kraft, Peter; Patterson, Nick; Price, Alkes L.

    2012-01-01

    Genetic case-control association studies often include data on clinical covariates, such as body mass index (BMI), smoking status, or age, that may modify the underlying genetic risk of case or control samples. For example, in type 2 diabetes, odds ratios for established variants estimated from low–BMI cases are larger than those estimated from high–BMI cases. An unanswered question is how to use this information to maximize statistical power in case-control studies that ascertain individuals on the basis of phenotype (case-control ascertainment) or phenotype and clinical covariates (case-control-covariate ascertainment). While current approaches improve power in studies with random ascertainment, they often lose power under case-control ascertainment and fail to capture available power increases under case-control-covariate ascertainment. We show that an informed conditioning approach, based on the liability threshold model with parameters informed by external epidemiological information, fully accounts for disease prevalence and non-random ascertainment of phenotype as well as covariates and provides a substantial increase in power while maintaining a properly controlled false-positive rate. Our method outperforms standard case-control association tests with or without covariates, tests of gene x covariate interaction, and previously proposed tests for dealing with covariates in ascertained data, with especially large improvements in the case of case-control-covariate ascertainment. We investigate empirical case-control studies of type 2 diabetes, prostate cancer, lung cancer, breast cancer, rheumatoid arthritis, age-related macular degeneration, and end-stage kidney disease over a total of 89,726 samples. In these datasets, informed conditioning outperforms logistic regression for 115 of the 157 known associated variants investigated (P-value = 1×10−9). The improvement varied across diseases with a 16% median increase in χ2 test statistics and a

  31. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  32. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets.

  33. Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information.

    PubMed

    Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor

    2013-01-01

    In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missing values are typically lower than those estimated by discarding the missing values. Accounting for missing values by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
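
    For reference, the standard chi-square test of Hardy-Weinberg proportions and the corresponding inbreeding coefficient computed from genotype counts; the counts below are invented, and the multinomial-logit imputation step is not reproduced.

      import numpy as np
      from scipy.stats import chi2

      def hwe_chisq(n_AA, n_AB, n_BB):
          """Chi-square test of Hardy-Weinberg proportions from genotype counts."""
          n = n_AA + n_AB + n_BB
          p = (2 * n_AA + n_AB) / (2 * n)           # allele frequency of A
          expected = n * np.array([p**2, 2*p*(1-p), (1-p)**2])
          observed = np.array([n_AA, n_AB, n_BB])
          stat = ((observed - expected) ** 2 / expected).sum()
          return stat, chi2.sf(stat, df=1)

      def inbreeding_f(n_AA, n_AB, n_BB):
          """Inbreeding coefficient: 1 - observed/expected heterozygosity."""
          n = n_AA + n_AB + n_BB
          p = (2 * n_AA + n_AB) / (2 * n)
          return 1 - (n_AB / n) / (2 * p * (1 - p))

      print(hwe_chisq(298, 489, 213), inbreeding_f(298, 489, 213))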

  34. 78 FR 55123 - Submission for Review: We Need Information About Your Missing Payment, RI 38-31

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    Submission for Review: We Need Information About Your Missing Payment, RI 38-31. AGENCY: U.S... (ICR) 3206-0187, We Need Information About Your Missing Payment, RI 38-31. As required by the Paperwork... Services, Office of Personnel Management. Title: We Need Information About Your Missing Payment. OMB:...

  35. Missing Children. Missing Children Data Collected by the National Crime Information Center. Fact Sheet for the Honorable Alfonse M. D'Amato, United States Senate.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    This document is a fact sheet on missing children data collected by the National Crime Information Center (NCIC). The document contains details on the design of the NCIC system, state control for management and use of the system, access to the system, criteria for missing persons, number of active cases, and the unidentified persons file of the…

  36. A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from TM images in which true information is lost because of blocking clouds and missing data stripes. Since water is continuously distributed under natural conditions, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different disturbances from clouds and missing data stripes are simulated. Water information is then extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.
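
    An illustrative implementation of the global histogram matching baseline mentioned above, done via quantile mapping; the synthetic bands stand in for TM imagery and are assumptions of this sketch.

      import numpy as np

      def histogram_match(source, reference):
          """Map the values of `source` so its histogram matches `reference`
          (quantile mapping); used here as the global-matching baseline."""
          s_idx = np.argsort(source.ravel())
          s_quant = np.linspace(0, 1, source.size)
          r_sorted = np.sort(reference.ravel())
          r_quant = np.linspace(0, 1, reference.size)
          matched = np.empty(source.size)
          matched[s_idx] = np.interp(s_quant, r_quant, r_sorted)
          return matched.reshape(source.shape)

      rng = np.random.default_rng(5)
      cloudy = rng.normal(0.3, 0.05, size=(64, 64))   # band with lost information
      clear = rng.normal(0.5, 0.10, size=(64, 64))    # reference band
      print(histogram_match(cloudy, clear).mean())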

  37. Information Literacy: The Missing Link in Early Childhood Education

    ERIC Educational Resources Information Center

    Heider, Kelly L.

    2009-01-01

    The rapid growth of information over the last 30 or 40 years has made it impossible for educators to prepare students for the future without teaching them how to be effective information managers. The American Library Association refers to those students who manage information effectively as "information literate." Information literacy instruction…

  38. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal (UD) factorized covariance arrays and vector-stored upper-triangular square-root information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and only a one-dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.

  39. Open Informational Ecosystems: The Missing Link for Sharing Educational Resources

    ERIC Educational Resources Information Center

    Kerres, Michael; Heinen, Richard

    2015-01-01

    Open educational resources are not available "as such". Their provision relies on a technological infrastructure of related services that can be described as an informational ecosystem. A closed informational ecosystem keeps educational resources within its boundary. An open informational ecosystem relies on the concurrence of…

  40. Covariances of evaluated nuclear data based upon uncertainty information of experimental data and nuclear models

    SciTech Connect

    Poenitz, W.P.; Peelle, R.W.

    1986-11-17

    A straightforward derivation is presented for the covariance matrix of evaluated cross sections based on the covariance matrix of the experimental data and propagation through nuclear model parameters. 10 refs.
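
    The propagation step can be summarized by the sandwich rule C_sigma = S C_p S^T, where S holds sensitivities of the evaluated cross sections to the nuclear model parameters. A minimal numeric illustration with invented numbers:

      import numpy as np

      # Sensitivities S[i, j] = d sigma_i / d p_j (illustrative values)
      S = np.array([[0.8, 0.1],
                    [0.3, 0.6],
                    [0.1, 0.9]])
      C_p = np.array([[0.04, 0.01],      # covariance of model parameters
                      [0.01, 0.09]])
      C_sigma = S @ C_p @ S.T            # propagated cross-section covariance
      print(C_sigma)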

  41. Modeling Achievement Trajectories when Attrition Is Informative

    ERIC Educational Resources Information Center

    Feldman, Betsy J.; Rabe-Hesketh, Sophia

    2012-01-01

    In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…

  42. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
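
    A compact sketch of the permutation idea for SRIF arrays: reordering the state vector permutes the columns of the upper-triangular array, and a QR factorization (standing in here for the Givens sweep) restores triangularity. Dense matrices are used instead of the vector storage the paper addresses.

      import numpy as np

      def permute_srif(R, perm):
          """Permute the state ordering of an upper-triangular SRIF array R
          and re-triangularize. QR plays the role of the Givens rotations; a
          production version would rotate only where fill-in occurs."""
          Rp = R[:, perm]                # column permutation = state reordering
          q, r = np.linalg.qr(Rp)
          s = np.sign(np.diag(r))        # fix signs so the diagonal is >= 0
          s[s == 0] = 1.0
          return r * s[:, None]

      R = np.triu(np.random.default_rng(6).normal(size=(4, 4)))
      perm = [2, 0, 3, 1]
      R2 = permute_srif(R, perm)
      # The information matrix is invariant under the re-triangularization
      P = np.eye(4)[:, perm]
      assert np.allclose(P.T @ R.T @ R @ P, R2.T @ R2)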

  43. Relying on Your Own Best Judgment: Imputing Values to Missing Information in Decision Making.

    ERIC Educational Resources Information Center

    Johnson, Richard D.; And Others

    Processes involved in making estimates of the value of missing information that could help in a decision making process were studied. Hypothetical purchases of ground beef were selected for the study as such purchases have the desirable property of quantifying both the price and quality. A total of 150 students at the University of Iowa rated the…

  44. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  45. Individual Information-Centered Approach for Handling Physical Activity Missing Data

    ERIC Educational Resources Information Center

    Kang, Minsoo; Rowe, David A.; Barreira, Tiago V.; Robinson, Terrance S.; Mahar, Matthew T.

    2009-01-01

    The purpose of this study was to validate individual information (II)-centered methods for handling missing data, using data samples of 118 middle-aged adults and 91 older adults equipped with Yamax SW-200 pedometers and Actigraph accelerometers for 7 days. We used a semisimulation approach to create six data sets: three physical activity outcome…

  46. Restoration of the missing pixel information caused by contrails in multispectral remotely sensed imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Daxiang; Zhang, Chuanrong; Li, Weidong; Cromley, Robert; Hanink, Dean; Civco, Daniel; Travis, David

    2014-01-01

    Although removing the pixels covered by contrails and their shadows and restoring the missing information at the locations in remotely sensed imagery are important to understand contrails' effects on climate change, there are no such studies in the current literature. This study investigates the restoration of the missing information of the pixels caused by contrails in multispectral remotely sensed Landsat 5 TM imagery using a cokriging approach. Interpolation results and several validation methods show that it is practical to use the cokriging approach to restore the contrail-covered pixels in the multispectral remotely sensed imagery. Compared to ordinary kriging, the results are improved by taking advantage of both the spatial information in the original imagery and information from the secondary imagery.

  47. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 364.3 Publication of missing children information in the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  48. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    § 364.3 Publication of missing children information in the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  49. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 364.3 Publication of missing children information in the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  50. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    § 364.3 Publication of missing children information in the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  51. 20 CFR 364.3 - Publication of missing children information in the Railroad Retirement Board's in-house...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    § 364.3 Publication of missing children information in the Railroad Retirement Board's in-house publications. (a) All-A-Board. Information about missing... publication. (b) Other in-house publications. The Board may publish missing children information in other...

  52. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow-dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
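
    A minimal sketch of the FAST idea as described above: treat states from a moving window along one trajectory as a pseudo-ensemble and form a sample covariance. The shapes and the toy random-walk trajectory are assumptions of this sketch.

      import numpy as np

      def fast_covariance(trajectory, t, window):
          """Estimate a background error covariance at time t from states in
          a moving window along a single trajectory (FAST-like sketch).
          trajectory has shape (n_times, n_state)."""
          lo, hi = max(0, t - window), min(len(trajectory), t + window + 1)
          ens = trajectory[lo:hi]                   # pseudo-ensemble
          anomalies = ens - ens.mean(axis=0)
          return anomalies.T @ anomalies / (len(ens) - 1)

      # Toy trajectory of a 3-variable model (random walk)
      rng = np.random.default_rng(7)
      traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
      B = fast_covariance(traj, t=100, window=10)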

  53. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  54. Miss Heroin.

    ERIC Educational Resources Information Center

    Riley, Bernice

    This script, with music, lyrics and dialog, was written especially for youngsters to inform them of the potential dangers of various drugs. The author, who teaches in an elementary school in Harlem, New York, offers Miss Heroin as her answer to the expressed opinion that most drug and alcohol information available is either too simplified and…

  18. The impact of covariance information on criticality safety calculations in the resolved resonance energy range.

    SciTech Connect

    Naberejnev, D. G.; Palmiotti, G.; Yang, W. S.

    2004-06-11

    Resonance data play a significant role in calculations for systems considered in criticality safety applications. K_eff, the major parameter of interest in such calculations, can depend heavily both on the quality of the resonance data and on the accuracy achieved in processing these data. If reasonable uncertainty values are available, together with their correlations in energy and among types of resonance parameters, one can exploit existing methodologies based on perturbation theory to evaluate their impact on the integral parameter of interest, i.e., K_eff in our case, in practical applications. In this way, one can judge whether the uncertainty on specific quantities, e.g., covariances on resonance data, has a significant impact and therefore deserves careful evaluation. This report first recalls the basic principles that lie behind an uncertainty evaluation and reviews the current situation in the field of covariance data. An attempt is then made to define a methodology for calculating covariance values for resolved resonance parameters. Finally, practical applications of interest for criticality safety calculations illustrate the impact of different assumptions on correlations among resolved resonance parameters.
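
    The perturbation-theory propagation the abstract alludes to is the standard first-order "sandwich" rule; a sketch, writing S for the vector of K_eff sensitivities to the resonance parameters p and C for their covariance matrix:

      \[ \operatorname{Var}(k_{\mathrm{eff}}) \;\approx\; S^{\top} C \, S, \qquad S_i = \frac{\partial k_{\mathrm{eff}}}{\partial p_i}, \]

    so the off-diagonal (correlation) terms of C can dominate the uncertainty even when the individual parameter variances are small, which is why the correlations among resolved resonance parameters matter here.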

  19. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    PubMed

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

    In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and to characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion. PMID:25345922
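
    The weighted-least-squares variogram fit can be sketched in a few lines; `coords` and `salinity` are hypothetical arrays, and an exponential model stands in for the Matérn family to keep the sketch short.

      import numpy as np
      from scipy.optimize import curve_fit

      def empirical_variogram(coords, values, n_bins=15):
          # Pairwise distances and squared half-differences (small n only).
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          g = 0.5 * (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices_from(d, k=1)
          d, g = d[iu], g[iu]
          edges = np.linspace(0.0, d.max() + 1e-9, n_bins + 1)
          idx = np.digitize(d, edges) - 1
          h, gamma, n = [], [], []
          for k in range(n_bins):
              sel = idx == k
              if sel.any():                        # skip empty distance bins
                  h.append(d[sel].mean())
                  gamma.append(g[sel].mean())
                  n.append(sel.sum())
          return np.array(h), np.array(gamma), np.array(n)

      def exp_variogram(h, nugget, psill, rng):
          # Exponential variogram; stands in for the Matern family here.
          return nugget + psill * (1.0 - np.exp(-h / rng))

      # Cressie-style WLS: weight each bin by its pair count.
      # h, gamma, n = empirical_variogram(coords, salinity)
      # params, _ = curve_fit(exp_variogram, h, gamma,
      #                       p0=[0.0, gamma.max(), h.mean()],
      #                       sigma=1.0 / np.sqrt(n))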

  1. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems requiring exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For applications directly involving the ratio between precipitation and some other quantity, the lack of complete information can result in poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g., rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors caused by the lack of data in specific time windows. Our analyses included both a classical chronological pairing approach between rainfall stations and a probability-based approach, which accounted for the probability of exceedance of rain depths measured at two or more stations. Our analyses showed that it is not clear a priori which method performs best; rather, the selection should be based on the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is on the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which exert an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions. The methods were
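
    A minimal sketch of the probability-based reconstruction, assuming two overlapping daily-rainfall records `target` and `neighbor` (hypothetical numpy arrays with np.nan marking gaps): missing target values are mapped from the neighbor through matched empirical exceedance probabilities, which is exactly where the fitted distribution's tail behavior enters.

      import numpy as np

      def quantile_map_fill(target, neighbor):
          # Fit on days where both stations report.
          both = ~np.isnan(target) & ~np.isnan(neighbor)
          t_obs = np.sort(target[both])
          n_obs = np.sort(neighbor[both])
          filled = target.copy()
          gaps = np.isnan(target) & ~np.isnan(neighbor)
          # Non-exceedance probability of the neighbor's value on gap days...
          p = np.searchsorted(n_obs, neighbor[gaps]) / len(n_obs)
          # ...inverted through the target station's empirical distribution.
          filled[gaps] = np.quantile(t_obs, np.clip(p, 0.0, 1.0))
          return filled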

  2. Missed bleeding events after ticagrelor in PEGASUS trial: Massive non-compliance, information censoring, or both?

    PubMed

    Serebruany, Victor; Tomek, Ales

    2016-07-15

    The PEGASUS trial reported a reduction of the composite primary endpoint after conventional 180 mg/daily ticagrelor (CT) and lower-dose 120 mg/daily ticagrelor (LT), at the expense of extra bleeding. Following approval of CT and LT for the long-term secondary prevention indication, a recent FDA review verified some bleeding outcomes in PEGASUS. Our aim was to compare the risks after CT and LT against placebo by seven TIMI scale variables and nine bleeding categories considered serious adverse events (SAE), in light of PEGASUS drug discontinuation rates (DDR). The DDR in all PEGASUS arms was high, reaching an astronomical 32% for CT. The distribution of some outcomes (TIMI major, trauma, epistaxis, iron deficiency, hemoptysis, and anemia) was reasonable. However, TIMI minor events were heavily underreported when compared to similar trials. Other bleedings (intracranial, spontaneous, hematuria, and gastrointestinal) appear sporadic, lacking the expected dose-dependent impact of CT and LT. A few SAE outcomes (fatal, ecchymosis, hematoma, bruises, bleeding) paradoxically showed more bleeding after LT than after CT. Many bleeding outcomes were probably missed in PEGASUS, potentially due to massive non-compliance, information censoring, or both. The FDA must improve reporting of trial outcomes, especially in the sponsor-controlled environment, when DDR and incomplete follow-up rates are high. PMID:27128533

  3. Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed.

    PubMed

    Ehinger, Krista A; Allen, Kala; Wolfe, Jeremy M

    2016-05-01

    Previous work has shown that human observers discount or neglect cast shadows in natural and artificial scenes across a range of visual tasks. This is a reasonable strategy for a visual system designed to recognize objects under a range of lighting conditions, since cast shadows are not intrinsic properties of the scene; they look different (or disappear entirely) under different lighting conditions. However, cast shadows can convey useful information about the three-dimensional shapes of objects and their spatial relations. In this study, we investigated how well people detect changes to cast shadows, presented in natural scenes in a change blindness paradigm, and whether shadow changes that imply the movement or disappearance of an object are more easily noticed than shadow changes that imply a change in lighting. In Experiment 1, a critical object's shadow was removed, rotated to another direction, or shifted down to suggest that the object was floating. All of these shadow changes were noticed less often than changes to physical objects or surfaces in the scene, and there was no difference in the detection rates for the three types of changes. In Experiment 2, the shadows of visible or occluded objects were removed from the scenes. Although removing the cast shadow of an occluded object could be seen as an object deletion, both types of shadow changes were noticed less often than deletions of the visible, physical objects in the scene. These results show that even informative shadow changes are missed, suggesting that cast shadows are discounted fairly early in the processing of natural scenes. PMID:26846753

  5. Missing data exploration: highlighting graphical presentation of missing pattern.

    PubMed

    Zhang, Zhongheng

    2015-12-01

    Functions shipped with base R can fulfill many tasks of missing data handling. However, because the data volume of electronic medical record (EMR) systems is always very large, more sophisticated methods may be helpful in data management. This article focuses on missing data handling using advanced techniques. There are three types of missing data: missing completely at random (MCAR), missing at random (MAR) and not missing at random (NMAR). This classification depends on how missing values are generated. Two packages, Multivariate Imputation by Chained Equations (MICE) and Visualization and Imputation of Missing Values (VIM), provide sophisticated functions to explore missing data patterns. In particular, the VIM package is especially helpful for visual inspection of missing data. Finally, correlation analysis provides information on the dependence of missing data on other variables. Such information is useful in subsequent imputations.
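
    The article works in R (MICE, VIM); a rough Python analogue of the same exploration, assuming a small numeric pandas DataFrame, tabulates missingness patterns and correlates missingness indicators with the observed variables.

      import numpy as np
      import pandas as pd

      df = pd.DataFrame({"age": [54, 61, np.nan, 48],
                         "bp":  [120, np.nan, np.nan, 135],
                         "hr":  [80, 72, 90, np.nan]})

      # One row per distinct missingness pattern, with its frequency
      # (a tabular analogue of VIM's aggregation plot).
      patterns = df.isna().value_counts()

      # Correlate missingness indicators with observed variables; strong
      # dependence argues against the MCAR assumption.
      ind = df.isna().astype(float).add_suffix("_NA")
      dependence = pd.concat([df, ind], axis=1).corr().filter(like="_NA")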

  6. Predicting New Hampshire Indoor Radon Concentrations from geologic information and other covariates

    SciTech Connect

    Apte, M.G.; Price, P.N.; Nero, A.V.; Revzan, K.L.

    1998-05-01

    Generalized geologic province information and data on house construction were used to predict indoor radon concentrations in New Hampshire (NH). A mixed-effects regression model was used to predict the geometric mean (GM) short-term radon concentrations in 259 NH towns. Bayesian methods were used to avoid over-fitting and to minimize the effects of small sample variation within towns. Variables used in the model included data from a random survey of short-term radon measurements, individual residence building characteristics, geologic unit information, and average surface radium concentration by town. Predicted town GM short-term indoor radon concentrations for detached houses with usable basements range from 34 Bq/m^3 (1 pCi/l) to 558 Bq/m^3 (15 pCi/l), with uncertainties of about 30%. A geologic province consisting of glacial deposits and marine sediments was associated with significantly elevated radon levels, after adjustment for radium concentration and building type. Validation and interpretation of results are discussed.
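
    One standard way to realize the Bayesian pooling the abstract describes is normal-normal shrinkage of town means; a minimal sketch with illustrative symbols (the authors' model also includes building and geologic covariates).

      import numpy as np

      def shrink_town_means(ybar, n, s2, mu0, tau2):
          # Partial pooling on the log scale: each town's mean is a
          # precision-weighted average of its own data (ybar, n measurements,
          # within-town variance s2) and the state-level prior (mu0, tau2),
          # so sparsely sampled towns are shrunk hardest toward mu0.
          w = (n / s2) / (n / s2 + 1.0 / tau2)
          return w * ybar + (1.0 - w) * mu0

      # Predicted town GM radon on the original scale:
      # np.exp(shrink_town_means(ybar=np.log(4.0), n=3, s2=0.8,
      #                          mu0=np.log(2.0), tau2=0.1))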

  7. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  8. Accounting for Informatively Missing Data in Logistic Regression by Means of Reassessment Sampling

    PubMed Central

    Lin, Ji; Lyles, Robert H.

    2015-01-01

    We explore the “reassessment” design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection, and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. PMID:25707010
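
    A hedged sketch of the kind of joint likelihood being maximized (the paper's exact factorization may differ), with Y the outcome, X the exposure, R the missingness indicator, beta the parameters of interest, and alpha the missingness-model parameters:

      \[ L(\beta, \alpha) \;=\; \prod_{i=1}^{n} f(y_i \mid x_i; \beta)\, \Pr(r_i \mid y_i, x_i; \alpha), \]

    with the unobserved y_i or x_i summed out for subjects still incomplete after the reassessment wave. A likelihood ratio test of whether alpha depends on (y, x) is the direct test of missingness at random mentioned above.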

  9. An Upper Bound on High Speed Satellite Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, the two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information is available for only one of the two objects, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.

  10. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  11. Predicting top-L missing links with node and link clustering information in large-scale networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhihao; Lin, Youfang; Wan, Huaiyu; Jamil, Waleed

    2016-08-01

    Networks are mathematical structures that are universally used to describe a large variety of complex systems, such as social, biological, and technological systems. The prediction of missing links in incomplete complex networks aims to estimate the likelihood of the existence of a link between a pair of nodes. Various topological features of networks have been applied to develop link prediction methods. However, the exploration of features of links is still limited. In this paper, we demonstrate the power of node and link clustering information in predicting top-L missing links. In the existing literature, link prediction algorithms have only been tested on small-scale and middle-scale networks. The network scale factor has not attracted the same level of attention. In our experiments, we test the proposed method on three groups of networks. For small-scale networks, since the structures are not very complex, advanced methods cannot perform significantly better than classical methods. For middle-scale networks, the proposed index, combining both node and link clustering information, starts to demonstrate its advantages. In many networks, combining both node and link clustering information can improve the link prediction accuracy a great deal. Large-scale networks with more than 100 000 links have rarely been tested previously. Our experiments on three large-scale networks show that local clustering information based methods outperform other methods, and link clustering information can further improve the accuracy of node clustering information based methods, in particular for networks with a broad distribution of the link clustering coefficient.
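
    A minimal sketch of a node-clustering-based predictor in the spirit of the paper (the exact index there may differ): candidate pairs are scored by the clustering coefficients of their common neighbors, and the top-L scores are returned.

      import networkx as nx

      def node_clustering_score(G, u, v):
          # Sum the clustering coefficients of the common neighbors of u, v.
          common = set(G[u]) & set(G[v])
          return sum(nx.clustering(G, common).values())

      def top_L_predictions(G, L=10):
          # Exhaustive candidate enumeration: O(n^2), for illustration only;
          # assumes sortable node labels.
          pairs = [(u, v) for u in G for v in G
                   if u < v and not G.has_edge(u, v)]
          pairs.sort(key=lambda p: node_clustering_score(G, *p), reverse=True)
          return pairs[:L]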

  12. Variance Decomposition of MRI-Based Covariance Maps Using Genetically-Informative Samples and Structural Equation Modeling

    PubMed Central

    Schmitt, J. Eric; Lenroot, Rhoshel; Ordaz, Sarah E.; Wallace, Gregory L.; Lerch, Jason P.; Evans, Alan C.; Prom, Elizabeth C.; Kendler, Kenneth S.; Neale, Michael C.; Giedd, Jay N.

    2010-01-01

    The role of genetics in driving intracortical relationships is an important question that has rarely been studied in humans. In particular, there are no extant high-resolution imaging studies on genetic covariance. In this article, we describe a novel method that combines classical quantitative genetic methodologies for variance decomposition with recently-developed semi-multivariate algorithms for high-resolution measurement of phenotypic covariance. Using these tools, we produced correlational maps of genetic and environmental (i.e. nongenetic) relationships between several regions of interest and the cortical surface in a large pediatric sample of 600 twins, siblings, and singletons. These analyses demonstrated high, fairly uniform, statistically significant genetic correlations between the entire cortex and global mean cortical thickness. In agreement with prior reports on phenotypic covariance using similar methods, we found mean cortical thickness was most strongly correlated with association cortices. However, the present study suggests that genetics plays a large role in global brain patterning of cortical thickness in this manner. Further, using specific gyri with known high heritabilities as seed regions, we found a consistent pattern of high bilateral genetic correlations between structural homologues, with environmental correlations more restricted to the same hemisphere as the seed region, suggesting that interhemispheric covariance is largely genetically mediated. These findings are consistent with the limited existing knowledge on the genetics of cortical variability as well as our prior multivariate studies on cortical gyri. PMID:18672072
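
    The variance decomposition behind such genetically informative designs is the classical ACE model; a standard sketch (not the authors' exact semi-multivariate parameterization):

      \[ \operatorname{Cov}_{\mathrm{MZ}} = A + C, \qquad \operatorname{Cov}_{\mathrm{DZ}} = \tfrac{1}{2} A + C, \]

    where A, C, and E are the additive-genetic, shared-environment, and unique-environment components; the genetic correlation between two regions is then \( r_g = A_{12} / \sqrt{A_{11} A_{22}} \).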

  13. Singular Spectrum Analysis With Missing Data

    NASA Astrophysics Data System (ADS)

    Kondrashov, D.; Feliks, Y.; Ghil, M.

    2004-12-01

    A Singular Spectrum Analysis (SSA) with gaps of missing data is presented. SSA is a data-adaptive, non-parametric spectral method based on diagonalizing the lag-covariance matrix of a time series. Using leading oscillatory SSA modes, we iteratively produce estimates of missing data, which are then used to compute a self-consistent lag-covariance matrix. For a univariate record, SSA imputation utilizes only temporal correlations in the data to fill in missing points. For a multivariate record, multi-channel SSA imputation takes advantage of both spatial and temporal correlations. Analyzing the whole available record with the missing points filled allows for greater accuracy and better significance testing in the spectral analysis. It also provides information on the evolution of the oscillatory modes in the gaps. We use cross-validation to optimize the SSA window width and number of SSA modes to fill the gaps. The algorithm is applied to the extended (A.D. 622--1922) historical records of the low- and high-water levels of the Nile River at Cairo. We fill in the large gaps in the later part of the records (A.D. 1471--1922), and identify statistically significant interannual and interdecadal periodicities. Our analysis suggests that the 7-year periodicity in the records, possibly related to the biblical "Joseph" cycle, is due to North-Atlantic influences. We find that the climate shifts at the beginning and the end of the Medieval Warm Period were fairly abrupt and affected several climatic modes of variability.
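
    A minimal sketch of the univariate iteration, under assumed tuning choices (the paper selects window width M and mode count k by cross-validation; here they are fixed for illustration): embed, truncate the SVD, diagonally average, and re-impute until self-consistent.

      import numpy as np

      def ssa_reconstruct(y, M, k):
          # Hankel (trajectory) matrix of M-lagged copies of the series.
          n = len(y)
          X = np.column_stack([y[i:n - M + 1 + i] for i in range(M)])
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          Xk = (U[:, :k] * s[:k]) @ Vt[:k]        # keep k leading SSA modes
          rec, cnt = np.zeros(n), np.zeros(n)
          for i in range(Xk.shape[0]):            # diagonal (Hankel) averaging
              rec[i:i + M] += Xk[i]
              cnt[i:i + M] += 1
          return rec / cnt

      def ssa_fill(x, M=40, k=4, n_iter=50):
          y = np.asarray(x, float).copy()
          gaps = np.isnan(y)
          y[gaps] = np.nanmean(y)                 # crude initial guess
          for _ in range(n_iter):                 # iterate to self-consistency
              y[gaps] = ssa_reconstruct(y, M, k)[gaps]
          return y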

  14. Point of care experience with pneumococcal and influenza vaccine documentation among persons aged ≥65 years: high refusal rates and missing information.

    PubMed

    Brownfield, Elisha; Marsden, Justin E; Iverson, Patty J; Zhao, Yumin; Mauldin, Patrick D; Moran, William P

    2012-09-01

    Missed opportunities to vaccinate and refusal of vaccine by patients have hindered the achievement of national health care goals. The meaningful use of electronic medical records should improve vaccination rates, but few studies have examined the content of these records. In our vaccine intervention program using an electronic record with physician prompts, paper prompts, and nursing standing orders, we were unable to achieve national vaccine goals, due in large part to missing information and patient refusal.

  15. Working with Missing Values

    ERIC Educational Resources Information Center

    Acock, Alan C.

    2005-01-01

    Less than optimum strategies for missing values can produce biased estimates, distorted statistical power, and invalid conclusions. After reviewing traditional approaches (listwise, pairwise, and mean substitution), selected alternatives are covered including single imputation, multiple imputation, and full information maximum likelihood…
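
    The traditional strategies the abstract reviews fit in two lines of pandas; a toy illustration of why they are "less than optimum":

      import pandas as pd

      df = pd.DataFrame({"x": [1.0, None, 3.0, 4.0],
                         "y": [2.0, 5.0, None, 1.0]})

      # Listwise deletion: unbiased only under MCAR, and wasteful of data.
      complete_cases = df.dropna()

      # Mean substitution: keeps every row but shrinks variances and
      # attenuates covariances, biasing downstream estimates.
      mean_filled = df.fillna(df.mean(numeric_only=True))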

  16. Spatio-temporal rectification of tower-based eddy-covariance flux measurements for consistently informing process-based models

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Xu, K.; Desai, A. R.; Taylor, J. R.; Kljun, N.; Schneider, D.; Kampe, T. U.; Fox, A. M.

    2013-12-01

    Process-based models, such as land surface models (LSMs), allow insight into the spatio-temporal distribution of stocks and the exchange of nutrients, trace gases, etc., among environmental compartments. More recently, LSMs have also become capable of assimilating time-series of in-situ reference observations. This enables calibrating the underlying functional relationships to site-specific characteristics, or constraining the model results after each time-step in an attempt to minimize drift. The spatial resolution of LSMs is typically on the order of 10^2-10^4 km^2, which is suitable for linking regional to continental scales and beyond. However, continuous in-situ observations of relevant stock and exchange variables, such as tower-based eddy-covariance (EC) fluxes, represent spatial scales that are orders of magnitude smaller (10^-6-10^1 km^2). During data assimilation, this significant gap in spatial representativeness is typically either neglected or side-stepped using simple tiling approaches. Moreover, at 'coarse' resolutions, a single LSM evaluation per time-step implies linearity among the underlying functional relationships as well as among the sub-grid land cover fractions. This, however, is not warranted for land-atmosphere exchange processes over more complex terrain. Hence, it is desirable to explicitly consider spatial variability at LSM sub-grid scales. Here we present a procedure that determines from a single EC tower the spatially integrated probability density function (PDF) of the surface-atmosphere exchange for individual land covers. These PDFs allow quantifying the expected value, as well as the spatial variability over a target domain, can be assimilated in tiling-capable LSMs, and mitigate linearity assumptions at 'coarse' resolutions. The procedure is based on the extraction and extrapolation of environmental response functions (ERFs), for which a technically oriented companion poster is submitted. In short, the subsequent steps are: (i) Time

  17. Covariation neglect among novice investors.

    PubMed

    Hedesström, Ted Martin; Svedsäter, Henrik; Gärling, Tommy

    2006-09-01

    In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns of individual assets. In Experiment 3, nearly half of those who seemingly attempted to minimize risk diversified even when this increased risk. These results indicate that novice investors neglect covariation when diversifying across investment alternatives. Experiment 4 established that naive diversification follows from motivation to minimize risk and showed that covariation neglect was not significantly reduced by informing participants about how covariation affects portfolio risk but was reduced by making participants systematically calculate aggregate returns for diversified portfolios. In order to counteract naive diversification, novice investors need to be better informed about the rationale underlying recommendations to diversify.
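
    The aggregate calculation that reduced covariation neglect rests on the two-asset variance identity; a worked example with assumed numbers:

      \[ \operatorname{Var}\!\big(\tfrac{1}{2}A + \tfrac{1}{2}B\big) = \tfrac{1}{4}\operatorname{Var}(A) + \tfrac{1}{4}\operatorname{Var}(B) + \tfrac{1}{2}\operatorname{Cov}(A,B), \]

    so with Var(A) = Var(B) = 1, splitting between two assets with Cov(A,B) = 0.9 leaves portfolio variance at 0.95, barely below an undiversified holding, while Cov(A,B) = -0.9 cuts it to 0.05: diversification helps only insofar as returns do not covary.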

  18. "Missing Milestones."

    ERIC Educational Resources Information Center

    Afzal, Nadeem A.; Martin, Diana L.; Atkinson, Patricia I.

    2001-01-01

    Examined the development of seven infants with "missing milestones" in motor development. Found that three children had normal development, three developed global developmental delay, and one was diagnosed with multiple cavernous haemangiomata in the brain. Suggested that missing milestones can be a benign variation of normal motor development or…

  19. Slide Presentations as Speech Suppressors: When and Why Learners Miss Oral Information

    ERIC Educational Resources Information Center

    Wecker, Christof

    2012-01-01

    The objective of this study was to test whether information presented on slides during presentations is retained at the expense of information presented only orally, and to investigate part of the conditions under which this effect occurs, and how it can be avoided. Such an effect could be expected and explained either as a kind of redundancy…

  20. The Impact of Information and Communication Technology on Education: The Missing Discourse between Three Different Paradigms

    ERIC Educational Resources Information Center

    Aviram, Aharon; Talmi, Deborah

    2005-01-01

    Using a new methodological tool, the authors analyzed a large number of texts on information and communication technology (ICT) and education, and identified three clusters of views that guide educationists "in the field" and in more academic contexts. The clusters reflect different fundamental assumptions on ICT and education. The authors argue…

  1. Missing Elements Revisited: Information Engineering for Managing Quality of Care for Patients with Diabetes

    PubMed Central

    Connor, Matthew J; Connor, Michael J

    2010-01-01

    Introduction Advances in information technology offer new avenues for assembling data about diet and care regimens of diabetes patients “in the field.” This creates a challenge for their doctors and the diabetes care community—how to organize and use new data to produce better long-term outcomes for diabetes patients. Methods iAbetics approaches the challenge as a quality management problem, drawing on total quality concepts, which in turn are grounded in application of the scientific method. We frame the diabetes patient's quality-of-care problem as an ongoing scientific investigation aimed at quantifying and predicting relationships between specific care-management actions and their outcomes for individual patients in their ordinary course of life. Results Framing diabetes quality-of-care management as a scientific investigation leads to a seven-step model termed “adaptive empirical iteration.” Adaptive empirical iteration is a deliberate process to perfect the patient's choices, decisions, and actions in routine situations that make up most day-to-day life and to systematically adapt across differences in individual patients and/or changes in their physiology, diet, or environment. The architecture incorporates care-protocol management and version control, structured formats for data collection using mobile smart phones, statistical analysis on secure Web sites, tools for comparing alternative protocols, choice architecture technology to improve patient decisions, and information sharing for doctor review. Conclusions Adaptive empirical iteration is a foundation for information architecture designed to systematically improve quality-of-care provided to diabetes patients who act as their own day-to-day care provider under supervision and with support from their doctor. The approach defines “must-have” capabilities for systems using new information technology to improve long-term outcomes for diabetes patients. PMID:20920451

  2. How Missing Information in Diagnosis Can Lead to Disparities in the Clinical Encounter

    PubMed Central

    Alegría, Margarita; Nakash, Ora; Lapatin, Sheri; Oddo, Vanessa; Gao, Shan; Lin, Julia; Normand, Sharon-Lise

    2009-01-01

    Previous studies have documented diagnostic bias and noted that its reduction could eliminate misdiagnosis and improve mental health service delivery. Few studies have investigated clinicians' methods of obtaining and using information during the initial clinical encounter. We describe a study examining contributions to clinician bias during diagnostic assessment of ethnic/racial minority patients. A total of 129 mental health intakes were videotaped, involving 47 mental health clinicians from 8 primarily safety-net clinics. Videos were coded by another clinician using an information checklist, blind to the diagnoses provided by the original clinician. We found high levels of concordance between clinicians for substance-related disorders, but low levels for depressive disorders and for anxiety disorders other than panic. Most clinicians rely on patients' mention of depression, anxiety, or substance use to identify disorders, without assessing specific criteria. With limited diagnostic information, clinicians can optimize the clinical intake time to establish rapport with patients. We found Latino ethnicity to be a modifying factor of the association between symptom reports and likelihood of a depression diagnosis. Differential discussion of symptom areas, depending on patient ethnicity, may lead to differential diagnosis and increased likelihood of diagnostic bias. PMID:18843234

  3. A Note on the Use of Missing Auxiliary Variables in Full Information Maximum Likelihood-Based Structural Equation Models

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2008-01-01

    Recent missing data studies have argued in favor of an "inclusive analytic strategy" that incorporates auxiliary variables into the estimation routine, and Graham (2003) outlined methods for incorporating auxiliary variables into structural equation analyses. In practice, the auxiliary variables often have missing values, so it is reasonable to…

  4. What's missing? Discussing stem cell translational research in educational information on stem cell "tourism".

    PubMed

    Master, Zubin; Zarzeczny, Amy; Rachul, Christen; Caulfield, Timothy

    2013-01-01

    Stem cell tourism is a growing industry in which patients pursue unproven stem cell therapies for a wide variety of illnesses and conditions. It is a challenging market to regulate due to a number of factors including its international, online, direct-to-consumer approach. Calls to provide education and information to patients, their families, physicians, and the general public about the risks associated with stem cell tourism are mounting. Initial studies examining the perceptions of patients who have pursued stem cell tourism indicate many are highly critical of the research and regulatory systems in their home countries and believe them to be stagnant and unresponsive to patient needs. We suggest that educational material should include an explanation of the translational research process, in addition to other aspects of stem cell tourism, as one means to help promote greater understanding and, ideally, curb patient demand for unproven stem cell interventions. The material provided must stress that strong scientific research is required in order for therapies to be safe and have a greater chance at being effective. Through an analysis of educational material on stem cell tourism and translational stem cell research from patient groups and scientific societies, we describe essential elements that should be conveyed in educational material provided to patients. Although we support the broad dissemination of educational material on stem cell translational research, we also acknowledge that education may simply not be enough to engender patient and public trust in domestic research and regulatory systems. However, promoting patient autonomy by providing good quality information to patients so they can make better informed decisions is valuable in itself, irrespective of whether it serves as an effective deterrent of stem cell tourism.

  5. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    PubMed

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. PMID:27198876
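
    A minimal sketch of the Monte Carlo projection; the stratum variable and all names are illustrative, not the authors' exact implementation. Each cohort member's unknown smoking status is resampled from case-control subjects in the same stratum, and the whole projection is repeated (200 times in the paper) to propagate imputation uncertainty into the model fit.

      import numpy as np

      rng = np.random.default_rng(42)

      def project_smoking(cohort_stratum, cc_stratum, cc_smoking):
          # Draw smoking status for each cohort member from the case-control
          # pool of the matching (e.g., birth-year) stratum.
          imputed = np.empty(cohort_stratum.shape, dtype=cc_smoking.dtype)
          for s in np.unique(cohort_stratum):
              pool = cc_smoking[cc_stratum == s]
              members = cohort_stratum == s
              imputed[members] = rng.choice(pool, size=members.sum())
          return imputed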

  7. Missing Mechanism Information

    ERIC Educational Resources Information Center

    Tryon, Warren W.

    2009-01-01

    The first recommendation Kazdin made for advancing the psychotherapy research knowledge base, improving patient care, and reducing the gulf between research and practice was to study the mechanisms of therapeutic change. He noted, "The study of mechanisms of change has received the least attention even though understanding mechanisms may well be…

  8. Covariance evaluation work at LANL

    SciTech Connect

    Kawano, Toshihiko; Talou, Patrick; Young, Phillip; Hale, Gerald; Chadwick, M B; Little, R C

    2008-01-01

    Los Alamos evaluates covariances for the nuclear data library, mainly for actinides above the resonance regions and light elements in the entire energy range. We also develop techniques to evaluate the covariance data, such as Bayesian and least-squares fitting methods, which are important for exploring the uncertainty information on different types of physical quantities such as elastic scattering angular distributions or prompt fission neutron spectra. This paper summarizes our current covariance evaluation work at LANL, including the actinide and light element data, mainly for criticality safety studies and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and experimental data, is discussed.
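
    The linear-Gaussian update at the heart of the Kalman-filter-based Bayesian method can be sketched generically (this is a textbook form, not LANL's production code): prior parameters x with covariance P are combined with measurements y of covariance V through the sensitivity matrix H.

      import numpy as np

      def kalman_update(x, P, y, V, H):
          # x, P: prior model parameters and their covariance;
          # y, V: experimental data and their covariance;
          # H: sensitivities of the model predictions to x.
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + V)   # gain
          x_post = x + K @ (y - H @ x)                   # updated parameters
          P_post = P - K @ H @ P                         # updated covariance
          return x_post, P_post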

  9. Model Selection Criteria for Missing-Data Problems Using the EM Algorithm

    PubMed Central

    Ibrahim, Joseph G.; Zhu, Hongtu; Tang, Niansheng

    2009-01-01

    We consider novel methods for the computation of model selection criteria in missing-data problems based on the output of the EM algorithm. The methodology is very general and can be applied to numerous situations involving incomplete data within an EM framework, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Toward this goal, we develop a class of information criteria for missing-data problems, called IC_{H,Q}, which yields the Akaike information criterion and the Bayesian information criterion as special cases. The computation of IC_{H,Q} requires an analytic approximation to a complicated function, called the H-function, along with output from the EM algorithm used in obtaining maximum likelihood estimates. The approximation to the H-function leads to a large class of information criteria, called IC_{H̃(k),Q}. Theoretical properties of IC_{H̃(k),Q}, including consistency, are investigated in detail. To eliminate the analytic approximation to the H-function, a computationally simpler approximation to IC_{H,Q}, called IC_Q, is proposed, the computation of which depends solely on the Q-function of the EM algorithm. Advantages and disadvantages of IC_{H̃(k),Q} and IC_Q are discussed and examined in detail in the context of missing-data problems. Extensive simulations are given to demonstrate the methodology and examine the small-sample and large-sample performance of IC_{H̃(k),Q} and IC_Q in missing-data problems. An AIDS data set also is presented to illustrate the proposed methodology. PMID:19693282
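
    A hedged sketch of the Q-function-based criterion's general form (see the paper for the exact penalty), where Q is the EM surrogate evaluated at the maximum likelihood estimate and c_n(d) penalizes the d model parameters:

      \[ \mathrm{IC}_{Q} \;=\; -2\, Q(\hat{\theta} \mid \hat{\theta}) \;+\; c_n(d), \]

    with c_n(d) = 2d giving an AIC-type criterion and c_n(d) = d log n a BIC-type one; only EM output is required, never the observed-data likelihood itself.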

  10. 'Miss Frances', 'Miss Gail' and 'Miss Sandra' Crapemyrtles

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Agricultural Research Service, United States Department of Agriculture, announces the release to nurserymen of three new crapemyrtle cultivars named 'Miss Gail', 'Miss Frances', and 'Miss Sandra'. ‘Miss Gail’ resulted from a cross-pollination between ‘Catawba’ as the female parent and ‘Arapaho’ ...

  11. Help for Finding Missing Children.

    ERIC Educational Resources Information Center

    McCormick, Kathleen

    1984-01-01

    Efforts to locate missing children have expanded from a federal law allowing for entry of information into an F.B.I. computer system to companion bills before Congress for establishing a national missing child clearinghouse and a Justice Department center to help in conducting searches. Private organizations are also involved. (KS)

  12. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
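
    A software sketch of the slotting computation the hardware performs, under assumed inputs (increasing Poisson-sampled arrival times t and velocities u): products of velocity samples are accumulated into lag slots and normalized by the interarrival-time histogram, mirroring the 512-point outputs described above.

      import numpy as np

      def slotted_autocovariance(t, u, dt, n_lags=512):
          u = u - u.mean()
          cov = np.zeros(n_lags)
          hist = np.zeros(n_lags)                 # interarrival-time histogram
          for i in range(len(t)):
              k = np.floor((t[i:] - t[i]) / dt).astype(int)
              ok = k < n_lags
              np.add.at(cov, k[ok], u[i] * u[i:][ok])
              np.add.at(hist, k[ok], 1.0)
          # Normalize each slot by its sample count (slot 0 holds the variance).
          return cov / np.maximum(hist, 1.0), hist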

  13. A discrete time event-history approach to informative drop-out in mixed latent Markov models with covariates.

    PubMed

    Bartolucci, Francesco; Farcomeni, Alessio

    2015-03-01

    Mixed latent Markov (MLM) models represent an important tool for the analysis of longitudinal data when response variables are affected by time-fixed and time-varying unobserved heterogeneity, in which the latter is accounted for by a hidden Markov chain. In order to avoid bias when using a model of this type in the presence of informative drop-out, we propose an event-history (EH) extension of the latent Markov approach that may be used with multivariate longitudinal data, in which one or more outcomes of a different nature are observed at each time occasion. The EH component of the resulting model refers to the interval-censored drop-out, and bias in MLM modeling is avoided through correlated random effects, included in the different model components, which follow common latent distributions. In order to perform maximum likelihood estimation of the proposed model by the expectation-maximization algorithm, we extend the usual forward-backward recursions of Baum and Welch. The algorithm has the same complexity as the one adopted in cases of non-informative drop-out. We illustrate the proposed approach through simulations and an application based on data coming from a medical study about primary biliary cirrhosis in which there are two outcomes of interest, one continuous and the other binary. PMID:25227970

  14. Missing sink

    NASA Astrophysics Data System (ADS)

    White, M. Catherine

    Preliminary results at a Duke University research forest exposed to high levels of atmospheric carbon dioxide suggest plants might transfer excess CO2 into groundwater, where it could remain stored away for thousands of years. The finding, if bolstered by further research, might lower predicted impacts of global warming due to CO2 emissions from industry and vehicles as well as help solve a mystery: why there is 29% less CO2 in the atmosphere than current emissions inventories suggest there should be. "While it wouldn't explain all of that 'missing sink,' if you push the calculations, it may explain maybe one-fifth of it," said Duke botanist William Schlesinger.

  15. Covariant mutually unbiased bases

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro

    2016-06-01

    The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which are the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2r covariance can still be achieved by restricting to proper subgroups of the symplectic group, that constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.
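
    For reference, two orthonormal bases \(\{e_i\}\) and \(\{f_j\}\) of a d-dimensional Hilbert space are mutually unbiased when

      \[ |\langle e_i \mid f_j \rangle|^2 \;=\; \frac{1}{d} \quad \text{for all } i, j, \]

    and a maximal set in dimension d = p^r contains d + 1 such bases; the classification above asks which of these maximal sets transform covariantly under the affine symplectic group of the associated finite phase-space.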

  16. Covariant Noncommutative Field Theory

    SciTech Connect

    Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.

    2008-07-02

    The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.

  17. A hierarchical nest survival model integrating incomplete temporally varying covariates.

    PubMed

    Converse, Sarah J; Royle, J Andrew; Adler, Peter H; Urbanek, Richard P; Barzen, Jeb A

    2013-11-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
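
    A hedged sketch of the model's core structure (the published parameterization may differ in its covariates and priors): daily survival of nest i on day t is a logistic regression on the insect index x_t, which receives its own submodel because trapping was not conducted daily,

      \[ \operatorname{logit}(s_{it}) = \beta_0 + \beta_1 x_t, \qquad x_t \sim \mathcal{N}(\mu_t, \sigma^2), \]

    and a nest observed from day a to day b contributes \( \prod_{t=a}^{b} s_{it} \) to the likelihood, with the missing x_t treated as quantities to be estimated alongside the survival parameters.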

  18. A hierarchical nest survival model integrating incomplete temporally varying covariates

    USGS Publications Warehouse

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzen, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the

  20. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, the two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information is available for only one of the two objects, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful Pc upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum Pc. Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum Pc. But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The various usual methods of finding a maximum Pc do
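
    A minimal sketch of the collision-plane computation that such an upper-bound search builds on, with all numbers hypothetical: integrate the combined-covariance Gaussian over the hard-body circle, then sweep a scaled stand-in for the missing object's covariance and keep the worst case.

      import numpy as np

      def collision_probability(miss, cov, radius, n=400):
          # Integrate a zero-mean 2-D Gaussian (relative position error)
          # over the hard-body circle centered at the predicted miss position.
          xs = np.linspace(-radius, radius, n)
          xx, yy = np.meshgrid(xs, xs)
          inside = xx**2 + yy**2 <= radius**2
          pts = np.stack([xx + miss[0], yy + miss[1]], axis=-1)
          inv = np.linalg.inv(cov)
          dens = np.exp(-0.5 * np.einsum("...i,ij,...j", pts, inv, pts))
          dens /= 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
          return float((dens * inside).sum() * (xs[1] - xs[0]) ** 2)

      # Worst case over an assumed scale for the unknown object's covariance:
      # known = np.diag([200.0, 50.0])
      # pc_max = max(collision_probability((25.0, 0.0), known + s * np.eye(2), 5.0)
      #              for s in np.linspace(0.1, 500.0, 100))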

  1. The covariate-adjusted frequency plot.

    PubMed

    Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K

    2016-04-01

    Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included in the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter.
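
    A short sketch of the marginal computation (Python; a Poisson log-linear model fitted with statsmodels is our assumption for illustration, the method applies to any count model): the fitted probability mass function is evaluated at each observation's own covariate combination and summed over the sample, then compared with the empirical frequencies.

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import poisson

      rng = np.random.default_rng(0)
      n = 500
      X = sm.add_constant(rng.normal(size=(n, 2)))   # continuous covariates: all combinations unique
      y = rng.poisson(np.exp(X @ np.array([0.5, 0.8, -0.4])))

      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      mu = fit.mu                                    # fitted mean, one per observation

      ys = np.arange(y.max() + 1)
      expected = poisson.pmf(ys[:, None], mu).sum(axis=1)   # covariate-adjusted frequencies
      observed = np.bincount(y, minlength=ys.size)
      for k, (o, e) in enumerate(zip(observed, expected)):
          print(f"count {k}: observed {o}, fitted {e:.1f}")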

  2. Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data.

    PubMed

    Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M

    2006-11-01

    The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation of its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
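
    A compact numpy sketch of the proposed structure on synthetic noise (the moment estimators below are our own simple choices, for illustration): per-sensor variances form the diagonal spatial factor, an autocorrelation averaged over sensors forms the Toeplitz temporal factor, and the spatiotemporal covariance is their Kronecker product.

      import numpy as np
      from scipy.linalg import toeplitz

      rng = np.random.default_rng(0)
      n_sens, n_time = 5, 40
      noise = rng.normal(size=(n_sens, n_time)) * rng.uniform(0.5, 2.0, (n_sens, 1))

      var_s = noise.var(axis=1)                       # diagonal spatial covariance entries
      z = noise / np.sqrt(var_s)[:, None]             # standardized sensor time series
      acf = np.array([(z[:, :n_time - k] * z[:, k:]).mean() for k in range(n_time)])
      C_time = toeplitz(acf)                          # Toeplitz temporal correlation
      C_full = np.kron(np.diag(var_s), C_time)        # sensor-major spatiotemporal covariance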

  3. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
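
    The record is truncated, but the simulation-extrapolation (SIMEX) idea it concerns can be sketched briefly (numpy, synthetic data; the measurement-error variance is assumed known): refit after adding extra noise of variance lambda times that error variance, then extrapolate the fitted trend back to lambda = -1, where the error-free fit would live.

      import numpy as np

      rng = np.random.default_rng(0)
      n, beta, sig_u = 2000, 1.0, 0.8
      x = rng.normal(size=n)                   # true covariate (unobserved)
      w = x + rng.normal(0, sig_u, n)          # covariate measured with error
      y = beta*x + rng.normal(0, 0.5, n)

      def slope(w, y):
          wc = w - w.mean()
          return (wc @ (y - y.mean())) / (wc @ wc)

      lams = np.linspace(0.0, 2.0, 9)
      est = [np.mean([slope(w + rng.normal(0, np.sqrt(l)*sig_u, n), y)
                      for _ in range(50)]) for l in lams]
      coef = np.polyfit(lams, est, 2)          # quadratic extrapolant in lambda
      print("naive:", slope(w, y), " SIMEX:", np.polyval(coef, -1.0))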

  4. A simulation-based marginal method for longitudinal data with dropout and mismeasured covariates.

    PubMed

    Yi, Grace Y

    2008-07-01

    Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods to adjust for the bias induced by missing observations. There is relatively little work on investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. In this paper, we establish the asymptotic properties for the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method. PMID:18199691

  5. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
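
    A minimal numpy illustration of a simple and a partial covariance map on synthetic spectra (two planted ion peaks; the fluctuating pulse intensity I plays the role of the partial-covariance parameter, and all numbers are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n_shots, n_bins = 4000, 128
      x = np.arange(n_bins)
      base = np.exp(-0.5*((x - 40)/3.0)**2) + np.exp(-0.5*((x - 90)/3.0)**2)
      I = rng.gamma(4.0, 1.0, n_shots)              # shot-to-shot pulse intensity
      spec = rng.poisson(I[:, None]*base)           # one spectrum per laser shot

      C = np.cov(spec, rowvar=False)                # simple covariance map
      c_I = ((spec - spec.mean(0))*(I - I.mean())[:, None]).mean(0)   # cov(bin, I)
      pC = C - np.outer(c_I, c_I)/I.var()           # partial covariance map
      print("peak-peak covariance, simple vs partial:", C[40, 90], pC[40, 90])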

  6. Covariant Bardeen perturbation formalism

    NASA Astrophysics Data System (ADS)

    Vitenti, S. D. P.; Falciano, F. T.; Pinto-Neto, N.

    2014-05-01

    In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describe cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. We then stress that in the referred covariant approach, one necessarily introduces an additional hypersurface choice to the problem. Using our mixed and pure tensors approach, we are able to construct a one-to-one map relating the usual gauge dependence of the Bardeen formalism with the hypersurface dependence inherent to the covariant approach. Finally, through the use of this map, we define full nonlinear tensors that at first order correspond to the three known gauge invariant variables Φ, Ψ and Ξ, which are simultaneously foliation and gauge invariant. We then stress that the use of the proposed mixed tensors allows one to construct simultaneously gauge and hypersurface invariant variables at any order.

  7. Covariation and Quantifier Polarity: What Determines Causal Attribution in Vignettes?

    ERIC Educational Resources Information Center

    Majid, Asifa; Sanford, Anthony J.; Pickering, Martin J.

    2006-01-01

    Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information…

  8. The covariant chiral ring

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Troost, Jan

    2016-03-01

    We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N = (4, 4) supersymmetry in two dimensions. For seed target spaces K3 and T4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.

  9. Missing Funds

    ERIC Educational Resources Information Center

    Hassenpflug, Ann

    2012-01-01

    A high school drama coach informs assistant principal Laura Madison that the money students earned through fund-raising activities seems to have vanished and that the male assistant principal may be involved in the disappearance of the funds. Laura has to determine how to address this situation. She considers her past experiences with problematic…

  10. Missing persons-missing data: the need to collect antemortem dental records of missing persons.

    PubMed

    Blau, Soren; Hill, Anthony; Briggs, Christopher A; Cordner, Stephen M

    2006-03-01

    incorporated into the National Coroners Information System (NCIS) managed, on behalf of Australia's Coroners, by the Victorian Institute of Forensic Medicine. The existence of the NCIS would ensure operational collaboration in the implementation of the system and cost savings to Australian policing agencies involved in missing person inquiries. The implementation of such a database would facilitate timely and efficient reconciliation of clinical and postmortem dental records and have subsequent social and financial benefits. PMID:16566776

  11. Generalized Linear Covariance Analysis

    NASA Astrophysics Data System (ADS)

    Markley, F. Landis; Carpenter, J. Russell

    2009-01-01

    This paper presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  12. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  13. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  14. Unequal Covariate Group Means and the Analysis of Covariance.

    ERIC Educational Resources Information Center

    Hsu, Tse-Chi; Sebatane, E. Molapi

    1979-01-01

    A Monte Carlo technique was used to investigate the effect of the differences in covariate means among treatment groups on the significance level and the power of the F-test of the analysis of covariance. (Author/GDC)

  15. Using Analysis of Covariance (ANCOVA) with Fallible Covariates

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Aguinis, Herman

    2011-01-01

    Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…

  16. Covariant deformed oscillator algebras

    NASA Technical Reports Server (NTRS)

    Quesne, Christiane

    1995-01-01

    The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU_q(n) × SU_q(m)-covariant q-bosonic algebra is discussed in some detail.

  17. Missing the Target: We Need to Focus on Informal Care Rather than Preschool. Evidence Speaks Reports, Vol 1, #19

    ERIC Educational Resources Information Center

    Loeb, Susanna

    2016-01-01

    Despite the widely-recognized benefits of early childhood experiences in formal settings that enrich the social and cognitive environments of children, many children--particularly infants and toddlers--spend their days in unregulated (or very lightly regulated) "informal" childcare settings. Over half of all one- and two-year-olds are…

  18. What Is Missing in Counseling Research? Reporting Missing Data

    ERIC Educational Resources Information Center

    Sterner, William R.

    2011-01-01

    Missing data have long been problematic in quantitative research. Despite the statistical and methodological advances made over the past 3 decades, counseling researchers fail to provide adequate information on this phenomenon. Interpreting the complex statistical procedures and esoteric language seems to be a contributing factor. An overview of…

  19. Impact of the 235U Covariance Data in Benchmark Calculations

    SciTech Connect

    Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.

  1. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
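
    The statistical core of the test can be sketched in a few lines (scipy; synthetic errors, and a one-sample Kolmogorov-Smirnov test standing in for the paper's ECDF goodness-of-fit assessment): squared Mahalanobis distances of the position errors under a well-sized covariance should follow a 3-DoF chi-squared distribution.

      import numpy as np
      from scipy.stats import chi2, kstest

      rng = np.random.default_rng(0)
      P = np.diag([1.0, 2.0, 0.5])                   # propagated 3x3 position covariance
      err = rng.multivariate_normal(np.zeros(3), P, size=300)    # definitive-minus-predicted
      d2 = np.einsum('ij,jk,ik->i', err, np.linalg.inv(P), err)  # squared Mahalanobis distance
      stat, p = kstest(d2, chi2(df=3).cdf)
      print(f"GOF p-value: {p:.3f} ({'pass' if p > 0.05 else 'fail'})")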

  2. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  3. Covariance analysis of gamma ray spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-15

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.

  4. Covariance Analysis of Gamma Ray Spectra

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2013-01-01

    The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.

  5. Covariant magnetic connection hypersurfaces

    NASA Astrophysics Data System (ADS)

    Pegoraro, F.

    2016-04-01

    In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved 'magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields.

  6. OD Covariance in Conjunction Assessment: Introduction and Issues

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.; Duncan, M.

    2015-01-01

    Primary and secondary covariances are combined and projected into the conjunction plane (the plane perpendicular to the relative velocity vector at TCA). The primary is placed on the x-axis at (miss distance, 0) and is represented by a circle of radius equal to the sum of both spacecraft circumscribing radii. The z-axis is perpendicular to the x-axis in the conjunction plane. Pc is the portion of the combined error ellipsoid that falls within the hard-body radius circle.
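
    In this notation, with combined conjunction-plane covariance C, miss vector m = (d, 0), and hard-body radius R equal to the sum of the circumscribing radii, the quantity described is

      P_c \;=\; \frac{1}{2\pi\sqrt{\det C}}
                \iint_{x^2 + z^2 \le R^2}
                \exp\!\Big(-\tfrac{1}{2}\,(\mathbf{r}-\mathbf{m})^{\mathsf{T}}
                C^{-1}(\mathbf{r}-\mathbf{m})\Big)\, dx\, dz .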

  7. Web-Based Self-Assessment Health Tools: Who Are the Users and What Is the Impact of Missing Input Information?

    PubMed Central

    Cobain, Mark R; Newson, Rachel S

    2014-01-01

    Background Web-based health applications, such as self-assessment tools, can aid in the early detection and prevention of diseases. However, there are concerns as to whether such tools actually reach users with elevated disease risk (where prevention efforts are still viable), and whether inaccurate or missing information on risk factors may lead to incorrect evaluations. Objective This study aimed to evaluate (1) whether a Web-based cardiovascular disease (CVD) risk communication tool (Heart Age tool) was reaching users at risk of developing CVD, (2) the impact of awareness of total cholesterol (TC), HDL-cholesterol (HDL-C), and systolic blood pressure (SBP) values on the risk estimates, and (3) the key predictors of awareness and reporting of physiological risk factors. Methods Heart Age is a tool available via a free open access website. Data from 2,744,091 first-time users aged 21-80 years with no prior heart disease were collected from 13 countries in 2009-2011. Users self-reported demographic and CVD risk factor information. Based on these data, an individual’s 10-year CVD risk was calculated according to Framingham CVD risk models and translated into a Heart Age. This is the age for which the individual’s reported CVD risk would be considered “normal”. Depending on the availability of known TC, HDL-C, and SBP values, different algorithms were applied. The impact of awareness of TC, HDL-C, and SBP values on Heart Age was determined using a subsample that had complete risk factor information. Results Heart Age users (N=2,744,091) were mostly in their 20s (22.76%) and 40s (23.99%), female (56.03%), had multiple (mean 2.9, SD 1.4) risk factors, and a Heart Age exceeding their chronological age (mean 4.00, SD 6.43 years). The proportion of users unaware of their TC, HDL-C, or SBP values was high (77.47%, 93.03%, and 46.55% respectively). Lacking awareness of physiological risk factor values led to overestimation of Heart Age by an average 2

  8. Covariant Lyapunov vectors

    NASA Astrophysics Data System (ADS)

    Ginelli, Francesco; Chaté, Hugues; Livi, Roberto; Politi, Antonio

    2013-06-01

    Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi-Pasta-Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’.
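
    A compact Python version of the 'dynamical' algorithm for a single Hénon map (a 2-D toy of our choosing; the review itself treats chains of maps): a forward pass stores the QR frames of the tangent dynamics, and a backward pass pushes a generic upper-triangular coefficient matrix through the inverses of the R factors until it converges onto the CLVs.

      import numpy as np
      from scipy.linalg import qr, solve_triangular

      a, b = 1.4, 0.3
      step = lambda v: np.array([1.0 - a*v[0]**2 + v[1], b*v[0]])
      jac = lambda v: np.array([[-2.0*a*v[0], 1.0], [b, 0.0]])

      v = np.array([0.1, 0.1])
      for _ in range(1000):                    # relax onto the attractor
          v = step(v)

      n = 5000
      Qs, Rs = [], []
      Q, lam = np.eye(2), np.zeros(2)
      for _ in range(n):                       # forward pass: QR of the tangent flow
          Q, R = qr(jac(v) @ Q)
          s = np.sign(np.diag(R)); Q, R = Q*s, (R.T*s).T   # positive-diagonal convention
          Qs.append(Q); Rs.append(R); lam += np.log(np.abs(np.diag(R)))
          v = step(v)
      print("Lyapunov exponents:", lam/n)      # Henon map: roughly (0.42, -1.62)

      C = np.triu(np.random.default_rng(0).random((2, 2))) + np.eye(2)
      V = [None]*n
      for k in range(n - 1, -1, -1):           # backward pass: C_k = R_k^{-1} C_{k+1}
          V[k] = Qs[k] @ C
          V[k] /= np.linalg.norm(V[k], axis=0) # columns ~ CLVs; discard both transients
          C = solve_triangular(Rs[k], C)
          C /= np.linalg.norm(C, axis=0)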

  9. Stardust Navigation Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Menon, Premkumar R.

    2000-01-01

    The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by the Jet Propulsion Laboratory (JPL). This paper will describe the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and its trajectory will be provided first, followed by a discussion of the covariance procedure and models. Predicted accuracies will be examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter Stardust will return to Earth when the Sample Return Capsule (SRC) will separate and land at the Utah Test and Training Range (UTTR) on January 15, 2006. The spacecraft will however be deflected off into a heliocentric orbit. The mission is divided into three phases for the covariance analysis. They are 1) Launch to EGA, 2) EGA to Wild-2 encounter and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a-priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers

  10. CERAMIC: Case-Control Association Testing in Samples with Related Individuals, Based on Retrospective Mixed Model Analysis with Adjustment for Covariates

    PubMed Central

    Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan. PMID:27695091

  11. Covariance Spectroscopy Applied to Nuclear Radiation Detection

    SciTech Connect

    Trainham, R., Tinsley, J., Keegan, R., Quam, W.

    2011-09-01

    Covariance spectroscopy is a method of processing second order moments of data to obtain information that is usually absent from average spectra. In nuclear radiation detection it represents a generalization of nuclear coincidence techniques. Correlations and fluctuations in data encode valuable information about radiation sources, transport media, and detection systems. Gaining access to the extra information can help to untangle complicated spectra, uncover overlapping peaks, accelerate source identification, and even sense directionality. Correlations existing at the source level are particularly valuable since many radioactive isotopes emit correlated gammas and neutrons. Correlations also arise from interactions within detector systems, and from scattering in the environment. In particular, correlations from Compton scattering and pair production within a detector array can be usefully exploited in scenarios where direct measurement of source correlations would be unfeasible. We present a covariance analysis of a few experimental data sets to illustrate the utility of the concept.

  12. Radiance Covariance and Climate Models

    NASA Technical Reports Server (NTRS)

    Haskins, R.; Goody, R.; Chen, L.

    1998-01-01

    Spectral Empirical Orthogonal Functions (EOFs) derived from the covariance of satellite radiance spectra may be interpreted in terms of the vertical distribution of the covariance of temperature, water vapor, and clouds. The purpose of the investigation is to demonstrate the important constraints that resolved spectral radiances can place upon climate models.
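
    A small numpy sketch of the spectral-EOF computation on synthetic radiance spectra (two planted modes plus noise, all numbers illustrative): the EOFs are eigenvectors of the channel-by-channel covariance of the radiance anomalies.

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_chan = 1000, 50
      grid = np.linspace(0.0, 1.0, n_chan)
      modes = np.stack([np.sin(np.pi*grid), np.sin(2*np.pi*grid)])
      amps = rng.normal(size=(n_obs, 2)) * [2.0, 1.0]
      spectra = amps @ modes + 0.1*rng.normal(size=(n_obs, n_chan))

      anom = spectra - spectra.mean(axis=0)
      C = np.cov(anom, rowvar=False)               # channel covariance of the spectra
      evals, evecs = np.linalg.eigh(C)
      eofs = evecs[:, ::-1][:, :2]                 # leading spectral EOFs
      print("variance explained:", evals[::-1][:2] / evals.sum())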

  13. Covariant harmonic oscillators: 1973 revisited

    NASA Technical Reports Server (NTRS)

    Noz, M. E.

    1993-01-01

    Using the relativistic harmonic oscillator, a physical basis is given to the phenomenological wave function of Yukawa which is covariant and normalizable. It is shown that this wave function can be interpreted in terms of the unitary irreducible representations of the Poincare group. The transformation properties of these covariant wave functions are also demonstrated.

  14. The effect of mood on detection of covariation.

    PubMed

    Braverman, Julia

    2005-11-01

    The purpose of this research is to explore the effect of mood on the detection of covariation. Predictions were based on an assumption that sad moods facilitate a data-driven information elaboration style and careful data scrutinizing, whereas happy moods predispose individuals toward top-down information processing and decrease the attention given to cognitive tasks. The primary dependent variable involved is the detection of covariation between facial features and personal information and the use of this information for evaluating new target faces. The findings support the view that sad mood facilitates both conscious and unconscious detection of covariation because it increases motivation to engage in the task. Limiting available cognitive resources does not eliminate the effect of mood on the detecting of covariation.

  15. Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials

    DOE Data Explorer

    The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOE's Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity / uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release). [Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10^-5 eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for 7Li, 232Th, 233,235,238U and 239Pu as well as 223,224,225,226Ra, while natZn is replaced by 64,66,67,68,70Zn

  16. The incredible shrinking covariance estimator

    NASA Astrophysics Data System (ADS)

    Theiler, James

    2012-05-01

    Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
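
    A numpy sketch of the shrinkage estimator itself (the paper's contribution is a fast approximation to leave-one-out cross-validation; for brevity this sketch selects the shrinkage parameter on a held-out set instead, which is our simplification, not the Hoffbeck-Landgrebe scheme):

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(0)
      p, n_train, n_val = 30, 40, 400
      A = rng.normal(size=(p, p))
      C_true = A @ A.T/p + np.eye(p)
      X = rng.multivariate_normal(np.zeros(p), C_true, n_train)    # small training sample
      Xv = rng.multivariate_normal(np.zeros(p), C_true, n_val)

      S = np.cov(X, rowvar=False)                   # overfit: sample covariance
      T = np.trace(S)/p * np.eye(p)                 # underfit: scaled-identity target

      def held_out_ll(alpha):
          C = (1.0 - alpha)*S + alpha*T             # shrinkage combination
          return multivariate_normal(mean=np.zeros(p), cov=C).logpdf(Xv).sum()

      alphas = np.linspace(0.01, 0.99, 50)
      best = alphas[np.argmax([held_out_ll(a) for a in alphas])]
      print("selected shrinkage parameter:", best)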

  17. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  18. Replacing a Missing Tooth

    MedlinePlus

    ... majority of patients with clefts will require full orthodontic treatment, especially if the cleft has passed through ... later replacement of the missing lateral incisor. During orthodontic treatment, an artificial tooth may be attached to ...

  19. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct real-time test of consistency of state estimates based upon recently acquired data.

  20. Covariant Closed String Coherent States

    SciTech Connect

    Hindmarsh, Mark; Skliros, Dimitri

    2011-02-25

    We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops.

  1. Covariant closed string coherent states.

    PubMed

    Hindmarsh, Mark; Skliros, Dimitri

    2011-02-25

    We give the first construction of covariant coherent closed string states, which may be identified with fundamental cosmic strings. We outline the requirements for a string state to describe a cosmic string, and provide an explicit and simple map that relates three different descriptions: classical strings, light cone gauge quantum states, and covariant vertex operators. The resulting coherent state vertex operators have a classical interpretation and are in one-to-one correspondence with arbitrary classical closed string loops. PMID:21405564

  2. Covariance tracking: architecture optimizations for embedded systems

    NASA Astrophysics Data System (ADS)

    Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan

    2014-12-01

    Covariance matching techniques have recently grown in interest due to their good performance for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, they can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review of different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on two dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15 and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
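
    A numpy sketch of the underlying region covariance descriptor and a common dissimilarity between two descriptors (the feature set [x, y, I, |Ix|, |Iy|] and the generalized-eigenvalue metric follow the usual covariance-tracking literature; the random image is purely illustrative):

      import numpy as np
      from scipy.linalg import eigh

      def region_covariance(img, y0, y1, x0, x1):
          # covariance of per-pixel features [x, y, intensity, |Ix|, |Iy|]
          gy, gx = np.gradient(img.astype(float))
          ys, xs = np.mgrid[y0:y1, x0:x1]
          F = np.stack([xs, ys, img[y0:y1, x0:x1],
                        np.abs(gx[y0:y1, x0:x1]), np.abs(gy[y0:y1, x0:x1])],
                       axis=-1).reshape(-1, 5)
          return np.cov(F, rowvar=False)

      def cov_distance(C1, C2):
          # metric on covariance matrices via generalized eigenvalues
          lam = eigh(C1, C2, eigvals_only=True)
          return float(np.sqrt(np.sum(np.log(lam)**2)))

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))
      C_a = region_covariance(img, 0, 32, 0, 32)
      C_b = region_covariance(img, 32, 64, 32, 64)
      print("descriptor distance:", cov_distance(C_a, C_b))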

  3. Development of Covariance Capabilities in EMPIRE Code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-12-15

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on 89Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  4. Development of covariance capabilities in EMPIRE code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on 89Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  5. Missing Drivers with Dementia: Antecedents and Recovery

    PubMed Central

    Rowe, Meredeth A.; Greenblum, Catherine A.; Boltz, Marie; Galvin, James E.

    2013-01-01

    OBJECTIVES To determine the circumstances in which persons with dementia become lost while driving, how missing drivers are found, and how Silver Alert notifications are instrumental in those discoveries. DESIGN A retrospective, descriptive study. SETTING Retrospective record review. PARTICIPANTS Conducted using 156 records from the Florida Silver Alert program for the time period October 2008 through May 2010. These alerts were issued in Florida for a missing driver with dementia. MEASUREMENTS Information derived from the reports on characteristics of the missing driver, antecedents to the missing event, and discovery of the missing driver. RESULTS and CONCLUSION The majority of missing drivers were males, with ages ranging from 58 to 94, who were being cared for by a spouse. Most drivers became lost on routine, caregiver-sanctioned trips to usual locations. Only 15% were in the act of driving when found, with most being found in or near a parked car, and the large majority were found by law enforcement officers. Only 40% were found in the county in which they went missing and 10% were found in a different state. Silver Alert notifications were most effective for law enforcement; citizen alerts resulted in a few discoveries. There was a 5% mortality rate in the study population, with those living alone more likely to be found dead than alive. An additional 15% were found in dangerous situations such as stopped on railroad tracks. Thirty-two percent had documented driving or dangerous errors such as driving the wrong way or into secluded areas, or walking in or near roadways. PMID:23134069

  6. Covariance Matrix Evaluations for Independent Mass Fission Yields

    SciTech Connect

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of 235U(nth, f) and 239Pu(nth, f) reactions.

  7. Covariance Matrix Evaluations for Independent Mass Fission Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.; Sumini, M.

    2015-01-01

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describe the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squared method through the CONRAD code. Preliminary results on mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of 235U(nth, f) and 239Pu(nth, f) reactions.
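
    A toy Monte Carlo rendering of the covariance-generation step (numpy; Gaussian humps stand in for Brosa's modes, the parameter spreads are invented, and the convolution with the prompt-neutron multiplicity curve is omitted): sampling the mode parameters and taking the sample covariance of the resulting yield curves produces a mass-yield variance-covariance matrix.

      import numpy as np

      rng = np.random.default_rng(0)
      A = np.arange(70, 171)                        # fragment mass numbers

      def yield_curve(mu, sigma):
          # two complementary Gaussian humps (light + heavy), normalized to 200%
          g = np.exp(-0.5*((A - mu)/sigma)**2) + np.exp(-0.5*((A - (236 - mu))/sigma)**2)
          return 200.0*g/g.sum()

      draws = np.array([yield_curve(rng.normal(96.0, 0.5), rng.normal(5.5, 0.2))
                        for _ in range(5000)])
      cov = np.cov(draws, rowvar=False)             # mass-yield covariance matrix
      sd = np.sqrt(np.diag(cov))
      corr = cov / np.outer(sd, sd)                 # correlation structure for inspection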

  8. Structural constraints identified with covariation analysis in ribosomal RNA.

    PubMed

    Shang, Lei; Xu, Weijia; Ozer, Stuart; Gutell, Robin R

    2012-01-01

    Covariation analysis is used to identify those positions with similar patterns of sequence variation in an alignment of RNA sequences. These constraints on the evolution of two positions are usually associated with a base pair in a helix. While mutual information (MI) has been used to accurately predict an RNA secondary structure and a few of its tertiary interactions, early studies revealed that phylogenetic event counting methods are more sensitive and provide extra confidence in the prediction of base pairs. We developed a novel and powerful phylogenetic event counting method (PEC) for quantifying positional covariation with the Gutell lab's new RNA Comparative Analysis Database (rCAD). The PEC and MI-based methods each identify unique base pairs, and jointly identify many other base pairs. In total, both methods in combination with an N-best and helix-extension strategy identify the maximal number of base pairs. While covariation methods have effectively and accurately predicted RNA secondary structure, only a few tertiary structure base pairs have been identified. Analysis presented herein and at the Gutell lab's Comparative RNA Web (CRW) Site reveals that the majority of these latter base pairs do not covary with one another. However, covariation analysis does reveal a weaker although significant covariation between sets of nucleotides that are in proximity in the three-dimensional RNA structure. This reveals that covariation analysis identifies other types of structural constraints beyond the two nucleotides that form a base pair.
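
    A minimal Python illustration of the mutual-information half of the analysis (toy alignment of our own; the PEC method and rCAD are not reproduced): columns forming a Watson-Crick pair covary and score high MI, while an independent column scores near zero.

      import numpy as np
      from collections import Counter

      def column_mi(ci, cj):
          # mutual information (bits) between two alignment columns
          n = len(ci)
          fi, fj, fij = Counter(ci), Counter(cj), Counter(zip(ci, cj))
          return sum((c/n)*np.log2((c/n)/((fi[a]/n)*(fj[b]/n)))
                     for (a, b), c in fij.items())

      rng = np.random.default_rng(0)
      pairs = [("G", "C"), ("A", "U"), ("C", "G"), ("U", "A")]
      seqs = ["".join([p[0], rng.choice(list("ACGU")), rng.choice(list("ACGU")), p[1]])
              for p in pairs for _ in range(10)]
      cols = ["".join(c) for c in zip(*seqs)]
      print("MI, paired columns 1 and 4:  ", column_mi(cols[0], cols[3]))  # ~2 bits
      print("MI, unpaired columns 1 and 3:", column_mi(cols[0], cols[2]))  # ~0 bits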

  9. Levy Matrices and Financial Covariances

    NASA Astrophysics Data System (ADS)

    Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail

    2003-10-01

    In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures behavior of the SP500 covariances. It may be of importance for assets diversification.

  10. Automatic Classification of Variable Stars in Catalogs with Missing Data

    NASA Astrophysics Data System (ADS)

    Pichara, Karim; Protopapas, Pavlos

    2013-11-01

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  11. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    SciTech Connect

    Pichara, Karim; Protopapas, Pavlos

    2013-11-10

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  12. Posterior covariance versus analysis error covariance in variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shutyaev, Victor; Gejadze, Igor; Le Dimet, Francois-Xavier

    2013-04-01

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition function (analysis) [1]. The data contain errors (observation and background errors), hence there is an error in the analysis. For mildly nonlinear dynamics, the analysis error covariance can be approximated by the inverse Hessian of the cost functional in the auxiliary data assimilation problem [2], whereas for stronger nonlinearity - by the 'effective' inverse Hessian [3, 4]. However, it has been noticed that the analysis error covariance is not the posterior covariance from the Bayesian perspective. While these two are equivalent in the linear case, the difference may become significant in practical terms with the nonlinearity level rising. For the proper Bayesian posterior covariance a new approximation via the Hessian of the original cost functional is derived and its 'effective' counterpart is introduced. An approach for computing the mentioned estimates in the matrix-free environment using the Lanczos method with preconditioning is suggested. Numerical examples which validate the developed theory are presented for the model governed by the Burgers equation with a nonlinear viscous term. The authors acknowledge the funding through the Natural Environment Research Council (NERC grant NE/J018201/1), the Russian Foundation for Basic Research (project 12-01-00322), the Ministry of Education and Science of Russia, the MOISE project (CNRS, INRIA, UJF, INPG) and Région Rhône-Alpes. References: 1. Le Dimet F.X., Talagrand O. Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus, 1986, v.38A, pp.97-110. 2. Gejadze I., Le Dimet F.-X., Shutyaev V. On analysis error covariances in variational data assimilation. SIAM J. Sci. Computing, 2008, v.30, no.4, pp.1847-1874. 3. Gejadze I.Yu., Copeland G.J.M., Le Dimet F.-X., Shutyaev V. Computation of the analysis error
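
    A simplified, matrix-free sketch of the Hessian-based covariance estimates discussed above, assuming a linear observation operator and scalar background/observation variances (the paper treats nonlinear dynamics, where these become local approximations): Lanczos iterations via scipy's eigsh yield leading eigenpairs of the Hessian, from which a low-rank approximation of its inverse, the analysis-error covariance, follows.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n, m = 200, 50
rng = np.random.default_rng(1)
H = rng.normal(size=(m, n)) / np.sqrt(n)   # linearized observation operator (toy)
sigma_b2, sigma_o2 = 1.0, 0.1              # background / observation variances

def hess_matvec(v):
    # Hessian-vector product A v = B^{-1} v + H^T R^{-1} H v, matrix-free
    return v / sigma_b2 + H.T @ (H @ v) / sigma_o2

A = LinearOperator((n, n), matvec=hess_matvec)

# Lanczos: the leading eigenpairs of the Hessian dominate the update to B
k = 30
vals, vecs = eigsh(A, k=k, which='LM')

def inv_hess_apply(v):
    # Low-rank approximation of the inverse Hessian (analysis-error covariance):
    # A^{-1} v ≈ sigma_b2 * v + V (1/vals - sigma_b2) V^T v on the leading modes
    coeff = vecs.T @ v
    return sigma_b2 * (v - vecs @ coeff) + vecs @ (coeff / vals)
```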

  13. Bayesian modeling of the covariance structure for irregular longitudinal data using the partial autocorrelation function.

    PubMed

    Su, Li; Daniels, Michael J

    2015-05-30

    In long-term follow-up studies, irregular longitudinal data are observed when individuals are assessed repeatedly over time but at uncommon and irregularly spaced time points. Modeling the covariance structure for this type of data is challenging, as it requires specification of a covariance function that is positive definite. Moreover, in certain settings, careful modeling of the covariance structure for irregular longitudinal data can be crucial in order to ensure that no bias arises in the mean structure. Two common settings where this occurs are studies with 'outcome-dependent follow-up' and studies with 'ignorable missing data'. 'Outcome-dependent follow-up' occurs when individuals with a history of poor health outcomes have more follow-up measurements, with shorter intervals between the repeated measurements. When the follow-up time process only depends on previous outcomes, likelihood-based methods can still provide consistent estimates of the regression parameters, given that both the mean and covariance structures of the irregular longitudinal data are correctly specified; no model for the follow-up time process is required. For 'ignorable missing data', the missing data mechanism does not need to be specified, but valid likelihood-based inference requires correct specification of the covariance structure. In both cases, flexible modeling approaches for the covariance structure are essential. In this paper, we develop a flexible approach to modeling the covariance structure for irregular continuous longitudinal data using the partial autocorrelation function and the variance function. In particular, we propose semiparametric non-stationary partial autocorrelation function models, which do not suffer from complex positive definiteness restrictions like the autocorrelation function. We describe a Bayesian approach, discuss computational issues, and apply the proposed methods to CD4 count data from a pediatric AIDS clinical trial.

  14. Miss Dove Rediviva.

    ERIC Educational Resources Information Center

    Hawley, Richard A.

    1995-01-01

    Suggests that a way out of the current malaise of American education may be to locate educational excellence in accessible American fiction. Discusses Frances Gray Patton's "Good Morning, Miss Dove," in which the central character is an elementary school geography teacher. (RS)

  15. Tracking the Missing Biologist.

    ERIC Educational Resources Information Center

    Morgenstern, Douglas; Murray, Janet H.

    1995-01-01

    Describes an interactive computer simulation where the student assumes the role of a reporter searching for a missing biologist in Colombia. The search involves simulated news broadcasts, documentary footage, and interviews with Colombians, all in Spanish. The student accesses help via electronic archives (translated words) and faxing the editor.…

  16. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
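
    A minimal sketch of the underlying idea, not the authors' estimator (which selects the clipping interval by maximum likelihood): bound the condition number of a sample covariance by raising its smallest eigenvalues.

```python
import numpy as np

def condreg_clip(S, kappa_max):
    """Bound the condition number of a covariance estimate by eigenvalue
    clipping; a simplified stand-in for the ML estimator of the paper,
    which additionally chooses the clipping interval optimally."""
    vals, vecs = np.linalg.eigh(S)
    u = vals.max() / kappa_max              # smallest admissible eigenvalue
    clipped = np.clip(vals, u, None)        # raise eigenvalues below u
    return vecs @ np.diag(clipped) @ vecs.T

# "large p, small n": p = 50 variables, n = 20 observations
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 50))
S = np.cov(X, rowvar=False)                 # singular: condition number infinite
S_reg = condreg_clip(S, kappa_max=100.0)
print(np.linalg.cond(S_reg))                # bounded by kappa_max
```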

  17. Are Eddy Covariance series stationary?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral analysis via a discrete Fourier transform is used often to examine eddy covariance series for cycles (eddies) of interest. Generally the analysis is performed on hourly or half-hourly data sets collected at 10 or 20 Hz. Each original series is often assumed to be stationary. Also automated ...

  18. Covariation Neglect among Novice Investors

    ERIC Educational Resources Information Center

    Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy

    2006-01-01

    In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…

  19. Gaussian covariance matrices for anisotropic galaxy clustering measurements

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio

    2016-04-01

    Measurements of redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations, validated with synthetic catalogues. Considering the Legendre moments ('multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter ('clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As the main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
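
    For a flavor of the mode-counting argument behind such Gaussian covariance models, the sketch below evaluates the standard diagonal covariance of the power-spectrum monopole for a toy survey; the volume, number density, and input spectrum are illustrative, and the paper's full model covers multipoles, wedges, and configuration space.

```python
import numpy as np

# Diagonal Gaussian covariance for the power-spectrum monopole P_0(k),
# from the standard mode-counting argument (illustrative conventions).
V    = 1.0e9        # (Mpc/h)^3, assumed survey volume
nbar = 3.0e-4       # (h/Mpc)^3, assumed mean galaxy number density
dk   = 0.01         # h/Mpc, k-bin width
k    = np.arange(0.01, 0.3, dk)
P    = 2.0e4 / (1.0 + (k / 0.02) ** 2)   # toy input power spectrum

# number of independent Fourier modes per shell: N_k = V k^2 dk / (4 pi^2)
N_k = V * k**2 * dk / (4.0 * np.pi**2)

# Gaussian variance: shot noise 1/nbar adds to the power in each mode
var_P = 2.0 * (P + 1.0 / nbar) ** 2 / N_k
cov = np.diag(var_P)   # diagonal in k for a Gaussian field and trivial geometry
```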

  20. [Clinical research XIX. From clinical judgment to analysis of covariance].

    PubMed

    Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Moreno, Jorge; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2014-01-01

    The analysis of covariance (ANCOVA) is based on the general linear model. This technique involves a regression model, often multiple, in which the outcome is presented as a continuous variable, the independent variables are qualitative or are introduced into the model as dummy or dichotomous variables, and the factors for which adjustment is required (covariates) can be on any measurement level (i.e., nominal, ordinal or continuous). The maneuvers can be entered into the model as 1) fixed effects or 2) random effects. The difference between fixed effects and random effects depends on the type of information we want from the analysis of the effects. ANCOVA separates the effect of the independent variables from that of the covariates, i.e., it corrects the dependent variable by eliminating the influence of the covariates, given that these variables change in conjunction with the maneuvers or treatments and affect the outcome variable. ANCOVA should be done only if three assumptions are met: 1) the relationship between the covariate and the outcome is linear, 2) there is homogeneity of slopes, and 3) the covariate and the independent variable are independent of each other.
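
    A minimal sketch of such an analysis with statsmodels on synthetic data (all variable names are illustrative), including the homogeneity-of-slopes check via the treatment-by-covariate interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "treatment": rng.choice(["A", "B", "C"], size=n),   # maneuver (fixed effect)
    "baseline":  rng.normal(50, 10, size=n),            # continuous covariate
})
effect = df["treatment"].map({"A": 0.0, "B": 2.0, "C": 5.0})
df["outcome"] = 10 + 0.8 * df["baseline"] + effect + rng.normal(0, 3, n)

# ANCOVA: treatment as a dummy-coded factor, baseline as continuous covariate
model = smf.ols("outcome ~ C(treatment) + baseline", data=df).fit()
print(model.summary())

# Homogeneity-of-slopes check: the interaction should be non-significant
check = smf.ols("outcome ~ C(treatment) * baseline", data=df).fit()
```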

  1. Realization of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2005-12-15

    In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable 'fidelity'. We present a general scheme which realizes the 1→3 phase covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization encoded qubits, the first ever realized with photons, is reported.

  2. Realization of the optimal phase-covariant quantum cloning machine

    NASA Astrophysics Data System (ADS)

    Sciarrino, Fabio; de Martini, Francesco

    2005-12-01

    In several quantum information (QI) phenomena of large technological importance the information is carried by the phase of the quantum superposition states, or qubits. The phase-covariant cloning machine (PQCM) addresses precisely the problem of optimally copying these qubits with the largest attainable “fidelity.” We present a general scheme which realizes the 1→3 phase covariant cloning process by a combination of three different QI processes: the universal cloning, the NOT gate, and the projection over the symmetric subspace of the output qubits. The experimental implementation of a PQCM for polarization encoded qubits, the first ever realized with photons, is reported.

  3. Mardia's Multivariate Kurtosis with Missing Data

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Lambert, Paul L.; Fouladi, Rachel T.

    2004-01-01

    Mardia's measure of multivariate kurtosis has been implemented in many statistical packages commonly used by social scientists. It provides important information on whether a commonly used multivariate procedure is appropriate for inference. Many statistical packages also have options for missing data. However, there is no procedure for applying…

  4. Minimal unitary (covariant) scattering theory

    SciTech Connect

    Lindesay, J.V.; Markevich, A.

    1983-06-01

    In the minimal three particle equations developed by Lindesay, the two body input amplitude was an on-shell relativistic generalization of the non-relativistic scattering model characterized by a single mass parameter μ which in the two body (m + m) system looks like an s-channel bound state (μ < 2m) or virtual state (μ > 2m). Using this driving term in covariant Faddeev equations generates a rich covariant and unitary three particle dynamics. However, the simplest way of writing the relativistic generalization of the Faddeev equations can take the on-shell Mandelstam parameter s = 4(q² + m²), in terms of which the two particle input is expressed, to negative values in the range of integration required by the dynamics. This problem was met in the original treatment by multiplying the two particle input amplitude by Θ(s). This paper provides what we hope to be a more direct way of meeting the problem.

  5. Understanding covariate shift in model performance

    PubMed Central

    McGaughey, Georgia; Walters, W. Patrick; Goldman, Brian

    2016-01-01

    Three (3) different methods (logistic regression, covariate shift and k-NN) were applied to five (5) internal datasets and one (1) external, publicly available dataset where covariate shift existed. In all cases, k-NN’s performance was inferior to either logistic regression or covariate shift. Surprisingly, there was no obvious advantage for using covariate shift to reweight the training data in the examined datasets. PMID:27803797
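
    One standard way to implement covariate-shift reweighting, shown here as a generic sketch rather than the authors' exact procedure, is discriminative density-ratio estimation: a probabilistic classifier is trained to separate training from test rows, and its odds serve as importance weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_train, X_test):
    """Importance weights w(x) ~ p_test(x)/p_train(x), estimated with a
    probabilistic classifier that discriminates test from training rows."""
    X = np.vstack([X_train, X_test])
    z = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_train)[:, 1]     # P(test | x) on training rows
    w = p / (1.0 - p)                        # density-ratio estimate
    return w * len(w) / w.sum()              # normalize to mean 1

# The weights can be passed to most sklearn estimators, e.g.:
# model.fit(X_train, y_train, sample_weight=covariate_shift_weights(X_train, X_test))
```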

  6. Covariant jump conditions in electromagnetism

    NASA Astrophysics Data System (ADS)

    Itin, Yakov

    2012-02-01

    A generally covariant four-dimensional representation of Maxwell's electrodynamics in a generic material medium can be achieved straightforwardly in the metric-free formulation of electromagnetism. In this setup, the electromagnetic phenomena are described by two tensor fields, which satisfy Maxwell's equations. A generic tensorial constitutive relation between these fields is an independent ingredient of the theory. By use of different constitutive relations (local and non-local, linear and non-linear, etc.), a wide area of applications can be covered. In the current paper, we present the jump conditions for the fields and for the energy-momentum tensor on an arbitrarily moving surface between two media. From the differential and integral Maxwell equations, we derive the covariant boundary conditions, which are independent of any metric and connection. These conditions include the covariantly defined surface current and are applicable to an arbitrarily moving smooth curved boundary surface. As an application of the presented jump formulas, we derive a Lorentzian type metric as a condition for existence of the wave front in isotropic media. This result holds for ordinary materials as well as for metamaterials with negative material constants.

  7. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.

  8. Realistic Covariance Prediction For the Earth Science Constellations

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellations (ESC) include collision risk assessment between members of the constellations and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed via Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by NASA Goddard's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the ESC satellites: Aqua, Aura, and Terra.

  9. Lorentz-covariant dissipative Lagrangian systems

    NASA Technical Reports Server (NTRS)

    Kaufman, A. N.

    1985-01-01

    The concept of dissipative Hamiltonian system is converted to Lorentz-covariant form, with evolution generated jointly by two scalar functionals, the Lagrangian action and the global entropy. A bracket formulation yields the local covariant laws of energy-momentum conservation and of entropy production. The formalism is illustrated by a derivation of the covariant Landau kinetic equation.

  10. Covariance control of discrete stochastic bilinear systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Kherat, S. M.; Yaz, E.

    1991-01-01

    The covariances that certain bilinear stochastic discrete time systems may possess are characterized. An explicit parameterization of all controllers that assign such covariances is given. The state feedback assignability and robustness of the system are discussed from a deterministic point of view. This work extends the theory of covariance control for continuous time bilinear systems to a discrete time setting.

  11. Relative error covariance analysis techniques and application

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Williams, Bobby G.

    1988-01-01

    A technique for computing the error covariance of the difference between two estimators derived from different (possibly overlapping) data arcs is presented. The relative error covariance is useful for predicting the achievable consistency between Kalman-Bucy filtered estimates generated from two (not necessarily disjoint) data sets. The relative error covariance analysis technique is then applied to a Venus Orbiter simulation.

  12. Missing people, migrants, identification and human rights.

    PubMed

    Nuzzolese, E

    2012-11-30

    The increasing volume and complexity of migratory flows have led to a range of problems involving human rights, public health, disease and border control, and regulatory processes. As a result of war or internal conflicts, missing person cases and their management have to be regarded as a worldwide issue. Even in peacetime, the issue of missing persons remains relevant. In 2007 the Italian Ministry of the Interior appointed an extraordinary commissioner to assess the total number of unidentified recovered bodies and to verify the extent of the phenomenon of missing persons, reported as 24,912 people in Italy (updated 31 December 2011). Of these, 15,632 persons are of foreign nationality and are still missing. The census of unidentified bodies revealed a total of 832 cases recovered in Italy since 1974. These bodies/human remains received a regular autopsy and were buried as 'corpses without a name'. In Italy judicial autopsy is performed to establish cause of death and identity, but odontology and dental radiology are rarely employed in identification cases. Nevertheless, odontologists can substantiate an identification through the 'biological profile', providing further information that can narrow the search to a smaller number of missing individuals even when no ante-mortem dental data are available. The forensic dental community should put greater emphasis on the role of forensic odontology as a tool for humanitarian action on behalf of unidentified individuals and for best practice in human identification.

  13. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application with examples, including a relatively detailed evaluation of covariances for two individual nuclei and the massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  14. Recurrence Analysis of Eddy Covariance Fluxes

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael

    2015-04-01

    The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five different questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
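
    The basic object behind both Recurrence Plots and Recurrence Networks is the recurrence matrix; below is a minimal sketch on a synthetic half-hourly series (the threshold and embedding parameters are illustrative heuristics, not the study's settings).

```python
import numpy as np

def recurrence_matrix(x, eps=None, dim=3, delay=1):
    """Binary recurrence matrix of a scalar series after time-delay embedding."""
    # time-delay embedding into 'dim' dimensions
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    # pairwise distances between embedded states
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * d.max()          # common heuristic: 10% of max distance
    return (d <= eps).astype(int)

# recurrence rate of a toy half-hourly flux series (ten synthetic days)
t = np.arange(0, 48 * 10)
flux = np.sin(2 * np.pi * t / 48) + 0.2 * np.random.default_rng(4).normal(size=t.size)
R = recurrence_matrix(flux)
print(R.mean())                      # recurrence rate, one basic RA measure
```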

  15. The Impact of Missing Data on Sample Reliability Estimates: Implications for Reliability Reporting Practices

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2004-01-01

    A method for incorporating maximum likelihood (ML) estimation into reliability analyses with item-level missing data is outlined. An ML estimate of the covariance matrix is first obtained using the expectation maximization (EM) algorithm, and coefficient alpha is subsequently computed using standard formulae. A simulation study demonstrated that…
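
    The two-stage computation described here is straightforward once a covariance matrix is available; the sketch below applies the standard alpha formula to an item covariance matrix, with a toy matrix standing in for the EM-based ML estimate.

```python
import numpy as np

def coefficient_alpha(S):
    """Cronbach's alpha from a k x k item covariance matrix S,
    e.g., an EM-based ML estimate obtained from item-level missing data."""
    k = S.shape[0]
    return (k / (k - 1.0)) * (1.0 - np.trace(S) / S.sum())

# toy 5-item covariance (stand-in for the EM-estimated matrix in the article)
S = np.full((5, 5), 0.4) + np.diag(np.full(5, 0.6))
print(coefficient_alpha(S))   # about 0.77 for this toy matrix
```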

  16. A Two-Stage Approach to Missing Data: Theory and Application to Auxiliary Variables

    ERIC Educational Resources Information Center

    Savalei, Victoria; Bentler, Peter M.

    2009-01-01

    A well-known ad-hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a…

  17. Comparison of Modern Methods for Analyzing Repeated Measures Data with Missing Values

    ERIC Educational Resources Information Center

    Vallejo, G.; Fernandez, M. P.; Livacic-Rojas, P. E.; Tuero-Herrero, E.

    2011-01-01

    Missing data are a pervasive problem in many psychological applications in the real world. In this article we study the impact of dropout on the operational characteristics of several approaches that can be easily implemented with commercially available software. These approaches include the covariance pattern model based on an unstructured…

  18. Covariance Structure Models for Gene Expression Microarray Data

    ERIC Educational Resources Information Center

    Xie, Jun; Bentler, Peter M.

    2003-01-01

    Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…

  19. Electromagnetics: from Covariance to Cloaking

    NASA Astrophysics Data System (ADS)

    McCall, M. W.

    2008-10-01

    An overview of some topical themes in electromagnetism is presented. Recent interest in metamaterials research has enabled earlier theoretical speculations concerning electromagnetic media displaying a negative refractive index to be experimentally realized. Such media can act as perfect lenses. The mathematical criterion of what signals such unusual electromagnetic behavior is discussed, showing that a covariant (or coordinate free) perspective is essential. Coordinate transformations have also become significant in the theme of transformation optics, where the interplay between a coordinate transformation and metamaterial behavior has led to the concept of an electromagnetic cloak.

  20. Phase-covariant quantum benchmarks

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Aspachs, M.; Muñoz-Tapia, R.; Bagan, E.

    2009-05-01

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  1. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  2. Estimated Environmental Exposures for MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Pippin, Gary; Kinard, William H.

    2008-01-01

    Describes the estimated environmental exposure for MISSE-3 and MISSE-4. These test beds, attached to the outside of the International Space Station, were planned for 3 years of exposure. This was changed to 1 year after MISSE-1 and -2 had been in space for 4 years. MISSE-3 and -4 operate in a low Earth orbit space environment, which exposes them to a variety of assaults including atomic oxygen, ultraviolet radiation, particulate radiation, thermal cycling, and meteoroid/space debris impact, as well as contamination associated with proximity to an active space station. Measurements and determinations of atomic oxygen fluences, solar UV exposure levels, molecular contamination levels, and particulate radiation are included.

  3. Neutron Cross Section Covariances: Recent Workshop and Advanced Reactor Systems

    NASA Astrophysics Data System (ADS)

    Oblozinsky, Pavel

    2008-10-01

    The recent Workshop on Neutron Cross Section Covariances, organized by BNL and attended by more than 50 scientists, responded to the demands of many user groups, including advanced reactor systems, for uncertainty and correlation information. These demands can be explained by considerable progress in advanced neutronics simulations that probe covariances and their impact on the design and operational margins of nuclear systems. The Workshop addressed evaluation methodology and recent evaluations as well as the user's perspective, marking an era of revival in covariance development that started some two years ago. We illustrate the urgent demand for covariances in the case of advanced reactor systems, including the fast actinide burner under GNEP, the new generation of power reactors, Gen-IV, and the reactors under AFCI. A common feature of many of these systems is the presence of large amounts of minor actinides and fission products that require improved nuclear data. Advanced simulation codes rely on quality input, to be obtained by adjusting the data library, such as the new ENDF/B-VII.0, against integral experiments, as currently pursued by GNEP. To this end the nuclear data community is developing covariances for a formidable set of 112 materials (isotopes).

  4. Data Covariances from R-Matrix Analyses of Light Nuclei

    SciTech Connect

    Hale, G.M.; Paris, M.W.

    2015-01-15

    After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (⁵He) and with many resonances (¹³C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
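
    The first-order propagation step is compact to state: with parameter covariance M and sensitivity matrix G = ∂σ/∂p, the cross-section covariance is the "sandwich" G M Gᵀ. A generic numerical sketch, with random placeholders for the fitted quantities:

```python
import numpy as np

rng = np.random.default_rng(5)
n_par, n_xs = 6, 40                  # parameters, cross-section energy points
G = rng.normal(size=(n_xs, n_par))   # sensitivities d(sigma)/d(p) (analytic in EDA)
L = rng.normal(size=(n_par, n_par))
M = L @ L.T / n_par                  # a positive-definite parameter covariance

cov_xs = G @ M @ G.T                 # first-order cross-section covariance
corr = cov_xs / np.sqrt(np.outer(np.diag(cov_xs), np.diag(cov_xs)))
```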

  5. Evaluation of Covariances for Actinides and Light Elements at LANL

    SciTech Connect

    Kawano, T.; Talou, P.; Young, P.G.; Hale, G.; Chadwick, M.B.; Little, R.C.

    2008-12-15

    Los Alamos evaluates covariances for the evaluated nuclear data library (ENDF), mainly for actinides above the resonance region and for light elements in the entire energy range. We also develop techniques to evaluate the covariance data, such as Bayesian and least-squares fitting methods, which are important for exploring the uncertainty information on different types of physical quantities such as elastic scattering angular distributions or prompt fission neutron spectra. This paper summarizes our current covariance evaluation activities at LANL, including the actinide and light element data, mainly for criticality safety studies and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and experimental data, is discussed.

  6. Identifying Heat Waves in Florida: Considerations of Missing Weather Data

    PubMed Central

    Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.

    2015-01-01

    Background Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198

  7. What Darwin missed

    NASA Astrophysics Data System (ADS)

    Campbell, A. K.

    2003-07-01

    Throughout his life, Fred Hoyle had a keen interest in evolution. He argued that natural selection by small, random change, as conceived by Charles Darwin and Alfred Russel Wallace, could not explain either the origin of life or the origin of a new protein. The idea of natural selection, Hoyle told us, wasn't even Darwin's original idea in the first place. Here, in honour of Hoyle's analysis, I propose a solution to Hoyle's dilemma. His solution was life from space - panspermia. But the real key to understanding natural selection is `molecular biodiversity'. This explains the things Darwin missed - the origin of species and the origin of extinction. It is also a beautiful example of the mystery disease that afflicted Darwin for over 40 years, for which we now have an answer.

  8. The Concept of Missing Incidents in Persons with Dementia.

    PubMed

    Rowe, Meredeth; Houston, Amy; Molinari, Victor; Bulat, Tatjana; Bowen, Mary Elizabeth; Spring, Heather; Mutolo, Sandra; McKenzie, Barbara

    2015-11-10

    Behavioral symptoms of dementia often present the greatest challenge for informal caregivers. One behavior that is a constant concern for caregivers is the person with dementia leaving a designated area such that their whereabouts become unknown to the caregiver, termed a missing incident. Based on an extensive literature review and published findings of their own research, members of the International Consortium on Wandering and Missing Incidents constructed a preliminary missing-incidents model. Examining the evidence base, specific factors within each category of the model were further described, reviewed and modified until consensus was reached regarding the final model. The model begins to explain, in particular, the variety of antecedents that are related to missing incidents. The model presented in this paper is designed to be heuristic and may be used to stimulate discussion and the development of effective preventative and response strategies for missing incidents among persons with dementia.

  9. Covariant calculation of strange decays of baryon resonances

    SciTech Connect

    Sengl, B.; Melde, T.; Plessas, W.

    2007-09-01

    We present results for kaon decay widths of baryon resonances from a relativistic study with constituent quark models. The calculations are done in the point form of Poincare-invariant quantum mechanics with a spectator-model decay operator. We obtain covariant predictions of the Goldstone-boson-exchange and a variant of the one-gluon-exchange constituent quark models for all kaon decay widths of established baryon resonances. They are generally characterized by underestimating the available experimental data. In particular, the widths of kaon decays with decreasing strangeness in the baryon turn out to be extremely small. We also consider the nonrelativistic limit, leading to the familiar elementary emission model, and demonstrate the importance of relativistic effects. It is found that the nonrelativistic approach evidently misses sensible influences from Lorentz boosts and some essential spin-coupling terms.

  10. Covariate pharmacokinetic model building in oncology and its potential clinical relevance.

    PubMed

    Joerger, Markus

    2012-03-01

    When modeling pharmacokinetic (PK) data, identifying covariates is important in explaining interindividual variability and thus increasing the predictive value of the model. Nonlinear mixed-effects modeling with stepwise covariate modeling is frequently used to build structural covariate models, and the most commonly used software, NONMEM, provides estimates of the fixed-effect parameters (e.g., drug clearance) and of the interindividual and residual random effects. The aim of covariate modeling is not only to find covariates that significantly influence the population PK parameters, but also to provide dosing recommendations for a certain drug under different conditions, e.g., organ dysfunction or combination chemotherapy. A true covariate is usually seen as one that carries unique information on a structural model parameter. Covariate models have improved our understanding of the pharmacology of many anticancer drugs, including busulfan and melphalan, which are part of high-dose pretransplant treatments; the antifolate methotrexate, whose elimination is strongly dependent on GFR and comedication; and the taxanes and tyrosine kinase inhibitors, the latter being subject to cytochrome P450 3A4 (CYP3A4)-associated metabolism. The purpose of this review article is to provide a tool to help understand population covariate analyses and their potential implications for the clinic. Accordingly, several population covariate models are listed and their clinical relevance is discussed. The target audience of this article is clinical oncologists with a special interest in clinical and mathematical pharmacology.
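
    For a concrete flavor of such structural covariate models, a typical clearance submodel combines allometric body-weight scaling with a renal-function term and a log-normal random effect; the sketch below is generic, and all parameter names and values are illustrative rather than taken from any published drug model.

```python
import numpy as np

def clearance(wt, crcl, theta_cl=10.0, theta_crcl=0.5, eta=0.0):
    """CL_i = theta_cl * (WT/70)^0.75 * (CRCL/100)^theta_crcl * exp(eta_i),
    a common power-law covariate parameterization for drug clearance."""
    return theta_cl * (wt / 70.0) ** 0.75 * (crcl / 100.0) ** theta_crcl * np.exp(eta)

# interindividual variability enters as a random effect eta ~ N(0, omega^2)
rng = np.random.default_rng(6)
cl_i = clearance(wt=85.0, crcl=60.0, eta=rng.normal(0.0, 0.3))
```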

  11. Identifying sources of uncertainty using covariance analysis

    NASA Astrophysics Data System (ADS)

    Hyslop, N. P.; White, W. H.

    2010-12-01

    Atmospheric aerosol monitoring often includes performing multiple analyses on a collected sample. Some common analyses resolve suites of elements or compounds (e.g., spectrometry, chromatography). Concentrations are determined through multi-step processes involving sample collection, physical or chemical analysis, and data reduction. Uncertainties in the individual steps propagate into uncertainty in the calculated concentration. The assumption in most treatments of measurement uncertainty is that errors in the various species concentrations measured in a sample are random and therefore independent of each other. This assumption is often not valid in speciated aerosol data because some errors can be common to multiple species. For example, an error in the sample volume will introduce a common error into all species concentrations determined in the sample, and these errors will correlate with each other. Measurement programs often use paired (collocated) measurements to characterize the random uncertainty in their measurements. Suites of paired measurements provide an opportunity to go beyond the characterization of measurement uncertainties in individual species to examine correlations amongst the measurement uncertainties in multiple species. This additional information can be exploited to distinguish sources of uncertainty that affect all species from those that only affect certain subsets or individual species. Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) program are used to illustrate these ideas. Nine analytes commonly detected in the IMPROVE network were selected for this analysis. The errors in these analytes can be reasonably modeled as multiplicative, and the natural log of the ratio of concentrations measured on the two samplers provides an approximation of the error. Figure 1 shows the covariation of these log ratios among the different analytes for one site. Covariance is strongest amongst the dust element (Fe, Ca, and
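
    The collocated log-ratio idea is easy to sketch on synthetic data (the shared "sample volume" error below is an assumption built into the toy generator): off-diagonal structure in the covariance of log ratios separates errors common to all species from species-specific ones.

```python
import numpy as np

# Collocated (paired) sampler concentrations; rows are sampling days, columns
# are species (synthetic stand-ins for IMPROVE analytes).
rng = np.random.default_rng(7)
days, species = 300, 4
truth = rng.lognormal(mean=1.0, sigma=0.8, size=(days, species))
common_a = rng.normal(0.0, 0.05, size=(days, 1))       # shared error, e.g. volume
indiv_a = rng.normal(0.0, 0.05, size=(days, species))  # species-specific error
a = truth * np.exp(common_a + indiv_a)
b = truth * np.exp(rng.normal(0, 0.05, (days, 1)) + rng.normal(0, 0.05, (days, species)))

# multiplicative errors: the log of the collocated ratio approximates the error
log_ratio = np.log(a / b)
err_cov = np.cov(log_ratio, rowvar=False)    # off-diagonals reveal shared errors
err_corr = np.corrcoef(log_ratio, rowvar=False)
```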

  12. Defining a conceptual framework for near-miss maternal morbidity.

    PubMed

    Geller, Stacie E; Rosenberg, Deborah; Cox, Suzanne M; Kilpatrick, Sarah

    2002-01-01

    Maternal mortality is the major indicator used to monitor maternal health in the United States. For every woman who dies, however, many suffer serious life-threatening complications of pregnancy. Yet relatively little attention has been given to identifying a general category of morbidities that could be called near misses. Characterizing near-miss morbidity is valuable for monitoring the quality of hospital-based obstetric care and for assessing the incidence of life-threatening complications. Cases of near-miss morbidity also provide an appropriate comparison group both for clinical case review and for epidemiologic analysis. This paper presents an initial framework and a process for the definition and identification of near-miss morbidity that minimizes loss of information yet has practical utility. A clinical review team classified 22 of 186 women as near misses and 164 as other severe morbidity. A quantitative score classified 28 women as near misses and 156 as other severe morbidity. Precise classification of near-miss morbidity is the first step in analyzing factors that may differentiate survival from death on the continuum from morbidity to mortality. Ultimately, a methodology for the identification and analysis of near-miss morbidity will allow for integrated morbidity and mortality reviews that can then be institutionalized. The results will serve as important models for other researchers, state health agencies, and regionalized perinatal systems that are engaged in morbidity and mortality surveillance.

  13. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions:
    - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)?
    - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit?
    - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit?
    - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit?
    The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1×10⁻⁶. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for
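
    A brute-force numerical sketch of the quantity underlying these relationships (illustrative covariance and hard-body radius; the paper derives closed-form bounds rather than integrating): the 2-D encounter-plane collision probability, scanned over miss distance to find its maximum.

```python
import numpy as np
from scipy.stats import multivariate_normal

def collision_probability(miss, cov, hbr, n=201):
    """Encounter-plane Pc: integrate the zero-mean relative-position Gaussian
    over a disc of radius hbr centred at the miss vector."""
    xs = np.linspace(miss[0] - hbr, miss[0] + hbr, n)
    ys = np.linspace(miss[1] - hbr, miss[1] + hbr, n)
    X, Y = np.meshgrid(xs, ys)
    inside = (X - miss[0]) ** 2 + (Y - miss[1]) ** 2 <= hbr ** 2
    pdf = multivariate_normal([0.0, 0.0], cov).pdf(np.dstack([X, Y]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return float((pdf * inside).sum() * cell)

# maximum Pc over miss distance, for a given covariance and hard-body radius
cov = np.diag([200.0 ** 2, 50.0 ** 2])   # encounter-plane covariance, m^2
hbr = 20.0                               # combined hard-body radius, m
pc = [collision_probability((d, 0.0), cov, hbr) for d in np.linspace(0, 500, 101)]
print(max(pc))   # if below tolerance for all d, the pair can be pre-filtered out
```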

  14. Patient Portals as a Means of Information and Communication Technology Support to Patient-Centric Care Coordination – the Missing Evidence and the Challenges of Evaluation

    PubMed Central

    Georgiou, Andrew; Hyppönen, Hannele; Ammenwerth, Elske; de Keizer, Nicolette; Magrabi, Farah; Scott, Philip

    2015-01-01

    Summary Objectives To review the potential contribution of Information and Communication Technology (ICT) to enable patient-centric and coordinated care, and in particular to explore the role of patient portals as a developing ICT tool, to assess the available evidence, and to describe the evaluation challenges. Methods Reviews of IMIA, EFMI, and other initiatives, together with literature reviews. Results We present the progression from care coordination to care integration, and from patient-centric to person-centric approaches. We describe the different roles of ICT as an enabler of the effective presentation of information as and when needed. We focus on the patient’s role as a co-producer of health as well as the focus and purpose of care. We discuss the need for changing organisational processes as well as the current mixed evidence regarding patient portals as a logical tool, and the reasons for this dichotomy, together with the evaluation principles supported by theoretical frameworks so as to yield robust evidence. Conclusions There is expressed commitment to coordinated care and to putting the patient in the centre. However to achieve this, new interactive patient portals will be needed to enable peer communication by all stakeholders including patients and professionals. Few portals capable of this exist to date. The evaluation of these portals as enablers of system change, rather than as simple windows into electronic records, is at an early stage and novel evaluation approaches are needed. PMID:26123909

  15. Group Theory of Covariant Harmonic Oscillators

    ERIC Educational Resources Information Center

    Kim, Y. S.; Noz, Marilyn E.

    1978-01-01

    A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…

  16. Quality Quantification of Evaluated Cross Section Covariances

    SciTech Connect

    Varet, S.; Dossantos-Uzarralde, P.

    2015-01-15

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the ⁸⁵Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
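
    For covariance estimates interpreted as zero-mean Gaussians, the Kullback-Leibler distance has a closed form; a small sketch, with synthetic matrices standing in for competing evaluations:

```python
import numpy as np

def kl_gaussian(C0, C1, jitter=0.0):
    """Kullback-Leibler distance KL(N(0, C0) || N(0, C1)) between two
    zero-mean Gaussians, used here to compare covariance estimates."""
    k = C0.shape[0]
    C1_inv = np.linalg.inv(C1 + jitter * np.eye(k))
    _, logdet0 = np.linalg.slogdet(C0)
    _, logdet1 = np.linalg.slogdet(C1)
    return 0.5 * (np.trace(C1_inv @ C0) - k + logdet1 - logdet0)

# distance between a "true" covariance and a noisy sample estimate of it
rng = np.random.default_rng(8)
A = rng.normal(size=(10, 10))
C_true = A @ A.T + 10 * np.eye(10)
samples = rng.multivariate_normal(np.zeros(10), C_true, size=200)
C_hat = np.cov(samples, rowvar=False)
print(kl_gaussian(C_hat, C_true))
```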

  17. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
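
    A toy version of this setting, assuming simple stable linear discrete-time dynamics and a low-rank forcing covariance: accumulate the steady-state response covariance and check how much of it a rank-r representation captures.

```python
import numpy as np

# Steady-state error covariance of x_{t+1} = A x_t + w_t, w ~ N(0, Q):
# P = sum_k A^k Q (A^k)^T, approximated by truncating the sum, followed by
# a low-rank representation keeping the leading eigenpairs.
rng = np.random.default_rng(9)
n = 80
A = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # stable toy dynamics
L = rng.normal(size=(n, 3))
Q = L @ L.T                                           # low-rank forcing covariance

P = np.zeros((n, n))
Ak = np.eye(n)
for _ in range(500):
    P += Ak @ Q @ Ak.T
    Ak = A @ Ak

vals, vecs = np.linalg.eigh(P)
r = 10
P_r = vecs[:, -r:] @ np.diag(vals[-r:]) @ vecs[:, -r:].T   # rank-r representation
print(vals[-r:].sum() / vals.sum())   # fraction of total variance captured
```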

  18. missForest: Nonparametric missing value imputation using random forest

    NASA Astrophysics Data System (ADS)

    Stekhoven, Daniel J.

    2015-05-01

    missForest imputes missing values particularly in the case of mixed-type data. It uses a random forest trained on the observed values of a data matrix to predict the missing values. It can be used to impute continuous and/or categorical data including complex interactions and non-linear relations. It yields an out-of-bag (OOB) imputation error estimate without the need of a test set or elaborate cross-validation and can be run in parallel to save computation time. missForest has been used to, among other things, impute variable star colors in an All-Sky Automated Survey (ASAS) dataset of variable stars with no NOMAD match.
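
    missForest itself is an R package; a Python analogue that is close in spirit (though not an exact reimplementation) chains scikit-learn's IterativeImputer with a random-forest regressor:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Toy table of star "colors" with values missing at random (synthetic data)
rng = np.random.default_rng(10)
X = rng.normal(size=(500, 4))
X[:, 3] = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)
mask = rng.random(X.shape) < 0.15
X_miss = np.where(mask, np.nan, X)

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0)
X_imp = imputer.fit_transform(X_miss)
print(np.abs(X_imp[mask] - X[mask]).mean())   # error against the held-out truth
```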

  19. Covariance matrices for use in criticality safety predictability studies

    SciTech Connect

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for ²³⁵U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the ²³⁵U resonance parameters.

  20. Treatment decisions based on scalar and functional baseline covariates.

    PubMed

    Ciarleglio, Adam; Petkova, Eva; Ogden, R Todd; Tarpey, Thaddeus

    2015-12-01

    The amount and complexity of patient-level data being collected in randomized-controlled trials offer both opportunities and challenges for developing personalized rules for assigning treatment for a given disease or ailment. For example, trials examining treatments for major depressive disorder are not only collecting typical baseline data such as age, gender, or scores on various tests, but also data that measure the structure and function of the brain such as images from magnetic resonance imaging (MRI), functional MRI (fMRI), or electroencephalography (EEG). These latter types of data have an inherent structure and may be considered as functional data. We propose an approach that uses baseline covariates, both scalars and functions, to aid in the selection of an optimal treatment. In addition to providing information on which treatment should be selected for a new patient, the estimated regime has the potential to provide insight into the relationship between treatment response and the set of baseline covariates. Our approach can be viewed as an extension of "advantage learning" to include both scalar and functional covariates. We describe our method and how to implement it using existing software. Empirical performance of our method is evaluated with simulated data in a variety of settings and also applied to data arising from a study of patients with major depressive disorder from whom baseline scalar covariates as well as functional data from EEG are available.

  1. Human oscillatory activity in near-miss events.

    PubMed

    Alicart, Helena; Cucurell, David; Mas-Herrero, Ernest; Marco-Pallarés, Josep

    2015-10-01

    Near-miss events are situations in which an action yields a negative result but is very close to being successful. They are known to influence behavior, especially in gambling scenarios. Previous neuroimaging studies have described an 'anomalous' activity of brain reward areas following these events. The goal of the present research was to study electrophysiological correlates of near-misses in the expectation and outcome phases. Electroencephalography was recorded while participants were playing a simplified version of a slot machine. Four possible outcomes (gain, near-miss, loss and no-information) were presented in a pseudorandom order to ensure fixed proportions. Results from the time-frequency analysis for the theta (4-8 Hz), alpha (9-13 Hz), low beta (15-22 Hz) and beta-gamma (25-35 Hz) frequency bands showed larger power increases for wins and near-misses than for losses. In the anticipation phase, power changes were lower than in the resolution phase. The current results are in agreement with previous studies showing that near-miss events recruit brain areas of the reward network. Likewise, the oscillatory activity in near-misses is very similar to that elicited in the gain condition. In addition, the present findings suggest that oscillatory activity in the expectation phase does not play a crucial role in near-miss events.

  2. Kettlewell's Missing Evidence.

    ERIC Educational Resources Information Center

    Allchin, Douglas Kellogg

    2002-01-01

    The standard textbook account of Kettlewell and the peppered moths omits significant information. Suggests that this case can be used to reflect on the role of simplification in science teaching. (Author/MM)

  3. Computation of the factorized error covariance of the difference between correlated estimators

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Mohan, Srinivas N.; Stienon, Francis M.; Bierman, Gerald J.

    1990-01-01

    A state estimation problem where some of the measurements may be common to two or more data sets is considered. Two approaches for computing the error covariance of the difference between filtered estimates (for each data set) are discussed. The first algorithm is based on postprocessing of the Kalman gain profiles of two correlated estimators. It uses UD factors of the covariance of the relative error. The second algorithm uses a square root information filter applied to relative error analysis. In the absence of process noise, the square root information filter is computationally more efficient and more flexible than the Kalman gain (covariance update) method. Both algorithms (covariance-based and information-matrix-based) are applied to a Venus orbiter simulation, and their performances are compared.
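
    The key quantity in both algorithms is the covariance of the relative error; when two estimators share measurements, the cross term cannot be neglected: Cov(x1 - x2) = P1 + P2 - C12 - C12^T. A minimal numpy illustration with made-up numbers (the paper's UD-factored and square-root information implementations are not reproduced here):

```python
# Relative-error covariance of two correlated estimators: the shared
# measurements induce the cross-covariance C12, which must be removed.
import numpy as np

P1 = np.array([[2.0, 0.3], [0.3, 1.0]])   # error covariance, estimator 1
P2 = np.array([[1.5, 0.2], [0.2, 0.8]])   # error covariance, estimator 2
C12 = np.array([[0.9, 0.1], [0.1, 0.4]])  # cross-covariance (toy values)

P_diff = P1 + P2 - C12 - C12.T            # covariance of the difference
```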

  4. Semiparametric Estimation of Treatment Effect with Time-Lagged Response in the Presence of Informative Censoring

    PubMed Central

    Lu, Xiaomin; Tsiatis, Anastasios A.

    2011-01-01

    In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient’s response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates. PMID:21706378

  5. Semiparametric estimation of treatment effect with time-lagged response in the presence of informative censoring.

    PubMed

    Lu, Xiaomin; Tsiatis, Anastasios A

    2011-10-01

    In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient's response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates. PMID:21706378

  6. A Comet's Missing Light

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-05-01

    On 28 November 2013, comet C/2012 S1, better known as comet ISON, should have passed within two solar radii of the Sun's surface as it reached perihelion in its orbit. But instead of shining in extreme ultraviolet (EUV) wavelengths as it grazed the solar surface, the comet was never detected by EUV instruments. What happened to comet ISON? Missing Emission: When a sungrazing comet passes through the solar corona, it leaves behind a trail of molecules evaporated from its surface. Some of these molecules emit EUV light, which can be detected by instruments on telescopes like the space-based Solar Dynamics Observatory (SDO). Comet ISON, a comet that arrived from deep space and was predicted to graze the Sun's corona in November 2013, was expected to cause EUV emission during its close passage. But analysis of the data from multiple telescopes that tracked ISON in EUV, including SDO, reveals no sign of it at perihelion. In a recent study, Paul Bryans and Dean Pesnell, scientists from NCAR's High Altitude Observatory and NASA Goddard Space Flight Center, try to determine why ISON didn't display this expected emission. Comparing ISON and Lovejoy: In December 2011, another comet dipped into the Sun's corona: comet Lovejoy. [Figure: a composite of SDO images of the pre- and post-perihelion phases of Lovejoy's orbit around the Sun; the dashed part of the curve represents where Lovejoy passed out of view behind the Sun. Bryans & Pesnell 2016] This is not the first time we've watched a sungrazing comet with EUV-detecting telescopes: comet Lovejoy passed similarly close to the Sun in December 2011. But when Lovejoy grazed the solar corona, it emitted brightly in EUV. So why didn't ISON? Bryans and Pesnell argue that there are two possibilities: the coronal conditions experienced by the two comets were not similar, or the two comets themselves were not similar. To establish which factor is the most relevant, the authors first demonstrate that both

  7. Epigenetic Contribution to Covariance Between Relatives

    PubMed Central

    Tal, Omri; Kisdi, Eva; Jablonka, Eva

    2010-01-01

    Recent research has pointed to the ubiquity and abundance of between-generation epigenetic inheritance. This research has implications for assessing disease risk and the responses to ecological stresses and also for understanding evolutionary dynamics. An important step toward a general evaluation of these implications is the identification and estimation of the amount of heritable, epigenetic variation in populations. While methods for modeling the phenotypic heritable variance contributed by culture have already been developed, there are no comparable methods for nonbehavioral epigenetic inheritance systems. By introducing a model that takes epigenetic transmissibility (the probability of transmission of ancestral phenotypes) and environmental induction into account, we provide novel expressions for covariances between relatives. We have combined a classical quantitative genetics approach with information about the number of opportunities for epigenetic reset between generations and assumptions about environmental induction to estimate the heritable epigenetic variance and epigenetic transmissibility for both asexual and sexual populations. This assists us in the identification of phenotypes and populations in which epigenetic transmission occurs and enables a preliminary quantification of their transmissibility, which could then be followed by genomewide association and QTL studies. PMID:20100941

  8. Missing gene identification using functional coherence scores

    PubMed Central

    Chitale, Meghana; Khan, Ishita K.; Kihara, Daisuke

    2016-01-01

    Reconstructing metabolic and signaling pathways is an effective way of interpreting a genome sequence. A challenge in pathway reconstruction is that often genes in a pathway cannot be easily found, reflecting currently imperfect information about the target organism. In this work, we developed a new method for finding missing genes, which integrates multiple features, including gene expression, phylogenetic profile, and function association scores. In particular, for considering function association between candidate genes and proteins neighboring the target missing gene in the network, we used the Co-occurrence Association Score (CAS) and the PubMed Association Score (PAS), which are designed to capture functional coherence of proteins. We showed that adding CAS and PAS substantially improves the accuracy of identifying missing genes in the yeast enzyme-enzyme network compared with using only the conventional features (gene expression and phylogenetic profile). Finally, it was also demonstrated that the accuracy improves by considering indirect neighbors to the target enzyme position in the network using a proper network-topology-based weighting scheme. PMID:27552989

  9. Determination of Resonance Parameters and their Covariances from Neutron Induced Reaction Cross Section Data

    SciTech Connect

    Schillebeeckx, P.; Becker, B.; Danon, Y.; Guber, K.; Harada, H.; Heyse, J.; Junghans, A.R.; Kopecky, S.; Massimi, C.; Moxon, M.C.; Otuka, N.; Sirakov, I.; Volev, K.

    2012-12-15

    Cross section data in the resolved and unresolved resonance region are represented by nuclear reaction formalisms using parameters which are determined by fitting them to experimental data. Therefore, the quality of evaluated cross sections in the resonance region strongly depends on the experimental data used in the adjustment process and an assessment of the experimental covariance data is of primary importance in determining the accuracy of evaluated cross section data. In this contribution, uncertainty components of experimental observables resulting from total and reaction cross section experiments are quantified by identifying the metrological parameters involved in the measurement, data reduction and analysis process. In addition, different methods that can be applied to propagate the covariance of the experimental observables (i.e. transmission and reaction yields) to the covariance of the resonance parameters are discussed and compared. The methods being discussed are: conventional uncertainty propagation, Monte Carlo sampling and marginalization. It is demonstrated that the final covariance matrix of the resonance parameters not only strongly depends on the type of experimental observables used in the adjustment process, the experimental conditions and the characteristics of the resonance structure, but also on the method that is used to propagate the covariances. Finally, a special data reduction concept and format is presented, which offers the possibility to store the full covariance information of experimental data in the EXFOR library and provides the information required to perform a full covariance evaluation.
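
    Of the three propagation methods compared, Monte Carlo sampling is the easiest to sketch: draw parameter sets from the experimental covariance, evaluate the model for each draw, and take the sample covariance of the outputs. The toy model below is an invented stand-in for a reaction-yield calculation:

```python
# Sketch of Monte Carlo covariance propagation: sample parameters from
# their covariance, evaluate a (here, toy) model, and form the sample
# covariance of the outputs. A nonlinear model is where this departs
# from first-order ("sandwich") propagation.
import numpy as np

rng = np.random.default_rng(3)
p_mean = np.array([1.0, 0.5])
C_p = np.array([[0.04, 0.01], [0.01, 0.02]])

def model(p):
    # invented stand-in for a reaction-yield calculation; nonlinear
    return np.array([p[0] + p[1], p[0] * p[1], p[0] ** 2])

samples = rng.multivariate_normal(p_mean, C_p, size=10_000)
outputs = np.array([model(p) for p in samples])
C_out = np.cov(outputs, rowvar=False)      # Monte Carlo output covariance
```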

  10. Covariance Spectroscopy for Fissile Material Detection

    SciTech Connect

    Rusty Trainham, Jim Tinsley, Paul Hurley, Ray Keegan

    2009-06-02

    Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of the radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations could also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
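
    A hedged sketch of the covariance-filtering idea: counts in two detector channels are accumulated over many short windows, and their sample covariance retains the causally connected emissions while uncorrelated background averages toward zero covariance. The Poisson rates below are invented for illustration:

```python
# Covariance filtering between two detector channels: the shared
# (correlated) source component survives in the sample covariance,
# while independent backgrounds contribute essentially nothing.
import numpy as np

rng = np.random.default_rng(4)
n_windows = 50_000
source = rng.poisson(0.2, n_windows)         # correlated emission bursts
ch_a = source + rng.poisson(1.0, n_windows)  # channel A: signal + background
ch_b = source + rng.poisson(1.5, n_windows)  # channel B: signal + background

cov_ab = np.cov(ch_a, ch_b)[0, 1]            # ~Var(source) > 0 for SNM-like data
```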

  11. Estimating missing tensor data by face synthesis for expression recognition

    NASA Astrophysics Data System (ADS)

    Tan, Huachun; Chen, Hao; Zhang, Jie

    2009-01-01

    In this paper, a new method of facial expression recognition is proposed for missing tensor data. In this method, the missing tensor data are estimated by facial expression synthesis in order to construct the full tensor, which is used for multi-factorization face analysis. The full tensor allows for the full use of the information in a given database, and hence improves the performance of face analysis. Compared with the EM algorithm for missing data estimation, the proposed method avoids the iterative process and reduces the estimation complexity. The proposed missing tensor data estimation is applied to expression recognition. The experimental results show that the proposed method performs better than using only the original smaller tensor.

  12. Missing Teeth Predict Incident Cardiovascular Events, Diabetes, and Death.

    PubMed

    Liljestrand, J M; Havulinna, A S; Paju, S; Männistö, S; Salomaa, V; Pussinen, P J

    2015-08-01

    Periodontitis, the main cause of tooth loss in the middle-aged and elderly, associates with the risk of atherosclerotic vascular disease. The objective was to study the capability of the number of missing teeth in predicting incident cardiovascular diseases (CVDs), diabetes, and all-cause death. The National FINRISK 1997 Study is a Finnish population-based survey of 8,446 subjects with 13 y of follow-up. Dental status was recorded at baseline in a clinical examination by a trained nurse, and information on incident CVD events, diabetes, and death was obtained via national registers. The registered CVD events included coronary heart disease events, acute myocardial infarction, and stroke. In Cox regression analyses, having ≥5 teeth missing was associated with 60% to 140% increased hazard for incident coronary heart disease events (P < 0.020) and acute myocardial infarction (P < 0.010). Incident CVD (P < 0.043), diabetes (P < 0.040), and death of any cause (P < 0.019) were associated with ≥9 missing teeth. No association with stroke was observed. Adding information on missing teeth to established risk factors improved risk discrimination of death (P = 0.0128) and provided a statistically significant net reclassification improvement for all studied end points. Even a few missing teeth may indicate an increased risk of CVD, diabetes, or all-cause mortality. When individual risk factors for chronic diseases are assessed, the number of missing teeth could be a useful additional indicator for general medical practitioners.

  13. Phase-covariant quantum cloning of qudits

    SciTech Connect

    Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin

    2003-02-01

    We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between the input state and the single-qudit reduced density operator of the output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.

  14. Covariate analysis of bivariate survival data

    SciTech Connect

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  15. Covariant action for type IIB supergravity

    NASA Astrophysics Data System (ADS)

    Sen, Ashoke

    2016-07-01

    Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.

  16. Noncommutative Gauge Theory with Covariant Star Product

    SciTech Connect

    Zet, G.

    2010-08-04

    We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.

  17. Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency

    PubMed Central

    2013-01-01

    Background The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by ‘semi-landmarks’ alongside well-defined landmarks into the analyses is still missing. Nor has a statistical treatment of measurement error been integrated into current approaches. Results We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self-consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias, induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure (‘ghost points’) can then be used in any further downstream statistical analysis. Conclusions Our approach provides a consistent way of including different forms of landmarks into an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data, and in particular, for the inclusion of information from surfaces represented by multiple landmark points. PMID:23548043

  18. Some thoughts on positive definiteness in the consideration of nuclear data covariance matrices

    SciTech Connect

    Geraldo, L.P.; Smith, D.L.

    1988-01-01

    Some basic mathematical features of covariance matrices are reviewed, particularly as they relate to the property of positive definiteness. Physical implications of positive definiteness are also discussed. Consideration is given to an examination of the origins of non-positive-definite matrices, to procedures which encourage the generation of positive definite matrices, and to the testing of covariance matrices for positive definiteness. Attention is also given to certain problems associated with the construction of covariance matrices using information which is obtained from evaluated data files recorded in the ENDF format. Examples are provided to illustrate key points pertaining to each of the topic areas covered.
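
    One practical test of positive definiteness, consistent with the procedures discussed above, is to attempt a Cholesky factorization, which succeeds exactly when a symmetric matrix is positive definite. A minimal sketch:

```python
# Numerical test for positive definiteness: Cholesky factorization of a
# symmetric matrix succeeds iff the matrix is positive definite.
import numpy as np

def is_positive_definite(C, sym_tol=1e-10):
    if not np.allclose(C, C.T, atol=sym_tol):
        return False                      # a covariance must be symmetric
    try:
        np.linalg.cholesky(C)
        return True
    except np.linalg.LinAlgError:
        return False

C_good = np.array([[1.0, 0.5], [0.5, 1.0]])
C_bad = np.array([[1.0, 1.5], [1.5, 1.0]])  # "correlation" > 1: not PD
print(is_positive_definite(C_good), is_positive_definite(C_bad))
```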

  19. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    NASA Astrophysics Data System (ADS)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining gray intensity and gradient information of the infrared target is extracted as the feature representation. Then, because covariance descriptors lie on a non-Euclidean manifold, kernel sparse coding theory is used to handle this structure. We verify the efficacy of the proposed algorithm in terms of the confusion matrices on real images consisting of seven categories of infrared vehicle targets.
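
    A hedged sketch of the descriptor step: per-pixel features (gray intensity and its gradients) are stacked and their covariance over the target window forms a small symmetric positive-definite matrix. The feature set below is illustrative, not necessarily the paper's exact choice:

```python
# Region covariance descriptor: the covariance of per-pixel features
# (intensity and gradients) over a window is a small SPD matrix that
# summarizes the region.
import numpy as np

def covariance_descriptor(patch):
    gy, gx = np.gradient(patch.astype(float))            # image gradients
    feats = np.stack([patch.ravel(), gx.ravel(), gy.ravel()])  # 3 x N
    return np.cov(feats)                                 # 3 x 3 descriptor

patch = np.random.default_rng(5).random((32, 32))        # toy IR window
D = covariance_descriptor(patch)
```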

  20. Lorentz covariance of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Speziale, Simone

    2011-05-01

    The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the projected spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction on SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This clarifies how the SL(2,C) spinfoam formalism yields an SU(2) theory on the boundary. These structures define a tidy Lorentz-covariant formalism for loop gravity.

  1. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  2. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket-tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP), which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information in the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates, namely temperature, sea-level pressure, and relative humidity, that are thought to affect the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.

  3. Cross-Section Covariance Data Processing with the AMPX Module PUFF-IV

    SciTech Connect

    Wiarda, Dorothea; Leal, Luiz C; Dunn, Michael E

    2011-01-01

    The ENDF community is endeavoring to release an updated version of the ENDF/B-VII library (ENDF/B-VII.1). In the new release several new evaluations containing covariance information have been added, as the community strives to add covariance information for use in programs like the TSUNAMI (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation) sequence of SCALE (Ref 1). The ENDF/B formatted files are processed into libraries to be used in transport calculations using the AMPX code system (Ref 2) or the NJOY code system (Ref 3). Both codes contain modules to process covariance matrices: PUFF-IV for AMPX and ERRORR in the case of NJOY. While the cross section processing capability between the two code systems has been widely compared, the same is not true for the covariance processing. This paper compares the results for the two codes using the pre-release version of ENDF/B-VII.1.

  4. Implementation of optimal phase-covariant cloning machines

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2007-07-15

    The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for the universal cloning. The present article first analyzes different innovative schemes to implement the 1→3 PQCM. The method is then generalized to any 1→M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally different experimental schemes based either on linear or nonlinear methods and valid for single photon polarization encoded qubits are discussed.

  5. Estimated Environmental Exposures for MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Normand, Eugene; Finckenor, Miria

    2008-01-01

    Both modeling techniques and a variety of measurements and observations were used to characterize the environmental conditions experienced by the specimens flown on the MISSE-3 (Materials International Space Station Experiment) and MISSE-4 space flight experiments. On August 3, 2006, astronauts Jeff Williams and Thomas Reiter attached MISSE-3 and -4 to the Quest airlock on ISS, where these experiments were exposed to atomic oxygen (AO), ultraviolet (UV) radiation, particulate radiation, thermal cycling, meteoroid/space debris impact, and the induced environment of an active space station. They had been flown to ISS during the July 2006 STS-121 mission. The two suitcases were oriented so that one side faced the ram direction and one side remained shielded from the atomic oxygen. On August 18, 2007, astronauts Clay Anderson and Dave Williams retrieved MISSE-3 and -4 and returned them to Earth at the end of the STS-118 mission. Quantitative values are provided when possible for selected environmental factors. A meteoroid/debris impact survey was performed prior to de-integration at Langley Research Center. AO fluences were calculated based on mass loss and thickness loss of thin polymeric films of known AO reactivity. Radiation was measured with thermoluminescent detectors. Visual inspections under ambient and "black-light" at NASA LaRC, together with optical measurements on selected specimens, were the basis for the initial contamination level assessment.

  6. Covariation in the human masticatory apparatus.

    PubMed

    Noback, Marlijn L; Harvati, Katerina

    2015-01-01

    Many studies have described shape variation of the modern human cranium in relation to subsistence; however, patterns of covariation within the masticatory apparatus (MA) remain largely unexplored. The patterns and intensity of shape covariation, and how this is related to diet, are essential for understanding the evolution of functional masticatory adaptations of the human cranium. Within a worldwide sample (n = 255) of 15 populations with different modes of subsistence, we use partial least squares analysis to study the relationships between three components of the MA: upper dental arch, masseter muscle, and temporalis muscle attachments. We show that the shape of the masseter muscle and the shape of the temporalis muscle clearly covary with one another, but that the shape of the dental arch seems to be rather independent of the masticatory muscles. On the contrary, when relative positioning, orientation, and size of the masticatory components is included in the analysis, the dental arch shows the highest covariation with the other cranial parts, indicating that these additional factors are more important than just shape with regard to covariation within the MA. Covariation patterns among these cranial regions differ mainly between hunting-fishing and gathering-agriculture groups, possibly relating to greater masticatory strains resulting from a large meat component in the diet. High-strain groups show stronger covariation between upper dental arch and masticatory muscle shape when compared with low-strain groups. These results help to provide a clearer understanding of constraints and interlinkage of shape variation within the human MA and allow for more realistic modeling and predictions in future biomechanical studies.

  7. Construction and use of gene expression covariation matrix

    PubMed Central

    Hennetin, Jérôme; Pehkonen, Petri; Bellis, Michel

    2009-01-01

    Conclusion: This new method, applied to four different large data sets, has allowed us to construct distinct covariation matrices with similar properties. We have also developed a technique to translate these covariation networks into graphical 3D representations and found that the local assignation of the probe sets was conserved across the four chip set models used, which encompass three different species (humans, mice, and rats). The application of adapted clustering methods succeeded in delineating six conserved functional regions that we characterized using Gene Ontology information. PMID:19594909

  8. Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Anderson, D. J.; Amen, J. L.

    2007-12-01

    received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on theory of the method (including state of methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non-traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.

  9. Variable selection for covariate-adjusted semiparametric inference in randomized clinical trials

    PubMed Central

    Yuan, Shuai; Zhang, Hao Helen; Davidian, Marie

    2013-01-01

    Extensive baseline covariate information is routinely collected on participants in randomized clinical trials, and it is well-recognized that a proper covariate-adjusted analysis can improve the efficiency of inference on the treatment effect. However, such covariate adjustment has engendered considerable controversy, as post hoc selection of covariates may involve subjectivity and lead to biased inference, while prior specification of the adjustment may exclude important variables from consideration. Accordingly, how to select covariates objectively to gain maximal efficiency is of broad interest. We propose and study the use of modern variable selection methods for this purpose in the context of a semiparametric framework, under which variable selection in modeling the relationship between outcome and covariates is separated from estimation of the treatment effect, circumventing the potential for selection bias associated with standard analysis of covariance methods. We demonstrate that such objective variable selection techniques combined with this framework can identify key variables and lead to unbiased and efficient inference on the treatment effect. A critical issue in finite samples is validity of estimators of uncertainty, such as standard errors and confidence intervals for the treatment effect. We propose an approach to estimation of sampling variation of estimated treatment effect and show its superior performance relative to that of existing methods. PMID:22733628
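
    As a hedged illustration of the separation idea (not the authors' exact semiparametric estimator), covariates can be selected by a lasso fit of outcome on baseline covariates alone, with the treatment effect then estimated from the adjusted residuals:

```python
# Sketch: objective covariate selection in a randomized trial. A lasso
# models outcome vs. baseline covariates (no treatment term); the
# treatment effect is then estimated from the adjusted residuals. This
# illustrates the separation idea only, not the paper's estimator.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
n, p = 400, 30
X = rng.normal(size=(n, p))                # baseline covariates
A = rng.integers(0, 2, size=n)             # randomized assignment
y = 0.5 * A + X[:, 0] - 0.7 * X[:, 1] + rng.normal(size=n)

resid = y - LassoCV(cv=5).fit(X, y).predict(X)
effect = resid[A == 1].mean() - resid[A == 0].mean()  # adjusted estimate
```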

  10. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis that assumes a linear effect of the covariate on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect is approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and the significance level was set at 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean square error than the other methods in selecting the smoothing parameter. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809
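
    A hedged Python sketch of fitting a nonlinear covariate effect through a spline basis in a Cox model (the study itself used R; patsy and lifelines are assumed stand-ins here, and all data are simulated):

```python
# Sketch: spline basis for a nonlinear covariate effect in a Cox model,
# using patsy's cubic regression spline basis and lifelines' CoxPHFitter.
import numpy as np
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 300
x = rng.uniform(-2, 2, n)
time = rng.exponential(np.exp(-(x ** 2)) + 0.1)   # nonlinear hazard in x
event = (rng.random(n) < 0.8).astype(int)

basis = dmatrix("cr(x, df=4) - 1", {"x": x}, return_type="dataframe")
df = pd.concat([basis, pd.DataFrame({"time": time, "event": event})], axis=1)

cph = CoxPHFitter(penalizer=0.1)   # mild ridge penalty on spline coefficients
cph.fit(df, duration_col="time", event_col="event")
```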

  11. Upper and lower covariance bounds for perturbed linear systems

    NASA Technical Reports Server (NTRS)

    Xu, J.-H.; Skelton, R. E.; Zhu, G.

    1990-01-01

    Both upper and lower bounds are established for state covariance matrices under parameter perturbations of the plant. The motivation for this study lies in the fact that many robustness properties of linear systems are given explicitly in terms of the state covariance matrix. Moreover, there exists a theory for control by covariance assignment. The results provide robustness properties of these covariance controllers.

  12. Defining habitat covariates in camera-trap based occupancy studies.

    PubMed

    Niedballa, Jürgen; Sollmann, Rahel; bin Mohamed, Azlan; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10-500 m around sample points) on estimates of occupancy patterns of six small to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations.

  13. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  14. The Board's missing link.

    PubMed

    Montgomery, Cynthia A; Kaufman, Rhonda

    2003-03-01

    If a dam springs several leaks, there are various ways to respond. One could assiduously plug the holes, for instance. Or one could correct the underlying weaknesses, a more sensible approach. When it comes to corporate governance, for too long we have relied on the first approach. But the causes of many governance problems lie well below the surface--specifically, in critical relationships that are not structured to support the players involved. In other words, the very foundation of the system is flawed. And unless we correct the structural problems, surface changes are unlikely to have a lasting impact. When shareholders, management, and the board of directors work together as a system, they provide a powerful set of checks and balances. But the relationship between shareholders and directors is fraught with weaknesses, undermining the entire system's equilibrium. As the authors explain, the exchange of information between these two players is poor. Directors, though elected by shareholders to serve as their agents, aren't individually accountable to the investors. And shareholders--for a variety of reasons--have failed to exert much influence over boards. In the end, directors are left with the Herculean task of faithfully representing shareholders whose preferences are unclear, and shareholders have little say about who represents them and few mechanisms through which to create change. The authors suggest several ways to improve the relationship between shareholders and directors: Increase board accountability by recording individual directors' votes on key corporate resolutions; separate the positions of chairman and CEO; reinvigorate shareholders; and give boards funding to pay for outside experts who can provide perspective on crucial issues. PMID:12632807

  15. The Board's missing link.

    PubMed

    Montgomery, Cynthia A; Kaufman, Rhonda

    2003-03-01

    If a dam springs several leaks, there are various ways to respond. One could assiduously plug the holes, for instance. Or one could correct the underlying weaknesses, a more sensible approach. When it comes to corporate governance, for too long we have relied on the first approach. But the causes of many governance problems lie well below the surface--specifically, in critical relationships that are not structured to support the players involved. In other words, the very foundation of the system is flawed. And unless we correct the structural problems, surface changes are unlikely to have a lasting impact. When shareholders, management, and the board of directors work together as a system, they provide a powerful set of checks and balances. But the relationship between shareholders and directors is fraught with weaknesses, undermining the entire system's equilibrium. As the authors explain, the exchange of information between these two players is poor. Directors, though elected by shareholders to serve as their agents, aren't individually accountable to the investors. And shareholders--for a variety of reasons--have failed to exert much influence over boards. In the end, directors are left with the Herculean task of faithfully representing shareholders whose preferences are unclear, and shareholders have little say about who represents them and few mechanisms through which to create change. The authors suggest several ways to improve the relationship between shareholders and directors: Increase board accountability by recording individual directors' votes on key corporate resolutions; separate the positions of chairman and CEO; reinvigorate shareholders; and give boards funding to pay for outside experts who can provide perspective on crucial issues.

  16. Monitoring: The missing piece

    SciTech Connect

    Bjorkland, Ronald

    2013-11-15

    The U.S. National Environmental Policy Act (NEPA) of 1969 ushered in an era of more robust attention to environmental impacts resulting from larger scale federal projects. The number of other countries that have adopted NEPA's framework is evidence of the appeal of this type of environmental legislation. Mandates to review environmental impacts, identify alternatives, and provide mitigation plans before commencement of the project are at the heart of NEPA. Such project reviews have resulted in the development of a vast number of reports and large volumes of project-specific data that potentially can be used to better understand the components and processes of the natural environment and provide guidance for improved and efficient environmental protection. However, the environmental assessment (EA), or the more robust and intensive environmental impact statement (EIS) that is required for most major projects, is more often than not developed to satisfy the procedural aspects of the NEPA legislation while failing to provide the needed guidance for improved decision-making. While NEPA legislation recommends monitoring of project activities, this activity is not mandated, and in those situations where it has been incorporated, the monitoring showed that the EIS was inaccurate in direction and/or magnitude of the impact. Many reviews of NEPA have suggested that monitoring of all project phases, from design through decommissioning, should be incorporated. Information gathered through a well-developed monitoring program can be managed in databases and would benefit not only the specific project but also provide guidance on how to better design and implement future activities intended to protect and enhance the natural environment. -- Highlights: • NEPA statutes created a profound environmental protection legislative framework. • Contrary to intent, NEPA does not provide for definitive project monitoring. • Robust project monitoring is essential for enhanced

  17. Valid Monte Carlo permutation tests for genetic case-control studies with missing genotypes.

    PubMed

    Kinnamon, Daniel D; Martin, Eden R

    2014-05-01

    Monte Carlo permutation tests can be formally constructed by choosing a set of permutations of individual indices and a real-valued test statistic measuring the association between genotypes and affection status. In this paper, we develop a rigorous theoretical framework for verifying the validity of these tests when there are missing genotypes. We begin by specifying a nonparametric probability model for the observed genotype data in a genetic case-control study with unrelated subjects. Under this model and some minimal assumptions about the test statistic, we establish that the resulting Monte Carlo permutation test is exact level α if (1) the chosen set of permutations of individual indices is a group under composition and (2) the distribution of the observed genotype score matrix under the null hypothesis does not change if the assignment of individuals to rows is shuffled according to an arbitrary permutation in this set. We apply these conditions to show that frequently used Monte Carlo permutation tests based on the set of all permutations of individual indices are guaranteed to be exact level α only for missing data processes satisfying a rather restrictive additional assumption. However, if the missing data process depends on covariates that are all identified and recorded, we also show that Monte Carlo permutation tests based on the set of permutations within strata of individuals with identical covariate values are exact level α. Our theoretical results are verified and supplemented by simulations for a variety of missing data processes and test statistics.
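
    A minimal sketch of the stratified construction described above, with a hypothetical mean-difference statistic: status labels are permuted only within strata of identical covariate values, and the observed statistic is counted in the reference set so that the test is exact:

```python
# Stratified Monte Carlo permutation test: affection status is shuffled
# only within covariate strata, preserving exact level alpha under
# covariate-dependent missingness (per the paper's conditions).
import numpy as np

def stratified_perm_pvalue(stat, geno, status, strata, n_perm=9999, seed=8):
    rng = np.random.default_rng(seed)
    observed = stat(geno, status)
    count = 1                                   # include observed statistic
    for _ in range(n_perm):
        perm = status.copy()
        for s in np.unique(strata):
            idx = np.where(strata == s)[0]
            perm[idx] = rng.permutation(perm[idx])
        count += stat(geno, perm) >= observed
    return count / (n_perm + 1)

# Hypothetical statistic and toy data for illustration.
stat = lambda g, y: abs(g[y == 1].mean() - g[y == 0].mean())
rng = np.random.default_rng(9)
geno = rng.integers(0, 3, 200).astype(float)    # genotype scores 0/1/2
status = rng.integers(0, 2, 200)                # case/control labels
strata = rng.integers(0, 4, 200)                # covariate strata
p_value = stratified_perm_pvalue(stat, geno, status, strata)
```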

  18. FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.

    SciTech Connect

    HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.

    2006-06-05

    We describe implementation of the KALMAN code in the EMPIRE system and present first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 Gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from the model-generated covariances that tend to show strong positive long-range correlations and those determined solely from experimental data that result in nearly diagonal matrices. We have studied shapes of correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).

  19. Incorporating covariates in skewed functional data models.

    PubMed

    Li, Meng; Staicu, Ana-Maria; Bondell, Howard D

    2015-07-01

    We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and for the functional dependence, which is assumed covariate invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.

  20. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1975-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization, P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
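
    For reference, a minimal numpy sketch of the U-D factorization P = UDU^T itself (the paper's propagation algorithm is not reproduced), via a standard backward recursion:

```python
# Square-root-free U-D factorization P = U D U^T, with U unit upper
# triangular and D diagonal, computed by a backward recursion.
import numpy as np

def udu_factorize(P):
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # fold column j's contribution out of the leading j x j block
        P[:j, :j] -= np.outer(U[:j, j], U[:j, j]) * d[j]
    return U, d

P = np.array([[4.0, 2.0, 0.6], [2.0, 3.0, 0.4], [0.6, 0.4, 1.0]])
U, d = udu_factorize(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)   # sanity check
```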

  1. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1977-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization, P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.

  2. Choosing relatives for DNA identification of missing persons.

    PubMed

    Ge, Jianye; Budowle, Bruce; Chakraborty, Ranajit

    2011-01-01

    DNA-based analysis is integral to missing person identification cases. When direct references are not available, indirect relative references can be used to identify missing persons by kinship analysis. Generally, more reference relatives render greater accuracy of identification. However, it is costly to type multiple references. Thus, at times, decisions may need to be made on which relatives to type. In this study, pedigrees for 37 common reference scenarios with 13 CODIS STRs were simulated to rank the information content of different combinations of relatives. The results confirm that first-order relatives (parents and fullsibs) are the most preferred relatives for identifying missing persons; fullsibs are also informative. Less genetic dependence between references provides a higher likelihood ratio on average. Distant relatives may not be helpful when autosomal markers are used alone, but lineage-based Y-chromosome and mitochondrial DNA markers can increase the likelihood ratio or serve as filters to exclude putative relationships.

  3. Statistical analysis with missing exposure data measured by proxy respondents: a misclassification problem within a missing-data problem.

    PubMed

    Shardell, Michelle; Hicks, Gregory E

    2014-11-10

    In studies of older adults, researchers often recruit proxy respondents, such as relatives or caregivers, when study participants cannot provide self-reports (e.g., because of illness). Proxies are usually only sought to report on behalf of participants with missing self-reports; thus, either a participant self-report or proxy report, but not both, is available for each participant. Furthermore, the missing-data mechanism for participant self-reports is not identifiable and may be nonignorable. When exposures are binary and participant self-reports are conceptualized as the gold standard, substituting error-prone proxy reports for missing participant self-reports may produce biased estimates of outcome means. Researchers can handle this data structure by treating the problem as one of misclassification within the stratum of participants with missing self-reports. Most methods for addressing exposure misclassification require validation data, replicate data, or an assumption of nondifferential misclassification; other methods may result in an exposure misclassification model that is incompatible with the analysis model. We propose a model that makes none of the aforementioned requirements and still preserves model compatibility. Two user-specified tuning parameters encode the exposure misclassification model. Two proposed approaches estimate outcome means standardized for (potentially) high-dimensional covariates using multiple imputation followed by propensity score methods. The first method is parametric and uses maximum likelihood to estimate the exposure misclassification model (i.e., the imputation model) and the propensity score model (i.e., the analysis model); the second method is nonparametric and uses boosted classification and regression trees to estimate both models. We apply both methods to a study of elderly hip fracture patients.

  4. Covariant theory with a confined quantum

    SciTech Connect

    Noyes, H.P.; Pastrana, G.

    1983-06-01

    It has been shown by Lindesay, Noyes and Lindesay, and by Lindesay and Markevich that by using a simple unitary two particle driving term in covariant Faddeev equations a rich covariant and unitary three particle dynamics can be generated, including single quantum exchange and production. The basic observation on which this paper rests is that if the two particle input amplitudes used as driving terms in a three particle Faddeev equation are assumed to be simply bound state poles with no elastic scattering cut, they generate rearrangement collisions, but breakup is impossible.

  5. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.

  6. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity.
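
    The core computation is straightforward to sketch with scikit-learn's LedoitWolf estimator (a sketch on synthetic data, not the authors' pipeline): shrinkage makes the rank-deficient covariance invertible, and partial correlations follow from the regularized precision matrix.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n_frames, n_regions = 150, 300                  # fewer frames than regions:
X = rng.standard_normal((n_frames, n_regions))  # sample covariance is singular

lw = LedoitWolf().fit(X)                        # shrinkage toward a scaled identity
theta = lw.precision_                           # invertible regularized precision

# partial correlation between regions i and j, given all other regions
d = np.sqrt(np.diag(theta))
partial_corr = -theta / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```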

  7. Economical phase-covariant cloning of qudits

    SciTech Connect

    Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara

    2005-04-01

    We derive the optimal N → M phase-covariant quantum cloning for equatorial states in dimension d with M = kd + N, k integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an 'economical' cloning machine, which works without ancilla.

  8. Conditional Covariance-Based Nonparametric Multidimensionality Assessment.

    ERIC Educational Resources Information Center

    Stout, William; And Others

    1996-01-01

    Three nonparametric procedures that use estimates of covariances of item-pair responses conditioned on examinee trait level for assessing dimensionality of a test are described. The HCA/CCPROX, DIMTEST, and DETECT are applied to a dimensionality study of the Law School Admission Test. (SLD)
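
    To make the conditional-covariance idea concrete, a minimal sketch (this editor's illustration, not the HCA/CCPROX, DIMTEST, or DETECT implementations) estimates the covariance of an item pair conditioned on the rest score, a common observable proxy for trait level.

```python
import numpy as np

def conditional_covariance(responses, i, j):
    """Average covariance of items i and j conditioned on the rest score
    (total score excluding the pair), weighted by stratum size."""
    rest = responses.sum(axis=1) - responses[:, i] - responses[:, j]
    total, n = 0.0, len(responses)
    for s in np.unique(rest):
        grp = responses[rest == s]
        if len(grp) > 1:
            total += len(grp) * np.cov(grp[:, i], grp[:, j])[0, 1]
    return total / n

# synthetic 0/1 response matrix: 500 examinees x 20 unidimensional items
rng = np.random.default_rng(1)
theta = rng.standard_normal(500)
probs = 1 / (1 + np.exp(-(theta[:, None] - rng.uniform(-1, 1, 20))))
R = (rng.random((500, 20)) < probs).astype(int)
print(conditional_covariance(R, 0, 1))   # near zero for a unidimensional test
```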

  9. Hawking fluxes, back reaction and covariant anomalies

    NASA Astrophysics Data System (ADS)

    Kulkarni, Shailesh

    2008-11-01

    Starting from the chiral covariant effective action approach of Banerjee and Kulkarni (2008 Phys. Lett. B 659 827), we provide a derivation of the Hawking radiation from a charged black hole in the presence of gravitational back reaction. The modified expressions for charge and energy flux, due to the effect of one-loop back reaction are obtained.

  10. Rasch's Multiplicative Poisson Model with Covariates.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1996-01-01

    Rasch's multiplicative Poisson model is extended so that parameters for individuals in the prior gamma distribution have continuous covariates. Parameters for individuals are integrated out, and hyperparameters in the prior distribution are estimated by a numerical method separately from difficulty parameters that are treated as fixed parameters…

  11. Establishing a threshold for the number of missing days using 7 d pedometer data.

    PubMed

    Kang, Minsoo; Hart, Peter D; Kim, Youngdeok

    2012-11-01

    The purpose of this study was to examine a threshold for the number of missing days that can be recovered using the individual information (II)-centered approach. Data for this study came from 86 participants, aged 17 to 79 years, who had 7 consecutive days of complete pedometer (Yamax SW 200) wear. Missing datasets (1 d through 5 d missing) were created by a SAS random process 10,000 times each. All missing values were replaced using the II-centered approach. A 7 d average was calculated for each dataset, including the complete dataset. Repeated-measures ANOVA was used to determine the differences between the 1 d through 5 d missing datasets and the complete dataset. Mean absolute percentage error (MAPE) was also computed. Mean (SD) daily step count for the complete 7 d dataset was 7979 (3084). Mean (SD) values for the 1 d through 5 d missing datasets were 8072 (3218), 8066 (3109), 7968 (3273), 7741 (3050) and 8314 (3529), respectively (p > 0.05). The lowest MAPEs were estimated for 1 d missing (5.2%, 95% confidence interval (CI) 4.4-6.0) and 2 d missing (8.4%, 95% CI 7.0-9.8), while all others were greater than 10%. The results of this study show that the 1 d through 5 d missing datasets, with replaced values, were not significantly different from the complete dataset. Based on the MAPE results, it is not recommended to replace more than two days of missing step counts.
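
    One plausible reading of the II-centered replacement, substituting each missing day with the mean of that individual's observed days, can be simulated directly; the sketch below (synthetic step counts, not the study data) reproduces the shape of the MAPE analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = rng.normal(8000, 3000, size=(86, 7)).clip(min=1000)   # synthetic steps

def mape_for_missing(weeks, n_missing, n_sims=10_000):
    errs = []
    for _ in range(n_sims):
        week = weeks[rng.integers(len(weeks))].copy()
        true_mean = week.mean()
        miss = rng.choice(7, size=n_missing, replace=False)
        week[miss] = np.delete(week, miss).mean()   # individual-mean replacement
        errs.append(abs(week.mean() - true_mean) / true_mean * 100)
    return float(np.mean(errs))

for d in range(1, 6):                     # 1 d through 5 d missing
    print(d, round(mape_for_missing(weeks, d), 1))
```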

  12. Methods for Mediation Analysis with Missing Data

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including list wise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  13. Missing: Children and Young People with SEBD

    ERIC Educational Resources Information Center

    Visser, John; Daniels, Harry; Macnab, Natasha

    2005-01-01

    This article explores the issue of missing from and missing out on education. It argues that too little is known with regard to the characteristics of children and young people missing from schooling. It postulates that many of these pupils will have social, emotional and behavioural difficulties which are largely unrecognized and thus not…

  14. A Covariance NMR Toolbox for MATLAB and OCTAVE

    NASA Astrophysics Data System (ADS)

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE.

  15. A covariance NMR toolbox for MATLAB and OCTAVE.

    PubMed

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. PMID:21215669
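
    The toolbox itself is MATLAB/OCTAVE code; purely to illustrate the direct-covariance transform it implements, the following Python sketch computes C = (S^T S)^(1/2) for a synthetic data matrix, where the matrix square root restores intensities on the scale of the original spectrum.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
S = rng.standard_normal((64, 256))   # rows: indirect (t1) increments,
                                     # cols: direct-dimension points

C = S.T @ S                          # covariance over the indirect dimension
C = np.real(sqrtm(C))                # (S^T S)^(1/2): spectrum-like intensities
# C is a symmetric 256 x 256 matrix playing the role of the covariance spectrum
```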

  16. Characteristics of HIV patients who missed their scheduled appointments

    PubMed Central

    Nagata, Delsa; Gutierrez, Eliana Battaggia

    2016-01-01

    OBJECTIVE: To analyze whether sociodemographic characteristics, consultations and care in special services are associated with scheduled infectious diseases appointments missed by people living with HIV. METHODS: This cross-sectional, analytical study included 3,075 people living with HIV who had at least one scheduled appointment with an infectologist at a specialized health unit in 2007. A secondary database from the Hospital Management & Information System was used. The outcome variable was missing a scheduled medical appointment. The independent variables were sex, age, appointments in specialized and available disciplines, hospitalizations at the Central Institute of the Clinical Hospital at the Faculdade de Medicina of the Universidade de São Paulo, antiretroviral treatment and change of infectologist. Crude and multiple association analyses were performed among the variables, with a statistical significance of p ≤ 0.05. RESULTS: More than a third (38.9%) of the patients missed at least one of their scheduled infectious diseases appointments; 70.0% of the patients were male. The rate of missed appointments was 13.9%, albeit with no observed association between sex and absences. Age was inversely associated with missed appointments. Not undertaking antiretroviral treatment, having unscheduled infectious diseases consultations or social services care, and being hospitalized at the Central Institute were directly associated with missed appointments. CONCLUSIONS: The Hospital Management & Information System proved to be a useful tool for developing indicators related to the quality of health care of people living with HIV. Other information systems, which are often developed for administrative purposes, can also be useful for local and regional management and for evaluating the quality of care provided for patients living with HIV. PMID:26786472

  17. What's Missing? Anti-Racist Sex Education!

    ERIC Educational Resources Information Center

    Whitten, Amanda; Sethna, Christabelle

    2014-01-01

    Contemporary sexual health curricula in Canada include information about sexual diversity and queer identities, but what remains missing is any explicit discussion of anti-racist sex education. Although there exists federal and provincial support for multiculturalism and anti-racism in schools, contemporary Canadian sex education omits crucial…

  18. Missed Opportunities: But a New Century Is Starting.

    ERIC Educational Resources Information Center

    Corn, Anne L.

    1999-01-01

    This article describes critical events that have shaped gifted education, including: the closing of one-room schoolhouses, the industrial revolution, the space race, the civil rights movement, legislation for special education, growth in technology and information services, educational research, and advocacy. Missed opportunities and future…

  19. 40 CFR 98.315 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Title 40, Protection of Environment, § 98.315 — Procedures for estimating missing data (CFR volume 22, revised as of 2013-07-01). ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR... coke consumption based on all available process data or information used for accounting purposes...

  20. 40 CFR 98.315 - Procedures for estimating missing data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Title 40, Protection of Environment, § 98.315 — Procedures for estimating missing data (CFR volume 21, revised as of 2014-07-01). ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR... coke consumption based on all available process data or information used for accounting purposes...

  1. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    The collocation method is widely applied in geodesy for estimating and interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of constrained condition is commonly handled by solver algorithms for linear programming problems. In this work the problem of modeling covariance functions is dealt with by an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
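
    A toy version of the idea can be written with an off-the-shelf LP solver: assume the model covariance is a nonnegative combination of Legendre polynomials in spherical distance (nonnegativity of the degree variances being the physical constraint) and minimize the L1 misfit to the empirical values. This is a sketch of the concept with synthetic values, not the authors' software.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.special import eval_legendre

# empirical covariance values at sample spherical distances psi (synthetic)
psi = np.linspace(0.0, 0.5, 40)                      # radians
emp = 100.0 * np.exp(-(psi / 0.1) ** 2)              # stand-in empirical values

degrees = np.arange(2, 60)
A = np.stack([eval_legendre(n, np.cos(psi)) for n in degrees], axis=1)

m, k = A.shape
# variables z = [c (degree variances, >= 0), t (residual bounds, >= 0)]
# minimize sum(t)  subject to  -t <= A c - emp <= t
c_obj = np.concatenate([np.zeros(k), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([emp, -emp])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (k + m))

degree_variances = res.x[:k]                         # nonnegative by construction
model = A @ degree_variances                         # fitted covariance function
```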

  2. Missing Data and Multiple Imputation: An Unbiased Approach

    NASA Technical Reports Server (NTRS)

    Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.

    2014-01-01

    The default method of dealing with missing data in statistical analyses is to use only the complete observations (complete case analysis), which can lead to unexpected bias when the data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but it can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI), which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine a statistical distribution for each piece of missing data. Multiple datasets are then created by randomly drawing from the distributions for each piece of missing data. Since MI is efficient, only a limited number of imputed datasets, usually fewer than 20, are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and the results are then combined to get overall estimates of effect. A simulation study demonstrates the results of using the default complete case analysis and of using MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there might be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches: each unit increase in CO2 (mmHg) resulted in a doubling of the odds of reported headaches.
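
    A self-contained sketch of the workflow described above (impute, analyze each completed dataset, pool with Rubin's rules), using scikit-learn's IterativeImputer with posterior sampling; it is illustrative and is not the analysis code behind the CO2/headache study.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)
n, M = 500, 20
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)
x_obs = np.where(y > 1.5, np.nan, x)      # MAR: missingness depends on observed y

data = np.column_stack([x_obs, y])
slopes, variances = [], []
for m in range(M):                        # M imputed datasets
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    xi, yi = imp.fit_transform(data).T
    X = np.column_stack([np.ones(n), xi])
    beta = np.linalg.lstsq(X, yi, rcond=None)[0]
    resid = yi - X @ beta
    cov = resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)
    slopes.append(beta[1])
    variances.append(cov[1, 1])

q_bar = np.mean(slopes)                                            # pooled estimate
t_var = np.mean(variances) + (1 + 1 / M) * np.var(slopes, ddof=1)  # Rubin's rules
print(q_bar, np.sqrt(t_var))
```

    Under this MAR mechanism a complete-case regression is biased, while the pooled MI slope should recover the true value of 0.5 up to simulation noise.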

  3. A Simulation Study of Missing Data with Multiple Missing X's

    ERIC Educational Resources Information Center

    Rubright, Jonathan D.; Nandakumar, Ratna; Glutting, Joseph J.

    2014-01-01

    When exploring missing data techniques in a realistic scenario, the current literature is limited: most studies only consider consequences with data missing on a single variable. This simulation study compares the relative bias of two commonly used missing data techniques when data are missing on more than one variable. Factors varied include type…

  4. Use of Prostaglandin E2 in the Management of Missed Abortion, Missed Labour, and Hydatidiform Mole

    PubMed Central

    Karim, S. M. M.

    1970-01-01

    Treatment of six cases of missed abortion and one case of hydatidiform mole with intravenous infusion of prostaglandin E2 resulted in complete abortion in all cases. Of 15 patients with missed labour, 14 were delivered successfully with similar treatment. The technique appears to be a safe, reliable, and rapid method of managing missed abortion, missed labour, and hydatidiform mole. PMID:5448780

  5. Nuclear Forensics Analysis with Missing and Uncertain Data

    SciTech Connect

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.

  6. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…

  7. A Primer for Handling Missing Values in the Analysis of Education and Training Data

    ERIC Educational Resources Information Center

    Gemici, Sinan; Bednarz, Alice; Lim, Patrick

    2012-01-01

    Quantitative research in vocational education and training (VET) is routinely affected by missing or incomplete information. However, the handling of missing data in published VET research is often sub-optimal, leading to a real risk of generating results that can range from being slightly biased to being plain wrong. Given that the growing…

  8. 20 CFR 364.4 - Placement of missing children posters in Board field offices.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... National Center for Missing and Exploited Children shall select the missing child and the pertinent information about that child, which may include a photograph of the child, that will appear on the poster. The Board will develop a standard format for these posters. (b) Transmission of posters to field...

  9. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    PubMed

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline. PMID:24400941

  10. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    PubMed

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline.

  11. Genomic Variants Revealed by Invariably Missing Genotypes in Nelore Cattle

    PubMed Central

    da Silva, Joaquim Manoel; Giachetto, Poliana Fernanda; da Silva, Luiz Otávio Campos; Cintra, Leandro Carrijo; Paiva, Samuel Rezende; Caetano, Alexandre Rodrigues; Yamagishi, Michel Eduardo Beleza

    2015-01-01

    High density genotyping panels have been used in a wide range of applications. From population genetics to genome-wide association studies, this technology still offers the lowest cost and the most consistent solution for generating SNP data. However, in spite of the application, part of the generated data is always discarded from final datasets based on quality control criteria used to remove unreliable markers. Some discarded data consists of markers that failed to generate genotypes, labeled as missing genotypes. A subset of missing genotypes that occur in the whole population under study may be caused by technical issues but can also be explained by the presence of genomic variations that are in the vicinity of the assayed SNP and that prevent genotyping probes from annealing. The latter case may contain relevant information because these missing genotypes might be used to identify population-specific genomic variants. In order to assess which case is more prevalent, we used Illumina HD Bovine chip genotypes from 1,709 Nelore (Bos indicus) samples. We found 3,200 missing genotypes among the whole population. NGS re-sequencing data from 8 sires were used to verify the presence of genomic variations within their flanking regions in 81.56% of these missing genotypes. Furthermore, we discovered 3,300 novel SNPs/Indels, 31% of which are located in genes that may affect traits of importance for the genetic improvement of cattle production. PMID:26305794

  12. Genomic Variants Revealed by Invariably Missing Genotypes in Nelore Cattle.

    PubMed

    da Silva, Joaquim Manoel; Giachetto, Poliana Fernanda; da Silva, Luiz Otávio Campos; Cintra, Leandro Carrijo; Paiva, Samuel Rezende; Caetano, Alexandre Rodrigues; Yamagishi, Michel Eduardo Beleza

    2015-01-01

    High density genotyping panels have been used in a wide range of applications. From population genetics to genome-wide association studies, this technology still offers the lowest cost and the most consistent solution for generating SNP data. However, in spite of the application, part of the generated data is always discarded from final datasets based on quality control criteria used to remove unreliable markers. Some discarded data consists of markers that failed to generate genotypes, labeled as missing genotypes. A subset of missing genotypes that occur in the whole population under study may be caused by technical issues but can also be explained by the presence of genomic variations that are in the vicinity of the assayed SNP and that prevent genotyping probes from annealing. The latter case may contain relevant information because these missing genotypes might be used to identify population-specific genomic variants. In order to assess which case is more prevalent, we used Illumina HD Bovine chip genotypes from 1,709 Nelore (Bos indicus) samples. We found 3,200 missing genotypes among the whole population. NGS re-sequencing data from 8 sires were used to verify the presence of genomic variations within their flanking regions in 81.56% of these missing genotypes. Furthermore, we discovered 3,300 novel SNPs/Indels, 31% of which are located in genes that may affect traits of importance for the genetic improvement of cattle production. PMID:26305794

  13. Construction of Covariance Functions with Variable Length Fields

    NASA Technical Reports Server (NTRS)

    Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven

    2005-01-01

    This article focuses on the construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application, where a multi-level covariance with a large troposphere-to-stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single- and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than the baseline GEOS-4 system.
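
    The best-known single-level member of this family is the fifth-order piecewise rational, compactly supported correlation function of Gaspari and Cohn; a direct transcription with a fixed length scale c (this editor's sketch; the article's multi-level constructions make c a field) is:

```python
import numpy as np

def gaspari_cohn(z, c):
    """Fifth-order piecewise rational, compactly supported correlation
    function of Gaspari & Cohn; identically zero for |z| >= 2c."""
    r = np.abs(np.asarray(z, dtype=float)) / c
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri, ro = r[inner], r[outer]
    out[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                  - (5.0 / 3.0) * ri**2 + 1.0)
    out[outer] = ((1.0 / 12.0) * ro**5 - 0.5 * ro**4 + 0.625 * ro**3
                  + (5.0 / 3.0) * ro**2 - 5.0 * ro + 4.0 - 2.0 / (3.0 * ro))
    return out

print(gaspari_cohn(np.linspace(0.0, 2.5, 6), 1.0))  # decays to 0 by |z| = 2
```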

  14. Direct Neutron Capture Calculations with Covariant Density Functional Theory Inputs

    NASA Astrophysics Data System (ADS)

    Zhang, Shi-Sheng; Peng, Jin-Peng; Smith, Michael S.; Arbanas, Goran; Kozub, Ray L.

    2014-09-01

    Predictions of direct neutron capture are of vital importance for simulations of nucleosynthesis in supernovae, merging neutron stars, and other astrophysical environments. We calculate the direct capture cross sections for E1 transitions using nuclear structure information from a covariant density functional theory as input for the FRESCO coupled-channels reaction code. We find good agreement of our predictions with experimental cross section data on the double closed-shell targets 16O, 48Ca, and 90Zr, and the exotic nucleus 36S. Extensions of the technique for unstable nuclei and for large-scale calculations will be discussed. Supported by the U.S. Dept. of Energy, Office of Nuclear Physics.

  15. A Covariance Generation Methodology for Fission Product Yields

    NASA Astrophysics Data System (ADS)

    Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.

    2016-03-01

    Recent safety and economic concerns for modern nuclear reactor applications have fed an outstanding interest in improving and completing basic nuclear data evaluations. It has become immediately clear that the accuracy of our predictive simulation models is strongly affected by our knowledge of the input data. Therefore strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) give no correlations for fission yields, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered through a convolution of different models representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution, and the Madland-England model for the isomeric ratio. Some results are presented for the thermal fission of U-235, Pu-239 and Pu-241.

  16. On covariance structure in noisy, big data

    NASA Astrophysics Data System (ADS)

    Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.

    2013-09-01

    Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.

  17. Covariance and the hierarchy of frame bundles

    NASA Technical Reports Server (NTRS)

    Estabrook, Frank B.

    1987-01-01

    This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.

  18. Missing data in the exposure of interest and marginal structural models: a simulation study based on the Framingham Heart Study.

    PubMed

    Shortreed, Susan M; Forbes, Andrew B

    2010-02-20

    Missing data are common in longitudinal studies and can occur in the exposure of interest. There has been little work assessing the impact of missing data in marginal structural models (MSMs), which are used to estimate the effect of an exposure history on an outcome when time-dependent confounding is present. We design a series of simulations based on the Framingham Heart Study data set to investigate the impact of missing data in the primary exposure of interest in a complex, realistic setting. We use a standard application of MSMs to estimate the causal odds ratio of a specific activity history on outcome. We report and discuss the results of four missing data methods, under seven possible missing data structures, including scenarios in which an unmeasured variable predicts missing information. In all missing data structures, we found that a complete case analysis, where all subjects with missing exposure data are removed from the analysis, provided the least bias. An analysis that censored individuals at the first occasion of missing exposure and included a censorship model as well as a propensity model when creating the inverse probability weights also performed well. The presence of an unmeasured predictor of missing data only slightly increased bias, except in the situation in which the exposure had a large impact on missingness and the unmeasured variable had a large impact on both missingness and outcome. A discussion of the results is provided using causal diagrams, showing the usefulness of drawing such diagrams before conducting an analysis. PMID:20025082

  19. Evaluated Nuclear Data Covariances: The Journey From ENDF/B-VII.0 to ENDF/B-VII.1

    NASA Astrophysics Data System (ADS)

    Smith, Donald L.

    2011-12-01

    Recent interest from data users on applications that utilize the uncertainties of evaluated nuclear reaction data has stimulated the data evaluation community to focus on producing covariance data to a far greater extent than ever before. Although some uncertainty information has been available in the ENDF/B libraries since the 1970s, this content has been fairly limited in scope, the quality quite variable, and the use of covariance data confined to only a few application areas. Today, covariance data are more widely and extensively utilized than ever before in neutron dosimetry, in advanced fission reactor design studies, in nuclear criticality safety assessments, in national security applications, and even in certain fusion energy applications. The main problem that now faces the ENDF/B evaluator community is that of providing covariances that are adequate both in quantity and quality to meet the requirements of contemporary nuclear data users in a timely manner. In broad terms, the approach pursued during the past several years has been to purge any legacy covariance information contained in ENDF/B-VI.8 that was judged to be subpar, to include in ENDF/B-VII.0 (released in 2006) only those covariance data deemed then to be of reasonable quality for contemporary applications, and to subsequently devote as much effort as the available time and resources allowed to producing additional covariance data of suitable scope and quality for inclusion in ENDF/B-VII.1. Considerable attention has also been devoted during the five years since the release of ENDF/B-VII.0 to examining and improving the methods used to produce covariance data from thermal energies up to the highest energies addressed in the ENDF/B library, to processing these data in a robust fashion so that they can be utilized readily in contemporary nuclear applications, and to developing convenient covariance data visualization capabilities. Other papers included in this issue discuss these matters in considerable detail.

  20. Covariant quantum mechanics applied to noncommutative geometry

    NASA Astrophysics Data System (ADS)

    Astuti, Valerio

    2015-08-01

    We report here a result obtained in collaboration with Giovanni Amelino-Camelia, first shown in the paper [1]. Applying the manifestly covariant formalism of quantum mechanics to the much-studied Snyder spacetime [2], we show that it is trivial in all physical observables, meaning that every measurement in this spacetime gives the same results as would be obtained in flat Minkowski spacetime.

  1. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
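
    For a simple eigenvalue lambda_k with right eigenvector v_k and left eigenvector w_k, the Jacobian with respect to the parent matrix has the closed form d(lambda_k)/dA_ij = w_ki v_jk / (w_k^T v_k). The sketch below (this editor's illustration, in the spirit of the thesis but not its code) checks that formula against a forward finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)              # row k of W: left eigenvector, W[k] @ V[:, k] = 1
k = 0
J = np.outer(W[k, :], V[:, k])    # J[i, j] = d lam_k / d A[i, j]

h, i, j = 1e-7, 1, 3              # forward finite difference on entry A[i, j]
Ah = A.copy()
Ah[i, j] += h
lam_h = np.linalg.eig(Ah)[0]
lam_k = lam_h[np.argmin(np.abs(lam_h - lam[k]))]   # re-match the eigenvalue
print(J[i, j], (lam_k - lam[k]) / h)               # should agree to ~1e-6
```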

  2. Generalized Covariance Analysis For Remote Estimators

    NASA Technical Reports Server (NTRS)

    Boone, Jack N.

    1991-01-01

    Technique developed to predict true covariance of stochastic process at remote location when control applied to process both by autonomous (local-estimator) control subsystem and remote (non-local-estimator) control subsystem. Intended originally for design and evaluation of ground-based schemes for estimation of gyro parameters of Magellan spacecraft. Applications include a variety of remote-control systems with and without delays. Potential terrestrial applications include navigation and control of industrial processes.

  3. Torsion and geometrostasis in covariant superstrings

    SciTech Connect

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.

  4. All covariance controllers for linear discrete-time systems

    NASA Technical Reports Server (NTRS)

    Hsieh, Chen; Skelton, Robert E.

    1990-01-01

    The set of covariances that a linear discrete-time plant with a specified-order controller can have is characterized. The controllers that assign such covariances to any linear discrete-time system are given explicitly in closed form. The freedom in these covariance controllers is explicit and is parameterized by two orthogonal matrices. By appropriately choosing these free parameters, additional system objectives can be achieved without altering the state covariance, and the stability of the closed-loop system is guaranteed.

  5. Shrinkage covariance matrix approach for microarray data

    NASA Astrophysics Data System (ADS)

    Karjanto, Suryaefiza; Aripin, Rasimah

    2013-04-01

    Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples, due to various constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate for this purpose. Hotelling's T2 statistic, for example, is a multivariate test statistic for comparing means between two groups; it requires that the number of observations (n) exceed the number of genes (p), but in microarray studies it is common that n < p. This leads to a biased estimate of the covariance matrix. In this study, Hotelling's T2 statistic with a shrinkage approach to covariance estimation is proposed for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests using a widely analysed diabetes data set as an illustration. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.

  6. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
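
    The core of a linear covariance analysis of this kind is Lyapunov propagation of the error covariance through the closed-loop dynamics; a generic sketch using scipy (not PCAT itself, and with illustrative system matrices) is:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

Phi = np.array([[0.99, 0.10],         # closed-loop state transition (illustrative)
                [0.00, 0.95]])
Q = np.diag([1e-4, 1e-5])             # process-noise covariance per step

P = np.zeros((2, 2))                  # initial error covariance
for _ in range(2000):                 # for time-varying systems, update Phi, Q here
    P = Phi @ P @ Phi.T + Q           # discrete Lyapunov recursion

P_ss = solve_discrete_lyapunov(Phi, Q)   # steady-state solution
print(np.allclose(P, P_ss))
```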

  7. ANALYSIS OF COVARIANCE WITH SPATIALLY CORRELATED SECONDARY VARIABLES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data sets which contain measurements on a spatially referenced response and covariate are analyzed using either co-kriging or spatial analysis of covariance. While co-kriging accounts for the correlation structure of the covariate, it is purely a predictive tool. Alternatively, spatial analysis of covariance…

  8. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  9. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.

  10. A new estimation with minimum trace of asymptotic covariance matrix for incomplete longitudinal data with a surrogate process

    PubMed Central

    Chen, Baojiang; Qin, Jing

    2013-01-01

    Missing data are a very common problem in medical and social studies, especially when data are collected longitudinally. It is a challenging problem to utilize observed data effectively. Many papers on missing data problems can be found in the statistical literature. It is well known that inverse probability weighted estimation is neither efficient nor robust. On the other hand, the doubly robust (DR) method can improve both efficiency and robustness. The DR estimation requires a missing data model (i.e., a model for the probability that data are observed) and a working regression model (i.e., a model for the outcome variable given covariates and surrogate variables). Because the DR estimating function has mean zero for any parameters in the working regression model when the missing data model is correctly specified, in this paper we derive a formula for the estimator of the parameters of the working regression model that yields the optimally efficient estimator of the marginal mean model (the parameters of interest) when the missing data model is correctly specified. Furthermore, the proposed method also inherits the DR property. Simulation studies demonstrate the greater efficiency of the proposed method compared with the standard DR method. A longitudinal dementia data set is used for illustration. PMID:23744541
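
    For a mean with outcomes missing at random, the DR estimating function takes the familiar augmented inverse-probability-weighted (AIPW) form; the sketch below (simulated data, simple logistic and linear working models, not the authors' optimally tuned estimator) contrasts it with plain inverse weighting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)       # outcome, partly unobserved
pi_true = 1 / (1 + np.exp(-(0.5 + x)))       # MAR: observation depends on x
r = rng.random(n) < pi_true                  # r = 1 if y is observed

X = x.reshape(-1, 1)
pi_hat = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]  # missingness model
m_hat = LinearRegression().fit(X[r], y[r]).predict(X)           # working outcome model

ipw = np.mean(r * y / pi_hat)                                   # inverse weighting only
aipw = np.mean(r * y / pi_hat - (r - pi_hat) / pi_hat * m_hat)  # doubly robust
print(ipw, aipw, y.mean())
```

    The augmentation term has mean zero whenever the missingness model is correctly specified, regardless of the working regression model, which is the property the abstract exploits.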

  11. Part Marking and Identification Materials on MISSE

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Roxby, Donald L.

    2008-01-01

    Many different spacecraft materials were flown as part of the Materials on International Space Station Experiment (MISSE), including several materials used in part marking and identification. The experiment contained Data Matrix symbols applied using laser bonding, vacuum arc vapor deposition, gas-assisted laser etch, chemical etch, mechanical dot peening, laser shot peening, and laser-induced surface improvement. The effects of ultraviolet radiation on nickel acetate seal versus hot water seal on sulfuric acid anodized aluminum are discussed. These samples were exposed on the International Space Station to the low Earth orbital environment of atomic oxygen, ultraviolet radiation, thermal cycling, and hard vacuum, though atomic oxygen exposure was very limited for some samples. Results from the one-year exposure on MISSE-3 and MISSE-4 are compared to those from MISSE-1 and MISSE-2, which were exposed for four years. Part marking and identification materials on the current MISSE-6 experiment are also discussed.

  12. Are all biases missing data problems?

    PubMed Central

    Howe, Chanelle J.; Cain, Lauren E.; Hogan, Joseph W.

    2015-01-01

    Estimating causal effects is a frequent goal of epidemiologic studies. Traditionally, there have been three established systematic threats to consistent estimation of causal effects. These three threats are bias due to confounders, selection, and measurement error. Confounding, selection, and measurement bias have typically been characterized as distinct types of biases. However, each of these biases can also be characterized as missing data problems that can be addressed with missing data solutions. Here we describe how the aforementioned systematic threats arise from missing data as well as review methods and their related assumptions for reducing each bias type. We also link the assumptions made by the reviewed methods to the missing completely at random (MCAR) and missing at random (MAR) assumptions made in the missing data framework that allow for valid inferences to be made based on the observed, incomplete data. PMID:26576336

  13. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry-Sauer method on the L-96 example.

  14. Quantum energy inequalities and local covariance II: categorical formulation

    NASA Astrophysics Data System (ADS)

    Fewster, Christopher J.

    2007-11-01

    We formulate quantum energy inequalities (QEIs) in the framework of locally covariant quantum field theory developed by Brunetti, Fredenhagen and Verch, which is based on notions taken from category theory. This leads to a new viewpoint on the QEIs, and also to the identification of a new structural property of locally covariant quantum field theory, which we call local physical equivalence. Covariant formulations of the numerical range and spectrum of locally covariant fields are given and investigated, and a new algebra of fields is identified, in which fields are treated independently of their realisation on particular spacetimes and manifestly covariant versions of the functional calculus may be formulated.

  15. Learning through Feature Prediction: An Initial Investigation into Teaching Categories to Children with Autism through Predicting Missing Features

    ERIC Educational Resources Information Center

    Sweller, Naomi

    2015-01-01

    Individuals with autism have difficulty generalising information from one situation to another, a process that requires the learning of categories and concepts. Category information may be learned through: (1) classifying items into categories, or (2) predicting missing features of category items. Predicting missing features has to this point been…

  16. PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices

    SciTech Connect

    Dunn, M.E.

    2000-06-01

    PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI.
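
    Downstream of a processing code such as PUFF-III, multigroup covariances are often collapsed to a coarser group structure with flux weighting; because each broad-group cross section is a flux-weighted average of fine-group values, the collapsed covariance follows by linearity. A generic numpy sketch (this editor's illustration, not PUFF-III's algorithm, which operates on ENDF formats directly):

```python
import numpy as np

def collapse_covariance(C_fine, flux, groups):
    """Collapse a fine-group covariance matrix to broad groups, where each
    broad-group cross section is the flux-weighted average of its fine
    groups. `groups` maps each fine-group index to a broad-group index."""
    n_broad = max(groups) + 1
    # weight matrix: W[G, g] = flux_g / (sum of flux over fine groups in G)
    W = np.zeros((n_broad, len(flux)))
    for g, G in enumerate(groups):
        W[G, g] = flux[g]
    W /= W.sum(axis=1, keepdims=True)
    return W @ C_fine @ W.T          # Cov(W sigma) = W Cov(sigma) W^T

# toy example: 6 fine groups collapsed to 2 broad groups
C_fine = 0.01 * (np.eye(6) + 0.5 * np.ones((6, 6)))
flux = np.array([1.0, 2.0, 3.0, 3.0, 2.0, 1.0])
print(collapse_covariance(C_fine, flux, [0, 0, 0, 1, 1, 1]))
```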

  17. Evolutionary Characteristics of Missing Proteins: Insights into the Evolution of Human Chromosomes Related to Missing-Protein-Encoding Genes.

    PubMed

    Xu, Aishi; Li, Guang; Yang, Dong; Wu, Songfeng; Ouyang, Hongsheng; Xu, Ping; He, Fuchu

    2015-12-01

    Although the "missing protein" is a temporary concept in C-HPP, the biological reasons why these proteins are "missing" could be an important clue in evolutionary studies. Here we classified missing-protein-encoding genes into two groups: genes encoding PE2 proteins (with transcript evidence) and genes encoding PE3/4 proteins (with no transcript evidence). These missing-protein-encoding genes are distributed unevenly among different chromosomes, chromosomal regions, and gene clusters. In terms of evolutionary features, PE3/4 genes tend to be young, to spread at nonhomologous chromosomal regions, and to evolve at higher rates. Interestingly, the proportion of singletons among PE3/4 genes is higher than among all genes (background) and among OTCSGs (organ, tissue, cell type-specific genes). More importantly, most of the paralogous PE3/4 genes belong to newly duplicated members of paralogous gene groups, which mainly contribute to specialized biological functions, such as smell perception. These functions are heavily restricted to specific cell types, tissues, or developmental stages, acting as new functional requirements that facilitated the emergence of the missing-protein-encoding genes during evolution. In addition, criteria for proteins with extreme physical-chemical properties were first set up based on the properties of PE2 proteins, and the evolutionary characteristics of those proteins were explored. Overall, these evolutionary analyses of missing-protein-encoding genes are expected to be highly instructive for proteomics and functional studies in the future.

  18. Coupled nucleotide covariations reveal dynamic RNA interaction patterns.

    PubMed Central

    Gultyaev, A P; Franch, T; Gerdes, K

    2000-01-01

    Evolutionarily conserved structures in related RNA molecules contain coordinated variations (covariations) of paired nucleotides. Analysis of covariations is a very powerful approach to deduce phylogenetically conserved (i.e., functional) conformations, including tertiary interactions. Here we discuss conserved RNA folding pathways that are revealed by covariation patterns. In such pathways, structural requirements for alternative pairings cause some nucleotides to covary with two different partners. Such "coupled" covariations between three or more nucleotides were found in various types of RNAs. The analysis of coupled covariations can unravel important features of RNA folding dynamics and improve phylogeny reconstruction in some cases. Importantly, it is necessary to distinguish between multiple covariations determined by mutually exclusive structures and those determined by tertiary contacts. PMID:11105748

  19. Application of "AIC" to Wald and Lagrange Multiplier Tests in Covariance Structure Analysis.

    ERIC Educational Resources Information Center

    Chou, Chih-Ping; Bentler, P. M.

    1996-01-01

    Some efficient procedures are proposed for using the Akaike Information Criterion (H. Akaike, 1987), an alternative to the conventional chi-square goodness of fit test, in covariance structure analysis based on backward search through the Wald test to impose constraints and forward search through the Lagrange test to release constraints. (SLD)

  20. Realization of a universal and phase-covariant quantum cloning machine in separate cavities

    SciTech Connect

    Fang Baolong; Song Qingming; Ye Liu

    2011-04-15

    We present a scheme to realize a special quantum cloning machine in separate cavities. The quantum cloning machine can copy quantum information from a photon pulse to two distant atoms. By choosing different parameters, the method can perform optimal symmetric (asymmetric) universal quantum cloning and optimal symmetric (asymmetric) phase-covariant cloning.

  1. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  2. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  3. Covariant constraints in ghost free massive gravity

    SciTech Connect

    Deffayet, C.; Mourad, J.; Zahariade, G. E-mail: mourad@apc.univ-paris7.fr

    2013-01-01

    We show that the reformulation of the de Rham-Gabadadze-Tolley massive gravity theory using vielbeins leads to a very simple and covariant way to count constraints, and hence degrees of freedom. Our method singles out a subset of theories in the de Rham-Gabadadze-Tolley family where an extra constraint, needed to eliminate the Boulware-Deser ghost, is easily seen to appear. As a side result, we also introduce a new method, different from the Stückelberg trick, to extract kinetic terms for the polarizations propagating in addition to those of the massless graviton.

  4. Covariant harmonic oscillators and coupled harmonic oscillators

    NASA Technical Reports Server (NTRS)

    Han, Daesoo; Kim, Young S.; Noz, Marilyn E.

    1995-01-01

    It is shown that the system of two coupled harmonic oscillators shares the basic symmetry properties with the covariant harmonic oscillator formalism, which provides a concise description of the basic features of relativistic hadrons observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.

  5. Boost covariant gluon distributions in large nuclei

    NASA Astrophysics Data System (ADS)

    McLerran, Larry; Venugopalan, Raju

    1998-04-01

    It has been shown recently that there exist analytical solutions of the Yang-Mills equations for non-Abelian Weizsäcker-Williams fields which describe the distribution of gluons in large nuclei at small x. These solutions however depend on the color charge distribution at large rapidities. We here construct a model of the color charge distribution of partons in the fragmentation region and use it to compute the boost covariant momentum distributions of wee gluons. The phenomenological applications of our results are discussed.

  6. Cosmology of a covariant Galilean field.

    PubMed

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which makes it possible to observationally distinguish Galileon gravity from the cold dark matter model with a cosmological constant.

  7. Using Principal Components as Auxiliary Variables in Missing Data Estimation.

    PubMed

    Howard, Waylon J; Rhemtulla, Mijke; Little, Todd D

    2015-01-01

    To deal with missing data that arise due to participant nonresponse or attrition, methodologists have recommended an "inclusive" strategy where a large set of auxiliary variables are used to inform the missing data process. In practice, the set of possible auxiliary variables is often too large. We propose using principal components analysis (PCA) to reduce the number of possible auxiliary variables to a manageable number. A series of Monte Carlo simulations compared the performance of the inclusive strategy with eight auxiliary variables (inclusive approach) to the PCA strategy using just one principal component derived from the eight original variables (PCA approach). We examined the influence of four independent variables: magnitude of correlations, rate of missing data, missing data mechanism, and sample size on parameter bias, root mean squared error, and confidence interval coverage. Results indicate that the PCA approach yields unbiased parameter estimates and potentially more accurate estimates than the inclusive approach. We conclude that using the PCA strategy to reduce the number of auxiliary variables is an effective and practical way to reap the benefits of the inclusive strategy in the presence of many possible auxiliary variables.
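
    A minimal sketch of the reduction step is below; the variable names and synthetic data are illustrative. The resulting component score would then be passed to the missing-data routine (for example, as an auxiliary variable in multiple imputation or FIML estimation) in place of the eight original variables.

    ```python
    # Sketch of the PCA strategy described above: collapse many correlated
    # auxiliary variables into one principal component.
    import numpy as np

    rng = np.random.default_rng(1)
    aux = rng.normal(size=(500, 8))    # eight auxiliary variables
    aux += rng.normal(size=(500, 1))   # shared signal -> correlated columns

    z = (aux - aux.mean(axis=0)) / aux.std(axis=0)   # standardize
    u, s, vt = np.linalg.svd(z, full_matrices=False) # PCA via SVD
    pc1 = z @ vt[0]                                  # scores on the first component
    print("variance explained by PC1:", float(s[0]**2 / (s**2).sum()))
    ```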

  8. Annual Coded Wire Program Missing Production Groups, 1996 Annual Report.

    SciTech Connect

    Pastor, S.M.

    1997-07-01

    In 1989 the Bonneville Power Administration (BPA) began funding the evaluation of production groups of juvenile anadromous fish not being coded-wire tagged for other programs. These groups were the "Missing Production Groups". Production fish released by the US Fish and Wildlife Service (USFWS) without representative coded-wire tags during the 1980s are indicated as blank spaces on the survival graphs in this report. The objectives of the "Missing Production Groups" program are: to estimate the total survival of each production group, to estimate the contribution of each production group to various fisheries, and to prepare an annual report for all USFWS hatcheries in the Columbia River basin. Coded-wire tag recovery information will be used to evaluate the relative success of individual brood stocks. This information can also be used by salmon harvest managers to develop plans to allow the harvest of excess hatchery fish while protecting threatened, endangered, or other stocks of concern.

  9. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings [Bouchaud et al.; Stanley et al.], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [Bouchaud et al.], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [Bouchaud et al.; Stanley et al.] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
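
    The noise effect described above can be demonstrated against the Marchenko-Pastur law, which gives the eigenvalue range expected from purely random data. A sketch under the simplest i.i.d. assumption follows; eigenvalues of real return correlations that exceed the upper edge carry genuine structure, while the bulk below it is statistically indistinguishable from noise.

    ```python
    # Eigenvalues of a sample correlation matrix estimated from pure-noise
    # "returns", compared with the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2.
    import numpy as np

    rng = np.random.default_rng(2)
    N, T = 100, 400                       # assets, observations
    returns = rng.normal(size=(T, N))     # i.i.d. noise returns
    corr = np.corrcoef(returns, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)    # ascending order

    mp_edge = (1 + np.sqrt(N / T))**2
    print(f"largest eigenvalue: {eigvals[-1]:.3f}, MP upper edge: {mp_edge:.3f}")
    print("fraction of spectrum below MP edge:", float(np.mean(eigvals < mp_edge)))
    ```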

  10. Covariant constitutive relations and relativistic inhomogeneous plasmas

    SciTech Connect

    Gratus, J.; Tucker, R. W.

    2011-04-15

    The notion of a 2-point susceptibility kernel used to describe linear electromagnetic responses of dispersive continuous media in nonrelativistic phenomena is generalized to accommodate the constraints required of a causal formulation in spacetimes with background gravitational fields. In particular the concepts of spatial material inhomogeneity and temporal nonstationarity are formulated within a fully covariant spacetime framework. This framework is illustrated by recasting the Maxwell-Vlasov equations for a collisionless plasma in a form that exposes a 2-point electromagnetic susceptibility kernel in spacetime. This permits the establishment of a perturbative scheme for nonstationary inhomogeneous plasma configurations. Explicit formulae for the perturbed kernel are derived in both the presence and absence of gravitation using the general solution to the relativistic equations of motion of the plasma constituents. In the absence of gravitation this permits an analysis of collisionless damping in terms of a system of integral equations that reduce to standard Landau damping of Langmuir modes when the perturbation refers to a homogeneous stationary plasma configuration. It is concluded that constitutive modeling in terms of a 2-point susceptibility kernel in a covariant spacetime framework offers a natural extension of standard nonrelativistic descriptions of simple media and that its use for describing linear responses of more general dispersive media has wide applicability in relativistic plasma modeling.

  11. Methods for Handling Missing Secondary Respondent Data

    ERIC Educational Resources Information Center

    Young, Rebekah; Johnson, David

    2013-01-01

    Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…

  12. Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: a principled yet flexible approach.

    PubMed

    Tsiatis, Anastasios A; Davidian, Marie; Zhang, Min; Lu, Xiaomin

    2008-10-15

    There is considerable debate regarding whether and how covariate-adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase the precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected post hoc, and over the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a 'fishing expedition' for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to the existing methods. PMID:17960577

  13. Infilling missing hydrological data - methods and consequences

    NASA Astrophysics Data System (ADS)

    Bardossy, A.; Pegram, G. G.

    2013-12-01

    Hydrological observations are often incomplete - equipment malfunction, transmission errors and other technical problems lead to unwanted gaps in observation time series. Furthermore, due to financial and organizational problems, many observation networks are in continuous decline. As an ameliorating stratagem, short time gaps can be filled using information from other locations. The statistics of abandoned stations provide useful information for the process of extending records. In this contribution the authors present different methods for infilling gaps using:
    - nearest neighbours
    - simple and multiple linear regression
    - black box methods (fuzzy and neural nets)
    - Expectation Maximization
    - Copula based estimation
    The methods are used at different time scales for infilling precipitation, from daily through pentads and months to years. The copula based estimation provides not only an estimator for the expected value, but also a probability distribution for each of the missing values. Thus the method can be used for conditional simulation of realizations. Observed precipitation data from the Cape region in South Africa are used to illustrate the intercomparison of the methodologies. The consequences of using [or not using] infilling and data extension are illustrated using a hydrological modelling example from South-West Germany.
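
    As an illustration of the simplest of the listed methods, the sketch below fills gaps at a target station by linear regression on a neighbouring station, fitted on jointly observed days. The station data here are synthetic stand-ins, not the Cape region observations.

    ```python
    # Gap infilling by simple linear regression on a neighbouring station.
    import numpy as np

    rng = np.random.default_rng(3)
    neighbour = rng.gamma(2.0, 3.0, size=365)                  # neighbouring station
    target = 0.8 * neighbour + rng.normal(0, 1.0, size=365)    # correlated target
    target[rng.choice(365, size=40, replace=False)] = np.nan   # introduce gaps

    obs = ~np.isnan(target)
    slope, intercept = np.polyfit(neighbour[obs], target[obs], deg=1)
    filled = target.copy()
    filled[~obs] = slope * neighbour[~obs] + intercept         # regression estimate
    print(f"filled {int(np.sum(~obs))} gaps; fitted slope = {slope:.2f}")
    ```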

  14. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…

  15. Unsupervised segmentation of polarimetric SAR data using the covariance matrix

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Rama; Dubois, Pascale C.

    1992-01-01

    A method for unsupervised segmentation of polarimetric synthetic aperture radar (SAR) data into classes of homogeneous microwave polarimetric backscatter characteristics is presented. Classes of polarimetric backscatter are selected on the basis of a multidimensional fuzzy clustering of the logarithm of the parameters composing the polarimetric covariance matrix. The clustering procedure uses both polarimetric amplitude and phase information, is adapted to the presence of image speckle, and does not require an arbitrary weighting of the different polarimetric channels; it also provides a partitioning of each data sample used for clustering into multiple clusters. Given the classes of polarimetric backscatter, the entire image is classified using a maximum a posteriori polarimetric classifier. Four-look polarimetric SAR complex data of lava flows and of sea ice acquired by the NASA/JPL airborne polarimetric radar (AIRSAR) are segmented using this technique. The results are discussed and compared with those obtained using supervised techniques.

  16. The impact of covariate measurement error on risk prediction.

    PubMed

    Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna

    2015-07-10

    In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses' Health Study. PMID:25865315
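
    Finding (ii) is easy to reproduce in miniature. The sketch below, with an illustrative logistic outcome model and noise level rather than the paper's setup, compares the AUC achieved by a true covariate with that of an error-prone surrogate.

    ```python
    # Simulate a true covariate, an error-prone surrogate, and a binary outcome,
    # then compare AUCs via the Mann-Whitney statistic.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 20000
    x = rng.normal(size=n)                    # true covariate
    w = x + rng.normal(scale=1.0, size=n)     # surrogate with measurement error
    y = rng.random(n) < 1 / (1 + np.exp(-(-1 + 1.5 * x)))   # logistic outcome

    def auc(score, label):
        # Mann-Whitney form of the AUC: P(score of a case > score of a control)
        m = len(score)
        ranks = np.empty(m)
        ranks[np.argsort(score)] = np.arange(1, m + 1)
        n1 = label.sum()
        return (ranks[label].sum() - n1 * (n1 + 1) / 2) / (n1 * (m - n1))

    print(f"AUC with true covariate:      {auc(x, y):.3f}")
    print(f"AUC with error-prone version: {auc(w, y):.3f}")
    ```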

  17. Maternal near miss: an indicator for maternal health and maternal care.

    PubMed

    Chhabra, Pragti

    2014-07-01

    Maternal mortality is one of the important indicators used for the measurement of maternal health. Although the maternal mortality ratio remains high, maternal deaths in absolute numbers are rare in a community. To overcome this challenge, maternal near miss has been suggested as a complement to maternal death. It is defined as a pregnant or recently delivered woman who survived a complication during pregnancy, childbirth or within 42 days after termination of pregnancy. So far, various nomenclatures and criteria have been used to identify maternal near-miss cases, and there is a lack of uniform criteria for the identification of near miss. The World Health Organization recently published criteria based on markers of management and organ dysfunction, which would enable systematic data collection on near miss and development of summary estimates. The prevalence of near miss is higher in developing countries, and the causes are similar to those of maternal mortality, namely hemorrhage, hypertensive disorders, sepsis and obstructed labor. Reviewing near-miss cases provides significant information about the three delays in health seeking so that appropriate action can be taken. It is useful in identifying health system failures and in assessing the quality of maternal health care. Certain maternal near-miss indicators have been suggested to evaluate the quality of care. The near-miss approach will be an important tool in the evaluation and assessment of newer strategies for improving maternal health.

  18. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case when a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.

  19. Unknown input and state estimation for linear discrete-time systems with missing measurements and correlated noises

    NASA Astrophysics Data System (ADS)

    Shu, Huisheng; Zhang, Sijing; Shen, Bo; Liu, Yurong

    2016-07-01

    This paper is concerned with the problem of simultaneous input and state estimation for a class of linear discrete-time systems with missing measurements and correlated noises. The missing measurements occur in a random way and are governed by a series of mutually independent random variables obeying a certain Bernoulli distribution. The process and measurement noises under consideration are correlated at the same time instant. Our attention is focused on the design of recursive estimators for both input and state such that, for all missing measurements and correlated noises, the estimators are unbiased and the estimation error covariances are minimized. This objective is achieved using direct algebraic operation and the design algorithm for the desired estimators is given. Finally, an illustrative example is presented to demonstrate the effectiveness of the proposed design scheme.
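
    The missing-measurement mechanism can be sketched with a scalar filter: each measurement arrives with a Bernoulli probability, and when it is absent only the time update runs, so the error covariance grows. This simplified sketch omits the unknown input and the same-instant noise correlation that the paper's full design also handles.

    ```python
    # Scalar Kalman filter with Bernoulli missing measurements (simplified
    # illustration of the setting above, not the paper's estimator).
    import numpy as np

    rng = np.random.default_rng(5)
    a, h, q, r, lam = 0.95, 1.0, 0.1, 0.4, 0.8   # lam = arrival probability

    x, xhat, p = 0.0, 0.0, 1.0
    for t in range(200):
        x = a * x + rng.normal(scale=np.sqrt(q))          # true state
        xhat, p = a * xhat, a * p * a + q                 # time update
        if rng.random() < lam:                            # measurement arrives
            y = h * x + rng.normal(scale=np.sqrt(r))
            k = p * h / (h * p * h + r)                   # Kalman gain
            xhat, p = xhat + k * (y - h * xhat), (1 - k * h) * p
        # when the measurement is missing, p simply keeps growing
    print(f"final error covariance: {p:.3f}")
    ```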

  20. In Search of Missing Baryons

    NASA Astrophysics Data System (ADS)

    Crede, Volker

    2009-11-01

    Nucleons are complex systems of confined quarks and exhibit characteristic spectra of excited states. Highly excited nucleon states are sensitive to details of quark confinement which is poorly understood within Quantum Chromodynamics (QCD), the fundamental theory of strong interactions. Thus, measurements of excited nucleon states and the corresponding determination of their properties are needed to come to a better understanding of how confinement works in nucleons. However, the excited states of the nucleon cannot simply be inferred from cleanly separated spectral lines. Quite the contrary, a spectral analysis in nucleon resonance physics is challenging because of the fact that these resonances are broadly overlapping states which decay into a multitude of final states involving mesons and baryons. To provide a consistent and complete picture of an individual nucleon resonance, the various possible production and decay channels must eventually be treated in a multi-channel framework that permits separating resonance from background contributions. A long-standing question in hadron physics is whether the large number of so-called missing baryon resonances really exists, i.e. experimentally not established baryon states which are predicted by all quark models based on three constituent quark effective degrees of freedom. It is important to emphasize that nearly all existing data on non-strange production of baryon resonances result from Nπ scattering experiments. However, quark models predict strong couplings of these missing states to γp rendering the study of these resonances in photo-induced reactions a very promising approach. Several new states have in fact been proposed in recent experiments. Current and upcoming experiments at Jefferson Laboratory will determine polarization (or spin) observables for photoproduction processes involving baryon resonances. Differences between the predictions for these observables can be large, and so conversely they provide

  1. Toward a Mexican eddy covariance network for carbon cycle science

    NASA Astrophysics Data System (ADS)

    Vargas, Rodrigo; Yépez, Enrico A.

    2011-09-01

    First Annual MexFlux Principal Investigators Meeting; Hermosillo, Sonora, Mexico, 4-8 May 2011; The carbon cycle science community has organized a global network, called FLUXNET, to measure the exchange of energy, water, and carbon dioxide (CO2) between the ecosystems and the atmosphere using the eddy covariance technique. This network has provided unprecedented information for carbon cycle science and global climate change but is mostly represented by study sites in the United States and Europe. Thus, there is an important gap in measurements and understanding of ecosystem dynamics in other regions of the world that are seeing a rapid change in land use. Researchers met under the sponsorship of Red Temática de Ecosistemas and Consejo Nacional de Ciencia y Tecnologia (CONACYT) to discuss strategies to establish a Mexican eddy covariance network (MexFlux) by identifying researchers, study sites, and scientific goals. During the meeting, attendees noted that 10 study sites have been established in Mexico with more than 30 combined years of information. Study sites span from new sites installed during 2011 to others with 6 to 9 years of measurements. The sites with the longest measurement records are located in Baja California Sur (established by Walter Oechel in 2002) and Sonora (established by Christopher Watts in 2005); both are semiarid ecosystems. MexFlux sites represent a variety of ecosystem types, including Mediterranean and sarcocaulescent shrublands in Baja California; oak woodland, subtropical shrubland, tropical dry forest, and a grassland in Sonora; tropical dry forests in Jalisco and Yucatan; a managed grassland in San Luis Potosi; and a managed pine forest in Hidalgo. Sites are maintained with individual researchers' funds from Mexican government agencies (e.g., CONACYT) and international collaborations, but no coordinated funding exists for a long-term program.

  2. Do goldfish miss the fundamental?

    NASA Astrophysics Data System (ADS)

    Fay, Richard R.

    2003-10-01

    The perception of harmonic complexes was studied in goldfish using classical respiratory conditioning and a stimulus generalization paradigm. Groups of animals were initially conditioned to several harmonic complexes with a fundamental frequency (f0) of 100 Hz. In some cases the f0 component was present, and in other cases, the f0 component was absent. After conditioning, animals were tested for generalization to novel harmonic complexes having different f0's, some with f0 present and some with f0 absent. Generalization gradients always peaked at 100 Hz, indicating that the pitch value of the conditioning complexes was consistent with the f0, whether or not f0 was present in the conditioning or test complexes. Thus, goldfish do not miss the fundamental with respect to a pitch-like perceptual dimension. However, generalization gradients tended to have different skirt slopes for the f0-present and f0-absent conditioning and test stimuli. This suggests that goldfish distinguish between f0-present/absent stimuli, probably on the basis of a timbre-like perceptual dimension. These and other results demonstrate that goldfish respond to complex sounds as if they possessed perceptual dimensions similar to pitch and timbre as defined for human and other vertebrate listeners. [Work supported by NIH/NIDCD.]

  3. Covariates of intravenous paracetamol pharmacokinetics in adults

    PubMed Central

    2014-01-01

    Background: Pharmacokinetic estimates for intravenous paracetamol in individual adult cohorts differ to a certain extent, and understanding the covariates of these differences may guide dose individualization. In order to assess covariate effects on intravenous paracetamol disposition in adults, pharmacokinetic data from discrete studies were pooled. Methods: This pooled analysis was based on 7 studies, resulting in 2755 time-concentration observations in 189 adults (mean age 46 SD 23 years; weight 73 SD 13 kg) given intravenous paracetamol. The effects of size, age, pregnancy and other clinical settings (intensive care, high dependency, orthopaedic or abdominal surgery) on clearance and volume of distribution were explored using non-linear mixed effects models. Results: Paracetamol disposition was best described using normal fat mass (NFM) with allometric scaling as a size descriptor. A three-compartment linear disposition model revealed that the population parameter estimates (between-subject variability, %) were central volume (V1) 24.6 (55.5%) L/70 kg with peripheral volumes of distribution V2 23.1 (49.6%) L/70 kg and V3 30.6 (78.9%) L/70 kg. Clearance (CL) was 16.7 (24.6%) L/h/70 kg and inter-compartment clearances were Q2 67.3 (25.7%) L/h/70 kg and Q3 2.04 (71.3%) L/h/70 kg. Clearance and V2 decreased only slightly with age. Sex differences in clearance were minor and of no significance. Clearance, relative to median values, was increased during pregnancy (FPREG = 1.14) and decreased during abdominal surgery (FABDCL = 0.715). Patients undergoing orthopaedic surgery had a reduced V2 (FORTHOV = 0.649), while those in intensive care had increased V2 (FICV = 1.51). Conclusions: Size and age are important covariates for paracetamol pharmacokinetics, explaining approximately 40% of clearance and V2 variability. Dose individualization in adult subpopulations would achieve little benefit in the scenarios explored. PMID:25342929

  4. Generation of phase-covariant quantum cloning

    SciTech Connect

    Karimipour, V.; Rezakhani, A.T.

    2002-11-01

    It is known that in phase-covariant quantum cloning, the equatorial states on the Bloch sphere can be cloned with a fidelity higher than the optimal bound established for universal quantum cloning. We generalize this concept to include other states on the Bloch sphere with a definite z component of spin. It is shown that once we know the z component, we can always clone a state with a fidelity higher than the universal value and that of equatorial states. We also make a detailed study of the entanglement properties of the output copies and show that the equatorial states are the only states that give rise to a separable density matrix for the outputs.

  5. Baryon Spectrum Analysis using Covariant Constraint Dynamics

    NASA Astrophysics Data System (ADS)

    Whitney, Joshua; Crater, Horace

    2012-03-01

    The energy spectrum of the baryons is determined by treating each of them as a three-body system with the interacting forces coming from a set of two-body potentials that depend on both the distance between the quarks and the spin and orbital angular momentum coupling terms. The Two-Body Dirac equations of constraint dynamics derived by Crater and Van Alstine, matched with the quasipotential formalism of Todorov, are used as the underlying two-body formalism, together with the three-body constraint formalism of Sazdjian, to integrate the three two-body equations into a single relativistically covariant three-body equation for the bound-state energies. The results are analyzed and compared to experiment using a best-fit method and several different algorithms, including a gradient approach and a Monte Carlo method. Results for all well-known baryons are presented and compared to experiment, with good accuracy.

  6. Covariant Lyapunov analysis of chaotic Kolmogorov flows.

    PubMed

    Inubushi, Masanobu; Kobayashi, Miki U; Takehiro, Shin-ichi; Yamada, Michio

    2012-01-01

    Hyperbolicity is an important concept in dynamical system theory; however, we know little about the hyperbolicity of concrete physical systems, including fluid motions governed by the Navier-Stokes equations. Here, we study numerically the hyperbolicity of the Navier-Stokes equation on a two-dimensional torus (Kolmogorov flows) using the method of covariant Lyapunov vectors developed by Ginelli et al. [Phys. Rev. Lett. 99, 130601 (2007)]. We calculate the angle between the local stable and unstable manifolds along an orbit of a chaotic solution to evaluate the hyperbolicity. We find that the attractor of chaotic Kolmogorov flows is hyperbolic at small Reynolds numbers, but that smaller angles between the local stable and unstable manifolds are observed at larger Reynolds numbers, and the attractor appears to be nonhyperbolic at certain Reynolds numbers. Also, we observed some relations between these hyperbolic properties and physical properties such as the time correlation of the vorticity and the energy dissipation rate.

  7. EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.

    SciTech Connect

    HERMAN,M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H. CHO, Y.-S.

    2007-04-22

    The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing the proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large-scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.

  8. Covariant chronogeometry and extreme distances: Elementary particles

    PubMed Central

    Segal, I. E.; Jakobsen, H. P.; Ørsted, B.; Paneitz, S. M.; Speh, B.

    1981-01-01

    We study a variant of elementary particle theory in which Minkowski space, M0, is replaced by a natural alternative, the unique four-dimensional manifold M̄ with comparable properties of causality and symmetry. Free particles are considered to be associated (i) with positive-energy representations in bundles of prescribed spin over M̄ of the group of causality-preserving transformations on M̄ (or its mass-conserving subgroup) and (ii) with corresponding wave equations. In this study these bundles, representations, and equations are detailed, and some of their basic features are developed in the cases of spins 0 and ½. Preliminaries to a general study are included; issues of covariance, unitarity, and positivity of the energy are treated; appropriate quantum numbers are indicated; and possible physical applications are discussed. PMID:16593075

  9. Covariant entropy bound and loop quantum cosmology

    SciTech Connect

    Ashtekar, Abhay; Wilson-Ewing, Edward

    2008-09-15

    We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.

  10. Covariance of Lucky Images: Performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2016-09-01

    The covariance of ground-based Lucky Images (COELI) is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper we analyze the relevance of the number of processed frames, the frame quality, the atmospheric conditions and the detection noise to companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.

  11. A covariant treatment of cosmic parallax

    SciTech Connect

    Räsänen, Syksy

    2014-03-01

    The Gaia satellite will soon probe parallax on cosmological distances. Using the covariant formalism and considering the angle between a pair of sources, we find parallax for both spacelike and timelike separation between observation points. Our analysis includes both intrinsic parallax and parallax due to observer motion. We propose a consistency condition that tests the FRW metric using the parallax distance and the angular diameter distance. This test is purely kinematic and relies only on geometrical optics, it is independent of matter content and its relation to the spacetime geometry. We study perturbations around the FRW model, and find that they should be taken into account when analysing observations to determine the parallax distance.

  12. Conformal killing tensors and covariant Hamiltonian dynamics

    SciTech Connect

    Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.

  13. Covariant density functional theory for magnetic rotation

    NASA Astrophysics Data System (ADS)

    Peng, J.; Meng, J.; Ring, P.; Zhang, S. Q.

    2008-08-01

    The tilted axis cranking formalism is implemented in relativistic mean field (RMF) theory. It is used for a microscopic description of magnetic rotation in the framework of covariant density functional theory. We assume that the rotational axis is in the xz plane and consider systems with the two symmetries P (space reflection) and PyT (a combination of a reflection in the y direction and time reversal). A computer code based on these symmetries is developed, and first applications are discussed for the nucleus 142Gd: the rotational band based on the configuration $\pi h_{11/2}^{2} \otimes \nu h_{11/2}^{-2}$ is investigated in a fully microscopic and self-consistent way. The results are compared with available data, such as spectra and electromagnetic transition ratios B(M1)/B(E2). The relation between rotational velocity and angular momentum is discussed in detail, together with the shears mechanism characteristic of magnetic rotation.

  14. Covariant generalization of cosmological perturbation theory

    SciTech Connect

    Enqvist, Kari; Hoegdahl, Janne; Nurmi, Sami; Vernizzi, Filippo

    2007-01-15

    We present an approach to cosmological perturbations based on a covariant perturbative expansion between two worldlines in the real inhomogeneous universe. As an application, at an arbitrary order we define an exact scalar quantity which describes the inhomogeneities in the number of e-folds on uniform density hypersurfaces and which is conserved on all scales for a barotropic ideal fluid. We derive a compact form for its conservation equation at all orders and assign it a simple physical interpretation. To make a comparison with the standard perturbation theory, we develop a method to construct gauge-invariant quantities in a coordinate system at arbitrary order, which we apply to derive the form of the nth order perturbation in the number of e-folds on uniform density hypersurfaces and its exact evolution equation. On large scales, this provides the gauge-invariant expression for the curvature perturbation on uniform density hypersurfaces and its evolution equation at any order.

  15. A covariance analysis algorithm for interconnected systems

    NASA Technical Reports Server (NTRS)

    Cheng, Victor H. L.; Curley, Robert D.; Lin, Ching-An

    1987-01-01

    A covariance analysis algorithm for propagation of signal statistics in arbitrarily interconnected nonlinear systems is presented and applied to six-degree-of-freedom systems. The algorithm uses statistical linearization theory to linearize the nonlinear subsystems, and the resulting linearized subsystems are considered in the original interconnection framework for propagation of the signal statistics. Some nonlinearities commonly encountered in six-degree-of-freedom space-vehicle models are referred to in order to illustrate the limitations of this method, along with problems not encountered in standard deterministic simulation analysis. Moreover, the performance of the algorithm is exhibited numerically by comparing results from such techniques to Monte Carlo analysis results, both applied to a simple two-dimensional space-intercept problem.
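
    The core comparison, linear covariance propagation checked against Monte Carlo, can be sketched for a toy linear system. (The paper's subsystems are nonlinear and first pass through statistical linearization, which this sketch omits; the dynamics and noise levels below are illustrative.)

    ```python
    # Linear covariance propagation P <- A P A^T + Q, checked against a
    # Monte Carlo ensemble pushed through the same dynamics.
    import numpy as np

    rng = np.random.default_rng(6)
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])          # discrete-time dynamics
    Q = np.diag([0.0, 0.01])            # process noise covariance

    P = np.eye(2)
    for _ in range(50):                 # analytic covariance propagation
        P = A @ P @ A.T + Q

    X = rng.multivariate_normal(np.zeros(2), np.eye(2), size=20000)
    for _ in range(50):                 # Monte Carlo ensemble propagation
        X = X @ A.T + rng.multivariate_normal(np.zeros(2), Q, size=len(X))

    print("analytic P:\n", P.round(3))
    print("Monte Carlo P:\n", np.cov(X.T).round(3))
    ```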

  16. [Systematic review of near miss maternal morbidity].

    PubMed

    Souza, João Paulo; Cecatti, José Guilherme; Parpinelli, Mary Angela; de Sousa, Maria Helena; Serruya, Suzanne Jacob

    2006-02-01

    This systematic literature review on maternal near miss aims to evaluate data on the incidence and different operational definitions of near miss. An electronic search was performed in databases of scientific journals and also in the references of the identified studies. Initially, 1,247 studies were identified, 35 of which were comprehensively assessed, with 17 excluded and 18 included. Review of reference lists from these articles identified an additional 20 articles, bringing the total to 38 included studies: 20 adopting definitions of near miss related to management complexity, 6 to organ dysfunction, 2 with a mixed definition, and 10 according to symptoms, signs, or specific clinical entities. The mean near-miss ratio was 8.2/1,000 live births, the maternal mortality index was 6.3%, and the case/fatality ratio was 16:1. The study concluded that there was a trend towards higher incidence of near miss in developing countries and when using near-miss definitions based on organ dysfunction. The study of near-miss maternal morbidity can help improve obstetric care and support the struggle against maternal mortality.

  17. A Product Partition Model With Regression on Covariates

    PubMed Central

    Müller, Peter; Quintana, Fernando; Rosner, Gary L.

    2011-01-01

    We propose a probability model for random partitions in the presence of covariates. In other words, we develop a model-based clustering algorithm that exploits available covariates. The motivating application is predicting time to progression for patients in a breast cancer trial. We proceed by reporting a weighted average of the responses of clusters of earlier patients. The weights should be determined by the similarity of the new patient’s covariate with the covariates of patients in each cluster. We achieve the desired inference by defining a random partition model that includes a regression on covariates. Patients with similar covariates are a priori more likely to be clustered together. Posterior predictive inference in this model formalizes the desired prediction. We build on product partition models (PPM). We define an extension of the PPM to include a regression on covariates by including in the cohesion function a new factor that increases the probability of experimental units with similar covariates to be included in the same cluster. We discuss implementations suitable for any combination of continuous, categorical, count, and ordinal covariates. An implementation of the proposed model as R-package is available for download. PMID:21566678

  18. Performance of internal covariance estimators for cosmic shear correlation functions

    SciTech Connect

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
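
    A generic delete-one jackknife of the kind tested above can be sketched in a few lines; the synthetic vectors below stand in for a binned shear correlation function measured on n sub-samples of a survey footprint.

    ```python
    # Delete-one jackknife covariance of a vector statistic.
    import numpy as np

    rng = np.random.default_rng(7)
    n, nbins = 100, 5
    samples = rng.normal(size=(n, nbins))       # statistic measured per sub-sample

    mean = samples.mean(axis=0)
    # leave-one-out means: drop sub-sample i and average the remaining n-1
    loo = (mean * n - samples) / (n - 1)
    d = loo - loo.mean(axis=0)
    cov_jk = (n - 1) / n * d.T @ d              # jackknife covariance estimate
    print("jackknife covariance diagonal:", np.diag(cov_jk).round(4))
    ```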

  19. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGESBeta

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.

  20. Management of a congenitally missing maxillary central incisor. A case study.

    PubMed

    Tichler, Howard M; Abraham, Jenny E

    2007-03-01

    When a maxillary lateral incisor is missing, often the treatment options can be clearly defined, that is, substitute an adjacent tooth for the missing one; open the space for an implant, a bonded bridge or fixed bridge. When a maxillary central incisor is missing and the space for the tooth is absent, the treatment choices become complicated, especially in a growing child. There must be multi-disciplinary coordination among the restorative dentist, the oral surgeon or periodontist, and the orthodontist to obtain the optimum result. At the initiation of treatment, this information must be relayed and the treatment plan agreed upon by the patient or the parents of the patient.

  1. Point pattern analysis with spatially varying covariate effects, applied to the study of cerebrovascular deaths.

    PubMed

    Pinto Junior, Jony Arrais; Gamerman, Dani; Paez, Marina Silva; Fonseca Alves, Regina Helena

    2015-03-30

    This article proposes a modeling approach for handling spatial heterogeneity present in the study of the geographical pattern of deaths due to cerebrovascular disease. The framework involves a point pattern analysis with components exhibiting spatial variation. Preliminary studies indicate that mortality from this disease and the effect of relevant covariates do not exhibit uniform geographic distribution. Our model extends a previously proposed model in the literature that uses spatial and non-spatial variables by allowing for spatial variation of the effect of non-spatial covariates. A number of relative risk indicators are derived by comparing different covariate levels, different geographic locations, or both. The methodology is applied to the study of the geographical pattern of cerebrovascular deaths in the city of Rio de Janeiro. The results compare well against existing alternatives, including fixed covariate effects. Our model is able to capture and highlight important data information that would not be noticed otherwise, providing information that is required for appropriate health decision-making.

  2. Testing of NASA LaRC Materials under MISSE 6 and MISSE 7 Missions

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S.

    2009-01-01

    The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment for several months. MISSE missions provide an opportunity for developing space-qualifiable materials. Two lasers and a few optical components from NASA Langley Research Center (LaRC) were included in the MISSE 6 mission for long-term exposure. MISSE 6 items were characterized and packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability under launch conditions. MISSE 6 was transported to the International Space Station (ISS) via STS 123 on March 11, 2008. The astronauts successfully attached the PEC to external handrails of the ISS and opened the PEC for long-term exposure to the space environment. The current plan is to bring the MISSE 6 PEC back to Earth via the STS 128 mission scheduled for launch in August 2009. Currently, preparations for launching the MISSE 7 mission are progressing. Laser and lidar components assembled on a flight-worthy platform are included from NASA LaRC. MISSE 7 is scheduled to launch on the STS 129 mission. This paper will briefly review recent efforts on the MISSE 6 and MISSE 7 missions at NASA Langley Research Center (LaRC).

  3. Proposed interventions to decrease the frequency of missed test results.

    PubMed

    Wahls, Terry L; Cram, Peter

    2009-09-01

    Numerous studies have identified that delays in diagnosis related to the mishandling of abnormal test results are an important contributor to diagnostic errors. Factors contributing to missed results include organizational factors, provider factors and patient-related factors. At a continuing medical education conference on diagnostic error in 2008, attendees participated in two focus groups dedicated to identifying strategies to lower the frequency of missed results. The recommendations were reviewed and summarized: improved standardization of the steps involved in the flow of test result information, greater involvement of patients to ensure the follow-up of test results, and systems re-engineering to improve the management and presentation of data. Focusing the initial interventions on the specific tests that have been identified as high risk for adverse impact on patient outcomes, such as tests associated with a possible malignancy or acute coronary syndrome, will likely have the most significant impact on clinical outcomes and patient satisfaction with care. PMID:19669920

  4. Nuclear Forensics Analysis with Missing and Uncertain Data

    DOE PAGESBeta

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
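
    The imputation idea, though not the MCBDG code itself, can be sketched as follows: draw each missing entry from the empirical distribution of the observed values in its column, and repeat to obtain multiple completed instances of the table for downstream training.

    ```python
    # Sketch of empirical-distribution imputation with multiple completed copies.
    import numpy as np

    rng = np.random.default_rng(8)
    data = rng.normal(size=(50, 4))
    data[rng.random(data.shape) < 0.6] = np.nan   # ~60% missing, as in SFCOMPO

    def complete(table, rng):
        out = table.copy()
        for j in range(table.shape[1]):
            col = table[:, j]
            observed = col[~np.isnan(col)]
            miss = np.isnan(col)
            out[miss, j] = rng.choice(observed, size=miss.sum())  # empirical draw
        return out

    versions = [complete(data, rng) for _ in range(10)]  # multiple instances
    print(f"completed copies: {len(versions)}; "
          f"NaNs left: {int(np.isnan(versions[0]).sum())}")
    ```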

  5. Methods for estimation of covariance matrices and covariance components for the Hanford Waste Vitrification Plant Process

    SciTech Connect

    Bryan, M.F.; Piepel, G.F.; Simpson, D.B.

    1996-03-01

    The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to vitrify transuranic and high-level radioactive waste in borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimation of variance components; those based on the statistical method analysis of variance are discussed, as are the strengths and weaknesses of this approach.
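
    As an illustration of the ANOVA-based, data-driven estimation the document discusses, the following sketch computes one-way method-of-moments estimates of batch-to-batch and within-batch variance components for a single property, assuming balanced hypothetical data (equal replicates per batch).

    ```python
    import numpy as np

    def variance_components(samples):
        """One-way ANOVA (method-of-moments) estimates of between-batch and
        within-batch variance components from a (batches x replicates) array."""
        samples = np.asarray(samples, float)
        a, n = samples.shape                  # a batches, n replicates each
        batch_means = samples.mean(axis=1)
        msb = n * np.sum((batch_means - samples.mean()) ** 2) / (a - 1)
        msw = np.sum((samples - batch_means[:, None]) ** 2) / (a * (n - 1))
        var_within = msw
        var_between = max((msb - msw) / n, 0.0)  # truncate negative estimates
        return var_between, var_within
    ```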

  6. Covariant Spectator Theory: Foundations and Applications A Mini-Review of the Covariant Spectator Theory

    SciTech Connect

    Alfred Stadler, Franz Gross

    2010-10-01

    We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a φ⁴-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.

  7. Assessing Trait Covariation and Morphological Integration on Phylogenies Using Evolutionary Covariance Matrices

    PubMed Central

    Adams, Dean C.; Felice, Ryan N.

    2014-01-01

    Morphological integration describes the degree to which sets of organismal traits covary with one another. Morphological covariation may be evaluated at various levels of biological organization, but when characterizing such patterns across species at the macroevolutionary level, phylogeny must be taken into account. We outline an analytical procedure based on the evolutionary covariance matrix that allows species-level patterns of morphological integration among structures defined by sets of traits to be evaluated while accounting for the phylogenetic relationships among taxa, providing a flexible and robust complement to related phylogenetic independent contrasts based approaches. Using computer simulations under a Brownian motion model we show that statistical tests based on the approach display appropriate Type I error rates and high statistical power for detecting known levels of integration, and these trends remain consistent for simulations using different numbers of species, and for simulations that differ in the number of trait dimensions. Thus, our procedure provides a useful means of testing hypotheses of morphological integration in a phylogenetic context. We illustrate the utility of this approach by evaluating evolutionary patterns of morphological integration in head shape for a lineage of Plethodon salamanders, and find significant integration between cranial shape and mandible shape. Finally, computer code written in R for implementing the procedure is provided. PMID:24728003

  8. Some Activities of MISSE 6 Mission

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S.

    2009-01-01

    The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment for several months. In this paper, a few laser and optical elements from NASA Langley Research Center (LaRC) that have been flown on the MISSE 6 mission will be discussed. These items were characterized and packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability under launch conditions. Subsequently, the MISSE 6 PEC was transported by the STS-123 mission to the International Space Station (ISS) on March 11, 2008. The astronauts successfully attached the PEC to external handrails and opened the PEC for long-term exposure to the space environment. The plan is to retrieve the MISSE 6 PEC by the STS-128 mission in August 2009.

  9. MISSE 6-Testing Materials in Space

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S; Kinard, William H.

    2008-01-01

    The objective of the Materials International Space Station Experiment (MISSE) is to study the performance of novel materials when subjected to the synergistic effects of the harsh space environment by placing them in the space environment for several months. In this paper, a few materials and components from NASA Langley Research Center (LaRC) that have been flown on the MISSE 6 mission will be discussed. These include laser and optical elements for photonic devices. The pre-characterized MISSE 6 materials were packed inside a ruggedized Passive Experiment Container (PEC) that resembles a suitcase. The PEC was tested for survivability under launch conditions. Subsequently, the MISSE 6 PEC was transported by the STS-123 mission to the International Space Station (ISS) on March 11, 2008. The astronauts successfully attached the PEC to external handrails and opened the PEC for long-term exposure to the space environment.

  10. Diet History Questionnaire II: Missing & Error Codes

    Cancer.gov

    A missing code indicates that the respondent skipped a question when a response was required. An error character indicates that the respondent marked two or more responses to a question where only one answer was appropriate.

  11. ADHD More Often Missed in Minority Kids

    MedlinePlus

    https://medlineplus.gov/news/fullstory_160571.html Study found ... percentage of black children show the symptoms of attention-deficit/hyperactivity disorder (ADHD) than white kids, they are less likely ...

  12. Missed Radiation Therapy and Cancer Recurrence

    Cancer.gov

    Patients who miss radiation therapy sessions during cancer treatment have an increased risk of their disease returning, even if they eventually complete their course of radiation treatment, according to a new study.

  13. Discovery of a missing disease spreader

    NASA Astrophysics Data System (ADS)

    Maeno, Yoshiharu

    2011-10-01

    This study presents a method to discover an outbreak of an infectious disease in a region for which data are missing, but which is at work as a disease spreader. Node discovery for the spread of an infectious disease is defined as discriminating between the nodes which are neighboring to a missing disease spreader node, and the rest, given a dataset on the number of cases. The spread is described by stochastic differential equations. A perturbation theory quantifies the impact of the missing spreader on the moments of the number of cases. Statistical discriminators examine the mid-body or tail-ends of the probability density function, and search for the disturbance from the missing spreader. They are tested with computationally synthesized datasets, and applied to the SARS outbreak and flu pandemic.

  14. Contextualized Network Analysis: Theory and Methods for Networks with Node Covariates

    NASA Astrophysics Data System (ADS)

    Binkiewicz, Norbert M.

    Biological and social systems consist of myriad interacting units. The interactions can be intuitively represented in the form of a graph or network. Measurements of these graphs can reveal the underlying structure of these interactions, which provides insight into the systems that generated the graphs. Moreover, in applications such as neuroconnectomics, social networks, and genomics, graph data is accompanied by contextualizing measures on each node. We leverage these node covariates to help uncover latent communities, using a modification of spectral clustering. Statistical guarantees are provided under a joint mixture model called the node contextualized stochastic blockmodel, including a bound on the mis-clustering rate. For most simulated conditions, covariate assisted spectral clustering yields superior results relative to both regularized spectral clustering without node covariates and an adaptation of canonical correlation analysis. We apply covariate assisted spectral clustering to large brain graphs derived from diffusion MRI, using the node locations or neurological regions as covariates. In both cases, covariate assisted spectral clustering yields clusters that are easier to interpret neurologically. A low rank update algorithm is developed to reduce the computational cost of determining the tuning parameter for covariate assisted spectral clustering. As simulations demonstrate, the low rank update algorithm increases the speed of covariate assisted spectral clustering up to ten-fold, while practically matching the clustering performance of the standard algorithm. Graphs with node attributes are sometimes accompanied by ground truth labels that align closely with the latent communities in the graph. We consider the example of a mouse retina neuron network accompanied by the neuron spatial location and neuronal cell types. In this example, the neuronal cell type is considered a ground truth label. Current approaches for defining neuronal cell type vary
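
    A rough sketch of one simplified variant of covariate assisted spectral clustering as described above: a covariate similarity term is added to the regularized graph Laplacian before the usual spectral steps. The choice of the tuning parameter alpha, which the dissertation addresses with a low rank update algorithm, is omitted here.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def casc(A, X, k, alpha=1.0):
        """Covariate assisted spectral clustering (simplified variant):
        spectral clustering on the regularized graph Laplacian plus a
        covariate similarity term alpha * X X^T."""
        deg = A.sum(axis=1)
        tau = deg.mean()                          # average-degree regularizer
        d = 1.0 / np.sqrt(deg + tau)
        L_tau = d[:, None] * A * d[None, :]       # D_tau^{-1/2} A D_tau^{-1/2}
        M = L_tau + alpha * (X @ X.T)             # a PSD variant uses L_tau @ L_tau
        _, vecs = np.linalg.eigh(M)               # eigenvalues in ascending order
        U = vecs[:, -k:]                          # top-k eigenvectors
        U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
    ```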

  15. Analysis of longitudinal data from animals with missing values using SPSS.

    PubMed

    Duricki, Denise A; Soleman, Sara; Moon, Lawrence D F

    2016-06-01

    Testing of therapies for disease or injury often involves the analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly when some data are missing), yet they are not used widely by preclinical researchers. Here we provide an easy-to-use protocol for the analysis of longitudinal data from animals, and we present a click-by-click guide for performing suitable analyses using the statistical package IBM SPSS Statistics software (SPSS). We guide readers through the analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. If a few data points are missing, as in this example data set (for example, because of animal dropout), repeated-measures analysis of covariance may fail to detect a treatment effect. An alternative analysis method, such as the use of linear models (with various covariance structures), and analysis using restricted maximum likelihood estimation (to include all available data) can be used to better detect treatment effects. This protocol takes 2 h to carry out.
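
    For readers working outside SPSS, a roughly analogous linear mixed model analysis can be sketched in Python with statsmodels; the file and column names below are hypothetical. Restricted maximum likelihood uses every observed row, so animals with occasional missing weeks still contribute their available data.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per animal per testing week,
    # with columns: animal, group (treatment), week, score (outcome).
    df = pd.read_csv("stroke_scores_long.csv")

    # Linear mixed model with a random intercept and slope per animal.
    # Unlike complete-case repeated-measures ANOVA, all available rows
    # are used, which can recover treatment effects despite dropout.
    model = smf.mixedlm("score ~ week * group",
                        data=df.dropna(subset=["score"]),
                        groups="animal", re_formula="~week")
    result = model.fit(reml=True)
    print(result.summary())
    ```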

  16. Winnicott and Lacan: a missed encounter?

    PubMed

    Vanier, Alain

    2012-04-01

    Winnicott was able to say that Lacan's paper on the mirror stage "had certainly influenced" him, while Lacan argued that he found his object a in Winnicott's transitional object. By following the development of their personal relations, as well as of their theoretical discussions, it is possible to argue that this was a missed encounter--yet a happily missed one, since the misunderstandings of their theoretical exchanges allowed each of them to clarify concepts otherwise difficult to discern.

  17. Estimated Environmental Exposures for MISSE-7B

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Moore, Chip; Norwood, Joseph K.; Henrie, Ben; DeGroh, Kim

    2012-01-01

    This paper details the 18-month environmental exposure for Materials International Space Station Experiment 7B (MISSE-7B) ram and wake sides. This includes atomic oxygen, ultraviolet radiation, particulate radiation, thermal cycling, meteoroid/space debris impacts, and observed contamination. Atomic oxygen fluence was determined by measured mass and thickness loss of polymers of known reactivity. Diodes sensitive to ultraviolet light actively measured solar radiation incident on the experiment. Comparisons to earlier MISSE flights are discussed.

  18. A Bayesian Semiparametric Multivariate Causal Model, with Automatic Covariate Selection and for Possibly-Nonignorable Missing Data

    ERIC Educational Resources Information Center

    Karabatsos, G.; Walker, S.G.

    2010-01-01

    Causal inference is central to educational research, where in data analysis the aim is to learn the causal effects of educational treatments on academic achievement, to evaluate educational policies and practice. Compared to a correlational analysis, a causal analysis enables policymakers to make more meaningful statements about the efficacy of…

  19. Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies

    ERIC Educational Resources Information Center

    Chen, Jianshen; Kaplan, David

    2015-01-01

    Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…

  20. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  1. Universal and phase-covariant superbroadcasting for mixed qubit states

    SciTech Connect

    Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara; Perinotti, Paolo

    2006-10-15

    We describe a general framework to study covariant symmetric broadcasting maps for mixed qubit states. We explicitly derive the optimal N→M superbroadcasting maps, achieving optimal purification of the single-site output copy, in both the universal and phase-covariant cases. We also study the bipartite entanglement properties of the superbroadcast states.

  2. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  3. The Regression Trunk Approach to Discover Treatment Covariate Interaction

    ERIC Educational Resources Information Center

    Dusseldorp, Elise; Meulman, Jacqueline J.

    2004-01-01

    The regression trunk approach (RTA) is an integration of regression trees and multiple linear regression analysis. In this paper RTA is used to discover treatment covariate interactions, in the regression of one continuous variable on a treatment variable with "multiple" covariates. The performance of RTA is compared to the classical method of…

  4. Part Marking and Identification Materials for MISSE

    NASA Technical Reports Server (NTRS)

    Roxby, Donald; Finckenor, Miria M.

    2008-01-01

    The Materials on International Space Station Experiment (MISSE) is being conducted with funding from NASA and the U.S. Department of Defense, in order to evaluate candidate materials and processes for flight hardware. MISSE modules include test specimens used to validate NASA technical standards for part markings exposed to harsh environments in low-Earth orbit and space, including: atomic oxygen, ultraviolet radiation, thermal vacuum cycling, and meteoroid and orbital debris impact. Marked test specimens are evaluated and then mounted in a passive experiment container (PEC) that is affixed to an exterior surface on the International Space Station (ISS). They are exposed to atomic oxygen and/or ultraviolet radiation for a year or more before being retrieved and reevaluated. Criteria include percent contrast, axial uniformity, print growth, error correction, and overall grade. MISSE 1 and 2 (2001-2005), MISSE 3 and 4 (2006-2007), and MISSE 5 (2005-2006) have been completed to date. Acceptable results were found for test specimens marked with Data Matrix™ symbols by Intermec Inc. and Robotic Vision Systems Inc. using: laser bonding, vacuum arc vapor deposition, gas assisted laser etch, chemical etch, mechanical dot peening, laser shot peening, laser etching, and laser induced surface improvement. MISSE 6 (2008-2009) is exposing specimens marked by DataLase®, Chemico Technologies Inc., Intermec Inc., and tesa with laser-markable paint, nanocode tags, DataLase and tesa laser markings, and anodized metal labels.

  5. MISSE 1 and 2 Tray Temperature Measurements

    NASA Technical Reports Server (NTRS)

    Harvey, Gale A.; Kinard, William H.

    2006-01-01

    The Materials International Space Station Experiment (MISSE 1 & 2) was deployed August 10, 2001 and retrieved July 30, 2005. This experiment is a co-operative endeavor by NASA-LaRC, NASA-GRC, NASA-MSFC, NASA-JSC, the Materials Laboratory at the Air Force Research Laboratory, and the Boeing Phantom Works. The objective of the experiment is to evaluate performance, stability, and long term survivability of materials and components planned for use by NASA and DOD on future LEO, synchronous orbit, and interplanetary space missions. Temperature is an important parameter in the evaluation of space environmental effects on materials. MISSE 1 & 2 had autonomous temperature data loggers to measure the temperature of each of the four experiment trays. The MISSE tray-temperature data loggers have one external thermistor data channel and a 12-bit digital converter. The MISSE experiment trays were exposed to the ISS space environment for nearly four times the nominal design lifetime for this experiment. Nevertheless, all of the data loggers provided useful temperature measurements of MISSE. The temperature measurement system has been discussed in a previous paper. This paper presents temperature measurements of MISSE payload experiment carriers (PECs) 1 and 2 experiment trays.

  6. Covariate-adjusted response-adaptive designs for binary response.

    PubMed

    Rosenberger, W F; Vidyashankar, A N; Agarwal, D K

    2001-11-01

    An adaptive allocation design for phase III clinical trials that incorporates covariates is described. The allocation scheme maps the covariate-adjusted odds ratio from a logistic regression model onto [0, 1]. Simulations assume that both staggered entry and time to response are random and follow a known probability distribution that can depend on the treatment assigned, the patient's response, a covariate, or a time trend. Confidence intervals on the covariate-adjusted odds ratio are slightly anticonservative for the adaptive design under the null hypothesis, but power is similar to equal allocation under various alternatives for n = 200. For similar power, the net savings in terms of expected number of treatment failures is modest, but enough to make this design attractive for certain studies where known covariates are expected to be important and stratification is not desired, and treatment failures have a high ethical cost.
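
    One simple illustrative realization of the mapping idea, not necessarily the paper's exact rule: fit a logistic regression to the accrued data, take the covariate-adjusted odds ratio for treatment, and map it onto (0, 1) as the next allocation probability.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def next_allocation_probability(y, treat, covariate):
        """Fit a logistic model of response on treatment and a covariate,
        then map the covariate-adjusted treatment odds ratio onto (0, 1)
        as the probability that the next patient receives treatment 1."""
        X = sm.add_constant(np.column_stack([treat, covariate]))
        fit = sm.Logit(y, X).fit(disp=False)
        odds_ratio = np.exp(fit.params[1])      # adjusted odds ratio for treatment
        return odds_ratio / (1.0 + odds_ratio)  # OR = 1 gives equal allocation
    ```

    With this mapping, an odds ratio of 1 (no apparent treatment effect) gives 1:1 allocation, and evidence favoring one arm skews allocation toward it.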

  7. Estimation of Covariances on Prompt Fission Neutron Spectra and Impact of the PFNS Model on the Vessel Fluence

    NASA Astrophysics Data System (ADS)

    Berge, Léonie; Litaize, Olivier; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Pénéliau, Yannick; Regnier, David

    2016-02-01

    As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although it is present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They are of great influence on the final covariance matrix, and therefore, on the spectrum uncertainty band width. In addition to this covariance estimation work, we have also investigated the importance on a reactor calculation of the fission spectrum model choice. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particles emission during fission.
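
    As a sketch of the adjust-then-propagate step for one of the named models: fit a Watt spectrum shape to hypothetical measured points, then propagate the fitted parameter covariance to a spectrum covariance matrix by first-order (Jacobian) propagation. The data file is hypothetical, and the marginalization of systematic uncertainties performed in CONRAD is omitted.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def watt(E, a, b):
        """Unnormalized Watt spectrum shape: exp(-E/a) * sinh(sqrt(b*E))."""
        return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

    # Hypothetical measured PFNS points (E in MeV) with standard deviations.
    E, y, sigma = np.loadtxt("pfns_points.dat", unpack=True)

    popt, pcov = curve_fit(watt, E, y, p0=[1.0, 2.0], sigma=sigma,
                           absolute_sigma=True)

    # First-order propagation: cov[chi(E_i), chi(E_j)] = (J pcov J^T)_ij,
    # with J the Jacobian of the spectrum with respect to the parameters.
    eps = 1e-6
    J = np.column_stack([
        (watt(E, popt[0] + eps, popt[1]) - watt(E, popt[0] - eps, popt[1])) / (2 * eps),
        (watt(E, popt[0], popt[1] + eps) - watt(E, popt[0], popt[1] - eps)) / (2 * eps),
    ])
    spectrum_cov = J @ pcov @ J.T
    spectrum_std = np.sqrt(np.diag(spectrum_cov))   # uncertainty band width
    ```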

  8. Correcting eddy-covariance flux underestimates over a grassland.

    SciTech Connect

    Twine, T. E.; Kustas, W. P.; Norman, J. M.; Cook, D. R.; Houser, P. R.; Meyers, T. P.; Prueger, J. H.; Starks, P. J.; Wesely, M. L.; Environmental Research; Univ. of Wisconsin at Madison; DOE; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administration

    2000-06-08

    Independent measurements of the major energy balance flux components are not often consistent with the principle of conservation of energy. This is referred to as a lack of closure of the surface energy balance. Most results in the literature have shown the sum of sensible and latent heat fluxes measured by eddy covariance to be less than the difference between net radiation and soil heat fluxes. This under-measurement of sensible and latent heat fluxes by eddy-covariance instruments has occurred in numerous field experiments and among many different manufacturers of instruments. Four eddy-covariance systems consisting of the same models of instruments were set up side-by-side during the Southern Great Plains 1997 Hydrology Experiment and all systems under-measured fluxes by similar amounts. One of these eddy-covariance systems was collocated with three other types of eddy-covariance systems at different sites; all of these systems under-measured the sensible and latent-heat fluxes. The net radiometers and soil heat flux plates used in conjunction with the eddy-covariance systems were calibrated independently and measurements of net radiation and soil heat flux showed little scatter for various sites. The 10% absolute uncertainty in available energy measurements was considerably smaller than the systematic closure problem in the surface energy budget, which varied from 10 to 30%. When available-energy measurement errors are known and modest, eddy-covariance measurements of sensible and latent heat fluxes should be adjusted for closure. Although the preferred method of energy balance closure is to maintain the Bowen-ratio, the method for obtaining closure appears to be less important than assuring that eddy-covariance measurements are consistent with conservation of energy. Based on numerous measurements over a sorghum canopy, carbon dioxide fluxes, which are measured by eddy covariance, are underestimated by the same factor as eddy covariance evaporation
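
    The Bowen-ratio closure adjustment mentioned above can be written in a few lines: sensible and latent heat fluxes are scaled by a common factor so their sum matches the available energy, which preserves their ratio. A minimal sketch:

    ```python
    def close_energy_balance(H, LE, Rn, G):
        """Scale H and LE (W/m^2) by a common factor so that H + LE = Rn - G,
        preserving the Bowen ratio H/LE."""
        factor = (Rn - G) / (H + LE)
        return factor * H, factor * LE

    # Example: fluxes summing to 300 W/m^2 against 375 W/m^2 of available
    # energy are scaled by 1.25; the Bowen ratio (0.5) is unchanged.
    H_adj, LE_adj = close_energy_balance(H=100.0, LE=200.0, Rn=400.0, G=25.0)
    ```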

  9. Supergeometry in Locally Covariant Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Hack, Thomas-Paul; Hanisch, Florian; Schenkel, Alexander

    2016-03-01

    In this paper we analyze supergeometric locally covariant quantum field theories. We develop suitable categories SLoc of super-Cartan supermanifolds, which generalize Lorentz manifolds in ordinary quantum field theory, and show that, starting from a few representation theoretic and geometric data, one can construct a functor A : SLoc → S*Alg to the category of super-*-algebras, which can be interpreted as a non-interacting super-quantum field theory. This construction turns out to disregard supersymmetry transformations, as the morphism sets in the above categories are too small. We then solve this problem by using techniques from enriched category theory, which allows us to replace the morphism sets by suitable morphism supersets that contain supersymmetry transformations as their higher superpoints. We construct super-quantum field theories in terms of enriched functors eA : eSLoc → eS*Alg between the enriched categories and show that supersymmetry transformations are appropriately described within the enriched framework. As examples we analyze the superparticle in 1|1-dimensions and the free Wess-Zumino model in 3|2-dimensions.

  10. Holographic bound in covariant loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Tamaki, Takashi

    2016-07-01

    We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulas which relate physical quantities such as horizon area to the parameter characterizing holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulas. These results tell us that the holographic bound is satisfied in the large area limit and the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we also consider Bose-Einstein statistics and show that the above formulas are also useful in this case. By applying the formulas, we can understand intrinsic features of Bose-Einstein condensate which corresponds to the case when the horizon area almost consists of punctures in the ground state. When this phenomena occurs, the area is approximately constant against the parameter characterizing the temperature. When this phenomena is broken, the area shows rapid increase which suggests the phase transition from quantum to classical area.

  11. Super-sample covariance in simulations

    NASA Astrophysics Data System (ADS)

    Li, Yin; Hu, Wayne; Takada, Masahiro

    2014-04-01

    Using separate universe simulations, we accurately quantify super-sample covariance (SSC), the typically dominant sampling error for matter power spectrum estimators in a finite volume, which arises from the presence of super survey modes. By quantifying the power spectrum response to a background mode, this approach automatically captures the separate effects of beat coupling in the quasilinear regime, halo sample variance in the nonlinear regime and a new dilation effect which changes scales in the power spectrum coherently across the survey volume, including the baryon acoustic oscillation scale. It models these effects at typically the few percent level or better with a handful of small volume simulations for any survey geometry compared with directly using many thousands of survey volumes in a suite of large-volume simulations. The stochasticity of the response is sufficiently small that in the quasilinear regime, SSC can be alternately included by fitting the mean density in the volume with these fixed templates in parameter estimation. We also test the halo model prescription and find agreement typically at better than the 10% level for the response.

  12. Generalized Covariant Gyrokinetic Dynamics of Magnetoplasmas

    SciTech Connect

    Cremaschini, C.; Tessarotto, M.; Nicolini, P.; Beklemishev, A.

    2008-12-31

    A basic prerequisite for the investigation of relativistic astrophysical magnetoplasmas, occurring typically in the vicinity of massive stellar objects (black holes, neutron stars, active galactic nuclei, etc.), is the accurate description of single-particle covariant dynamics, based on gyrokinetic theory (Beklemishev et al., 1999-2005). Provided radiation-reaction effects are negligible, this is usually based on the assumption that both the space-time metric and the EM fields (in particular the magnetic field) are suitably prescribed and are considered independent of single-particle dynamics, while allowing for the possible presence of gravitational/EM perturbations driven by plasma collective interactions which may naturally arise in such systems. The purpose of this work is the formulation of a generalized gyrokinetic theory based on the synchronous variational principle recently pointed out (Tessarotto et al., 2007) which permits to satisfy exactly the physical realizability condition for the four-velocity. The theory here developed includes the treatment of nonlinear perturbations (gravitational and/or EM) characterized locally, i.e., in the rest frame of a test particle, by short wavelength and high frequency. Basic feature of the approach is to ensure the validity of the theory both for large and vanishing parallel electric field. It is shown that the correct treatment of EM perturbations occurring in the presence of an intense background magnetic field generally implies the appearance of appropriate four-velocity corrections, which are essential for the description of single-particle gyrokinetic dynamics.

  13. A Hebbian feedback covariance learning paradigm for self-tuning optimal control.

    PubMed

    Young, D L; Poon, C S

    2001-01-01

    We propose a novel adaptive optimal control paradigm inspired by Hebbian covariance synaptic adaptation, a preeminent model of learning and memory as well as other malleable functions in the brain. The adaptation is driven by the spontaneous fluctuations in the system input and output, the covariance of which provides useful information about the changes in the system behavior. The control structure represents a novel form of associative reinforcement learning in which the reinforcement signal is implicitly given by the covariance of the input-output (I/O) signals. Theoretical foundations for the paradigm are derived using Lyapunov theory and are verified by means of computer simulations. The learning algorithm is applicable to a general class of nonlinear adaptive control problems. This on-line direct adaptive control method benefits from a computationally straightforward design, proof of convergence, no need for complete system identification, robustness to noise and uncertainties, and the ability to optimize a general performance criterion in terms of system states and control signals. These attractive properties of Hebbian feedback covariance learning control lend themselves to future investigations into the computational functions of synaptic plasticity in biological neurons.
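
    A toy sketch of the covariance rule at the heart of the paradigm, not the full adaptive-control scheme: weights change in proportion to the covariance of input and output fluctuations about their running means.

    ```python
    import numpy as np

    def covariance_update(w, x, y, x_mean, y_mean, eta=0.01):
        """Hebbian covariance rule: the weight moves in proportion to the
        product of input and output fluctuations about their means."""
        return w + eta * (x - x_mean) * (y - y_mean)

    rng = np.random.default_rng(0)
    w, x_mean, y_mean, lam = 0.0, 0.0, 0.0, 0.95
    for _ in range(1000):
        x = rng.normal()                        # spontaneous input fluctuation
        y = 0.8 * x + 0.1 * rng.normal()        # stand-in for the system output
        x_mean = lam * x_mean + (1 - lam) * x   # leaky running means
        y_mean = lam * y_mean + (1 - lam) * y
        w = covariance_update(w, x, y, x_mean, y_mean)
    ```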

  14. Predicting the risk of toxic blooms of golden alga from cell abundance and environmental covariates

    USGS Publications Warehouse

    Patino, Reynaldo; VanLandeghem, Matthew M.; Denny, Shawn

    2016-01-01

    Golden alga (Prymnesium parvum) is a toxic haptophyte that has caused considerable ecological damage to marine and inland aquatic ecosystems worldwide. Studies focused primarily on laboratory cultures have indicated that toxicity is poorly correlated with the abundance of golden alga cells. This relationship, however, has not been rigorously evaluated in the field where environmental conditions are much different. The ability to predict toxicity using readily measured environmental variables and golden alga abundance would allow managers rapid assessments of ichthyotoxicity potential without laboratory bioassay confirmation, which requires additional resources to accomplish. To assess the potential utility of these relationships, several a priori models relating lethal levels of golden alga ichthyotoxicity to golden alga abundance and environmental covariates were constructed. Model parameters were estimated using archived data from four river basins in Texas and New Mexico (Colorado, Brazos, Red, Pecos). Model predictive ability was quantified using cross-validation, sensitivity, and specificity, and the relative ranking of environmental covariate models was determined by Akaike Information Criterion values and Akaike weights. Overall, abundance was a generally good predictor of ichthyotoxicity as cross validation of golden alga abundance-only models ranged from ∼ 80% to ∼ 90% (leave-one-out cross-validation). Environmental covariates improved predictions, especially the ability to predict lethally toxic events (i.e., increased sensitivity), and top-ranked environmental covariate models differed among the four basins. These associations may be useful for monitoring as well as understanding the abiotic factors that influence toxicity during blooms.
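
    A sketch of the model-ranking machinery described above, assuming hypothetical design matrices built from field data: logistic models of lethal toxicity are compared by AIC and converted to Akaike weights.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def akaike_weights(y, candidates):
        """Fit logistic models of lethal toxicity (y is 0/1) for each
        candidate design matrix and return Akaike weights, the relative
        support for each a priori model."""
        aic = {name: sm.Logit(y, sm.add_constant(X)).fit(disp=False).aic
               for name, X in candidates.items()}
        best = min(aic.values())
        w = {name: np.exp(-0.5 * (a - best)) for name, a in aic.items()}
        total = sum(w.values())
        return {name: v / total for name, v in w.items()}

    # Usage: candidates = {"abundance": X1, "abundance+salinity": X2, ...}
    ```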

  15. Phenotypic covariance structure and its divergence for acoustic mate attraction signals among four cricket species

    PubMed Central

    Bertram, Susan M; Fitzsimmons, Lauren P; McAuley, Emily M; Rundle, Howard D; Gorelick, Root

    2012-01-01

    The phenotypic variance–covariance matrix (P) describes the multivariate distribution of a population in phenotypic space, providing direct insight into the appropriateness of measured traits within the context of multicollinearity (i.e., do they describe any significant variance that is independent of other traits), and whether trait covariances restrict the combinations of phenotypes available to selection. Given the importance of P, it is therefore surprising that phenotypic covariances are seldom jointly analyzed and that the dimensionality of P has rarely been investigated in a rigorous statistical framework. Here, we used a repeated measures approach to quantify P separately for populations of four cricket species using seven acoustic signaling traits thought to enhance mate attraction. P was of full or almost full dimensionality in all four species, indicating that all traits conveyed some information that was independent of the other traits, and that phenotypic trait covariances do not constrain the combinations of signaling traits available to selection. P also differed significantly among species, although the dominant axis of phenotypic variation (pmax) was largely shared among three of the species (Acheta domesticus, Gryllus assimilis, G. texensis), but different in the fourth (G. veletis). In G. veletis and A. domesticus, but not G. assimilis and G. texensis, pmax was correlated with body size, while pmax was not correlated with residual mass (a condition measure) in any of the species. This study reveals the importance of jointly analyzing phenotypic traits. PMID:22408735

  16. Parallel ICA identifies sub-components of resting state networks that covary with behavioral indices

    PubMed Central

    Meier, Timothy B.; Wildenberg, Joseph C.; Liu, Jingyu; Chen, Jiayu; Calhoun, Vince D.; Biswal, Bharat B.; Meyerand, Mary E.; Birn, Rasmus M.; Prabhakaran, Vivek

    2012-01-01

    Parallel Independent Component Analysis (para-ICA) is a multivariate method that can identify complex relationships between different data modalities by simultaneously performing Independent Component Analysis on each data set while finding mutual information between the two data sets. We use para-ICA to test the hypothesis that spatial sub-components of common resting state networks (RSNs) covary with specific behavioral measures. Resting state scans and a battery of behavioral indices were collected from 24 younger adults. Group ICA was performed and common RSNs were identified by spatial correlation to publicly available templates. Nine RSNs were identified and para-ICA was run on each network with a matrix of behavioral measures serving as the second data type. Five networks had spatial sub-components that significantly correlated with behavioral components. These included a sub-component of the temporo-parietal attention network that differentially covaried with different trial-types of a sustained attention task, sub-components of default mode networks that covaried with attention and working memory tasks, and a sub-component of the bilateral frontal network that split the left inferior frontal gyrus into three clusters according to its cytoarchitecture that differentially covaried with working memory performance. Additionally, we demonstrate the validity of para-ICA in cases with unbalanced dimensions using simulated data. PMID:23087635

  17. Reconstruction of missing daily streamflow data using dynamic regression models

    NASA Astrophysics Data System (ADS)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short gaps in these records can lead to markedly different analysis results. Reconstructing the missing portions of incomplete data sets is therefore an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access to only daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance River watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
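
    A minimal dynamic-regression sketch using statsmodels, under the assumption that the neighboring series are complete over the gaps; the file, station names, and ARIMA order are hypothetical. The state-space machinery tolerates NaNs in the target series, and in-sample predictions provide reconstructed values at the missing dates.

    ```python
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical daily flows: target station plus two correlated neighbors.
    df = pd.read_csv("durance_daily_flow.csv", parse_dates=["date"], index_col="date")
    target = df["station_1"]                    # contains NaN over the gaps
    neighbors = df[["station_2", "station_3"]]  # assumed complete

    # Dynamic regression: linear regression on neighboring stations, with
    # an ARMA structure fitted to the residual term (order for illustration).
    fit = SARIMAX(target, exog=neighbors, order=(2, 0, 1)).fit(disp=False)

    # In-sample predictions yield reconstructed values at the missing dates.
    reconstructed = fit.predict()
    filled = target.fillna(reconstructed)
    ```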

  18. Influence of covariance between random effects in design for nonlinear mixed-effect models with an illustration in pediatric pharmacokinetics.

    PubMed

    Dumont, Cyrielle; Chenel, Marylore; Mentré, France

    2014-01-01

    Nonlinear mixed-effect models are used increasingly during drug development. For design, an alternative to simulations is based on the Fisher information matrix. Its expression was derived using a first-order approach, was then extended to include covariance and implemented into the R function PFIM. The impact of covariance on standard errors, amount of information, and optimal designs was studied. It was also shown how standard errors can be predicted analytically within the framework of rich individual data without the model. The results were illustrated by applying this extension to the design of a pharmacokinetic study of a drug in pediatric development.

  19. Generation of covariance data among values from a single set of experiments

    SciTech Connect

    Smith, D.L.

    1992-01-01

    Modern nuclear data evaluation methods demand detailed uncertainty information for all input results to be considered. It can be shown from basic statistical principles that provision of a covariance matrix for a set of data provides the necessary information for its proper consideration in the context of other included experimental data and/or a priori representations of the physical parameters in question. This paper examines how an experimenter should go about preparing the covariance matrix for any single experimental data set he intends to report. The process involves detailed examination of the experimental procedures, identification of all error sources (both random and systematic); and consideration of any internal discrepancies. Some specific examples are given to illustrate the methods and principles involved.
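
    A common minimal construction consistent with this prescription: uncorrelated (statistical) errors contribute only to the diagonal, while a fully correlated systematic component, such as a normalization error, couples every pair of points.

    ```python
    import numpy as np

    def build_covariance(stat_err, sys_err):
        """Covariance for one data set: independent (statistical) errors on
        the diagonal plus a fully correlated systematic component."""
        stat_err = np.asarray(stat_err, float)
        sys_err = np.asarray(sys_err, float)
        return np.diag(stat_err ** 2) + np.outer(sys_err, sys_err)

    # Example: 2% point-wise statistical error, common 5% normalization error.
    values = np.array([1.10, 0.95, 1.02])
    cov = build_covariance(0.02 * values, 0.05 * values)
    ```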

  1. Communication: Three-fold covariance imaging of laser-induced Coulomb explosions

    NASA Astrophysics Data System (ADS)

    Pickering, James D.; Amini, Kasra; Brouard, Mark; Burt, Michael; Bush, Ian J.; Christensen, Lauge; Lauer, Alexandra; Nielsen, Jens H.; Slater, Craig S.; Stapelfeldt, Henrik

    2016-04-01

    We apply a three-fold covariance imaging method to analyse previously acquired data [C. S. Slater et al., Phys. Rev. A 89, 011401(R) (2014)] on the femtosecond laser-induced Coulomb explosion of spatially pre-aligned 3,5-dibromo-3',5'-difluoro-4'-cyanobiphenyl molecules. The data were acquired using the "Pixel Imaging Mass Spectrometry" camera. We show how three-fold covariance imaging of ionic photofragment recoil trajectories can be used to provide new information about the parent ion's molecular structure prior to its Coulomb explosion. In particular, we show how the analysis may be used to obtain information about molecular conformation and provide an alternative route for enantiomer determination.
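
    The three-fold covariance of three shot-resolved signals can be computed as their third joint cumulant; a minimal sketch follows (corrections for false coincidences and fluctuating laser intensity, which practical covariance-imaging analyses typically require, are omitted).

    ```python
    import numpy as np

    def threefold_covariance(x, y, z):
        """Third joint cumulant <dx dy dz> over laser shots, where d* are
        fluctuations about the shot-averaged means; this is the quantity
        underlying a three-fold covariance map of three fragment signals."""
        x, y, z = (np.asarray(v, float) for v in (x, y, z))
        return np.mean((x - x.mean()) * (y - y.mean()) * (z - z.mean()))
    ```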

  2. A Covariance Analysis Tool for Assessing Fundamental Limits of SIM Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2007-01-01

    This paper presents a performance analysis of the instrument pointing control system for NASA's Space Interferometer Mission (SIM). SIM has a complex pointing system that uses a fast steering mirror in combination with a multirate control architecture to blend feed forward information with feedback information. A pointing covariance analysis tool (PCAT) is developed specifically to analyze systems with such complexity. The development of PCAT as a mathematical tool for covariance analysis is outlined in the paper. PCAT is then applied to studying performance of SIM's science pointing system. The analysis reveals and clearly delineates a fundamental limit that exists for SIM pointing performance. The limit is especially stringent for dim star targets. Discussion of the nature of the performance limit is provided, and methods are suggested to potentially improve pointing performance.
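
    A generic sketch of the kind of propagation a pointing covariance analysis rests on, not PCAT itself: propagate the state covariance through hypothetical closed-loop dynamics and read off the steady-state pointing error.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.98, 0.10],   # hypothetical closed-loop pointing dynamics
                  [0.00, 0.90]])
    Q = np.diag([1e-4, 4e-4])     # per-step process noise covariance

    # Covariance propagation: P <- A P A^T + Q, iterated to near convergence.
    P = np.zeros_like(Q)
    for _ in range(500):
        P = A @ P @ A.T + Q

    # Steady state directly, as the solution of P = A P A^T + Q.
    P_ss = solve_discrete_lyapunov(A, Q)
    rms_error = np.sqrt(P_ss[0, 0])  # RMS of the pointing-error state
    ```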

  3. Communication: Three-fold covariance imaging of laser-induced Coulomb explosions.

    PubMed

    Pickering, James D; Amini, Kasra; Brouard, Mark; Burt, Michael; Bush, Ian J; Christensen, Lauge; Lauer, Alexandra; Nielsen, Jens H; Slater, Craig S; Stapelfeldt, Henrik

    2016-04-28

    We apply a three-fold covariance imaging method to analyse previously acquired data [C. S. Slater et al., Phys. Rev. A 89, 011401(R) (2014)] on the femtosecond laser-induced Coulomb explosion of spatially pre-aligned 3,5-dibromo-3',5'-difluoro-4'-cyanobiphenyl molecules. The data were acquired using the "Pixel Imaging Mass Spectrometry" camera. We show how three-fold covariance imaging of ionic photofragment recoil trajectories can be used to provide new information about the parent ion's molecular structure prior to its Coulomb explosion. In particular, we show how the analysis may be used to obtain information about molecular conformation and provide an alternative route for enantiomer determination.

  4. A regulatory perspective on missing data in the aftermath of the NRC report.

    PubMed

    LaVange, Lisa M; Permutt, Thomas

    2016-07-30

    The issuance of a report in 2010 by the National Research Council (NRC) of the National Academy of Sciences entitled 'The Prevention and Treatment of Missing Data in Clinical Trials,' commissioned by the US Food and Drug Administration, had an immediate impact on the way that statisticians and clinical researchers in both industry and regulatory agencies think about the missing data problem. We believe that there is currently great potential to improve study quality and interpretability-by reducing the amount of missing data through changes in trial design and conduct and by planning and conducting analyses that better account for the missing information. Here, we describe our view on some of the recommendations in the report and suggest ways in which these recommendations can be incorporated into new or ongoing clinical trials in order to improve their chance of success. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  5. ICT and Pedagogy: Opportunities Missed?

    ERIC Educational Resources Information Center

    Adams, Paul

    2011-01-01

    The pace of Information and Communications Technology (ICT) development necessitates radical and rapid change for education. Given the English prevalence for an economically determinist orientation for educational outcomes, it seems pertinent to ask how learning in relation to ICT is to be conceptualised. Accepting the view that education needs to…

  6. Family Learning: The Missing Exemplar

    ERIC Educational Resources Information Center

    Dentzau, Michael W.

    2013-01-01

    As a supporter of informal and alternative learning environments for science learning I am pleased to add to the discussion generated by Adriana Briseno-Garzon's article, "More than science: family learning in a Mexican science museum". I am keenly aware of the value of active family involvement in education in general, and science education in…

  7. Feature Selection with Missing Data

    ERIC Educational Resources Information Center

    Sarkar, Saurabh

    2013-01-01

    In the modern world information has become the new power. An increasing amount of efforts are being made to gather data, resources being allocated, time being invested and tools being developed. Data collection is no longer a myth; however, it remains a great challenge to create value out of the enormous data that is being collected. Data modeling…

  8. Planned Missing Data Designs in Educational Psychology Research

    ERIC Educational Resources Information Center

    Rhemtulla, Mijke; Hancock, Gregory R.

    2016-01-01

    Although missing data are often viewed as a challenge for applied researchers, in fact missing data can be highly beneficial. Specifically, when the amount of missing data on specific variables is carefully controlled, a balance can be struck between statistical power and research costs. This article presents the issue of planned missing data by…

  9. 40 CFR 98.445 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... following missing data procedures: (a) A quarterly flow rate of CO2 received that is missing must be...) A quarterly CO2 concentration of a CO2 stream received that is missing must be estimated as follows... quantity of CO2 injected that is missing must be estimated using a representative quantity of CO2...

  10. 40 CFR 98.445 - Procedures for estimating missing data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... following missing data procedures: (a) A quarterly flow rate of CO2 received that is missing must be...) A quarterly CO2 concentration of a CO2 stream received that is missing must be estimated as follows... quantity of CO2 injected that is missing must be estimated using a representative quantity of CO2...

  11. 40 CFR 98.445 - Procedures for estimating missing data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... following missing data procedures: (a) A quarterly flow rate of CO2 received that is missing must be...) A quarterly CO2 concentration of a CO2 stream received that is missing must be estimated as follows... quantity of CO2 injected that is missing must be estimated using a representative quantity of CO2...

  12. 40 CFR 98.295 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... procedures used for all such missing value estimates. (a) For each missing value of the weekly composite...

  13. 40 CFR 98.385 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. You must follow the procedures for estimating missing data in § 98... estimating missing data for petroleum products in § 98.395 also applies to coal-to-liquid products....

  14. 40 CFR 98.445 - Procedures for estimating missing data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... following missing data procedures: (a) A quarterly flow rate of CO2 received that is missing must be...) A quarterly CO2 concentration of a CO2 stream received that is missing must be estimated as follows... quantity of CO2 injected that is missing must be estimated using a representative quantity of CO2...

  15. Exploitation of Geometric Occlusion and Covariance Spectroscopy in a Gamma Sensor Array

    SciTech Connect

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Mitchell, Stephen; Guss, Paul; Trainham, Clifford

    2013-09-01

    The National Security Technologies, LLC, Remote Sensing Laboratory has recently used an array of six small-footprint (1-inch diameter by 3-inch long) cylindrical crystals of thallium-doped sodium iodide scintillators to obtain angular information from discrete gamma ray–emitting point sources. Obtaining angular information in a near-field measurement for a field-deployed gamma sensor is a requirement for radiological emergency work. Three of the sensors sit at the vertices of a 2-inch isosceles triangle, while the other three sit on the circumference of a 3-inch-radius circle centered in this triangle. This configuration exploits occlusion of sensors, correlation from Compton scattering within a detector array, and covariance spectroscopy, a spectral coincidence technique. Careful placement and orientation of individual detectors with reference to other detectors in an array can provide improved angular resolution for determining the source position by occlusion mechanism. By evaluating the values of, and the uncertainties in, the photopeak areas, efficiencies, branching ratio, peak area correction factors, and the correlations between these quantities, one can determine the precise activity of a particular radioisotope from a mixture of radioisotopes that have overlapping photopeaks that are ordinarily hard to deconvolve. The spectral coincidence technique, often known as covariance spectroscopy, examines the correlations and fluctuations in data that contain valuable information about radiation sources, transport media, and detection systems. Covariance spectroscopy enhances radionuclide identification techniques, provides directional information, and makes weaker gamma-ray emission—normally undetectable by common spectroscopic analysis—detectable. A series of experimental results using the concept of covariance spectroscopy are presented.
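
    A minimal sketch of the spectral-coincidence computation: the covariance between two detectors' channel-by-channel counts across many short acquisitions, where correlated emissions (for example, gamma cascades seen by both detectors) appear as off-diagonal structure that summed spectra cannot reveal.

    ```python
    import numpy as np

    def covariance_spectrum(counts_a, counts_b):
        """Covariance between two detectors' spectra: counts_* are
        (n_acquisitions x n_channels) arrays of short-acquisition counts."""
        a = counts_a - counts_a.mean(axis=0)
        b = counts_b - counts_b.mean(axis=0)
        return (a.T @ b) / (counts_a.shape[0] - 1)   # channels x channels
    ```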

  16. Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins

    NASA Astrophysics Data System (ADS)

    Tolwinski-Ward, S. E.; Wang, D.

    2015-12-01

    Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as: - Does the relationship between different categories of TCs differ statistically by basin? - Which climatic predictors have significant relationships with TC activity in each basin? - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability? - How can a portfolio of insured property be optimized across space to minimize risk? Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
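
    The model itself is Bayesian and multivariate, but its starting point can be sketched with per-basin Poisson regressions on climate indices; the residual cross-basin correlation then hints at covariability beyond the shared predictors. Array layouts here are hypothetical.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def residual_basin_correlation(counts, indices):
        """Fit per-basin Poisson regressions of annual TC counts
        (n_years x n_basins) on climate-mode indices (n_years x n_modes),
        then return the correlation of Pearson residuals across basins."""
        X = sm.add_constant(indices)
        resid = np.empty(counts.shape, dtype=float)
        for j in range(counts.shape[1]):
            fit = sm.GLM(counts[:, j], X, family=sm.families.Poisson()).fit()
            resid[:, j] = fit.resid_pearson
        return np.corrcoef(resid, rowvar=False)
    ```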

  17. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis

    NASA Astrophysics Data System (ADS)

    Plis, Sergey M.; George, J. S.; Jun, S. C.; Paré-Blagoev, J.; Ranken, D. M.; Wood, C. C.; Schmidt, D. M.

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that time courses are statistically independent. The accuracy of the structural approximation is compared to an existing model, based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and a model, and scatter plots. The performance of our model and previous models is compared in source analysis of a large number of single dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
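
    A sketch of the covariance structure described above, under the simplifying assumption that the spatial components form a complete orthonormal set: in that case the inverse has the same Kronecker-sum form with each temporal covariance inverted, which is what keeps inversion fast.

    ```python
    import numpy as np

    def kronecker_sum_covariance(U, T_list):
        """Spatiotemporal covariance C = sum_i (u_i u_i^T) kron T_i, where
        the u_i (columns of U) are orthonormal spatial components and T_i
        is the temporal covariance of component i; data are assumed stacked
        with the spatial index varying slowest. For complete orthonormal U
        the inverse is the same sum with each T_i inverted."""
        C = sum(np.kron(np.outer(U[:, i], U[:, i]), T_i)
                for i, T_i in enumerate(T_list))
        C_inv = sum(np.kron(np.outer(U[:, i], U[:, i]), np.linalg.inv(T_i))
                    for i, T_i in enumerate(T_list))
        return C, C_inv
    ```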

  18. Structural covariance networks in the mouse brain.

    PubMed

    Pagani, Marco; Bifone, Angelo; Gozzi, Alessandro

    2016-04-01

    The presence of networks of correlation between regional gray matter volume as measured across subjects in a group of individuals has been consistently described in several human studies, an approach termed structural covariance MRI (scMRI). Complementary to prevalent brain mapping modalities like functional and diffusion-weighted imaging, the approach can provide precious insights into the mutual influence of trophic and plastic processes in health and pathological states. To investigate whether analogous scMRI networks are present in lower mammal species amenable to genetic and experimental manipulation such as the laboratory mouse, we employed high resolution morphoanatomical MRI in a large cohort of genetically-homogeneous wild-type mice (C57Bl6/J) and mapped scMRI networks using a seed-based approach. We show that the mouse brain exhibits robust homotopic scMRI networks in both primary and associative cortices, a finding corroborated by independent component analyses of cortical volumes. Subcortical structures also showed highly symmetric inter-hemispheric correlations, with evidence of distributed antero-posterior networks in diencephalic regions of the thalamus and hypothalamus. Hierarchical cluster analysis revealed six identifiable clusters of cortical and sub-cortical regions corresponding to previously described neuroanatomical systems. Our work documents the presence of homotopic cortical and subcortical scMRI networks in the mouse brain, thus supporting the use of this species to investigate the elusive biological and neuroanatomical underpinnings of scMRI network development and its derangement in neuropathological states. The identification of scMRI networks in genetically homogeneous inbred mice is consistent with the emerging view of a key role of environmental factors in shaping these correlational networks.
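
    A minimal sketch of the seed-based mapping step on synthetic volumes: correlate one seed region's gray matter volume with every other region's volume across subjects (the study's preprocessing, registration, and statistics are not reproduced here).

      import numpy as np

      rng = np.random.default_rng(2)
      n_subjects, n_regions = 100, 60
      volumes = rng.normal(size=(n_subjects, n_regions))   # rows: subjects

      seed = 0
      r = np.array([np.corrcoef(volumes[:, seed], volumes[:, j])[0, 1]
                    for j in range(n_regions)])
      network = np.flatnonzero(np.abs(r) > 0.3)   # regions covarying with the seed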

  19. Inflation in general covariant theory of gravity

    SciTech Connect

    Huang, Yongqing; Wang, Anzhong; Wu, Qiang E-mail: anzhong_wang@baylor.edu

    2012-10-01

    In this paper, we study inflation in the framework of the nonrelativistic general covariant theory of Hořava-Lifshitz gravity with the projectability condition and an arbitrary coupling constant λ. We find that the Friedmann-Robertson-Walker (FRW) universe is necessarily flat in such a setup. We work out explicitly the linear perturbations of the flat FRW universe without specializing to a particular gauge, and find that the perturbations are different from those obtained in general relativity because of the presence of the high-order spatial derivative terms. Applying the general formulas to a single scalar field, we show that in the sub-horizon regions the metric and scalar field are tightly coupled and have the same oscillating frequencies. In the super-horizon regions the perturbations become adiabatic, and the comoving curvature perturbation is constant. We also calculate the power spectra and indices of both the scalar and tensor perturbations, and express them explicitly in terms of the slow-roll parameters and the coupling constants of the high-order spatial derivative terms. In particular, we find that the perturbations, of both scalar and tensor, are almost scale-invariant and, with some reasonable assumptions on the coupling coefficients, the spectral index of the tensor perturbation is the same as that given in the minimal scenario in general relativity (GR), whereas the index for the scalar perturbation in general depends on λ and differs from the standard GR value. The ratio of the scalar and tensor power spectra depends on the high-order spatial derivative terms and can differ significantly from that of GR.

  20. MISSE PEACE Polymers Atomic Oxygen Erosion Results

    NASA Technical Reports Server (NTRS)

    deGroh, Kim K.; Banks, Bruce A.; McCarthy, Catherine E.; Rucker, Rochelle N.; Roberts, Lily M.; Berger, Lauren A.

    2006-01-01

    Forty-one different polymer samples, collectively called the Polymer Erosion and Contamination Experiment (PEACE) Polymers, have been exposed to the low Earth orbit (LEO) environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of Materials International Space Station Experiment 2 (MISSE 2). The objective of the PEACE Polymers experiment was to determine the atomic oxygen erosion yield of a wide variety of polymeric materials after long term exposure to the space environment. The polymers range from those commonly used for spacecraft applications, such as Teflon (DuPont) FEP, to more recently developed polymers, such as high temperature polyimide PMR (polymerization of monomer reactants). Additional polymers were included to explore erosion yield dependence upon chemical composition. The MISSE PEACE Polymers experiment was flown in MISSE Passive Experiment Carrier 2 (PEC 2), tray 1, on the exterior of the ISS Quest Airlock and was exposed to atomic oxygen along with solar and charged particle radiation. MISSE 2 was successfully retrieved during a space walk on July 30, 2005, during Discovery's STS-114 Return to Flight mission. Details on the specific polymers flown, flight sample fabrication, pre-flight and post-flight characterization techniques, and atomic oxygen fluence calculations are discussed along with a summary of the atomic oxygen erosion yield results. The MISSE 2 PEACE Polymers experiment is unique because it has the widest variety of polymers flown in LEO for a long duration and provides extremely valuable erosion yield data for spacecraft design purposes.
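
    Erosion yields of this kind are commonly computed from the mass-loss relation E = ΔM / (A ρ F); the helper below encodes that standard relation with placeholder numbers of Kapton-like magnitude, not flight data.

      def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
          """Erosion yield in cm^3 per incident atomic oxygen atom."""
          return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

      # Placeholder values of realistic magnitude (not MISSE 2 data):
      print(erosion_yield(0.22, 6.45, 1.42, 8.0e21))   # ~3e-24 cm^3/atom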

  1. Family learning: the missing exemplar

    NASA Astrophysics Data System (ADS)

    Dentzau, Michael W.

    2013-06-01

    As a supporter of informal and alternative learning environments for science learning, I am pleased to add to the discussion generated by Adriana Briseño-Garzón's article, "More than science: family learning in a Mexican science museum". I am keenly aware of the value of active family involvement in education in general, and science education in particular, and the portrait provided from a Mexican science museum adds to the literature of informal education through a specific sociocultural lens. I add, however, that while acknowledging the powerful role of family in Latin American culture, the issue transcends these confines and is instead a cross-cutting topic within education as a whole. I also discuss how easily, in an effort to call attention to cultural differences, one can by that very act unintentionally marginalize others.

  2. Hawking radiation, covariant boundary conditions, and vacuum states

    SciTech Connect

    Banerjee, Rabin; Kulkarni, Shailesh

    2009-04-15

    The basic characteristics of the covariant chiral current and the covariant chiral energy-momentum tensor are obtained from a chiral effective action. These results are used to justify the covariant boundary condition used in recent approaches to computing the Hawking flux from chiral gauge and gravitational anomalies. We also discuss the connection of our results with the conventional calculation of nonchiral currents and stress tensors in different (Unruh, Hartle-Hawking and Boulware) states.

  3. The importance of covariance in nuclear data uncertainty propagation studies

    SciTech Connect

    Benstead, J.

    2012-07-01

    A study has been undertaken to investigate what proportion of the uncertainty propagated through plutonium critical assembly calculations is due to the covariances between the fission cross section in different neutron energy groups. The calculated uncertainties on k_eff show that the presence of covariances between the cross section in different neutron energy groups accounts for approximately 27-37% of the propagated uncertainty due to the plutonium fission cross section. This study also confirmed the validity of employing the sandwich equation, with associated sensitivity and covariance data, instead of a Monte Carlo sampling approach, for calculating uncertainties in linearly varying systems. (authors)
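
    The sandwich equation referred to above propagates a sensitivity profile S through a covariance matrix C as var = S^T C S; the toy numbers below illustrate how zeroing the off-diagonal (inter-group) terms changes the propagated uncertainty.

      import numpy as np

      S = np.array([0.10, 0.25, 0.15])       # sensitivity of k_eff per energy group
      C = np.array([[4.0, 2.0, 0.5],         # relative covariance of the fission
                    [2.0, 3.0, 1.0],         # cross section between groups (%^2)
                    [0.5, 1.0, 2.5]])

      print(np.sqrt(S @ C @ S))                     # uncertainty with covariances (%)
      print(np.sqrt(S @ np.diag(np.diag(C)) @ S))   # without inter-group covariances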

  4. Tensor based missing traffic data completion with spatial-temporal correlation

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Tan, Huachun; Wu, Yuankai; Jin, Peter J.

    2016-03-01

    Missing and suspicious traffic data are a major problem for intelligent transportation systems, adversely affecting a diverse variety of transportation applications. Several missing traffic data imputation methods have been proposed in the last decade, but how to make full use of spatial information from upstream/downstream detectors to improve imputation performance remains an open problem. In this paper, a tensor-based method considering the full spatial-temporal information of traffic flow is proposed to fuse the traffic flow data from multiple detecting locations. The traffic flow data are reconstructed in a 4-way tensor pattern, and the low-n-rank tensor completion algorithm is applied to impute missing data. This novel approach not only fully utilizes the spatial information from neighboring locations, but also can impute missing data in different locations under a unified framework. Experiments demonstrate that the proposed method achieves better imputation performance than a method without spatial information, and can address the extreme case where the data of a long period of one or several weeks are completely missing.

  5. MISSE 5 Thin Films Space Exposure Experiment

    NASA Technical Reports Server (NTRS)

    Harvey, Gale A.; Kinard, William H.; Jones, James L.

    2007-01-01

    The Materials International Space Station Experiment (MISSE) is a set of space exposure experiments using the International Space Station (ISS) as the flight platform. MISSE 5 is a co-operative endeavor by NASA-LaRC, the United States Naval Academy, the Naval Center for Space Technology (NCST), NASA-GRC, NASA-MSFC, Boeing, AZ Technology, MURE, and Team Cooperative. The primary experiment is performance measurement and monitoring of high performance solar cells for U.S. Navy research and development. A secondary experiment is the telemetry of these data to ground stations. A third experiment is the measurement of low-Earth-orbit (LEO) low-Sun-exposure space effects on thin film materials. Thin films can provide extremely efficacious thermal control, designation, and propulsion functions in space, to name a few applications. Solar ultraviolet radiation and atomic oxygen are major degradation mechanisms in LEO. This paper is an engineering report of the MISSE 5 thin films 13-month space exposure experiment.

  6. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks, all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
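
    A compact numpy sketch of the weighted-least-squares idea behind CP-WOPT: only entries with weight 1 enter the objective, and plain gradient steps stand in for the paper's more careful first-order optimizer.

      import numpy as np

      def khatri_rao(U, V):
          # column-wise Kronecker product, shape (U.shape[0] * V.shape[0], R)
          return np.einsum('jr,kr->jkr', U, V).reshape(-1, U.shape[1])

      def cp_wopt(X, W, rank, iters=2000, lr=1e-3, seed=0):
          """Gradient descent on f = 0.5 * ||W * (X - [[A, B, C]])||^2."""
          rng = np.random.default_rng(seed)
          I, J, K = X.shape
          A, B, C = (rng.normal(scale=0.1, size=(d, rank)) for d in (I, J, K))
          for _ in range(iters):
              R = W * (np.einsum('ir,jr,kr->ijk', A, B, C) - X)  # masked residual
              A = A - lr * R.reshape(I, -1) @ khatri_rao(B, C)
              B = B - lr * R.transpose(1, 0, 2).reshape(J, -1) @ khatri_rao(A, C)
              C = C - lr * R.transpose(2, 0, 1).reshape(K, -1) @ khatri_rao(A, B)
          return A, B, C

      # Synthetic rank-2 tensor with roughly 40% of its entries missing:
      rng = np.random.default_rng(1)
      A0, B0, C0 = (rng.normal(size=(d, 2)) for d in (10, 12, 14))
      X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
      W = (rng.random(X.shape) > 0.4).astype(float)   # 1 = observed, 0 = missing
      A, B, C = cp_wopt(X * W, W, rank=2)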

  7. Near-misses and future disaster preparedness.

    PubMed

    Dillon, Robin L; Tinsley, Catherine H; Burns, William J

    2014-10-01

    Disasters garner attention when they occur, and organizations commonly extract valuable lessons from visible failures, adopting new behaviors in response. For example, the United States saw numerous security policy changes following the September 11 terrorist attacks and emergency management and shelter policy changes following Hurricane Katrina. But what about those events that occur that fall short of disaster? Research that examines prior hazard experience shows that this experience can be a mixed blessing. Prior experience can stimulate protective measures, but sometimes prior experience can deceive people into feeling an unwarranted sense of safety. This research focuses on how people interpret near-miss experiences. We demonstrate that when near-misses are interpreted as disasters that did not occur and thus provide the perception that the system is resilient to the hazard, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions. On the other hand, if near-misses can be recognized and interpreted as disasters that almost happened and thus provide the perception that the system is vulnerable to the hazard, this will counter the basic "near-miss" effect and encourage mitigation. In this article, we use these distinctions between resilient and vulnerable near-misses to examine how people come to define an event as either a resilient or vulnerable near-miss, as well as how this interpretation influences their perceptions of risk and their future preparedness behavior. Our contribution is in highlighting the critical role that people's interpretation of the prior experience has on their subsequent behavior and in measuring what shapes this interpretation. PMID:24773610

  8. Ocean spectral data assimilation without background error covariance matrix

    NASA Astrophysics Data System (ADS)

    Chu, Peter C.; Fan, Chenwu; Margolina, Tetyana

    2016-08-01

    Predetermination of the background error covariance matrix B is challenging in existing ocean data assimilation schemes such as optimal interpolation (OI). An optimal spectral decomposition (OSD) has been developed to overcome this difficulty without using the B matrix. The basis functions are eigenvectors of the horizontal Laplacian operator, pre-calculated on the basis of ocean topography and independent of any observational data and background fields. Minimization of analysis error variance is achieved by optimal selection of the spectral coefficients. The optimal mode truncation depends on the observational data and observational error variance and is determined using the steepest-descent method. Analytical 2D fields of large and small mesoscale eddies with white Gaussian noise inside a domain with four rigid and curved boundaries are used to demonstrate the capability of the OSD method. The overall error reduction using the OSD is evident in comparison to the OI scheme. Synoptic monthly gridded world ocean temperature, salinity, and absolute geostrophic velocity datasets produced with the OSD method and quality controlled by the NOAA National Centers for Environmental Information (NCEI) are also presented.
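
    A 1-D toy of the OSD idea, with Dirichlet sine modes standing in for the topography-aware Laplacian eigenvectors: spectral coefficients are chosen by least squares from scattered observations, and no background error covariance matrix B is ever formed (the paper's optimal mode truncation is replaced by a fixed cut here).

      import numpy as np

      n, n_modes = 200, 15
      x = np.linspace(0.0, 1.0, n)
      basis = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(n_modes)])

      rng = np.random.default_rng(3)
      truth = np.sin(np.pi * x) + 0.4 * np.sin(3 * np.pi * x)
      obs_idx = np.sort(rng.choice(n, size=60, replace=False))
      y = truth[obs_idx] + 0.05 * rng.normal(size=60)

      coef, *_ = np.linalg.lstsq(basis[obs_idx], y, rcond=None)
      analysis = basis @ coef            # reconstructed field on the full grid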

  9. Mutually unbiased bases as minimal Clifford covariant 2-designs

    NASA Astrophysics Data System (ADS)

    Zhu, Huangjun

    2015-06-01

    Mutually unbiased bases (MUBs) are interesting for various reasons. The most attractive example of (a complete set of) MUBs is the one constructed by Ivanović as well as Wootters and Fields, which is referred to as the canonical MUB. Nevertheless, little is known about anything that is unique to this MUB. We show that the canonical MUB in any prime power dimension is uniquely determined by an extremal orbit of the (restricted) Clifford group except in dimension 3, in which case the orbit defines a special symmetric informationally complete measurement (SIC), known as the Hesse SIC. Here the extremal orbit is the orbit with the smallest number of pure states. Quite surprisingly, this characterization does not rely on any concept that is related to bases or unbiasedness. As a corollary, the canonical MUB is the unique minimal 2-design covariant with respect to the Clifford group except in dimension 3. In addition, these MUBs provide an infinite family of highly symmetric frames and positive-operator-valued measures (POVMs), which are of independent interest.
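
    The canonical construction is easy to verify numerically in an odd prime dimension p: the computational basis plus p Fourier-like bases, any two vectors from different bases overlapping with modulus 1/√p (a check of the construction only, not of the paper's uniqueness result).

      import numpy as np

      p = 7                                    # an odd prime dimension
      w = np.exp(2j * np.pi / p)
      j = np.arange(p)

      bases = [np.eye(p)]                      # the computational basis
      for a in range(p):
          V = np.array([w ** ((a * j * j + b * j) % p)
                        for b in range(p)]).T / np.sqrt(p)
          bases.append(V)                      # columns are the basis vectors

      overlap = np.abs(bases[1].conj().T @ bases[2])
      print(np.allclose(overlap, 1 / np.sqrt(p)))   # True: mutually unbiased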

  10. Effortless assignment with 4D covariance sequential correlation maps.

    PubMed

    Harden, Bradley J; Mishra, Subrata H; Frueh, Dominique P

    2015-11-01

    Traditional Nuclear Magnetic Resonance (NMR) assignment procedures for proteins rely on preliminary peak-picking to identify and label NMR signals. However, such an approach has severe limitations when signals are erroneously labeled or completely neglected. The consequences are especially grave for proteins with substantial peak overlap, and mistakes can often thwart entire projects. To overcome these limitations, we previously introduced an assignment technique that bypasses traditional peak picking altogether. Covariance Sequential Correlation Maps (COSCOMs) transform the indirect connectivity information provided by multiple 3D backbone spectra into direct (H, N) to (H, N) correlations. Here, we present an updated method that utilizes a single four-dimensional spectrum rather than a suite of three-dimensional spectra. We demonstrate the advantages of 4D-COSCOMs relative to their 3D counterparts. We introduce improvements accelerating their calculation. We discuss practical considerations affecting their quality. And finally we showcase their utility in the context of a 52 kDa cyclization domain from a non-ribosomal peptide synthetase.

  11. Covariant Spectator Theory of np scattering: Isoscalar interaction currents

    SciTech Connect

    Gross, Franz L.

    2014-06-01

    Using the Covariant Spectator Theory (CST), one-boson-exchange (OBE) models have been found that give precision fits to low energy $np$ scattering and the deuteron binding energy. The boson-nucleon vertices used in these models contain a momentum dependence that requires a new class of interaction currents for use with electromagnetic interactions. Current conservation requires that these new interaction currents satisfy a two-body Ward-Takahashi (WT) identity, and using principles of simplicity and picture independence, these currents can be uniquely determined. The results lead to general formulae for a two-body current that can be expressed in terms of relativistic $np$ wave functions, $\Psi$, and two convenient truncated wave functions, $\Psi^{(2)}$ and $\widehat{\Psi}$, which contain all of the information needed for the explicit evaluation of the contributions from the interaction current. These three wave functions can be calculated from the CST bound or scattering state equations (and their off-shell extrapolations). A companion paper uses this formalism to evaluate the deuteron magnetic moment.

  12. Ocean spectral data assimilation without background error covariance matrix

    NASA Astrophysics Data System (ADS)

    Chu, Peter C.; Fan, Chenwu; Margolina, Tetyana

    2016-09-01

    Predetermination of the background error covariance matrix B is challenging in existing ocean data assimilation schemes such as optimal interpolation (OI). An optimal spectral decomposition (OSD) has been developed to overcome this difficulty without using the B matrix. The basis functions are eigenvectors of the horizontal Laplacian operator, pre-calculated on the basis of ocean topography and independent of any observational data and background fields. Minimization of analysis error variance is achieved by optimal selection of the spectral coefficients. The optimal mode truncation depends on the observational data and observational error variance and is determined using the steepest-descent method. Analytical 2D fields of large and small mesoscale eddies with white Gaussian noise inside a domain with four rigid and curved boundaries are used to demonstrate the capability of the OSD method. The overall error reduction using the OSD is evident in comparison to the OI scheme. Synoptic monthly gridded world ocean temperature, salinity, and absolute geostrophic velocity datasets produced with the OSD method and quality controlled by the NOAA National Centers for Environmental Information (NCEI) are also presented.

  13. Application of Covariance Data to Criticality Safety Data Validation

    SciTech Connect

    Broadhead, B.L.; Hopper, C.M.; Parks, C.V.

    1999-11-13

    The use of cross-section covariance data has long been a key part of traditional sensitivity and uncertainty analyses (S/U). This paper presents the application of S/U methodologies to the data validation tasks of a criticality safety computational study. The S/U methods presented are designed to provide a formal means of establishing the area (or range) of applicability for criticality safety data validation studies. The goal of this work is to develop parameters that can be used to formally determine the "similarity" of a benchmark experiment (or a set of benchmark experiments individually) and the application area that is to be validated. These parameters are termed D parameters, which represent the differences by energy group of S/U-generated sensitivity profiles, and c_k parameters, which are correlation coefficients, each of which gives information relative to the similarity between pairs of selected systems. The application of a Generalized Linear Least-Squares Methodology (GLLSM) tool to criticality safety validation tasks is also described in this paper. These methods and guidelines are also applied to a sample validation for uranium systems with enrichments greater than 5 wt %.
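
    The c_k parameters mentioned above are correlation coefficients between the covariance-propagated uncertainties of two systems, computed from their groupwise sensitivity profiles; the toy numbers below are illustrative only.

      import numpy as np

      S_bench = np.array([0.12, 0.30, 0.18, 0.05])   # benchmark sensitivities
      S_appl  = np.array([0.10, 0.28, 0.22, 0.04])   # application sensitivities
      C = np.array([[4.0, 2.0, 0.5, 0.1],
                    [2.0, 3.0, 1.0, 0.2],
                    [0.5, 1.0, 2.5, 0.6],
                    [0.1, 0.2, 0.6, 1.5]])           # cross-section covariance (%^2)

      ck = (S_bench @ C @ S_appl) / np.sqrt((S_bench @ C @ S_bench)
                                            * (S_appl @ C @ S_appl))
      print(ck)   # values near 1: the benchmark "covers" the application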

  14. Effortless assignment with 4D covariance sequential correlation maps

    NASA Astrophysics Data System (ADS)

    Harden, Bradley J.; Mishra, Subrata H.; Frueh, Dominique P.

    2015-11-01

    Traditional Nuclear Magnetic Resonance (NMR) assignment procedures for proteins rely on preliminary peak-picking to identify and label NMR signals. However, such an approach has severe limitations when signals are erroneously labeled or completely neglected. The consequences are especially grave for proteins with substantial peak overlap, and mistakes can often thwart entire projects. To overcome these limitations, we previously introduced an assignment technique that bypasses traditional peak picking altogether. Covariance Sequential Correlation Maps (COSCOMs) transform the indirect connectivity information provided by multiple 3D backbone spectra into direct (H, N) to (H, N) correlations. Here, we present an updated method that utilizes a single four-dimensional spectrum rather than a suite of three-dimensional spectra. We demonstrate the advantages of 4D-COSCOMs relative to their 3D counterparts. We introduce improvements accelerating their calculation. We discuss practical considerations affecting their quality. And finally we showcase their utility in the context of a 52 kDa cyclization domain from a non-ribosomal peptide synthetase.

  15. PhyloPars: estimation of missing parameter values using phylogeny.

    PubMed

    Bruggeman, Jorn; Heringa, Jaap; Brandt, Bernd W

    2009-07-01

    A wealth of information on metabolic parameters of a species can be inferred from observations on species that are phylogenetically related. Phylogeny-based information can complement direct empirical evidence, and is particularly valuable if experiments on the species of interest are not feasible. The PhyloPars web server provides a statistically consistent method that combines an incomplete set of empirical observations with the species phylogeny to produce a complete set of parameter estimates for all species. It builds upon a state-of-the-art evolutionary model, extended with the ability to handle missing data. The resulting approach makes optimal use of all available information to produce estimates that can be an order of magnitude more accurate than ad-hoc alternatives. Uploading a phylogeny and incomplete feature matrix suffices to obtain estimates of all missing values, along with a measure of certainty. Real-time cross-validation provides further insight in the accuracy and bias expected for estimated values. The server allows for easy, efficient estimation of metabolic parameters, which can benefit a wide range of fields including systems biology and ecology. PhyloPars is available at: http://www.ibi.vu.nl/programs/phylopars/.
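
    A hedged sketch of the statistical core: under a Brownian-motion model, trait values across species are jointly Gaussian with covariance proportional to shared branch length, so a missing value can be imputed, with a measure of certainty, by Gaussian conditioning (PhyloPars' evolutionary model is richer than plain Brownian motion).

      import numpy as np

      C = np.array([[1.0, 0.6, 0.2],        # shared-branch-length matrix
                    [0.6, 1.0, 0.2],        # for three species
                    [0.2, 0.2, 1.0]])
      mu, sigma2 = 0.0, 1.0
      x = np.array([0.8, np.nan, -0.3])     # species 2 has a missing trait

      obs, mis = ~np.isnan(x), np.isnan(x)
      Coo = sigma2 * C[np.ix_(obs, obs)]
      Cmo = sigma2 * C[np.ix_(mis, obs)]
      x_hat = mu + Cmo @ np.linalg.solve(Coo, x[obs] - mu)
      v_hat = sigma2 * C[np.ix_(mis, mis)] - Cmo @ np.linalg.solve(Coo, Cmo.T)
      print(x_hat, v_hat)                   # point estimate plus its uncertainty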

  16. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By reinterpreting the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not, and may be calculated as a side computation for each unique batch solution. Results based on the proposed technique are presented for a simple two-observer problem with measurement error only.
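
    For orientation, the sketch below forms the theoretical covariance P = (H^T W H)^{-1} of a weighted batch least-squares fit, along with one simple residual-based empirical rescaling of it; the paper's empirical construction differs in detail.

      import numpy as np

      rng = np.random.default_rng(5)
      m, n = 50, 2
      H = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])
      sigma = 0.1
      y = H @ np.array([1.0, -2.0]) + sigma * rng.normal(size=m)

      W = np.eye(m) / sigma**2
      P = np.linalg.inv(H.T @ W @ H)        # theoretical state error covariance
      x_hat = P @ (H.T @ W @ y)

      r = y - H @ x_hat                     # post-fit residuals
      P_emp = P * (r @ W @ r) / (m - n)     # residual-rescaled empirical variant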

  17. Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices

    PubMed Central

    Cai, Tony; Ma, Zongming; Wu, Yihong

    2014-01-01

    This paper considers a sparse spiked covariance matrix model in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace as well as the minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those for estimating other structured covariance matrices such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves the gap in a recent paper by Berthet and Rigollet [2], where the special case of rank one is considered. PMID:26257453

  18. Progress of Covariance Evaluation at the China Nuclear Data Center

    SciTech Connect

    Xu, R.; Zhang, Q.; Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B.; Tang, G.

    2015-01-15

    Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known least-squares approach, a method based on the analysis of the sources of experimental uncertainties is introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and the example of n+90Zr is given to illustrate the whole procedure. It is shown that the accuracy of measurements can be properly incorporated into the covariance and the long-standing small-uncertainty problem can be avoided.

  19. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Harman, R. R.

    1990-01-01

    This paper presents a covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft. The linearized dynamics and measurement equations of the error states are used in formulating the 'truth model' describing the real behavior of the systems involved. The 'design model' used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A 'true covariance analysis' has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  20. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  1. Nonlinear effects in the correlation of tracks and covariance propagation

    NASA Astrophysics Data System (ADS)

    Sabol, C.; Hill, K.; Alfriend, K.; Sukut, T.

    2013-03-01

    Even though methods exist for the nonlinear propagation of the covariance, covariance propagation in current operational programs is based on the state transition matrix of the first variational equations, and is thus a linear propagation. If the measurement errors are zero-mean Gaussian, the orbit errors, statistically represented by the covariance, are Gaussian. When the orbit errors become too large they are no longer Gaussian and are no longer represented by the covariance. One use of the covariance is the association of uncorrelated tracks (UCTs). A UCT is an object tracked by a space surveillance system that does not correlate to another object in the space object database. For an object to be entered into the database, three or more tracks must be correlated. Associating UCTs is a major challenge for a space surveillance system, since every object entered into the space object catalog begins as a UCT. It has been proved that if the orbit errors are Gaussian, the error ellipsoid represented by the covariance is the optimum association volume. When the time between tracks becomes large, hours or even days, the orbit errors can become large and are no longer Gaussian, and this has a negative effect on the association of UCTs. This paper further investigates the nonlinear effects on the accuracy of the covariance for use in correlation. The use of the best coordinate system and of the unscented Kalman filter (UKF) for providing a more accurate covariance is investigated, along with an assessment of how these approaches would improve the ability to correlate tracks that are further separated in time.
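
    The role the covariance plays in association can be seen in a two-line gate test: when errors are Gaussian, the squared Mahalanobis distance of a residual is chi-square distributed, which is exactly what breaks down once nonlinear propagation makes the errors non-Gaussian.

      import numpy as np
      from scipy.stats import chi2

      P = np.array([[4.0, 1.0],
                    [1.0, 2.0]])            # propagated covariance
      r = np.array([2.5, -1.0])             # predicted-minus-observed residual

      d2 = r @ np.linalg.solve(P, r)        # squared Mahalanobis distance
      gate = chi2.ppf(0.997, df=2)          # 99.7% association gate
      print(d2, d2 <= gate)                 # associate only inside the gate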

  2. Improving maternity care in Ethiopia through facility based review of maternal deaths and near misses.

    PubMed

    Gebrehiwot, Yirgu; Tewolde, Birukkidus T

    2014-10-01

    The present study aimed to initiate facility based review of maternal deaths and near misses as part of the Ethiopian effort to reduce maternal mortality and achieve United Nations Millennium Development Goals 4 and 5. An in-depth review of all maternal deaths and near misses among women who visited 10 hospitals in four regions of Ethiopia was conducted between May 2011 and October 2012 as part of the FIGO LOGIC initiative. During the study period, a total of 2774 cases (206 deaths and 2568 near misses) were reviewed. The ratio of maternal deaths to near misses was 1:12 and the overall maternal death rate was 728 per 100 000 live births. Socioeconomic factors associated with maternal mortality included illiteracy (1672 women; 60.3%) and lack of employment outside the home (2098; 75.6%). In all, 1946 (70.2%) women arrived at hospital after they had developed serious complications, owing to issues such as lack of transportation. Only 1223 (44.1%) women received prenatal follow-up, and 157 (76.2%) deaths were attributed to direct obstetric causes. Based on the findings, facilities adopted a number of quality improvement measures such as providing 24-hour services and making ambulances available. Integrating review of maternal deaths and near misses into regular practice provides accurate information on causes of maternal deaths and near misses and also improves quality of care in facilities.

  3. A guide to handling missing data in cost-effectiveness analysis conducted within randomised controlled trials.

    PubMed

    Faria, Rita; Gomes, Manuel; Epstein, David; White, Ian R

    2014-12-01

    Missing data are a frequent problem in cost-effectiveness analysis (CEA) within a randomised controlled trial. Inappropriate methods to handle missing data can lead to misleading results and ultimately can affect the decision of whether an intervention is good value for money. This article provides practical guidance on how to handle missing data in within-trial CEAs following a principled approach: (i) the analysis should be based on a plausible assumption for the missing data mechanism, i.e. whether the probability that data are missing is independent of or dependent on the observed and/or unobserved values; (ii) the method chosen for the base-case should fit with the assumed mechanism; and (iii) sensitivity analysis should be conducted to explore to what extent the results change with the assumption made. This approach is implemented in three stages, which are described in detail: (1) descriptive analysis to inform the assumption on the missing data mechanism; (2) how to choose between alternative methods given their underlying assumptions; and (3) methods for sensitivity analysis. The case study illustrates how to apply this approach in practice, including software code. The article concludes with recommendations for practice and suggestions for future research.
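
    One concrete base-case option under a missing-at-random assumption is multiple imputation; the hedged sketch below uses scikit-learn's IterativeImputer with posterior sampling on synthetic cost and QALY data and pools the per-imputation means (Rubin's rules for combining variances are omitted).

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(6)
      costs = rng.normal(1000.0, 200.0, size=100)
      qalys = 0.7 + 0.0001 * costs + rng.normal(0.0, 0.05, size=100)
      data = np.column_stack([costs, qalys])
      data[rng.random(100) < 0.2, 1] = np.nan    # 20% of QALYs missing

      estimates = []
      for m in range(20):                        # 20 imputed datasets
          imp = IterativeImputer(sample_posterior=True, random_state=m)
          estimates.append(imp.fit_transform(data)[:, 1].mean())
      print(np.mean(estimates))                  # pooled mean QALY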

  4. Structural Effects of Network Sampling Coverage I: Nodes Missing at Random

    PubMed Central

    Smith, Jeffrey A.; Moody, James

    2013-01-01

    Network measures assume a census of a well-bounded population. This level of coverage is rarely achieved in practice, however, and we have only limited information on the robustness of network measures to incomplete coverage. This paper examines the effect of node-level missingness on 4 classes of network measures: centrality, centralization, topology and homophily across a diverse sample of 12 empirical networks. We use a Monte Carlo simulation process to generate data with known levels of missingness and compare the resulting network scores to their known starting values. As with past studies (Borgatti et al 2006; Kossinets 2006), we find that measurement bias generally increases with more missing data. The exact rate and nature of this increase, however, varies systematically across network measures. For example, betweenness and Bonacich centralization are quite sensitive to missing data while closeness and in-degree are robust. Similarly, while the tau statistic and distance are difficult to capture with missing data, transitivity shows little bias even with very high levels of missingness. The results are also clearly dependent on the features of the network. Larger, more centralized networks are generally more robust to missing data, but this is especially true for centrality and centralization measures. More cohesive networks are robust to missing data when measuring topological features but not when measuring centralization. Overall, the results suggest that missing data may have quite large or quite small effects on network measurement, depending on the type of network and the question being posed. PMID:24311893
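
    The Monte Carlo design is straightforward to reproduce in miniature with networkx: delete a random fraction of nodes, recompute a measure, and compare with the full-coverage value (a toy random graph here, not the study's 12 empirical networks).

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(7)
      G = nx.barabasi_albert_graph(200, 3, seed=7)
      full = np.mean(list(nx.betweenness_centrality(G).values()))

      bias = []
      for _ in range(50):
          keep = rng.random(200) > 0.3           # 30% of nodes missing at random
          H = G.subgraph([v for v in G if keep[v]]).copy()
          bias.append(np.mean(list(nx.betweenness_centrality(H).values())) - full)
      print(np.mean(bias))                       # average measurement bias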

  5. Bayesian latent structure models with space-time-dependent covariates.

    PubMed

    Cai, Bo; Lawson, Andrew B; Hossain, Md Monir; Choi, Jungsoon

    2012-04-01

    Spatiotemporal data require flexible regression models that can capture the dependence of responses on space- and time-dependent covariates. In this paper, we describe a semiparametric space-time model from a Bayesian perspective. Nonlinear time dependence of covariates and the interactions among the covariates are constructed by local linear and piecewise linear models, allowing for more flexible orientation and position of the covariate plane through time-varying basis functions. Space-varying covariate linkage coefficients are also incorporated to allow for the variation of space structures across geographical locations. The formulation accommodates uncertainty in the number and locations of the piecewise basis functions used to characterize the global effects, as well as spatially structured and unstructured random effects in relation to covariates. The proposed approach relies on variable-selection-type mixture priors for uncertainty in the number and locations of basis functions and in the space-varying linkage coefficients. A simulation example is presented to evaluate the performance of the proposed approach against competing models, and a real data example is used for illustration.

  6. Summary of the Workshop on Neutron Cross Section Covariances

    SciTech Connect

    Smith, Donald L.

    2008-12-15

    A Workshop on Neutron Cross Section Covariances was held from June 24-27, 2008, in Port Jefferson, New York. This Workshop was organized by the National Nuclear Data Center, Brookhaven National Laboratory, to provide a forum for reporting on the status of the growing field of neutron cross section covariances for applications and for discussing future directions of the work in this field. The Workshop focused on the following four major topical areas: covariance methodology, recent covariance evaluations, covariance applications, and user perspectives. Attention was given to the entire spectrum of neutron cross section covariance concerns ranging from light nuclei to the actinides, and from the thermal energy region to 20 MeV. The papers presented at this conference explored topics ranging from fundamental nuclear physics concerns to very specific applications in advanced reactor design and nuclear criticality safety. This paper provides a summary of this workshop. Brief comments on the highlights of each Workshop contribution are provided. In addition, a perspective on the achievements and shortcomings of the Workshop as well as on the future direction of research in this field is offered.

  7. A three domain covariance framework for EEG/MEG data.

    PubMed

    Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C

    2015-10-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets.
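
    The structure of the model is easy to state in code: the full covariance is a Kronecker product of trial, time, and space factors, and its inverse factorizes the same way, which is what keeps such models tractable (synthetic SPD factors below; the paper's iterative maximum likelihood estimation is not reproduced).

      import numpy as np

      def random_spd(n, seed):
          A = np.random.default_rng(seed).normal(size=(n, n))
          return A @ A.T + n * np.eye(n)

      S_space, S_time, S_trial = (random_spd(n, s) for n, s in
                                  ((5, 0), (10, 1), (4, 2)))
      Sigma = np.kron(S_trial, np.kron(S_time, S_space))     # (200, 200)
      # the Kronecker structure gives cheap inverses: inv(A (x) B) = inv(A) (x) inv(B)
      Sigma_inv = np.kron(np.linalg.inv(S_trial),
                          np.kron(np.linalg.inv(S_time), np.linalg.inv(S_space)))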

  8. Covariance fitting of highly-correlated data in lattice QCD

    NASA Astrophysics Data System (ADS)

    Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong

    2013-07-01

    We address a frequently asked question on the covariance fitting of highly correlated data, such as our B_K data based on the SU(2) staggered chiral perturbation theory. The essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the market, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting, but the rest work well. In our case of the B_K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense; however, interpreting the meaning of χ² is theoretically cleaner with the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool to check the systematic error caused by the covariance fitting procedure.
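
    The cut-off prescription discussed above can be sketched in a few lines: diagonalize the covariance, drop the smallest eigenmodes, and form χ² with the truncated pseudo-inverse (the spectral-mass cut used here is one arbitrary choice).

      import numpy as np

      def chi2_cutoff(resid, cov, keep=0.99):
          """chi^2 = r^T C^+ r with the smallest eigenmodes discarded."""
          vals, vecs = np.linalg.eigh(cov)
          vals, vecs = vals[::-1], vecs[:, ::-1]          # descending order
          k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), keep)) + 1
          C_pinv = (vecs[:, :k] / vals[:k]) @ vecs[:, :k].T
          return resid @ C_pinv @ resid

      rng = np.random.default_rng(8)
      A = rng.normal(size=(6, 6))
      cov = A @ A.T + 1e-6 * np.eye(6)       # nearly singular covariance
      print(chi2_cutoff(rng.normal(size=6), cov))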

  9. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
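
    A compact sketch of the estimator itself: keep the top-K principal components as the factor part and soft-threshold the off-diagonal residual covariance (the threshold level is fixed here, whereas the paper develops adaptive choices and the accompanying theory).

      import numpy as np

      def poet(X, K, tau):
          """X: n x p data matrix; K factors; tau: soft-threshold level."""
          S = np.cov(X, rowvar=False)
          vals, vecs = np.linalg.eigh(S)
          top = np.argsort(vals)[::-1][:K]
          low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T   # factor part
          R = S - low_rank                                         # residual covariance
          R_t = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)      # soft threshold
          np.fill_diagonal(R_t, np.diag(R))                        # keep the diagonal
          return low_rank + R_t

      X = np.random.default_rng(9).normal(size=(200, 50))
      Sigma_hat = poet(X, K=3, tau=0.05)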

  10. The Performance Analysis Based on SAR Sample Covariance Matrix

    PubMed Central

    Erten, Esra

    2012-01-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second order statistics such as the multi-channel covariance matrix fully describe the data. In practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, and moving target indication. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix in terms of multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
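
    The quantity in question is easy to simulate: draw n looks of p-channel circular complex Gaussian data, form the sample covariance, and record its largest eigenvalue (simulation only; the paper's contribution is the simplified analytic description of this distribution).

      import numpy as np

      rng = np.random.default_rng(10)
      p, n, trials = 3, 25, 2000
      max_eigs = np.empty(trials)
      for t in range(trials):
          Z = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
          S = Z @ Z.conj().T / n                 # sample covariance of n looks
          max_eigs[t] = np.linalg.eigvalsh(S).max()
      print(max_eigs.mean(), max_eigs.std())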

  11. DOT and SLA stationary and time-varying analytical covariance functions for LSC-based heterogeneous data combination

    NASA Astrophysics Data System (ADS)

    Vergos, Georgios S.; Natsiopoulos, Dimitrios A.; Tziavos, Ilias N.; Grigoriadis, Vassilios N.; Tzanou, Eleni A.

    2014-05-01

    With the availability of an abundance of earth observation data from satellite altimetry missions as well as from the GOCE satellite, monitoring of sea level variations and the determination of functionals of the Earth's gravity field are gaining increased importance. One of the main issues of heterogeneous data combination with stochastic methods is the availability of appropriate data and error covariance and cross-covariance matrices. The latter need to be determined for all input data within an LSC-based combination scheme, based on analytical global covariance function models which interconnect the observations and the signals to be predicted. Given the availability of altimetric sea surface heights, GOCE observations of the second-order derivatives of the Earth's potential, geoid height variations from GRACE and marine gravity anomalies, one can employ all such available information within LSC to estimate the mean dynamic ocean topography (DOT) as well as its dynamic, i.e., time-varying, part. In this work, we present analytical covariance function models for the DOT in the Mediterranean Sea based on empirical values from altimetry- and GOCE-derived DOT. Various options for the analytical models are tested, from exponential to the well-known Gauss-Markov ones, along with a model similar to the Tscherning and Rapp model for the Earth's gravity field. All available covariance function model choices are tested within an LSC-based prediction scheme in order to identify the one that provides the most rigorous results in terms of prediction error. Moreover, modifications of the standard stationary covariance functions are investigated in order to determine time-varying analytical models, which are used to model the sea level anomaly (SLA) and DOT variability within the entire Mediterranean Basin. The analysis is carried out over a period of 5 years (2008-2013), during which Jason-2 SLA data are employed to derive analytical covariance functions.
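
    Fitting an analytical model to empirical covariance estimates is the mechanical core of this procedure; the sketch below fits a simple exponential C(d) = C0 exp(-d/L) with scipy on made-up empirical values (the paper also tests Gauss-Markov and Tscherning/Rapp-type models).

      import numpy as np
      from scipy.optimize import curve_fit

      def exp_cov(d, C0, L):
          return C0 * np.exp(-d / L)

      dists = np.array([0.0, 25.0, 50.0, 100.0, 150.0, 200.0])   # distance, km
      emp = np.array([36.0, 28.1, 21.5, 12.8, 7.9, 4.6])         # cm^2, made up

      (C0, L), _ = curve_fit(exp_cov, dists, emp, p0=(30.0, 80.0))
      print(C0, L)      # fitted variance and correlation length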

  12. Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.

    PubMed

    George, Brandon; Aban, Inmaculada

    2015-01-15

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects.

  13. Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data

    PubMed Central

    George, Brandon; Aban, Inmaculada

    2014-01-01

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361

  14. Protein remote homology detection based on auto-cross covariance transformation.

    PubMed

    Liu, Xuan; Zhao, Lijie; Dong, Qiwen

    2011-08-01

    Protein remote homology detection is a critical step toward annotating a protein's structure and function. Supervised learning algorithms such as the support vector machine are currently the most accurate methods. Position-specific score matrices (PSSMs) contain a wealth of information about the evolutionary relationships of proteins. However, PSSMs often have different lengths, which makes them difficult to use with machine-learning methods. In this study, a simple, fast and powerful method is presented for protein remote homology detection, which combines a support vector machine with the auto-cross covariance transformation. The PSSMs are converted into a series of fixed-length vectors by the auto-cross covariance transformation, and these vectors are then input to a support vector machine classifier for remote homology detection. Sequence-order effects can be effectively captured by this scheme. Experiments are performed on well-established datasets, with remote homology simulated at the superfamily and the fold level, respectively. The results show that the proposed method, referred to as ACCRe, is comparable or even better than the state-of-the-art methods in terms of detection performance, and its time complexity is superior to those of other profile-based SVM methods. The auto-cross covariance transformation provides a novel way to use evolutionary information, which can be widely applied in protein-level studies. PMID:21664609
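
    The transformation itself is short: correlate column j1 at position i with column j2 at position i+g for lags g = 1..G, producing a vector of length 400*G regardless of sequence length (a sketch of the idea, not the exact published normalization).

      import numpy as np

      def acc_transform(pssm, G):
          L, A = pssm.shape                 # L residues, A = 20 score columns
          mu = pssm.mean(axis=0)
          feats = []
          for g in range(1, G + 1):
              X, Y = pssm[:-g] - mu, pssm[g:] - mu
              feats.append((X.T @ Y / (L - g)).ravel())   # all (j1, j2) at lag g
          return np.concatenate(feats)      # fixed length A * A * G

      pssm = np.random.default_rng(11).normal(size=(120, 20))
      print(acc_transform(pssm, G=4).shape)   # (1600,) for any sequence length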

  15. Area group: an example of style and paste compositional covariation in Maya pottery

    SciTech Connect

    Bishop, R.L.; Reents, D.J.; Harbottle, G.; Sayre, E.V.; van Zelst, L.

    1983-06-12

    This paper addresses aspects of ceramic style and iconography as found in Late Classic Maya ceramic art, including the supplemental perspective afforded by the analysis of ceramic paste. The chemical data provide a means to assess the extent of stylistic-paste compositional covariation. Depending upon the strength of that covariation, various inferences may be drawn about craft specialization, exchange and information flow within Maya society. At the least, it provides an empirical means of comparing stylistically similar vessels; and when they are members of a chemically homogeneous group, it permits style to be addressed in terms of its variation. Additionally, compositionally defined site- or region-specific reference units provide a chemical background against which non-provenienced vessels may be compared, allowing whole vessels to be related to the archaeologically recovered fragmentary material. Finally, this multidisciplinary approach is illustrated by preliminary findings concerning a specific group of polychrome vessels, the Area Group.

  16. Quantifying the Effect of Component Covariances in CMB Extraction from Multi-frequency Data

    NASA Technical Reports Server (NTRS)

    Phillips, Nicholas G.

    2008-01-01

    Linear combination methods provide a global method for component separation of multi-frequency data. We present such a method that allows for consideration of possible covariances between the desired cosmic microwave background (CMB) signal and the various foreground signals that are also present. We also recover information on the foregrounds, including the number of foregrounds, their spectra and templates. In all this, the covariances, which we would only expect to vanish 'in the mean', are included as parameters expressing the fundamental uncertainty due to this type of cosmic variance. When we make the reasonable assumption that the CMB is Gaussian, we can compute both a mean recovered CMB map and an RMS error map. The mean map coincides with WMAP's Internal Linear Combination map.
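
    The classic linear-combination weights minimize the output variance subject to unit response to the CMB spectrum, w = C^{-1}a / (a^T C^{-1} a); the sketch below applies them to synthetic multi-frequency maps (the paper's extension, modeling CMB-foreground covariances, is not reproduced here).

      import numpy as np

      rng = np.random.default_rng(12)
      n_freq, n_pix = 5, 10000
      cmb = rng.normal(size=n_pix)
      dust = 0.5 * rng.normal(size=n_pix)
      dust_spec = np.array([0.1, 0.3, 0.6, 1.0, 1.6])   # hypothetical spectrum
      maps = (np.outer(np.ones(n_freq), cmb) + np.outer(dust_spec, dust)
              + 0.2 * rng.normal(size=(n_freq, n_pix)))

      a = np.ones(n_freq)                 # CMB response (thermodynamic units)
      C = np.cov(maps)                    # frequency-frequency covariance
      w = np.linalg.solve(C, a) / (a @ np.linalg.solve(C, a))
      print(np.corrcoef(cmb, w @ maps)[0, 1])   # quality of the recovered map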

  17. Directional gamma sensing from covariance processing of inter-detector Compton crosstalk energy asymmetries

    SciTech Connect

    Trainham, R.; Tinsley, J.

    2014-06-15

    Energy asymmetry of inter-detector crosstalk from Compton scattering can be exploited to infer the direction to a gamma source. A covariance approach extracts the correlated crosstalk from data streams to estimate matched signals from Compton gammas split over two detectors. On a covariance map the signal appears as an asymmetric cross diagonal band with axes intercepts at the full photo-peak energy of the original gamma. The asymmetry of the crosstalk band can be processed to determine the direction to the radiation source. The technique does not require detector shadowing, masking, or coded apertures, thus sensitivity is not sacrificed to obtain the directional information. An angular precision of better than 1° of arc is possible, and processing of data streams can be done in real time with very modest computing hardware.

  18. Jumbled genomes: missing Apicomplexan synteny.

    PubMed

    DeBarry, Jeremy D; Kissinger, Jessica C

    2011-10-01

    Whole-genome comparisons provide insight into genome evolution by informing on gene repertoires, gene gains/losses, and genome organization. Most of our knowledge about eukaryotic genome evolution is derived from studies of multicellular model organisms. The eukaryotic phylum Apicomplexa contains obligate intracellular protist parasites responsible for a wide range of human and veterinary diseases (e.g., malaria, toxoplasmosis, and theileriosis). We have developed an in silico protein-encoding gene based pipeline to investigate synteny across 12 apicomplexan species from six genera. Genome rearrangement between lineages is extensive. Syntenic regions (conserved gene content and order) are rare between lineages and appear to be totally absent across the phylum, with no group of three genes found on the same chromosome and in the same order within 25 kb up- and downstream of any orthologous genes. Conserved synteny between major lineages is limited to small regions in Plasmodium and Theileria/Babesia species, and within these conserved regions, there are a number of proteins putatively targeted to organelles. The observed overall lack of synteny is surprising considering the divergence times and the apparent absence of transposable elements (TEs) within any of the species examined. TEs are ubiquitous in all other groups of eukaryotes studied to date and have been shown to be involved in genomic rearrangements. It appears that there are different criteria governing genome evolution within the Apicomplexa relative to other well-studied unicellular and multicellular eukaryotes. PMID:21504890

  19. Missing saiga on the taiga.

    PubMed

    Kuhn, Tyler S; Mooers, Arne Ø

    2010-11-01

    Conservation biologists understand that linking demographic histories of species at risk with causal biotic and abiotic events should help us predict the effects of ongoing biotic and abiotic change. In parallel, researchers have started to use ancient genetic information (aDNA) to explore the demographic histories of a number of species present in the Pleistocene fossil record (see, e.g. Shapiro et al. 2004). However, aDNA studies have primarily focused on identifying long-term population trends, linked to climate variability and the role of early human activity. Population trends over more recent time, e.g. during the Holocene, have been poorly explored, partly owing to analytical limitations. In this issue, Campos et al. (2010a) highlight the potential of aDNA to investigate demographic patterns over such recent time periods for the compelling and endangered saiga antelope Saiga tatarica (Fig. 1). The time may come when past and current demography can be combined to produce a seamless record.

  20. The Feeling Words Curriculum: The Missing Link.

    ERIC Educational Resources Information Center

    Maurer, Marvin

    The Feeling Words Curriculum, a curriculum that integrates the cognitive and affective domains in one course of study, is described in this paper. The opening sections explain how "feeling words," key vocabulary terms, are used to provide the missing link from one person's life to another's. Stressing the importance of helping students to develop…

  1. The Missing Link in ESL Teacher Training.

    ERIC Educational Resources Information Center

    Justen, Edward F.

    1984-01-01

    Many sincere, well-prepared, and technically qualified teachers of English as a second language (ESL) are awkward in class, stressed, and insecure, showing little excitement or energy. The missing element in training programs for ESL teachers is a good basic course in drama. Drama is expression at both the visual and auditory levels, and is a medium…

  2. Comparing three gap filling methods for eddy covariance crop evapotranspiration measurements within a hilly agricultural catchment

    NASA Astrophysics Data System (ADS)

    Boudhina, Nissaf; Prévot, Laurent; Zitouna Chebbi, Rim; Mekki, Insaf; Jacob, Frédéric; Ben Mechlia, Netij; Masmoudi, Moncef

    2015-04-01

    Hilly watersheds are widespread throughout coastal areas around the Mediterranean Basin. They are experiencing agricultural intensification, since hilly topographies allow water-harvesting techniques that compensate for rainfall shortage, water being a strong limiting factor for crop production. Their fragility is likely to increase with climate change and human pressure. Within semi-arid hilly watershed conditions, evapotranspiration (ETR) is a major term of both the land surface energy and water balances. Several methods allow ETR to be determined, based either on direct measurements or on estimation and forecasting from weather and soil moisture data using simulation models. Among these methods, the eddy covariance technique uses high-frequency measurements of fluctuations of wind speed and air temperature / humidity to directly determine the convective fluxes between the land surface and the atmosphere. In spite of experimental and instrumental progress, datasets of eddy covariance measurements often contain large portions of missing data. These gaps result from power failures, experimental maintenance, instrumental troubles such as krypton hygrometer malfunction under humid air, or quality-assessment-based filtering in relation to the spatial homogeneity and temporal stationarity of turbulence within the surface boundary layer. This last item is all the more important as hilly topography, when combined with strong winds, tends to increase turbulence within the surface boundary layer. The main objective of this study is to establish gap-filling procedures that provide complete records of eddy-covariance measurements of crop evapotranspiration (ETR) within a hilly agricultural watershed. We focus on the specific conditions induced by the combination of hilly topography and wind direction, by discriminating between upslope and downslope winds. The experiment was set up for three field configurations within hilly conditions: two flux measurement stations (A, B) were installed
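
    One simple member of the gap-filling family evaluated in studies like this is mean diurnal variation; a minimal sketch (the three methods actually compared are not named in this record, and the series name, half-hourly sampling, and window width below are illustrative):

        import numpy as np
        import pandas as pd

        def fill_mdv(et, window_days=7):
            # Mean-diurnal-variation gap filling for a half-hourly series.
            # et: pandas Series with a DatetimeIndex; NaN marks rejected or
            # missing eddy-covariance values. Each gap is replaced by the mean
            # of valid values at the same time of day within +/- window_days.
            filled = et.copy()
            tod = et.index.hour * 60 + et.index.minute   # time-of-day key
            for i in np.where(et.isna())[0]:
                t = et.index[i]
                near = (np.abs((et.index - t).days) <= window_days) & (tod == tod[i])
                vals = et[near].dropna()
                if len(vals):
                    filled.iloc[i] = vals.mean()
            return filled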

  3. Incorrect support and missing center tolerances of phasing algorithms

    DOE PAGESBeta

    Huang, Xiaojing; Nelson, Johanna; Steinbrener, Jan; Kirz, Janos; Turner, Joshua J.; Jacobsen, Chris

    2010-01-01

    In x-ray diffraction microscopy, iterative algorithms retrieve reciprocal space phase information, and a real space image, from an object's coherent diffraction intensities through the use of a priori information such as a finite support constraint. In many experiments, the object's shape or support is not well known, and the diffraction pattern is incompletely measured. We describe here computer simulations to look at the effects of both of these possible errors when using several common reconstruction algorithms. Overly tight object supports prevent successful convergence; however, we show that this can often be recognized through pathological behavior of the phase retrieval transfer function. Dynamic range limitations often make it difficult to record the central speckles of the diffraction pattern. We show that this leads to increasing artifacts in the image when the number of missing central speckles exceeds about 10, and that the removal of unconstrained modes from the reconstructed image is helpful only when the number of missing central speckles is less than about 50. In conclusion, this simulation study helps in judging the reconstructability of experimentally recorded coherent diffraction patterns.

  4. Incorrect support and missing center tolerances of phasing algorithms

    SciTech Connect

    Huang, Xiaojing; Nelson, Johanna; Steinbrener, Jan; Kirz, Janos; Turner, Joshua J.; Jacobsen, Chris

    2010-01-01

    In x-ray diffraction microscopy, iterative algorithms retrieve reciprocal space phase information, and a real space image, from an object's coherent diffraction intensities through the use of a priori information such as a finite support constraint. In many experiments, the object's shape or support is not well known, and the diffraction pattern is incompletely measured. We describe here computer simulations to look at the effects of both of these possible errors when using several common reconstruction algorithms. Overly tight object supports prevent successful convergence; however, we show that this can often be recognized through pathological behavior of the phase retrieval transfer function. Dynamic range limitations often make it difficult to record the central speckles of the diffraction pattern. We show that this leads to increasing artifacts in the image when the number of missing central speckles exceeds about 10, and that the removal of unconstrained modes from the reconstructed image is helpful only when the number of missing central speckles is less than about 50. In conclusion, this simulation study helps in judging the reconstructability of experimentally recorded coherent diffraction patterns.

  5. Nuclear Data Target Accuracies for Generation-IV Systems Based on the use of New Covariance Data

    SciTech Connect

    G. Palmiotti; M. Salvatores; M. Assawaroongruengchot; M. Herman; P. Oblozinsky; C. Mattoon

    2010-04-01

    A target accuracy assessment using newly available covariance data, the AFCI 1.2 covariance data, has been carried out. At the same time, the more theoretical issue of taking correlation terms into account in target accuracy assessment studies has been investigated in depth. The impact of correlation terms on target accuracy assessment is very significant and can produce very stringent requirements on nuclear data. For this type of study a broader energy group structure should be used, in order to smooth out requirements and provide better feedback to evaluators and cross section measurement experts. The main differences in results between the BOLNA and AFCI 1.2 covariance data are related to the accuracy requirements for minor actinides, minor Pu isotopes, structural materials (in particular Fe56), and coolant isotopes (Na23).
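
    The effect of the correlation terms can be made concrete with the standard sandwich rule, var(R) = S^T M S, where S holds the sensitivities of an integral response to the nuclear data and M is their covariance matrix (the numbers below are made up for illustration):

        import numpy as np

        # Sensitivity of an integral response (e.g. k_eff) to two cross-section
        # groups, with 3% standard deviations and an optional correlation term.
        S = np.array([0.8, 0.5])
        sd = np.array([0.03, 0.03])
        for rho in (0.0, 0.5):
            M = np.outer(sd, sd) * np.array([[1.0, rho], [rho, 1.0]])
            var = S @ M @ S                     # "sandwich rule" uncertainty
            print(f"rho={rho}: dR/R = {np.sqrt(var):.4f}")
        # Positive correlations inflate the propagated uncertainty, which is why
        # they tighten the accuracy requirements on the underlying data.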

  6. New cyberinfrastructure for studying land-atmosphere interactions using eddy covariance techniques

    NASA Astrophysics Data System (ADS)

    Jaimes, A.; Salayandia, L.; Gallegos, I.; Gates, A. Q.; Tweedie, C.

    2010-12-01

    limitations on ecological instrumentation output that affect data uncertainty. The objective was to parameterize and capture the scientific knowledge necessary to typify data quality associated with eddy covariance methods. The process was documented by developing workflow-driven ontologies, which can be used to disseminate how the eddy covariance data are captured and processed at JER, and also to automate the capture of provenance meta-data. Ultimately, we hope to develop scalable eddy covariance data capturing systems that offer additional information about how the data were captured, which should result in data sets with a higher degree of re-usability.

  7. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
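
    A simplified reading of this construction in code (a sketch, not the paper's exact derivation: the usual normal-equation covariance is rescaled by the average weighted residual index, so that actual residuals, rather than assumed noise levels, set the size of the reported uncertainty):

        import numpy as np

        def wls_with_empirical_covariance(H, W, z):
            # Batch weighted least squares: H is the measurement matrix,
            # W the weight matrix, z the observation vector.
            N = H.T @ W @ H                       # information (normal) matrix
            x = np.linalg.solve(N, H.T @ W @ z)   # state estimate
            r = z - H @ x                         # measurement residuals
            j_avg = (r @ W @ r) / len(z)          # average weighted residual index
            P_emp = j_avg * np.linalg.inv(N)      # empirically scaled covariance
            return x, P_emp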

  8. 40 CFR 98.45 - Procedures for estimating missing data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...

  9. 40 CFR 98.45 - Procedures for estimating missing data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...

  10. 40 CFR 98.45 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...

  11. 40 CFR 98.45 - Procedures for estimating missing data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.45 Procedures for estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...

  12. 40 CFR 98.75 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the missing data period. (b) For missing feedstock supply rates or waste recycle stream used to determine monthly feedstock consumption or monthly waste recycle stream quantity, you must determine...

  13. 40 CFR 98.75 - Procedures for estimating missing data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the missing data period. (b) For missing feedstock supply rates or waste recycle stream used to determine monthly feedstock consumption or monthly waste recycle stream quantity, you must determine...

  14. Newton law in covariant unimodular F(R) gravity

    NASA Astrophysics Data System (ADS)

    Nojiri, S.; Odintsov, S. D.; Oikonomou, V. K.

    2016-09-01

    We investigate the Newton law in unimodular F(R) gravity. In standard F(R) gravity, the extra scalar mode often produces large corrections to the Newton law, and such models are excluded by experiments and/or observations. In unimodular F(R) gravity, however, the extra scalar mode ceases to be dynamical because of the unimodular constraint, and there is no correction to the Newton law. Even in unimodular Einstein gravity, the Newton law is reproduced, but the mechanism is slightly different from that in unimodular F(R) gravity. We also investigate unimodular F(R) gravity in the covariant formulation, in which we include a three-form field. We show that the three-form field need not have any unwanted properties, such as a ghost or a correction to the Newton law. In the covariant formulation, however, the extra scalar mode becomes dynamical and could give a correction to the Newton law. We also show that there is no difference in the Friedmann-Robertson-Walker (FRW) dynamics between the non-covariant and covariant formulations.

  15. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of the quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515
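
    A rough sketch of adaptively weighted l1-penalized quantile regression, using scikit-learn's QuantileRegressor (an illustration only: each quantile is penalized separately here, so the paper's uniform across-quantile penalty is not reproduced):

        import numpy as np
        from sklearn.linear_model import QuantileRegressor

        def adaptive_quantile_lasso(X, y, taus=(0.25, 0.5, 0.75), alpha=0.1):
            # A pilot, lightly penalized fit supplies the adaptive weights;
            # rescaling the columns by those weights and refitting emulates an
            # adaptive LASSO penalty (coefficients with small pilot estimates
            # are penalized more heavily).
            coefs = {}
            for tau in taus:
                pilot = QuantileRegressor(quantile=tau, alpha=1e-4).fit(X, y)
                w = np.abs(pilot.coef_) + 1e-8        # adaptive weights
                fit = QuantileRegressor(quantile=tau, alpha=alpha).fit(X * w, y)
                coefs[tau] = fit.coef_ * w            # undo the rescaling
            return coefs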

  16. Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection.

    PubMed

    Xu, M; Paul, M R

    2016-06-01

    We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20≲D_{λ}≲50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. Our results yield that the entire spectrum of covariant Lyapunov vectors that we have computed are tangled as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization. PMID:27415256

  17. Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Xu, M.; Paul, M. R.

    2016-06-01

    We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20 ≲ Dλ ≲ 50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. Our results yield that the entire spectrum of covariant Lyapunov vectors that we have computed are tangled as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization.

  18. Milepost locations in rural emergency response : the missing piece.

    SciTech Connect

    Armstrong, Hillary Minich

    2004-06-01

    An incident location must be translated into an address that responders can find on the ground. In populated areas it's street name and address number. For sparsely populated areas or highways it's typically road name and nearest milepost number. This is paired with road intersection information to help responders approach the incident as quickly and safely as possible. If responders are new to the area, or for cross-country response, more assistance is needed. If dispatchers had mileposts as points on their maps they could provide this assistance as well as vital information to public safety authorities as the incident unfolds. Mileposts are already universally understood and used. The missing rural response piece is to get milepost locations onto dispatch and control center screens.

  19. Covariances Obtained from an Evaluation of the Neutron Cross Section Standards

    SciTech Connect

    Carlson, A. D.; Badikov, S. A.; Chen, Zhenpeng; Gai, E.; Pronyaev, V. G.; Hale, G. M.; Kawano, T.; Hambsch, F.; Hoffman, H.; Larson, Nancy M; Oli, S.; Smith, D. L.; Tagesen, S.; Vonach, H.

    2008-12-01

    New measurements and an improved evaluation process were used to obtain a new evaluation of the neutron cross section standards. Efforts were made to include as much information as possible on the components of the data uncertainties that were then used to obtain the covariance matrices for the experimental data. Evaluations were produced from this process for the 6Li(n,t), 10B(n,α), 10B(n,α1), 197Au(n,γ), 235U(n,f), and 238U(n,f) standard cross sections as well as the non-standard 6Li(n,n), 10B(n,n), 238U(n,γ) and 239Pu(n,f) cross sections. There is a general increase in the cross sections for most of the new evaluations, by as much as about 5%, compared with the ENDF/B-VI results. Covariance data were obtained for the 6Li(n,t), 6Li(n,n), 10B(n,α), 10B(n,α1), 10B(n,n), 197Au(n,γ), 235U(n,f), 238U(n,f), 238U(n,γ) and 239Pu(n,f) reactions. An independent R-matrix evaluation was also produced for the H(n,n) standard cross section; however, covariance data are not available for this reaction. The evaluations were used in the new ENDF/B-VII library.

  20. Gravity field improvement using global positioning system data from TOPEX/Poseidon - A covariance analysis

    NASA Technical Reports Server (NTRS)

    Bertiger, Willy I.; Wu, J. T.; Wu, Sien C.

    1992-01-01

    The TOPEX/Poseidon satellite data can be used to improve the knowledge of the earth's gravitational field. The GPS data are especially useful for improving the gravity field over the world's oceans, where the current tracking data are sparse. Using a realistic scenario for processing 10 days of GPS data, a covariance analysis is performed to obtain the expected improvement to the GEM-T2 gravity field. The large amount of GPS data and the large number of parameters (1979 parameters for the gravity field, plus carrier-phase biases, etc.) required special filtering techniques for an efficient solution. The gravity-bin technique is used to compute the covariance matrix associated with the spherical harmonic gravity field. The covariance analysis shows that the GPS data from one 10-day arc of TOPEX/Poseidon with no a priori constraints can resolve medium degree and order (3-26) parameters with sigmas (standard deviations) that are an order of magnitude smaller than the corresponding sigmas of GEM-T2. When the information from GEM-T2 is combined with the TOPEX/Poseidon GPS measurements, an order-of-magnitude improvement is observed in low- and medium-degree terms, with significant improvements spread over a wide range of degree and order.
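
    The logic of a covariance analysis can be condensed to a few lines: expected accuracies follow from the a priori covariance and the measurement information alone, with no actual data processed (a generic sketch; the names and shapes are assumed for illustration and none of the mission-specific filtering is reproduced):

        import numpy as np

        def expected_covariance(P_prior, H, R):
            # Expected parameter covariance after combining a priori knowledge
            # (P_prior) with measurements of sensitivity H and noise covariance R.
            # No observations enter: only second moments are needed, which is
            # what makes a pure covariance analysis possible.
            info = np.linalg.inv(P_prior) + H.T @ np.linalg.solve(R, H)
            return np.linalg.inv(info)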

  1. Design, implementation and reporting strategies to reduce the instance and impact of missing patient-reported outcome (PRO) data: a systematic review

    PubMed Central

    Mercieca-Bebber, Rebecca; Palmer, Michael J; Brundage, Michael; Stockler, Martin R; King, Madeleine T

    2016-01-01

    Objectives Patient-reported outcomes (PROs) provide important information about the impact of treatment from the patients' perspective. However, missing PRO data may compromise the interpretability and value of the findings. We aimed to report: (1) a non-technical summary of problems caused by missing PRO data; and (2) a systematic review by collating strategies to: (A) minimise rates of missing PRO data, and (B) facilitate transparent interpretation and reporting of missing PRO data in clinical research. Our systematic review does not address statistical handling of missing PRO data. Data sources MEDLINE and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases (inception to 31 March 2015), and citing articles and reference lists from relevant sources. Eligibility criteria English articles providing recommendations for reducing missing PRO data rates, or strategies to facilitate transparent interpretation and reporting of missing PRO data were included. Methods 2 reviewers independently screened articles against eligibility criteria. Discrepancies were resolved with the research team. Recommendations were extracted and coded according to framework synthesis. Results 117 sources (55% discussion papers, 26% original research) met the eligibility criteria. Design and methodological strategies for reducing rates of missing PRO data included: incorporating PRO-specific information into the protocol; carefully designing PRO assessment schedules and defining termination rules; minimising patient burden; appointing a PRO coordinator; PRO-specific training for staff; ensuring PRO studies are adequately resourced; and continuous quality assurance. Strategies for transparent interpretation and reporting of missing PRO data include utilising auxiliary data to inform analysis; transparently reporting baseline PRO scores, rates and reasons for missing data; and methods for handling missing PRO data. Conclusions The instance of missing PRO data and its

  2. Eddy Covariance Flux Measurements of Pollutant Gases in the Mexico City Urban Area: a Useful Technique to Evaluate Emissions inventories

    NASA Astrophysics Data System (ADS)

    Velasco, E.; Grivicke, R.; Pressley, S.; Allwine, G.; Jobson, T.; Westberg, H.; Lamb, B.; Ramos, R.; Molina, L.

    2007-12-01

    Direct measurements of emissions of pollutant gases that include all major and minor emissions sources in urban areas are a missing requirement for improving and evaluating emissions inventories. The quality of an urban emissions inventory relies on the accuracy of the information on anthropogenic activities, which in many cases is not available, in particular in urban areas of developing countries. As part of the MCMA-2003 field campaign, we demonstrated the feasibility of using eddy covariance (EC) techniques coupled with fast-response sensors to measure fluxes of volatile organic compounds (VOCs) and CO2 from a residential district of Mexico City. Those flux measurements also proved to be a valuable tool for evaluating the emissions inventory used for air quality modeling. With the objective of confirming the representativeness of the 2003 flux measurements in terms of magnitude, composition and diurnal distribution, as well as evaluating the most recent emissions inventory, a second flux system was deployed in a different district of Mexico City during the 2006 MILAGRO field campaign. This system was located in a busy district surrounded by congested avenues close to the center of the city. In 2003 and 2006, fluxes of olefins and CO2 were measured by the EC technique using a Fast Isoprene Sensor calibrated with a propylene standard and an open-path Infrared Gas Analyzer (IRGA), respectively. Fluxes of aromatic and oxygenated VOCs were analyzed by Proton Transfer Reaction-Mass Spectrometry (PTR-MS) and the disjunct eddy covariance (DEC) technique. In 2006 the number of VOCs was extended using a disjunct eddy accumulation (DEA) system. This system collected whole air samples as a function of the direction of the vertical wind component, and the samples were analyzed on site by gas chromatography / flame ionization detection (GC-FID). In both studies we found that the urban surface is a net source of CO2 and VOCs. The diurnal patterns were similar, but the 2006 fluxes

  3. A True Eddy Accumulation - Eddy Covariance hybrid for measurements of turbulent trace gas fluxes

    NASA Astrophysics Data System (ADS)

    Siebicke, Lukas

    2016-04-01

    Eddy covariance (EC) is the state of the art for directly and continuously measuring turbulent fluxes of carbon dioxide and water vapor. However, low signal-to-noise ratios, high flow rates and missing or complex gas analyzers limit its application to a few scalars. True eddy accumulation, based on conditional sampling ideas by Desjardins in 1972, requires no fast-response analyzers and is therefore potentially applicable to a wider range of scalars. Recently we showed possibly the first successful implementation of True Eddy Accumulation (TEA), measuring net ecosystem exchange of carbon dioxide of a grassland. However, most accumulation systems share the complexity of having to store discrete air samples in physical containers representing entire flux averaging intervals. The current study investigates merging principles of eddy accumulation and eddy covariance, which we here refer to as "true eddy accumulation in transient mode" (TEA-TM). This direct flux method, TEA-TM, combines true eddy accumulation with continuous sampling. The TEA-TM setup is simpler than discrete accumulation methods while avoiding the need for the fast-response gas analyzers and high flow rates required for EC. We implemented the proposed TEA-TM method and measured fluxes of carbon dioxide (CO2), methane (CH4) and water vapor (H2O) above a mixed beech forest at the Hainich Fluxnet and ICOS site, Germany, using a G2301 laser spectrometer (Picarro Inc., USA). We further simulated a TEA-TM sampling system using measured high-frequency CO2 time series from an open-path gas analyzer. We operated TEA-TM side-by-side with open-, enclosed- and closed-path EC flux systems for CO2, H2O and CH4 (LI-7500, LI-7200, LI-6262, LI-7700, Licor, USA, and FGGA LGR, USA). First results show that TEA-TM CO2 fluxes were similar to EC fluxes. Remaining differences were similar to those between the three eddy covariance setups (open-, enclosed- and closed-path gas analyzers). Measured TEA-TM CO2 fluxes from our physical
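
    At its core, the eddy covariance flux against which such accumulation methods are compared is just the block-averaged covariance of vertical wind and scalar concentration; a minimal sketch (sampling rate, block length and variable names are assumed; no coordinate rotation, despiking or density corrections are applied):

        import numpy as np

        def ec_flux(w, c, block=18000):
            # w: vertical wind speed, c: scalar concentration, both sampled at
            # high frequency (e.g. 10 Hz, so 18000 samples per 30-min block).
            # Means are removed per block (Reynolds decomposition) and the flux
            # is the covariance of the fluctuations.
            n = len(w) // block
            fluxes = []
            for i in range(n):
                ws = w[i * block:(i + 1) * block]
                cs = c[i * block:(i + 1) * block]
                fluxes.append(np.mean((ws - ws.mean()) * (cs - cs.mean())))
            return np.array(fluxes)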

  4. The case of the missing third.

    PubMed

    Robertson, Robin

    2005-01-01

    How is it that form arises out of chaos? In attempting to deal with this primary question, time and again a "Missing Third" is posited that lies between extremes. The problem of the "Missing Third" can be traced through nearly the entire history of thought. The form it takes, the problems that arise from it, the solutions suggested for resolving it, are each representative of an age. This paper traces the issue from Plato and Parmenides in the 4th--5th centuries, B.C.; to Neoplatonism in the 3rd--5th centuries; to Locke and Descartes in the 17th century; on to Berkeley and Kant in the 18th century; Fechner and Wundt in the 19th century; to behaviorism and Gestalt psychology, Jung, early in the 20th century, ethology and cybernetics later in the 20th century, then culminates late in the 20th century, with chaos theory.

  5. Detrended fluctuation analysis with missing data

    NASA Astrophysics Data System (ADS)

    Løvsletten, Ola

    2016-04-01

    Detrended fluctuation analysis (DFA) has become a popular tool for studying the scaling behavior of time series in a wide range of scientific disciplines. Many geophysical time series contain "gaps", meaning that some observations of a regularly sampled time series are missing. We show how DFA can be modified to properly handle signals with missing data without the need for interpolation or re-sampling. A new result is presented which states that one can write the fluctuation function in terms of a weighted sum of variograms (also known as second-order structure functions). In the presence of gaps this new estimator is equal in expectation to the fluctuation function in the gap-free case. A small-sample Monte Carlo study, as well as theoretical argument, show the superiority of the proposed method against mean-filling, linear interpolation and resampling.
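
    For reference, the gap-free fluctuation function that the paper generalizes can be computed as below (a standard DFA sketch; the variogram-based estimator for gappy series proposed in the paper is not reproduced here):

        import numpy as np

        def dfa(x, scales):
            # For each scale s, the cumulative sum of the centered series is
            # split into windows of length s, a linear trend is removed in each
            # window, and F(s) is the RMS of the residuals. The slope of
            # log F(s) vs log s estimates the scaling exponent.
            y = np.cumsum(x - np.mean(x))          # profile
            F = []
            for s in scales:
                n = len(y) // s
                segs = y[:n * s].reshape(n, s)
                t = np.arange(s)
                ms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)   # local linear trend
                    ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(ms)))
            return np.array(F)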

  6. Genome-Wide Scan for Adaptive Divergence and Association with Population-Specific Covariates.

    PubMed

    Gautier, Mathieu

    2015-12-01

    In population genomics studies, accounting for the neutral covariance structure across population allele frequencies is critical to improve the robustness of genome-wide scan approaches. Elaborating on the BayEnv model, this study investigates several modeling extensions (i) to improve the estimation accuracy of the population covariance matrix and all the related measures, (ii) to identify significantly overly differentiated SNPs based on a calibration procedure of the XtX statistics, and (iii) to consider alternative covariate models for analyses of association with population-specific covariables. In particular, the auxiliary variable model allows one to deal with multiple testing issues and, providing the relative marker positions are available, to capture some linkage disequilibrium information. A comprehensive simulation study was carried out to evaluate the performances of these different models. Also, when compared in terms of power, robustness, and computational efficiency to five other state-of-the-art genome-scan methods (BayEnv2, BayScEnv, BayScan, flk, and lfmm), the proposed approaches proved highly effective. For illustration purposes, genotyping data on 18 French cattle breeds were analyzed, leading to the identification of 13 strong signatures of selection. Among these, four (surrounding the KITLG, KIT, EDN3, and ALB genes) contained SNPs strongly associated with the piebald coloration pattern while a fifth (surrounding PLAG1) could be associated to morphological differences across the populations. Finally, analysis of Pool-Seq data from 12 populations of Littorina saxatilis living in two different ecotypes illustrates how the proposed framework might help in addressing relevant ecological issues in nonmodel species. Overall, the proposed methods define a robust Bayesian framework to characterize adaptive genetic differentiation across populations. The BayPass program implementing the different models is available at http://www1.montpellier.inra.fr/CBGP/software/baypass/.

  7. Bayesian recursive mixed linear model for gene expression analyses with continuous covariates.

    PubMed

    Casellas, J; Ibáñez-Escriche, N

    2012-01-01

    The analysis of microarray gene expression data has experienced a remarkable growth in scientific research over the last few years and is helping to decipher the genetic background of several productive traits. Nevertheless, most analytical approaches have relied on the comparison of 2 (or a few) well-defined groups of biological conditions where continuous covariates have no sense (e.g., healthy vs. cancerous cells). Continuous effects could be of special interest when analyzing gene expression in animal production-oriented studies (e.g., birth weight), although very few studies address this peculiarity in the animal science framework. Within this context, we have developed a recursive linear mixed model in which linear covariates are not only accounted for during gene expression analyses but also hierarchized, with the effects of their genetic, environmental, and residual components on differential gene expression inferred independently. This parameterization allows a step forward in the inference of differential gene expression linked to a given quantitative trait such as birth weight. The statistical performance of this recursive model was exemplified under simulation by accounting for different sample sizes (n), heritabilities for the quantitative trait (h(2)), and magnitudes of differential gene expression (λ). It is important to highlight that statistical power increased with n, h(2), and λ, and the recursive model exceeded the standard linear mixed model with linear (nonrecursive) covariates in the majority of scenarios. This new parameterization would provide new insights about gene expression in the animal science framework, opening a new research scenario where within-covariate sources of differential gene expression could be individualized and estimated. The source code of the program accommodating these analytical developments and additional information about practical aspects of running the program are freely available by request to the corresponding author.

  8. Genome-Wide Scan for Adaptive Divergence and Association with Population-Specific Covariates.

    PubMed

    Gautier, Mathieu

    2015-12-01

    In population genomics studies, accounting for the neutral covariance structure across population allele frequencies is critical to improve the robustness of genome-wide scan approaches. Elaborating on the BayEnv model, this study investigates several modeling extensions (i) to improve the estimation accuracy of the population covariance matrix and all the related measures, (ii) to identify significantly overly differentiated SNPs based on a calibration procedure of the XtX statistics, and (iii) to consider alternative covariate models for analyses of association with population-specific covariables. In particular, the auxiliary variable model allows one to deal with multiple testing issues and, providing the relative marker positions are available, to capture some linkage disequilibrium information. A comprehensive simulation study was carried out to evaluate the performances of these different models. Also, when compared in terms of power, robustness, and computational efficiency to five other state-of-the-art genome-scan methods (BayEnv2, BayScEnv, BayScan, flk, and lfmm), the proposed approaches proved highly effective. For illustration purposes, genotyping data on 18 French cattle breeds were analyzed, leading to the identification of 13 strong signatures of selection. Among these, four (surrounding the KITLG, KIT, EDN3, and ALB genes) contained SNPs strongly associated with the piebald coloration pattern while a fifth (surrounding PLAG1) could be associated to morphological differences across the populations. Finally, analysis of Pool-Seq data from 12 populations of Littorina saxatilis living in two different ecotypes illustrates how the proposed framework might help in addressing relevant ecological issues in nonmodel species. Overall, the proposed methods define a robust Bayesian framework to characterize adaptive genetic differentiation across populations. The BayPass program implementing the different models is available at http://www1.montpellier.inra.fr/CBGP/software/baypass/.

  9. Climatic conditions cause complex patterns of covariation between demographic traits in a long-lived raptor.

    PubMed

    Herfindal, Ivar; van de Pol, Martijn; Nielsen, Jan T; Sæther, Bernt-Erik; Møller, Anders P

    2015-05-01

    Environmental variation can induce life-history changes that can last over a large part of the lifetime of an organism. If multiple demographic traits are affected, expected changes in climate may influence environmental covariances among traits in a complex manner. Thus, examining the consequences of environmental fluctuations requires that individual information at multiple life stages is available, which is particularly challenging in long-lived species. Here, we analyse how variation in climatic conditions occurring in the year of hatching of female goshawks Accipiter gentilis (L.) affects age-specific variation in demographic traits and lifetime reproductive success (LRS). LRS decreased with increasing temperature in April in the year of hatching, due to lower breeding frequency and shorter reproductive life span. In contrast, the probability for a female to successfully breed was higher in years with a warm April, but lower LRS of the offspring in these years generated a negative covariance in fecundity rates across generations. The mechanism by which climatic conditions generated cohort effects was likely through influencing the quality of the breeding segment of the population in a given year, as the proportion of pigeons in the diet during the breeding period was positively related to annual reproductive success and LRS, and the diet of adult females that hatched in warm years contained fewer pigeons. Climatic conditions experienced during different stages of individual life histories caused complex patterns of environmental covariance among demographic traits even across generations. Such environmental covariances may either buffer or amplify impacts of climate change on population growth, emphasizing the importance of considering demographic changes during the complete life history of individuals when predicting the effect of climatic change on population dynamics of long-lived species.

  10. Obstructive Uropathy Secondary to Missed Acute Appendicitis

    PubMed Central

    2016-01-01

    Hydronephrosis is a rare complication of acute appendicitis. We present a case of missed appendicitis in a 52-year-old female who presented with right-sided hydronephrosis. Two days after admission to the Department of Urology, CT revealed acute appendicitis, for which an open appendectomy was performed. Acute appendicitis can lead to obstructive uropathy through periappendiceal inflammation due to adjacency. Urologists, surgeons, and emergency physicians should be aware of this rare complication of atypical acute appendicitis.

  11. Missing solution in a Cornell potential

    SciTech Connect

    Castro, L.B.; Castro, A.S. de

    2013-11-15

    Missing bound-state solutions for fermions in the background of a Cornell potential consisting of a mixed scalar–vector–pseudoscalar coupling are examined. The charge-conjugation operation, degeneracy, and localization are discussed. -- Highlights: •The Dirac equation with a scalar–vector–pseudoscalar Cornell potential is investigated. •The isolated solution from the Sturm–Liouville problem is found. •The charge-conjugation operation, degeneracy, and localization are discussed.

  12. Missing and forbidden links in mutualistic networks.

    PubMed

    Olesen, Jens M; Bascompte, Jordi; Dupont, Yoko L; Elberling, Heidi; Rasmussen, Claus; Jordano, Pedro

    2011-03-01

    Ecological networks are complexes of interacting species, but not all potential links among species are realized. Unobserved links are either missing or forbidden. Missing links exist, but require more sampling or alternative ways of detection to be verified. Forbidden links remain unobservable, irrespective of sampling effort. They are caused by linkage constraints. We studied one Arctic pollination network and two Mediterranean seed-dispersal networks. In the first, for example, we recorded flower-visit links for one full season, arranged the data in an interaction matrix, and obtained a connectance C of 15 per cent. Interaction accumulation curves showed our sampling of interactions through observation of visits to be robust. Then, we included data on pollen from the body surface of flower visitors as an additional link 'currency'. This resulted in 98 new links, missing from the visitation data. Thus, the combined visit-pollen matrix had an increased C of 20 per cent. For the three networks, C ranged from 20 to 52 per cent, and thus the percentage of unobserved links (100 - C) was 48 to 80 per cent; these were assumed forbidden because of linkage constraints and not missing because of under-sampling. Phenological uncoupling (i.e. non-overlapping phenophases between interacting mutualists) is one kind of constraint, and it explained 22 to 28 per cent of all possible, but unobserved links. Increasing phenophase overlap between species increased link probability, but extensive overlaps were required to achieve a high probability. Other kinds of constraint, such as size mismatch and accessibility limitations, are briefly addressed.
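
    The bookkeeping behind connectance and phenologically forbidden links is simple enough to sketch (illustrative names; phenophases given as start/end days of year for the two guilds):

        import numpy as np

        def connectance(A):
            # Connectance of a bipartite interaction matrix A (e.g. plants x
            # animals): realized links over all possible links.
            return (A > 0).sum() / A.size

        def phenological_forbidden(start1, end1, start2, end2):
            # Boolean matrix of links forbidden by phenological uncoupling:
            # pairs whose activity periods never overlap. Inputs are 1-D arrays
            # of start/end days for the two guilds.
            s1, e1 = start1[:, None], end1[:, None]
            s2, e2 = start2[None, :], end2[None, :]
            return (e1 < s2) | (e2 < s1)          # no temporal overlap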

  13. A Probability Based Framework for Testing the Missing Data Mechanism

    ERIC Educational Resources Information Center

    Lin, Johnny Cheng-Han

    2013-01-01

    Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…
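
    A rough, simplified sketch of a Little-style pattern comparison (a loose approximation only: Little's test uses EM estimates of the grand mean and covariance, whereas this sketch substitutes complete-case estimates):

        import numpy as np
        import pandas as pd
        from scipy import stats

        def mcar_test_simplified(df):
            # Compare each missingness pattern's observed means with grand
            # estimates; sum of Mahalanobis distances is chi-square under MCAR.
            mu = df.dropna().mean()
            cov = df.dropna().cov()
            d2, dof = 0.0, 0
            patterns = df.isna().apply(tuple, axis=1)
            for _, grp in df.groupby(patterns):
                obs = grp.notna().iloc[0]              # columns observed in pattern
                cols = obs[obs].index
                if len(cols) == 0:
                    continue
                diff = grp[cols].mean() - mu[cols]
                sub = cov.loc[cols, cols] / len(grp)   # covariance of the group mean
                d2 += float(diff @ np.linalg.solve(sub, diff))
                dof += len(cols)
            dof -= df.shape[1]
            return d2, stats.chi2.sf(d2, dof)          # statistic and p-value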

  14. 40 CFR 75.31 - Initial missing data procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Initial missing data procedures. 75.31 Section 75.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing data procedures. (a) During the first...

  15. 40 CFR 75.31 - Initial missing data procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Initial missing data procedures. 75.31 Section 75.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing...

  16. 40 CFR 75.31 - Initial missing data procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Initial missing data procedures. 75.31 Section 75.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing...

  17. 40 CFR 75.31 - Initial missing data procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Initial missing data procedures. 75.31 Section 75.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.31 Initial missing...

  18. Best Practices for Missing Data Management in Counseling Psychology

    ERIC Educational Resources Information Center

    Schlomer, Gabriel L.; Bauman, Sheri; Card, Noel A.

    2010-01-01

    This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common…

  19. 40 CFR 98.265 - Procedures for estimating missing data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter must be used in the calculations as specified in...

  20. 40 CFR 98.265 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. (a) For each missing value of the inorganic carbon content of phosphate rock or carbon dioxide (by origin), you must use the appropriate default factor provided in Table Z-1...

  1. Silver Alerts and the Problem of Missing Adults with Dementia

    ERIC Educational Resources Information Center

    Carr, Dawn; Muschert, Glenn W.; Kinney, Jennifer; Robbins, Emily; Petonito, Gina; Manning, Lydia; Brown, J. Scott

    2010-01-01

    In the months following the introduction of the National AMBER (America's Missing: Broadcast Emergency Response) Alert plan used to locate missing and abducted children, Silver Alert programs began to emerge. These programs use the same infrastructure and approach to find a different missing population, cognitively impaired older adults. By late…

  2. Girls' Portraits of Desire: Picturing a Missing Discourse

    ERIC Educational Resources Information Center

    Allen, Louisa

    2013-01-01

    This paper revisits the missing discourse of female desire [Fine, M. 1988. Sexuality, schooling and adolescent females: The missing discourse of desire. "Harvard Educational Review" 58, no. 1: 29-53] in secondary schools. Instead of echoing previous studies that have documented how female desire is missing, this research starts from the premise…

  3. Missing Not at Random Models for Latent Growth Curve Analyses

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2011-01-01

    The past decade has seen a noticeable shift in missing data handling techniques that assume a missing at random (MAR) mechanism, where the propensity for missing data on an outcome is related to other analysis variables. Although MAR is often reasonable, there are situations where this assumption is unlikely to hold, leading to biased parameter…

  4. A Review of Missing Data Handling Methods in Education Research

    ERIC Educational Resources Information Center

    Cheema, Jehanzeb R.

    2014-01-01

    Missing data are a common occurrence in survey-based research studies in education, and the way missing values are handled can significantly affect the results of analyses based on such data. Despite known problems with performance of some missing data handling methods, such as mean imputation, many researchers in education continue to use those…

  5. 40 CFR 98.315 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the petroleum coke input procedure in § 98.313(b), a complete record of all... such estimates. (a) For each missing value of the monthly carbon content of calcined petroleum coke...

  6. 40 CFR 98.215 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... for estimating missing data. (a) A complete record of all measured parameters used in the GHG... such estimates. (b) For each missing value of monthly carbonate consumed, monthly carbonate output,...

  7. 40 CFR 98.175 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... used for all such estimates. (a) For each missing data for the carbon content of inputs and outputs...

  8. 40 CFR 98.265 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... for all such estimates. (a) For each missing value of the inorganic carbon content of phosphate...

  9. 40 CFR 98.245 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data. 98.245 Section 98.245 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... estimating missing data. For missing feedstock flow rates, product flow rates, and carbon contents, use...

  10. 40 CFR 98.45 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for CO2... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data. 98.45 Section 98.45 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED)...

  11. 40 CFR 98.35 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 CFR part 75, the missing data substitution procedures in 40 CFR part 75 shall be followed for CO2... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter...

  12. 40 CFR 98.75 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... document and keep records of the procedures used for all such estimates. (a) For missing data on...

  13. On Testability of Missing Data Mechanisms in Incomplete Data Sets

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2011-01-01

    This article is concerned with the question of whether the missing data mechanism routinely referred to as missing completely at random (MCAR) is statistically examinable via a test for lack of distributional differences between groups with observed and missing data, and related consequences. A discussion is initially provided, from a formal logic…

  14. MISSE 6 Polymer Film Tensile Experiment

    NASA Technical Reports Server (NTRS)

    Miller, Sharon K. R.; Dever, Joyce A.; Banks, Bruce A.; Waters, Deborah L.; Sechkar, Edward; Kline, Sara

    2010-01-01

    The Polymer Film Tensile Experiment (PFTE) was flown as part of Materials International Space Station Experiment 6 (MISSE 6). The purpose of the experiment was to expose a variety of polymer films to the low Earth orbital environment under both relaxed and tension conditions. The polymers selected are those commonly used for spacecraft thermal control and those under consideration for use in spacecraft applications such as sunshields, solar sails, and inflatable and deployable structures. The dog-bone-shaped polymer samples were exposed on both the ram-facing side of the MISSE 6 Passive Experiment Container (PEC), which received atomic oxygen, ultraviolet (UV) radiation, ionizing radiation, and thermal cycling, and the wake-facing side, which was expected to experience predominantly the same environment except for atomic oxygen, although some atomic oxygen was present there due to reorientation of the International Space Station. A few of the tensile samples were coated with vapor-deposited aluminum on the back and wired so that the point in the flight when a sample broke could be recorded as a voltage change on battery-powered data loggers for post-flight retrieval and analysis. The data returned on the data loggers were not usable. However, post-retrieval observation and analysis of the samples were performed. This paper describes the preliminary analysis and observations of the polymers exposed on the MISSE 6 PFTE.

  15. Missing dark matter in the local universe

    NASA Astrophysics Data System (ADS)

    Karachentsev, I. D.

    2012-04-01

    A sample of 11 thousand galaxies with radial velocities V_LG < 3500 km/s is used to study the features of the local distribution of luminous (stellar) and dark matter within a sphere of radius around 50 Mpc around us. The average density of matter in this volume, Ω_m,loc = 0.08 ± 0.02, turns out to be much lower than the global cosmic density Ω_m,glob = 0.28 ± 0.03. We discuss three possible explanations of this paradox: 1) galaxy groups and clusters are surrounded by extended dark halos, the major part of the mass of which is located outside their virial radii; 2) the considered local volume of the Universe is not representative, being situated inside a giant void; and 3) the bulk of matter in the Universe is not related to clusters and groups, but is rather distributed between them in the form of massive dark clumps. Some arguments in favor of the latter assumption are presented. Besides the two well-known inconsistencies of modern cosmological models with the observational data (the problem of missing satellites of normal galaxies and the problem of missing baryons), there arises another one: the issue of missing dark matter.

  16. Searching for the missing baryons in clusters

    PubMed Central

    Rasheed, Bilhuda; Bahcall, Neta; Bode, Paul

    2011-01-01

    Observations of clusters of galaxies suggest that they contain fewer baryons (gas plus stars) than the cosmic baryon fraction. This “missing baryon” puzzle is especially surprising for the most massive clusters, which are expected to be representative of the cosmic matter content of the universe (baryons and dark matter). Here we show that the baryons may not actually be missing from clusters, but rather are extended to larger radii than typically observed. The baryon deficiency is typically observed in the central regions of clusters (∼0.5 the virial radius). However, the observed gas-density profile is significantly shallower than the mass-density profile, implying that the gas is more extended than the mass and that the gas fraction increases with radius. We use the observed density profiles of gas and mass in clusters to extrapolate the measured baryon fraction as a function of radius and as a function of cluster mass. We find that the baryon fraction reaches the cosmic value near the virial radius for all groups and clusters above . This suggests that the baryons are not missing, they are simply located in cluster outskirts. Heating processes (such as shock-heating of the intracluster gas, supernovae, and Active Galactic Nuclei feedback) likely contribute to this expanded distribution. Upcoming observations should be able to detect these baryons. PMID:21321229

  17. Missing doses in the life span study of Japanese atomic bomb survivors.

    PubMed

    Richardson, David B; Wing, Steve; Cole, Stephen R

    2013-03-15

    The Life Span Study of atomic bomb survivors is an important source of risk estimates used to inform radiation protection and compensation. Interviews with survivors in the 1950s and 1960s provided the information needed to estimate radiation doses for survivors proximal to ground zero. Because of a lack of interview information or the complexity of shielding, doses are missing for 7,058 of the 68,119 proximal survivors. Recent analyses excluded people with missing doses and, despite the protracted collection of the interview information necessary to estimate some survivors' doses, defined the start of follow-up as October 1, 1950, for everyone. We describe the prevalence of missing doses and its association with mortality, distance from the hypocenter, city, age, and sex. Missing doses were more common among Nagasaki residents than among Hiroshima residents (prevalence ratio = 2.05; 95% confidence interval: 1.96, 2.14), among people who were closer to ground zero than among those who were farther from it, among people who were younger at enrollment than among those who were older, and among males than among females (prevalence ratio = 1.22; 95% confidence interval: 1.17, 1.28). Missing dose was associated with all-cancer and leukemia mortality, particularly during the first years of follow-up (all-cancer rate ratio = 2.16, 95% confidence interval: 1.51, 3.08; leukemia rate ratio = 4.28, 95% confidence interval: 1.72, 10.67). Accounting for missing dose and late entry should reduce bias in estimated dose-mortality associations. PMID:23429722
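
    For readers unfamiliar with the summary measures quoted above, the sketch below shows the standard Wald-type construction of a prevalence ratio and its 95% confidence interval from 2x2 counts; the counts used are hypothetical examples, not the study's data.

      import math

      # Hedged sketch of the standard Wald-type 95% CI for a prevalence
      # ratio computed from 2x2 counts. The counts below are hypothetical,
      # NOT the study's data; only the log-PR standard-error formula is
      # standard textbook methodology.

      def prevalence_ratio_ci(a, n1, b, n0, z=1.96):
          """Prevalence ratio of exposed (a/n1) vs unexposed (b/n0), Wald 95% CI."""
          pr = (a / n1) / (b / n0)
          se_log_pr = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)
          lo = math.exp(math.log(pr) - z * se_log_pr)
          hi = math.exp(math.log(pr) + z * se_log_pr)
          return pr, lo, hi

      # Hypothetical example: prevalence of missing dose in one city vs another.
      pr, lo, hi = prevalence_ratio_ci(a=2500, n1=20000, b=4500, n0=48000)
      print(f"PR = {pr:.2f} (95% CI: {lo:.2f}, {hi:.2f})")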

  20. Testing power-law cross-correlations: rescaled covariance test

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2013-10-01

    We introduce a new test for the detection of power-law cross-correlations between a pair of time series: the rescaled covariance test. The test is based on the power-law divergence of the covariance of the partial sums of long-range cross-correlated processes. Utilizing a heteroskedasticity- and autocorrelation-robust estimator of the long-term covariance, we develop a test with desirable statistical properties that is well able to distinguish between short- and long-range cross-correlations. Such a test should be used as a starting point in the analysis of long-range cross-correlations, prior to estimation of bivariate long-term memory parameters. As an application, we show that the relationships between volatility and traded volume, and between volatility and returns, in financial markets can be characterized as power-law cross-correlated.
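
    A minimal sketch of the scaling idea behind the test follows. It is not the authors' full procedure (which employs the robust long-term covariance estimator mentioned above); it only demonstrates that for two series sharing an approximately long-memory component with Hurst exponent H, the covariance of their partial sums grows roughly as s^(2H) with window size s, so an estimated exponent above 1 signals long-range cross-correlation. The fractional-noise generator is a crude spectral-synthesis approximation, and all parameter choices are assumptions for illustration.

      import numpy as np

      # Illustration of the scaling idea, not the authors' full test: for
      # long-range cross-correlated series, the covariance of partial sums
      # grows as a power law in the window size s, with exponent 2H > 1.

      def fgn(n, H, rng):
          """Approximate fractional Gaussian noise via spectral synthesis."""
          freqs = np.fft.rfftfreq(n)[1:]                  # positive frequencies
          amp = freqs ** (-(2.0 * H - 1.0) / 2.0)         # power ~ f^(1 - 2H)
          phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
          spectrum = np.concatenate(([0.0], amp * np.exp(1j * phases)))
          x = np.fft.irfft(spectrum, n)
          return x / x.std()

      def partial_sum_cov(x, y, s):
          """Covariance of non-overlapping partial sums of window length s."""
          m = len(x) // s
          X = x[: m * s].reshape(m, s).sum(axis=1)
          Y = y[: m * s].reshape(m, s).sum(axis=1)
          return np.cov(X, Y)[0, 1]

      rng = np.random.default_rng(0)
      n, H = 2**15, 0.8
      z = fgn(n, H, rng)                                  # shared persistent component
      x = z + 0.5 * rng.standard_normal(n)                # two noisy observations of it
      y = z + 0.5 * rng.standard_normal(n)

      scales = [2**k for k in range(4, 11)]
      covs = [partial_sum_cov(x, y, s) for s in scales]
      slope, _ = np.polyfit(np.log(scales), np.log(np.abs(covs)), 1)
      print(f"scaling exponent of partial-sum covariance ~ {slope:.2f} (expect ~{2*H})")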