Sample records for background error covariances

  1. A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors exist at different scales and interact with one another in numerical weather prediction, but these multi-scale errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information about errors at scales larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics, each influenced by error information at different scales, reveals that the background error variances are enhanced, particularly at large scales and higher levels, when larger-scale error information is introduced through the lateral boundary conditions provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at higher levels, while they improve slightly at lower levels in the nested domain, especially at medium and small scales, when smaller-scale error information is introduced by nesting a higher-resolution model. In addition, introducing information about larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Regarding the multivariate correlations, the Ekman coupling increases (decreases) when larger- (smaller-) scale error information is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are each used in a data assimilation and model forecast system, and then the…
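The NMC method referred to above can be sketched in a few lines: B is approximated by the sample covariance of differences between forecasts of two different lengths valid at the same time. A minimal illustration (the function name, array shapes, and synthetic data are assumptions for this sketch, not taken from the paper):

```python
import numpy as np

def nmc_background_covariance(fcst_48h, fcst_24h):
    """Estimate a static background error covariance matrix B with the
    NMC method: differences between 48 h and 24 h forecasts valid at
    the same times serve as proxy samples of background error.

    fcst_48h, fcst_24h : arrays of shape (n_samples, n_state)
    """
    diffs = fcst_48h - fcst_24h                      # proxy error samples
    diffs = diffs - diffs.mean(axis=0, keepdims=True)  # remove sample mean
    # Sample covariance over the valid times (rows)
    return diffs.T @ diffs / (diffs.shape[0] - 1)

# Tiny synthetic example: 100 valid times of a 5-variable state
rng = np.random.default_rng(0)
f24 = rng.standard_normal((100, 5))
f48 = f24 + 0.5 * rng.standard_normal((100, 5))
B = nmc_background_covariance(f48, f24)
```

In practice the resulting B is usually rescaled and localized before use, which is exactly the tuning issue examined in record 2 below.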

  2. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC-method estimates usable. It is shown that the rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.

  3. Generalized Background Error covariance matrix model (GEN_BE v2.0)

    NASA Astrophysics Data System (ADS)

    Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.

    2014-07-01

    The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large for it to be determined explicitly, and B needs to be modeled. Recent efforts to include new variables in the analysis, such as cloud parameters and chemical species, have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model. GEN_BE v2.0 provides a simpler, flexible, robust, and community-oriented framework that gathers methods used by meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks and showing some of the new features on data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is flexible enough to incorporate new control variables. While the code to generate background error statistics was first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily extended to other domains of science and serve as a testbed for diagnosing and developing new models of B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.

  4. Generalized background error covariance matrix model (GEN_BE v2.0)

    NASA Astrophysics Data System (ADS)

    Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.; Barré, J.

    2015-03-01

    The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large for it to be determined explicitly, and B needs to be modeled. Recent efforts to include new variables in the analysis, such as cloud parameters and chemical species, have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model. GEN_BE allows for a simpler, flexible, robust, and community-oriented framework that gathers methods used by some meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks of different models of B and showing some of the new features in data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is flexible enough to implement new control variables. While the code to generate background error statistics was first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily applied to other domains of science and chosen to diagnose and model B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.

  5. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
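The FAST idea described above, forming an "ensemble" from states inside a moving window along a single trajectory, can be sketched as follows. This is a simplified illustration under assumed names and window handling, not the operational GMAO implementation:

```python
import numpy as np

def fast_ensemble_covariance(trajectory, t, window):
    """Sketch of the FAST idea: treat states inside a moving window
    along a single model trajectory as ensemble members, and use their
    sample covariance as a flow-dependent covariance estimate.

    trajectory : array (n_times, n_state) of model states
    t          : current time index
    window     : half-width of the sampling window (in time steps)
    """
    lo, hi = max(0, t - window), min(trajectory.shape[0], t + window + 1)
    ens = trajectory[lo:hi]                           # ensemble members
    anom = ens - ens.mean(axis=0, keepdims=True)      # ensemble anomalies
    return anom.T @ anom / (anom.shape[0] - 1)

# Synthetic example: a random-walk "trajectory" of a 4-variable state
rng = np.random.default_rng(1)
traj = rng.standard_normal((200, 4)).cumsum(axis=0)
P = fast_ensemble_covariance(traj, t=100, window=10)
```

The appeal is cost: one model integration supplies the covariance information that an ensemble method would obtain from many parallel integrations.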

  6. Impact of variational assimilation using multivariate background error covariances on the simulation of monsoon depressions over India

    NASA Astrophysics Data System (ADS)

    Dhanya, M.; Chandrasekar, A.

    2016-02-01

    The background error covariance structure influences a variational data assimilation system immensely. The simulation of a weather phenomenon like a monsoon depression can hence be influenced by the background correlation information used in the analysis formulation. The Weather Research and Forecasting Model Data assimilation (WRFDA) system includes an option for formulating multivariate background correlations for its three-dimensional variational (3DVar) system (cv6 option). The impact of using such a formulation in the simulation of three monsoon depressions over India is investigated in this study. Analysis and forecast fields generated using this option are compared with those obtained using the default formulation for regional background error correlations (cv5) in WRFDA and with a base run without any assimilation. The model rainfall forecasts are compared with rainfall observations from the Tropical Rainfall Measurement Mission (TRMM) and the other model forecast fields are compared with a high-resolution analysis as well as with European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis. The results of the study indicate that inclusion of additional correlation information in background error statistics has a moderate impact on the vertical profiles of relative humidity, moisture convergence, horizontal divergence and the temperature structure at the depression centre at the analysis time of the cv5/cv6 sensitivity experiments. Moderate improvements are seen in two of the three depressions investigated in this study. An improved thermodynamic and moisture structure at the initial time is expected to provide for improved rainfall simulation. The results of the study indicate that the skill scores of accumulated rainfall are somewhat better for the cv6 option than for the cv5 option for at least two of the three depression cases studied, especially at the higher threshold levels. Considering the importance of utilising improved…

  7. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  8. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, based on data acquired during overlapping or disjoint intervals. They provide a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based on recently acquired data.

  9. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  10. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
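One simple construction in the spirit of this abstract, though not necessarily the author's exact formulation, rescales the theoretical weighted-least-squares covariance by the average weighted residual variance, so that unmodeled error sources inflate the reported uncertainty:

```python
import numpy as np

def wls_with_empirical_covariance(H, y, W):
    """Weighted batch least squares with a residual-based covariance.

    Returns the estimate x_hat, the theoretical covariance (H^T W H)^-1,
    and an 'empirical' covariance that rescales the theoretical one by
    the average weighted residual variance. (A sketch in the spirit of
    the abstract above, not its exact formulation.)
    """
    N = H.T @ W @ H                        # normal equations matrix
    x_hat = np.linalg.solve(N, H.T @ W @ y)
    P_theory = np.linalg.inv(N)
    r = y - H @ x_hat                      # measurement residuals
    scale = (r @ W @ r) / len(y)           # average weighted residual variance
    return x_hat, P_theory, scale * P_theory

# Synthetic example: 50 scalar measurements of a 2-parameter state
rng = np.random.default_rng(5)
H = rng.standard_normal((50, 2))
x_true = np.array([1.0, -2.0])
y = H @ x_true + 0.1 * rng.standard_normal(50)
x_hat, P_theory, P_emp = wls_with_empirical_covariance(H, y, np.eye(50))
```

Because the scale factor comes from the actual residuals, mismodeled measurement weights or unmodeled dynamics show up in P_emp even though P_theory is blind to them.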

  11. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance…

  12. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the…

  14. Comparative test on several forms of background error covariance in 3DVar

    NASA Astrophysics Data System (ADS)

    Shao, Aimei

    2013-04-01

    The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate the B matrix (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, the behaviour in 3DVar of the several B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of the several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48 h and 24 h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems, a Gaussian filter function is used as an approximation to represent the variation of correlation coefficients with distance.
    On the basis of these assumptions, the following forms of the B matrix are designed and tested with VAF in the comparative experiments: (1) the error variance and characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50% of the original for height and 60% for temperature; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly by the…
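The correlation/variance decomposition used above, with a Gaussian model for the correlation part, can be sketched as B = D^{1/2} C D^{1/2}, where C holds Gaussian correlations c_ij = exp(-d_ij^2 / (2 L^2)) and D is the diagonal matrix of error variances. A minimal 1-D illustration (names, grid, and units are assumptions for the sketch):

```python
import numpy as np

def gaussian_b_matrix(positions, variances, length_scale):
    """Build B = D^{1/2} C D^{1/2}: Gaussian correlations that decay
    with separation distance, scaled by per-point error variances."""
    d = positions[:, None] - positions[None, :]      # pairwise separations
    C = np.exp(-0.5 * (d / length_scale) ** 2)       # correlation part
    s = np.sqrt(np.asarray(variances, dtype=float))  # standard deviations
    return C * np.outer(s, s)                        # variance part applied

# Six grid points along a 1000 km line, 4.0 error variance, 300 km length scale
x = np.linspace(0.0, 1000.0, 6)
B = gaussian_b_matrix(x, variances=np.full(6, 4.0), length_scale=300.0)
```

Experiments (1)-(5) above then amount to different choices for `variances` and `length_scale`: domain-averaged constants, shrunken length scales, or fields diagnosed directly from historical data.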

  15. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  16. Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds

    NASA Astrophysics Data System (ADS)

    Mitra, Arpita

    2017-12-01

    The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.

  17. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  18. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  19. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ˜10 per cent at k > 0.3 h Mpc-1. Over the entire range these reduce to about ˜5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc-1 results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
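For Gaussian-distributed data, the finite number of realizations also biases the naive inverse of the sample covariance; the standard correction multiplies the inverse by the Hartlap factor (n - p - 2)/(n - 1), where n is the number of realizations and p the data dimension. A short sketch of this baseline (not the authors' simulation pipeline):

```python
import numpy as np

def sample_cov_and_precision(realizations):
    """Sample covariance from independent realizations, plus the
    debiased precision matrix: under Gaussian statistics the naive
    inverse is biased, and the Hartlap factor corrects it."""
    n, p = realizations.shape
    cov = np.cov(realizations, rowvar=False)         # sample covariance
    hartlap = (n - p - 2) / (n - 1)                  # debiasing factor < 1
    precision = hartlap * np.linalg.inv(cov)
    return cov, precision

# 500 mock realizations of a 5-bin data vector
rng = np.random.default_rng(2)
draws = rng.standard_normal((500, 5))
cov, prec = sample_cov_and_precision(draws)
```

The abstract's point is that non-Gaussianity in the late-time density field makes the actual sampling errors deviate from this Gaussian baseline.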

  20. Error Covariance Penalized Regression: A novel multivariate model combining penalized regression with multivariate error structure.

    PubMed

    Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C

    2018-06-29

    A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS).

  21. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about the model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters, such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely that both model error and observation error depend strongly on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including…
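The single-batch maximum-likelihood idea can be illustrated with a one-parameter toy problem: innovations are modeled as zero-mean Gaussian with variance equal to a known background variance plus an unknown observation error variance, and the latter is chosen to maximize the batch log-likelihood. This is a hedged sketch under assumed names; the operational scheme estimates regionally varying parameters, not a single scalar:

```python
import numpy as np

def ml_obs_error_variance(innovations, background_var, grid):
    """Single-batch maximum-likelihood estimate of one covariance
    parameter: innovations ~ N(0, background_var + r), and the
    observation error variance r is chosen on a grid to maximize the
    Gaussian log-likelihood of one batch of innovations."""
    v2 = np.mean(innovations ** 2)               # batch innovation variance
    best, best_ll = grid[0], -np.inf
    for r in grid:
        s = background_var + r
        ll = -0.5 * (np.log(s) + v2 / s)         # per-innovation log-likelihood
        if ll > best_ll:
            best, best_ll = r, ll
    return best

# Synthetic batch: true background variance 1.0, true obs error variance 0.5
rng = np.random.default_rng(3)
v = rng.normal(0.0, np.sqrt(1.0 + 0.5), size=20000)
r_hat = ml_obs_error_variance(v, background_var=1.0, grid=np.linspace(0.0, 2.0, 201))
```

Because each batch is fitted independently, the estimate tracks time-dependent error statistics without assuming anything about how the parameters evolve.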

  2. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Are Low-order Covariance Estimates Useful in Error Analyses?

    NASA Astrophysics Data System (ADS)

    Baker, D. F.; Schimel, D.

    2005-12-01

Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and `information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb

  4. Bio-Optical Data Assimilation With Observational Error Covariance Derived From an Ensemble of Satellite Images

    NASA Astrophysics Data System (ADS)

    Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter

    2018-03-01

An ensemble-based approach to specifying observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in the Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has higher RMSE than the run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but smaller or comparable RMSE than the run assuming that observational errors equal 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms in the total phytoplankton is reduced by a factor of two in comparison to the nonassimilative run.
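The core mechanics can be sketched in one dimension: estimate R as the sample covariance of an ensemble of noisy "images" of the same field, then apply the standard Optimal Interpolation update x_a = x_b + BH^T(HBH^T + R)^{-1}(y − Hx_b) with H = I. This is a generic OI sketch on synthetic data, not the authors' MODIS-Aqua/EOF system.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20  # grid points

# Ensemble of "satellite images" of the same scene: their sample covariance
# serves as the observational error covariance R (hypothetical data).
n_members = 200
truth = np.sin(np.linspace(0, 2 * np.pi, n))
images = truth + 0.3 * rng.standard_normal((n_members, n))
R = np.cov(images, rowvar=False)

# Background field and its (assumed) error covariance B.
x_b = truth + 0.5 * rng.standard_normal(n)
B = 0.25 * np.eye(n)

# Optimal Interpolation with H = I:  x_a = x_b + B (B + R)^{-1} (y - x_b)
y = images[0]
K = B @ np.linalg.inv(B + R)
x_a = x_b + K @ (y - x_b)

# The analysis is (with these consistent statistics) closer to the truth
# than the background.
err_b = np.linalg.norm(x_b - truth)
err_a = np.linalg.norm(x_a - truth)
print(err_b, err_a)
```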

  5. Investigating the role of background and observation error correlations in improving a model forecast of forest carbon balance using four dimensional variational data assimilation.

    NASA Astrophysics Data System (ADS)

    Pinnington, Ewan; Casella, Eric; Dance, Sarah; Lawless, Amos; Morison, James; Nichols, Nancy; Wilkinson, Matthew; Quaife, Tristan

    2016-04-01

Forest ecosystems play an important role in sequestering human-emitted carbon dioxide from the atmosphere and therefore greatly reduce the effect of anthropogenically induced climate change. For that reason, understanding their response to climate change is of great importance. Efforts to implement variational data assimilation routines with functional ecology models and land surface models have been limited, with sequential and Markov chain Monte Carlo data assimilation methods being prevalent. When data assimilation has been used with models of carbon balance, background "prior" errors and observation errors have largely been treated as independent and uncorrelated. Correlations between background errors have long been known to be a key aspect of data assimilation in numerical weather prediction. More recently, it has been shown that accounting for correlated observation errors in the assimilation algorithm can considerably improve data assimilation results and forecasts. In this paper we implement a 4D-Var scheme with a simple model of forest carbon balance, for joint parameter and state estimation, and assimilate daily observations of Net Ecosystem CO2 Exchange (NEE) taken at the Alice Holt forest CO2 flux site in Hampshire, UK. We then investigate the effect of specifying correlations between parameter and state variables in background error statistics and the effect of specifying correlations in time between observation error statistics. The idea of including these correlations in time is new and has not been previously explored in carbon balance model data assimilation. In data assimilation, background and observation error statistics are often described by the background error covariance matrix and the observation error covariance matrix. We outline novel methods for creating correlated versions of these matrices, using a set of previously postulated dynamical constraints to include correlations in the background error statistics and a Gaussian correlation
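A time-correlated observation error covariance of the kind discussed here is commonly built from a Gaussian correlation function of the time lag. The sketch below constructs such an R for daily observations; the variance and length-scale values are illustrative assumptions, not the paper's tuned statistics.

```python
import numpy as np

# Observation error covariance with Gaussian correlations in time:
#   R_ij = sigma_o^2 * exp(-(t_i - t_j)^2 / (2 L^2))
def gaussian_time_correlated_R(times, sigma_o, length_scale):
    dt = times[:, None] - times[None, :]
    return sigma_o**2 * np.exp(-0.5 * (dt / length_scale) ** 2)

times = np.arange(10.0)  # e.g. daily NEE observation times (days)
R = gaussian_time_correlated_R(times, sigma_o=0.7, length_scale=2.0)

# Diagonal holds the variances; off-diagonals decay with lag.
print(R[0, 0], R[0, 1], R[0, 9])
```

In a 4D-Var cost function this R (rather than a diagonal matrix) weights the observation-minus-model misfits, which is exactly where the correlations change the analysis.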

  6. Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan

    2017-01-01

    Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  7. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE PAGES

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...

    2016-05-17

Spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
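A correlation length scale like the ones quoted above can be estimated empirically as the e-folding distance of a sample correlation function computed from an ensemble of error fields. The sketch below does this on a synthetic 1-D periodic grid at 2-km spacing; the field, grid, and true length scale are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dx_km = 256, 2.0          # 2-km grid spacing, as in the abstract's model
true_L_km = 40.0             # hypothetical smoothing scale

# Generate spatially correlated samples by smoothing white noise with a
# Gaussian kernel of width true_L_km.
x = np.arange(n) * dx_km
kernel = np.exp(-0.5 * ((x - x[n // 2]) / true_L_km) ** 2)
kernel /= kernel.sum()
samples = np.array([np.convolve(rng.standard_normal(n), kernel, mode="same")
                    for _ in range(400)])

# Sample correlation of the central point with all others, then the
# e-folding distance (first lag where correlation drops below 1/e).
c = np.corrcoef(samples, rowvar=False)[n // 2]
right = c[n // 2:]
L_efold_km = dx_km * np.argmax(right < np.exp(-1))
print(L_efold_km)
```

For Gaussian smoothing the resulting correlation is itself Gaussian with a wider scale (here the e-folding distance is about 2 × true_L_km), which illustrates why even a fine-resolution model can carry surprisingly broad background error correlations.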

  8. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    NASA Astrophysics Data System (ADS)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component in dependence of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
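Rigorous covariance propagation through a linear operator follows the rule Σ_v = A Σ_h A^T. The toy below uses a finite-difference derivative as a stand-in for the geostrophic-velocity operator (the real one involves spherical harmonics and the Coriolis parameter) and shows why carrying the full covariance matters: positively correlated height errors partially cancel in differences.

```python
import numpy as np

n = 6
dx = 1.0

# Central-difference derivative operator (stand-in for the geostrophic op.).
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i] = -1.0 / (2 * dx)
    A[i, i + 2] = 1.0 / (2 * dx)

# A full height-error covariance with exponentially decaying correlations.
idx = np.arange(n)
Sigma_h = 0.04 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)

# Rigorous propagation: Sigma_v = A Sigma_h A^T (all correlations carried).
Sigma_v = A @ Sigma_h @ A.T

# Ignoring the correlations (diagonal Sigma_h only) overestimates the
# velocity errors here, since correlated height errors cancel in differences.
Sigma_v_nocorr = A @ np.diag(np.diag(Sigma_h)) @ A.T
print(np.diag(Sigma_v), np.diag(Sigma_v_nocorr))
```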

9. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asymptotically efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    PubMed Central

    Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J

    2009-01-01

Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression). The eigenfunction values from the spatial
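The Moran's I index that anchors this analysis is simple to compute: I = (n/S0) · (zᵀWz)/(zᵀz), where z is the mean-centered variable, W the spatial weights matrix, and S0 the sum of all weights. This is a generic Python sketch on a hypothetical 1-D chain of habitats, not the authors' SAS/GIS workflow.

```python
import numpy as np

# Moran's I global spatial autocorrelation index:
#   I = (n / S0) * (z^T W z) / (z^T z),  z = x - mean(x),  S0 = sum of weights
def morans_i(x, W):
    z = x - x.mean()
    s0 = W.sum()
    return len(x) / s0 * (z @ W @ z) / (z @ z)

# Contiguity weights on a 1-D chain of sampled habitats (hypothetical layout).
n = 50
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.sin(np.linspace(0, 3 * np.pi, n))  # spatially clustered values
rng = np.random.default_rng(3)
noise = rng.standard_normal(n)                 # spatially random values

I_smooth = morans_i(smooth, W)
I_noise = morans_i(noise, W)
print(I_smooth, I_noise)
```

Values near +1 indicate clustering (neighboring habitats alike), values near the expectation −1/(n−1) indicate spatial randomness, which is the contrast the habitat-cluster detection relies on.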

  10. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
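The SIMEX idea itself (before its MSM application) fits in a few lines: add successively larger doses of extra measurement error at levels λ, refit the naive estimator at each level, and extrapolate the trend back to λ = −1, the no-error point. The sketch below does this for an attenuated regression slope with synthetic data and a quadratic extrapolant; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Slope attenuated by covariate measurement error.
n = 20000
beta_true, sigma_u = 2.0, 0.8
x = rng.standard_normal(n)                 # true covariate
w = x + sigma_u * rng.standard_normal(n)   # error-prone measurement
y = beta_true * x + 0.1 * rng.standard_normal(n)

def naive_slope(w_lam, y):
    return np.cov(w_lam, y)[0, 1] / np.var(w_lam)

# Simulation step: refit at inflated error levels (1 + lambda) * sigma_u^2,
# averaging over re-simulations to tame Monte Carlo noise.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    b = [naive_slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
         for _ in range(20)]
    slopes.append(np.mean(b))

# Extrapolation step: quadratic fit of slope(lambda), evaluated at lambda = -1.
coeffs = np.polyfit(lambdas, slopes, 2)
beta_simex = np.polyval(coeffs, -1.0)
naive = slopes[0]
print(naive, beta_simex)
```

The quadratic extrapolant recovers much, though not all, of the attenuation bias; the residual gap is the well-known approximation error of SIMEX.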

  11. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capabilities of modeling the change over time for latent constructs, measured by multiple indicators; embedding LGM into a larger latent variable model; incorporating measurement models for latent predictors; and better assessing model fit and the flexibility in specifying error covariance structures. The strength of PROC CALIS is always accompanied by technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.

  12. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
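The hybrid-weight result can be mimicked with a synthetic archive: generate cases in which the true error variance varies, the ensemble variance is a noisy K-member estimate of it, and the innovation is drawn with the true variance; then the best linear blend of climatological and ensemble variance is found by regressing squared innovations on ensemble variance. This is a self-contained illustration of the weighted-average idea, not the authors' data assimilation system; the gamma/chi-square setup is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic archive of (observation-minus-forecast, ensemble-variance) pairs:
# the true error variance v varies case to case; s is a K-member sample
# estimate of v; the innovation d ~ N(0, v) given v.
n_cases, K = 200000, 8
clim_var = 1.0
v = clim_var * rng.gamma(shape=4.0, scale=0.25, size=n_cases)  # mean = clim_var
s = v * rng.chisquare(K - 1, size=n_cases) / (K - 1)           # ensemble variance
d = rng.normal(0.0, np.sqrt(v))                                # innovations

# Hybrid estimate of E[v | s]: a weighted average of the climatological
# variance and the ensemble sample variance, with the weight w obtained
# empirically by regressing d^2 (an unbiased proxy for v) on s.
w = np.cov(d**2, s)[0, 1] / np.var(s)
hybrid = (1 - w) * clim_var + w * s

# The hybrid blend predicts d^2 better than the raw ensemble variance.
mse_hybrid = np.mean((d**2 - hybrid) ** 2)
mse_raw = np.mean((d**2 - s) ** 2)
print(w, mse_hybrid, mse_raw)
```

The weight w shrinks toward the climatology exactly when the ensemble variance is a noisy estimator (small K), which is the intuition behind hybrid covariance weighting.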

  13. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  15. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T² statistic. Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.

  16. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

Chapter 8: Background error correlation modeling with diffusion operators (book chapter, 07-10-2013; Max Yaremchuk). Excerpt: "…field, then a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of…"
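The essence of diffusion-operator correlation modeling is that repeatedly applying a diffusion step to a delta function yields a quasi-Gaussian correlation function, so correlations can be applied operator-by-operator without ever storing a full correlation matrix. A minimal 1-D sketch (explicit diffusion on a periodic grid; the constants are illustrative):

```python
import numpy as np

# M steps of explicit 1-D diffusion applied to a delta function produce a
# quasi-Gaussian correlation shape, without forming a correlation matrix.
n, kappa, n_steps = 201, 0.25, 400  # kappa <= 0.5 for stability

c = np.zeros(n)
c[n // 2] = 1.0                     # delta function at the grid centre
for _ in range(n_steps):
    c = c + kappa * (np.roll(c, 1) - 2 * c + np.roll(c, -1))

c /= c[n // 2]                      # normalize to a correlation (max = 1)

# The result is close to a Gaussian with variance 2 * kappa * n_steps.
x = np.arange(n) - n // 2
gauss = np.exp(-x**2 / (4 * kappa * n_steps))
print(np.max(np.abs(c - gauss)))
```

Operational schemes use implicit diffusion with spatially varying coefficients (which is how flow-dependent structures like the enhanced transport along strong currents arise), but the Gaussian-from-diffusion mechanism is the same.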

  17. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  18. Natural Covariant Planck Scale Cutoffs and the Cosmic Microwave Background Spectrum.

    PubMed

    Chatwin-Davies, Aidan; Kempf, Achim; Martin, Robert T W

    2017-07-21

    We calculate the impact of quantum gravity-motivated ultraviolet cutoffs on inflationary predictions for the cosmic microwave background spectrum. We model the ultraviolet cutoffs fully covariantly to avoid possible artifacts of covariance breaking. Imposing these covariant cutoffs results in the production of small, characteristically k-dependent oscillations in the spectrum. The size of the effect scales linearly with the ratio of the Planck to Hubble lengths during inflation. Consequently, the relative size of the effect could be as large as one part in 10^{5}; i.e., eventual observability may not be ruled out.

  19. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  20. The use of a covariate reduces experimental error in nutrient digestion studies in growing pigs

    USDA-ARS?s Scientific Manuscript database

    Covariance analysis limits error, the degree of nuisance variation, and overparameterizing factors to accurately measure treatment effects. Data dealing with growth, carcass composition, and genetics often utilize covariates in data analysis. In contrast, nutritional studies typically do not. The ob...

  1. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    ERIC Educational Resources Information Center

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  2. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
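Propagating "the statistics" instead of individual trajectories amounts to iterating the linearized covariance update P ← F P Fᵀ + Q and reading error ellipses off the position sub-block. The toy below uses a 2-D constant-velocity state, not G-CAT's 120-state 6-DOF formulation; all numbers are illustrative.

```python
import numpy as np

# Linearized covariance propagation, the core of a covariance analysis tool:
#   P_{k+1} = F P_k F^T + Q
# followed by a 1-sigma error ellipse from the position sub-block.
# Toy 2-D constant-velocity state [x, y, vx, vy].
dt = 1.0
F = np.eye(4)
F[0, 2] = F[1, 3] = dt                # position integrates velocity

Q = np.diag([0.0, 0.0, 1e-4, 1e-4])   # process noise on velocity only
P = np.diag([1.0, 1.0, 0.01, 0.01])   # initial knowledge covariance

for _ in range(100):                  # propagate 100 steps, no measurements
    P = F @ P @ F.T + Q

# Error ellipse semi-axes: sqrt of eigenvalues of the position covariance.
pos_cov = P[:2, :2]
semi_axes = np.sqrt(np.linalg.eigvalsh(pos_cov))
print(semi_axes)
```

One pass through this recursion yields the full performance envelope, which is exactly the single-run advantage over Monte Carlo that the abstract describes.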

  3. Stochastic process approximation for recursive estimation with guaranteed bound on the error covariance

    NASA Technical Reports Server (NTRS)

    Menga, G.

    1975-01-01

An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.

  4. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm root mean square (RMS). When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm RMS. This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in
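
    The formal error covariance propagation mentioned above amounts to summing the propagated covariances of the independent error sources. The sensitivity matrices and covariances below are random placeholders, not the actual TOPEX/POSEIDON terms.

```python
import numpy as np

# Formal error propagation sketch: if the derived quantity h depends on
# independent error sources through sensitivity matrices A_i, the total
# covariance is C_h = sum_i A_i C_i A_i^T.  All matrices here are invented.
rng = np.random.default_rng(1)
n_obs, n_orbit, n_geoid = 5, 3, 4
A_orbit = rng.normal(size=(n_obs, n_orbit))   # sensitivity to orbit errors
A_geoid = rng.normal(size=(n_obs, n_geoid))   # sensitivity to geoid errors

def random_cov(k):
    M = rng.normal(size=(k, k))
    return M @ M.T / k                        # symmetric positive semi-definite

C_orbit, C_geoid = random_cov(n_orbit), random_cov(n_geoid)
C_total = A_orbit @ C_orbit @ A_orbit.T + A_geoid @ C_geoid @ A_geoid.T

# Removing one source (e.g. the geoid term, to isolate time-dependent error)
C_time_dep = C_total - A_geoid @ C_geoid @ A_geoid.T
```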

  5. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    ERIC Educational Resources Information Center

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

    In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of the Supplemented EM algorithm for…

  6. Covariate Measurement Error Correction for Student Growth Percentiles Using the SIMEX Method

    ERIC Educational Resources Information Center

    Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W.

    2015-01-01

    In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and found that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…

  7. Bias and heteroscedastic memory error in self-reported health behavior: an investigation using covariance structure analysis

    PubMed Central

    Kupek, Emil

    2002-01-01

    Background Frequent use of self-reports for investigating recent and past behavior in medical research requires statistical techniques capable of analyzing complex sources of bias associated with this methodology. In particular, although decreasing accuracy of recalling more distant past events is commonplace, the bias due to the resulting differential in memory errors has rarely been modeled statistically. Methods Covariance structure analysis was used to estimate the recall error of self-reported number of sexual partners for past periods of varying duration and its implication for the bias. Results The results indicated increasing levels of inaccuracy for reports about the more distant past. Considerable positive bias was found for a small fraction of respondents who reported ten or more partners in the last year, last two years and last five years. This is consistent with the effect of heteroscedastic random error, where the majority of partners had been acquired in the more distant past and therefore were recalled less accurately than the partners acquired closer to the time of interviewing. Conclusions Memory errors of this type depend on the salience of the events recalled and are likely to be present in many areas of health research based on self-reported behavior. PMID:12435276

  8. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
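
    The Monte-Carlo estimation of multivariate covariances from an ensemble can be sketched as follows; the synthetic "temperature" and "salinity" ensemble below merely stands in for an ensemble of model integrations.

```python
import numpy as np

# Ensemble (Monte-Carlo) estimation of a multivariate error covariance:
# cross-covariances between variables fall out naturally from the ensemble
# anomalies of the combined state vector.  The ensemble here is synthetic.
rng = np.random.default_rng(2)
n_ens, n_grid = 200, 10
t = rng.normal(size=(n_ens, n_grid))                   # temperature members
s = 0.6 * t + 0.4 * rng.normal(size=(n_ens, n_grid))   # correlated salinity

state = np.hstack([t, s])                 # combined T-S state vector
anom = state - state.mean(axis=0)         # ensemble anomalies
B = anom.T @ anom / (n_ens - 1)           # multivariate covariance estimate

# The off-diagonal T-S block carries the cross-covariance that a univariate
# scheme (temperature-only update) would ignore.
B_ts = B[:n_grid, n_grid:]
```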

  9. A generalized spatiotemporal covariance model for stationary background in analysis of MEG data.

    PubMed

    Plis, S M; Schmidt, D M; Jun, S C; Ranken, D M

    2006-01-01

    Using a noise covariance model based on a single Kronecker product of spatial and temporal covariance in the spatiotemporal analysis of MEG data was demonstrated to provide improvement in the results over that of the commonly used diagonal noise covariance model. In this paper we present a model that is a generalization of all of the above models. It describes models based on a single Kronecker product of spatial and temporal covariance as well as more complicated multi-pair models together with any intermediate form expressed as a sum of Kronecker products of spatial component matrices of reduced rank and their corresponding temporal covariance matrices. The model provides a framework for controlling the tradeoff between the described complexity of the background and computational demand for the analysis using this model. Ways to estimate the value of the parameter controlling this tradeoff are also discussed.
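
    A minimal sketch of the sum-of-Kronecker-products model described above, with invented sizes and an invented number of spatial-temporal pairs:

```python
import numpy as np

# Sum-of-Kronecker-products noise model: the full spatiotemporal covariance
# is C = sum_p kron(S_p, T_p), with spatial components S_p of reduced rank
# and corresponding temporal covariances T_p.  Sizes are illustrative only.
rng = np.random.default_rng(3)
n_space, n_time, n_pairs = 4, 6, 2

def psd(k, rank):
    M = rng.normal(size=(k, rank))
    return M @ M.T                 # rank-limited positive semi-definite factor

C = sum(np.kron(psd(n_space, rank=2), psd(n_time, rank=n_time))
        for _ in range(n_pairs))
# C is (n_space*n_time) x (n_space*n_time); a single pair recovers the plain
# Kronecker model, while more pairs add background complexity at higher cost.
```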

  10. Quantifying Adventitious Error in a Covariance Structure as a Random Effect

    PubMed Central

    Wu, Hao; Browne, Michael W.

    2017-01-01

    We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the dispersion parameter of this distribution to be estimated gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463

  11. The estimation error covariance matrix for the ideal state reconstructor with measurement noise

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1988-01-01

    A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.
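
    The qualitative result can be illustrated with a generic linear least-squares estimator (not the Ideal State Reconstructor itself), for which the estimation error covariance with i.i.d. measurement noise of variance sigma^2 is P = sigma^2 (H^T H)^{-1}:

```python
import numpy as np

# More measurements -> smaller estimation-error covariance, shown with a
# generic linear least-squares estimator.  The 3-state system and noise
# variance are invented for the sketch.
rng = np.random.default_rng(4)
sigma2 = 0.1

def error_cov(n_meas):
    H = rng.normal(size=(n_meas, 3))       # measurement matrix
    return sigma2 * np.linalg.inv(H.T @ H) # error covariance of the estimate

P_few = error_cov(10)    # 10 measurements
P_many = error_cov(100)  # 100 measurements: roughly 10x smaller covariance
```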

  12. Adaptive error covariances estimation methods for ensemble Kalman filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Yicun, E-mail: zhen@math.psu.edu; Harlim, John, E-mail: jharlim@psu.edu

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, to avoid an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.

  13. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed and X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.

  14. Robust Adaptive Beamforming with Sensor Position Errors Using Weighted Subspace Fitting-Based Covariance Matrix Reconstruction.

    PubMed

    Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang

    2018-05-08

    When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.

  15. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.

  16. Using Analysis of Covariance (ANCOVA) with Fallible Covariates

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Aguinis, Herman

    2011-01-01

    Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among three inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…
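
    The attenuation caused by a fallible covariate is easy to demonstrate by simulation; the effect size, reliability, and sample size below are illustrative.

```python
import numpy as np

# Classic attenuation bias: measurement error in a covariate shrinks its
# estimated slope toward zero by roughly the reliability ratio
# var(X) / (var(X) + var(error)).  All numbers here are invented.
rng = np.random.default_rng(5)
n = 200_000
x = rng.normal(size=n)                       # true covariate, variance 1
w = x + rng.normal(scale=1.0, size=n)        # observed covariate, reliability 0.5
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # true slope is 2.0

slope_true = np.cov(x, y)[0, 1] / np.var(x)
slope_fallible = np.cov(w, y)[0, 1] / np.var(w)
# slope_fallible is near 2.0 * 0.5 = 1.0: only half the true effect survives.
```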

  17. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    PubMed

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error to those that do not in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias by up to a ten-fold margin compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.

  18. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
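
    The weight pathology the abstract describes follows directly from how information-criterion averaging weights are computed, w_k ∝ exp(-ΔIC_k / 2); the IC values below are made up to show how quickly one model absorbs nearly all the weight.

```python
import numpy as np

# Information-criterion model-averaging weights: a modest IC gap between
# models is enough for the best model to receive almost 100% of the weight.
def averaging_weights(ic):
    d = np.asarray(ic, dtype=float) - np.min(ic)  # IC differences from best
    w = np.exp(-d / 2.0)
    return w / w.sum()

# Hypothetical IC values for three candidate models.
w = averaging_weights([100.0, 112.0, 130.0])
# A 12-point IC gap already pushes the best model's weight above 99%.
```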

  19. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
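
    A low-rank representation of the kind discussed above keeps the leading eigenpairs of the covariance; the synthetic covariance below has a rapidly decaying spectrum, which is the favorable case for such approximations.

```python
import numpy as np

# Low-rank covariance representation: truncate the eigendecomposition and
# keep only the leading modes.  The synthetic spectrum is invented; how well
# this works depends entirely on how fast the spectrum decays.
rng = np.random.default_rng(6)
n, rank = 50, 5
U, _ = np.linalg.qr(rng.normal(size=(n, n)))  # random orthonormal basis
eigs = 2.0 ** -np.arange(n)                   # rapidly decaying spectrum
C = (U * eigs) @ U.T                          # full covariance, C = U diag(eigs) U^T

vals, vecs = np.linalg.eigh(C)
Vk = vecs[:, -rank:]                          # leading eigenvectors
C_low = (Vk * vals[-rank:]) @ Vk.T            # rank-5 approximation

rel_err = np.linalg.norm(C - C_low) / np.linalg.norm(C)
# With eigenvalues halving at each step, 5 modes capture almost everything.
```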

  20. Preconditioning of the background error covariance matrix in data assimilation for the Caspian Sea

    NASA Astrophysics Data System (ADS)

    Arcucci, Rossella; D'Amore, Luisa; Toumi, Ralf

    2017-06-01

    Data Assimilation (DA) is an uncertainty quantification technique used for improving numerical forecast results by incorporating observed data into prediction models. Because a crucial issue in DA models is the ill-conditioning of the covariance matrices involved, it is essential to introduce preconditioning methods into DA software. Here we present first studies concerning the introduction of two different preconditioning methods in a DA software we are developing (named S3DVAR), which implements a Scalable Three-Dimensional Variational Data Assimilation model for assimilating sea surface temperature (SST) values collected in the Caspian Sea, using the Regional Ocean Modeling System (ROMS) with observations provided by the Group for High Resolution Sea Surface Temperature (GHRSST). We also present the algorithmic strategies we employ.

  1. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Revised error propagation of 40Ar/39Ar data, including covariances

    NASA Astrophysics Data System (ADS)

    Vermeesch, Pieter

    2015-12-01

    The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element (argon), which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in a matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from http://redux.london-geochron.com.

  3. Triple collocation-based estimation of spatially correlated observation error covariance in remote sensing soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang

    2018-01-01

    Spatially correlated errors are typically ignored in data assimilation, thus degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, was proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated using a diagonal R matrix computed using TC and, separately, using a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was considered as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that deterioration of diagonal R assimilation results occurred when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in experiments, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
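
    The triple-collocation estimate that TC_Cov builds on can be sketched with synthetic data: three collocated products share a common truth but have independent zero-mean errors, so each product's error variance follows from pairwise covariances, e.g. var(e_x) = C_xx - C_xy * C_xz / C_yz.

```python
import numpy as np

# Triple collocation (TC) sketch: recover the error variance of one product
# from the pairwise covariances of three products with independent errors.
# The products and error levels below are synthetic stand-ins (e.g. for a
# satellite retrieval, a model simulation, and an in-situ measurement).
rng = np.random.default_rng(7)
n = 100_000
truth = rng.normal(size=n)
x = truth + rng.normal(scale=0.3, size=n)   # product 1, error std 0.3
y = truth + rng.normal(scale=0.5, size=n)   # product 2, error std 0.5
z = truth + rng.normal(scale=0.4, size=n)   # product 3, error std 0.4

C = np.cov(np.vstack([x, y, z]))
var_ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
# var_ex recovers the true error variance 0.3**2 = 0.09 up to sampling noise.
```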

  4. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    Matthew J. Carrier and Hans Ngodock. … (2001), which sought to model error correlations based on the explicit solution of a generalized diffusion equation. The implicit solution is

  6. Noise covariance incorporated MEG-MUSIC algorithm: a method for multiple-dipole estimation tolerant of the influence of background brain activity.

    PubMed

    Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y

    1997-09-01

    This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.

  7. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
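
    The order-of-operations effect can be reproduced in a small simulation. The skewed, heteroscedastic outcome model below is invented for illustration, and the sketch uses SciPy for the rank-based INT.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Adjust-then-INT vs INT-then-adjust: with a skewed, heteroscedastic outcome,
# applying rank-based INT to residuals re-introduces a covariate correlation,
# while INT-first leaves residuals uncorrelated with the covariate.
rng = np.random.default_rng(8)
n = 100_000
c = rng.uniform(0.1, 1.0, size=n)            # covariate
y = c + c * (rng.exponential(size=n) - 1.0)  # skewed, heteroscedastic outcome

def rank_int(x):
    # Rank-based inverse normal transformation.
    return norm.ppf((rankdata(x) - 0.5) / len(x))

def resid(dep, cov):
    # OLS residuals of dep after adjusting for an intercept and cov.
    X = np.column_stack([np.ones_like(cov), cov])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return dep - X @ beta

r_bad = np.corrcoef(rank_int(resid(y, c)), c)[0, 1]   # adjust, then INT
r_good = np.corrcoef(resid(rank_int(y), c), c)[0, 1]  # INT, then adjust
```

    Here r_good is zero up to floating-point noise (an OLS property), while r_bad is visibly non-zero: the nonlinear rank transform converts the residuals' covariate-dependent shape back into a linear correlation.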

  8. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  9. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

    The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix {C}ij. Our analytical model shows that {C}ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance {C}II. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc-1, and can be even ~200 times larger at k ~ 5 Mpc-1. We also establish that the off-diagonal terms of {C}ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
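
    In schematic form, the two contributions described above can be written as follows. The notation and normalization here are assumptions based on the abstract's description, not the paper's exact conventions:

```latex
C_{ij} \;=\;
\underbrace{\frac{\delta_{ij}}{N_{k_i}}\,P^2(k_i)}_{\text{Gaussian term}}
\;+\;
\underbrace{\frac{1}{V}\,\bar{T}(k_i,k_j)}_{\text{trispectrum term}}
```

    where N_{k_i} is the number of Fourier modes in bin i, V is the survey or simulation volume, and \bar{T} is the bin-averaged trispectrum; only the second term can produce the off-diagonal correlations the abstract reports.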

  10. Ferris Wheels and Filling Bottles: A Case of a Student's Transfer of Covariational Reasoning across Tasks with Different Backgrounds and Features

    ERIC Educational Resources Information Center

    Johnson, Heather Lynn; McClintock, Evan; Hornbein, Peter

    2017-01-01

    Using an actor-oriented perspective on transfer, we report a case of a student's transfer of covariational reasoning across tasks involving different backgrounds and features. In this study, we investigated the research question: How might a student's covariational reasoning on Ferris wheel tasks, involving attributes of distance, width, and…

  11. Covariate Imbalance and Precision in Measuring Treatment Effects

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2011-01-01

    Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…

  12. Precomputing Process Noise Covariance for Onboard Sequential Filters

    NASA Technical Reports Server (NTRS)

    Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell

    2017-01-01

    Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
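
    The basic mechanism — a precomputed process noise profile inflating the state covariance at each propagation step — can be sketched with a toy constant-velocity model. The dynamics, noise model, and profile values below are illustrative, not the paper's:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition

def q_matrix(sigma_a):
    # Discrete white-noise-acceleration process noise for this model.
    return sigma_a**2 * np.array([[dt**4 / 4.0, dt**3 / 2.0],
                                  [dt**3 / 2.0, dt**2]])

# Hypothetical precomputed profile: the acceleration-error level grows
# along the reference trajectory instead of staying at one tuned constant.
profile = [q_matrix(s) for s in np.linspace(0.01, 0.2, 10)]

P = np.diag([1.0, 0.1])                  # initial state covariance
history = [P]
for Q in profile:
    P = F @ P @ F.T + Q                  # each step, Q inflates uncertainty
    history.append(P)
```

    A time-varying profile lets the filter carry larger uncertainty exactly where the model errors are known to be larger, which is the effect the consider-analysis precomputation provides.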

  14. Galaxy-galaxy lensing estimators and their covariance properties

    NASA Astrophysics Data System (ADS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  15. Evaluation of orbits with incomplete knowledge of the mathematical expectancy and the matrix of covariation of errors

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.

    1980-01-01

    The problem of selecting the optimal algorithm of filtration and the optimal composition of the measurements is examined assuming that the precise values of the mathematical expectancy and the matrix of covariation of errors are unknown. It is demonstrated that the optimal algorithm of filtration may be utilized for making some parameters more precise (for example, the parameters of the gravitational fields) after preliminary determination of the elements of the orbit by a simpler method of processing (for example, the method of least squares).

  16. Robust covariance estimation of galaxy-galaxy weak lensing: validation and limitation of jackknife covariance

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Takada, Masahiro; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2017-09-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations and their inherent halo catalogues. Using the mock catalogue to study the error covariance matrix of galaxy-galaxy weak lensing, we compare the full covariance with the 'jackknife' (JK) covariance, the method often used in the literature that estimates the covariance from resamples of the data itself. We show that the JK covariance varies over realizations of mock lensing measurements, while the average JK covariance over mocks can give a reasonably accurate estimate of the true covariance up to separations comparable with the size of a JK subregion. The scatter in JK covariances is found to be ∼10 per cent after we subtract the lensing measurement around random points. However, the JK method tends to underestimate the covariance at larger separations, increasingly so for a survey with a higher number density of source galaxies. We apply our method to the Sloan Digital Sky Survey (SDSS) data, and show that the 48 mock SDSS catalogues nicely reproduce the signals and the JK covariance measured from the real data. We then argue that the use of the accurate covariance, compared to the JK covariance, allows us to use the lensing signals at large scales beyond the size of the JK subregion, which contain cleaner cosmological information in the linear regime.
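
    A minimal delete-one jackknife covariance estimator of the kind discussed above might look like this (the array shapes are illustrative):

```python
import numpy as np

def jackknife_covariance(per_region):
    """Delete-one jackknife covariance of the mean signal.

    per_region: (n_sub, n_bins) array holding the measurement from each of
    the n_sub spatial subregions (e.g. a lensing signal in radial bins).
    """
    n = per_region.shape[0]
    total = per_region.sum(axis=0)
    # Delete-one estimates: mean signal with subregion i left out.
    loo = (total[None, :] - per_region) / (n - 1)
    d = loo - loo.mean(axis=0)
    return (n - 1) / n * d.T @ d

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 4))   # synthetic per-subregion measurements
C_jk = jackknife_covariance(X)
```

    For independent subregions this reduces exactly to the sample covariance of the mean; the underestimation the abstract describes arises when the signal is correlated across subregions on scales comparable to the subregion size.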

  17. Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances

    NASA Astrophysics Data System (ADS)

    Vermeesch, P.

    2015-12-01

    Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar − 298.56 × 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ² (1 + RJ)²], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi-collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
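
    As a numeric illustration of the age equation and its error propagation, the sketch below evaluates t = log(1 + JR)/λ and compares the conventional (zero-covariance) formula with a version carrying a J-R covariance term. The λ value is the commonly used total 40K decay constant; J, R, and the uncertainties are invented for the example:

```python
import numpy as np

lam = 5.543e-10      # total 40K decay constant (1/yr)
J, R = 0.01, 50.0    # irradiation parameter and 40Ar*/39Ar ratio (illustrative)
sJ, sR = 1e-4, 0.5   # standard errors (illustrative)

def age(J, R):
    return np.log(1.0 + J * R) / lam

def age_variance(J, R, sJ, sR, cov_JR=0.0):
    # First-order propagation; cov_JR = 0 recovers the conventional formula.
    dR = J / (lam * (1.0 + J * R))   # dt/dR
    dJ = R / (lam * (1.0 + J * R))   # dt/dJ
    return dR**2 * sR**2 + dJ**2 * sJ**2 + 2.0 * dR * dJ * cov_JR

t = age(J, R)                                            # ~0.73 Gyr here
s_uncorr = np.sqrt(age_variance(J, R, sJ, sR))
s_corr = np.sqrt(age_variance(J, R, sJ, sR, cov_JR=0.8 * sJ * sR))
```

    With both partial derivatives positive, a positive J-R covariance inflates the age uncertainty, which is exactly the kind of term the conventional formula silently drops.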

  18. Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials

    PubMed Central

    Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.

    2013-01-01

    Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by

  19. Galaxy–galaxy lensing estimators and their covariance properties

    DOE PAGES

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...

    2017-07-21

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  1. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  2. Uncertainties and coupled error covariances in the CERA-20C, ECMWF's first coupled reanalysis ensemble

    NASA Astrophysics Data System (ADS)

    Feng, Xiangbo; Haines, Keith

    2017-04-01

    ECMWF has produced its first ensemble ocean-atmosphere coupled reanalysis, the 20th century Coupled ECMWF ReAnalysis (CERA-20C), with 10 ensemble members at 3-hour resolution. Here the analysis uncertainties (ensemble spread) of lower atmospheric variables and sea surface temperature (SST), and their correlations, are quantified on diurnal, seasonal and longer timescales. The 2-m air temperature (T2m) spread is always larger than the SST spread at high frequencies, but smaller on monthly timescales, except in deep convection areas, indicating increasing SST control at longer timescales. Spatially, the T2m-SST ensemble correlations are strongest where ocean mixed layers are shallow and can respond to atmospheric variability. Where atmospheric convection is strong with a deep precipitating boundary layer, T2m-SST correlations are greatly reduced. As the 20th century progresses more observations become available, and ensemble spreads decline at all variability timescales. The T2m-SST correlations increase through the 20th century, except in the tropics. As winds become better constrained over the oceans, with less spread, T2m and SST become more correlated. In the tropics, strong ENSO-related inter-annual variability is found in the correlations, as atmospheric convection centres move. These ensemble spreads have been used to provide background errors for the assimilation throughout the reanalysis, have implications for the weights given to observations, and are a general measure of the uncertainties in the analysed product. Although cross boundary covariances are not currently used, they offer considerable potential for strengthening the ocean-atmosphere coupling in future reanalyses.

  3. Covariance NMR Processing and Analysis for Protein Assignment.

    PubMed

    Harden, Bradley J; Frueh, Dominique P

    2018-01-01

    During NMR resonance assignment it is often necessary to relate nuclei to one another indirectly, through their common correlations to other nuclei. Covariance NMR has emerged as a powerful technique to correlate such nuclei without relying on error-prone peak picking. However, false-positive artifacts in covariance spectra have impeded a general application to proteins. We recently introduced pre- and postprocessing steps to reduce the prevalence of artifacts in covariance spectra, allowing for the calculation of a variety of 4D covariance maps obtained from diverse combinations of pairs of 3D spectra, and we have employed them to assign backbone and sidechain resonances in two large and challenging proteins. In this chapter, we present a detailed protocol describing how to (1) properly prepare existing 3D spectra for covariance, (2) understand and apply our processing script, and (3) navigate and interpret the resulting 4D spectra. We also provide solutions to a number of errors that may occur when using our script, and we offer practical advice when assigning difficult signals. We believe such 4D spectra, and covariance NMR in general, can play an integral role in the assignment of NMR signals.

  4. Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
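
    A compact sketch of the U-D factorization itself, in the style of the Bierman/Thornton recursion, is shown below; this is an illustration of the decomposition, not the thesis code:

```python
import numpy as np

def udu_factorize(P):
    """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular (ones on the diagonal)."""
    n = P.shape[0]
    P = P.astype(float).copy()   # work on the upper triangle in place
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
            for k in range(i + 1):
                # Schur-complement update of the leading submatrix.
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, d

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
P = A @ A.T + 5.0 * np.eye(5)    # random symmetric positive-definite matrix
U, d = udu_factorize(P)
```

    Because the recursion works on triangular factors, the diagonal entries d stay positive in well-conditioned problems where a conventionally propagated covariance can lose symmetry or positive definiteness.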

  5. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    NASA Astrophysics Data System (ADS)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In longitudinal data with two responses, correlation arises both among repeated measurements on the same subject and between the responses. This induces autocorrelated errors, which can be handled through a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. Penalized spline regression involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error than the model without a covariance matrix.
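
    A weighted penalized spline of the kind described can be sketched with a truncated-linear basis. The basis, penalty, and weight matrix here are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam, W=None):
    """Minimize (y - Xb)' W (y - Xb) + lam * b' D b, where W plays the role
    of an inverse working covariance matrix and the ridge penalty D acts
    only on the knot coefficients (intercept and slope are unpenalized)."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0.0, None) for k in knots])
    if W is None:
        W = np.eye(len(y))
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    b = np.linalg.solve(X.T @ W @ X + lam * D, X.T @ W @ y)
    return X @ b

x = np.linspace(0.0, 1.0, 50)
y_lin = 1.0 + 2.0 * x            # noiseless linear data
fit = penalized_spline_fit(x, y_lin, knots=[0.25, 0.5, 0.75], lam=1.0)
```

    On purely linear data the penalty is inactive (the knot coefficients are zero at the optimum), so the fit recovers the line exactly; a non-identity W would downweight correlated errors the way the abstract's covariance matrix does.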

  6. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.

  7. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  8. Covariance analysis for evaluating head trackers

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon

    2017-10-01

    Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
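
    The uncertainty measure mentioned above can be computed directly. The error-rotation samples below are synthetic small rotation vectors (axis times angle, in radians), standing in for a tracker's estimation errors:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical error rotations as small rotation vectors (axis * angle, rad).
err = 0.01 * rng.standard_normal((500, 3))
C = err.T @ err / len(err)          # covariance of the error rotations

# Schatten 2-norm (= Frobenius norm) of the covariance square root.
w, V = np.linalg.eigh(C)
sqrt_C = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
uncertainty = np.linalg.norm(sqrt_C, "fro")
```

    Because the Frobenius norm of C^(1/2) equals the square root of the trace of C, this scalar summarizes the spread of the error rotations without requiring a ground-truth frontal orientation.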

  9. Ocean Spectral Data Assimilation Without Background Error Covariance Matrix

    DTIC Science & Technology

    2016-01-01

    float data (Chu et al. 2007), and temporal and spatial variability of the global upper ocean heat content (Chu 2011) from the data of the Global… Melnichenko OV, Wells NC (2007) Long baroclinic Rossby waves in the tropical North Atlantic observed from profiling floats. J Geophys Res… Hall J, Harrison DE and Stammer D, Eds., ESA Publication WPP-306… Tang Y, Kleeman R (2004) SST assimilation experiments in a

  10. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, such as genotypes, whose low frequency and effect strength make their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely they may be influential and therefore non-ignorable. In the case that these covariate effects cannot be estimated due to power and are non-ignorable, they are considered nuisance, in that they have to be considered but, due to type I error, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.

  11. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function, and therefore conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.

  12. 1/2-BPS D-branes from covariant open superstring in AdS4 × CP3 background

    NASA Astrophysics Data System (ADS)

    Park, Jaemo; Shin, Hyeonjoon

    2018-05-01

    We consider the open superstring action in the AdS4 × CP3 background and investigate the boundary conditions suitable for the open superstring describing 1/2-BPS D-branes by imposing the κ-symmetry of the action. This results in the classification of 1/2-BPS D-branes from the covariant open superstring. It is shown that the 1/2-BPS D-brane configurations are restricted considerably by the Kähler structure on CP3. Only D-branes without worldvolume fluxes are considered.

  13. Measuring continuous baseline covariate imbalances in clinical trial data

    PubMed Central

    Ciolino, Jody D.; Martin, Renee’ H.; Zhao, Wenle; Hill, Michael D.; Jauch, Edward C.; Palesch, Yuko Y.

    2014-01-01

    This paper presents and compares several methods of measuring continuous baseline covariate imbalance in clinical trial data. Simulations illustrate that though the t-test is an inappropriate method of assessing continuous baseline covariate imbalance, the test statistic itself is a robust measure in capturing imbalance in continuous covariate distributions. Guidelines to assess effects of imbalance on bias, type I error rate, and power for hypothesis test for treatment effect on continuous outcomes are presented, and the benefit of covariate-adjusted analysis (ANCOVA) is also illustrated. PMID:21865270
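    The use of the t statistic as a descriptive measure of imbalance (rather than as a hypothesis test) can be sketched as follows; the simulated data, sample sizes, and Welch-style standard error are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np

# Two-sample t statistic used purely as a descriptive measure of baseline
# covariate imbalance between treatment arms.
def imbalance_t(x_trt, x_ctl):
    n1, n2 = len(x_trt), len(x_ctl)
    v1, v2 = x_trt.var(ddof=1), x_ctl.var(ddof=1)
    se = np.sqrt(v1 / n1 + v2 / n2)              # Welch-style standard error
    return (x_trt.mean() - x_ctl.mean()) / se

rng = np.random.default_rng(1)
trt = rng.normal(0.3, 1.0, 100)   # arm means differ by 0.3 SD: imbalanced
ctl = rng.normal(0.0, 1.0, 100)
print(round(float(imbalance_t(trt, ctl)), 2))
```

A large |t| flags a covariate whose imbalance may bias the unadjusted treatment-effect estimate, motivating ANCOVA-style adjustment.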

  14. Corrected score estimation in the proportional hazards model with misclassified discrete covariates

    PubMed Central

    Zucker, David M.; Spiegelman, Donna

    2013-01-01

    We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700

  15. Using aggregate data to estimate the standard error of a treatment-covariate interaction in an individual patient data meta-analysis.

    PubMed

    Kovalchik, Stephanie A; Cumberland, William G

    2012-05-01

    Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effects. A treatment-covariate interaction in an individual patient data (IPD) meta-analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient-level data. We present methods that use aggregate data to estimate the standard error of an IPD meta-analysis' treatment-covariate interaction for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta-regression illustrates the practical utility of the methodology. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation

    NASA Astrophysics Data System (ADS)

    Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.

    2018-01-01

    Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
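    A rough numerical sketch of the two treatments compared above, with an assumed ridge-style reconditioning of the correlation matrix to a target condition number and a Gaussian taper standing in for a compactly supported localization function (illustrative choices, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ens = 40, 10
X = rng.normal(size=(n, n_ens))
X -= X.mean(axis=1, keepdims=True)            # remove ensemble mean
P = X @ X.T / (n_ens - 1)                     # sample covariance: rank deficient

# 1) Reconditioning: ridge on the *correlation* matrix, chosen to hit a
#    target condition number (variances are slightly inflated as a side effect).
d = np.sqrt(np.diag(P))
Corr = P / np.outer(d, d)
lam = np.linalg.eigvalsh(Corr)
target = 100.0
delta = (lam.max() - target * lam.min()) / (target - 1.0)
Corr_rec = Corr + delta * np.eye(n)
P_rec = Corr_rec * np.outer(d, d)

# 2) Localization: Schur (element-wise) product with a distance-based taper,
#    which suppresses spurious long-range sample correlations.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.exp(-(dist / 5.0) ** 2)                # Gaussian stand-in for Gaspari-Cohn
P_loc = P * L

print(np.linalg.matrix_rank(P), round(float(np.linalg.cond(Corr_rec))))
```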

  17. UDU(T) covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU(T), where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
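    The factorization itself is compact; the following is an illustrative NumPy version of a UDU(T) decomposition in the Bierman/Thornton style (a sketch, not the original flight algorithms), verified against the reconstruction P = UDU(T):

```python
import numpy as np

def udu_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular and d the diagonal factor."""
    P = P.copy().astype(float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):            # process columns right to left
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, d

A = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
U, d = udu_factorize(A)
assert np.allclose(U @ np.diag(d) @ U.T, A)   # exact reconstruction
```

Propagating U and d instead of P avoids the loss of symmetry and positive definiteness that plagues the conventional covariance update in finite precision.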

  18. Examination of various roles for covariance matrices in the development, evaluation, and application of nuclear data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.L.

    The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs.

  19. Scale covariance and G-varying cosmology. II - Thermodynamics, radiation, and the 3 K background

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Hsieh, S.-H.

    1979-01-01

    Within the framework of a scale-covariant theory of gravitation, a semiclassical description of particles and photons is given. Thermodynamic relations consistent with the modified conservation equations are derived. Application to a system of radiation shows that the observed 3-K background radiation can be interpreted, within the present framework, as a remnant of equilibrium radiation in the past. As the theory postulates a nonstandard coupling between gravitation and electrodynamics, the assumption that Einstein's theory of gravitation is unchanged forces modifications at the atomic level. The use of Minkowskian spacetime in atomic physics is found to be adequate only over small, but not large, time scales compared with the age of the universe. As a result, a relation between energy and the frequency of a free photon is demonstrated. Possible observational consequences of this relation are discussed.

  20. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302

  1. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to an existing model, based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and the model, and scatter plots. The performance of our model and of previous models is compared in source analysis of a large number of single dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
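    The model class can be illustrated by direct construction; the dimensions, component count, and noise floor below are arbitrary choices for the sketch, not the paper's parameters:

```python
import numpy as np

# Spatiotemporal covariance as a sum of Kronecker products: each spatial
# component (rank-1 spatial covariance) carries its own temporal covariance.
rng = np.random.default_rng(3)
n_space, n_time, n_comp = 5, 6, 2

C = np.zeros((n_space * n_time, n_space * n_time))
for _ in range(n_comp):
    s = rng.normal(size=n_space)
    S = np.outer(s, s)                      # rank-1 spatial component covariance
    a = rng.normal(size=(n_time, n_time))
    T = a @ a.T + 0.1 * np.eye(n_time)      # that component's temporal covariance (SPD)
    C += np.kron(S, T)

C += 0.05 * np.eye(n_space * n_time)        # white sensor-noise floor
print(C.shape, bool(np.all(np.linalg.eigvalsh(C) > 0)))
```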

  2. Dynamic Tasking of Networked Sensors Using Covariance Information

    DTIC Science & Technology

    2010-09-01

    … has been created under an effort called TASMAN (Tasking Autonomous Sensors in a Multiple Application Network). One of the first studies utilizing this environment was focused on a novel resource management approach, namely covariance-based tasking. Under this scheme, the state error covariance of resident space objects (RSO), sensor characteristics, and sensor-target geometry were used to determine the effectiveness of future observations …

  3. Adult myeloid leukaemia and radon exposure: a Bayesian model for a case-control study with error in covariates.

    PubMed

    Toti, Simona; Biggeri, Annibale; Forastiere, Francesco

    2005-06-30

    The possible association between radon exposure in dwellings and adult myeloid leukaemia had been explored in an Italian province by a case-control study. A total of 44 cases and 211 controls were selected from the death certificate file. No association had been found in the original study (OR = 0.58 for >185 vs ≤80 Bq/m³). Here we reanalyse the data taking into account the measurement error of radon concentration and the presence of missing data. A Bayesian hierarchical model with error in covariates is proposed which allows appropriate imputation of missing values. The general conclusion of no evidence of association with radon does not change, but a negative association is no longer observed (OR = 0.99 for >185 vs ≤80 Bq/m³). After adjusting for residential house radon and gamma radiation, and for the multilevel data structure, the geological features of the soil are associated with adult myeloid leukaemia risk (OR = 2.14, 95 per cent Cr.I. 1.0-5.5). Copyright 2005 John Wiley & Sons, Ltd.

  4. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.

  5. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, whereby an estimator asymptotically performs as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
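    The weighting idea can be sketched in the special case of an orthonormal linear design, where the (adaptive) Lasso has a closed-form soft-thresholding solution; this toy setting is an assumption for illustration and is far simpler than the paper's nonlinear mixed-effects models:

```python
import numpy as np

# With orthonormal columns (X'X = I) the adaptive lasso reduces to
# soft-thresholding the OLS coefficients with per-coefficient threshold
# lam * w_j. AALasso-style weights use SE/|beta_ML|, so poorly determined
# coefficients are penalized more heavily.
rng = np.random.default_rng(4)
n, p = 200, 4
X, _ = np.linalg.qr(rng.normal(size=(n, p)))     # orthonormal design columns
beta_true = np.array([5.0, 2.0, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.normal(size=n)

beta_ols = X.T @ y                               # OLS, since X'X = I
sigma = np.std(y - X @ beta_ols, ddof=p)
se = sigma * np.ones(p)                          # coefficient SEs (X'X = I)
w = se / np.abs(beta_ols)                        # adjusted-adaptive-style weights

lam = 1.0
beta_aal = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam * w, 0.0)
print(np.round(beta_aal, 2))
```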

  6. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  7. Implementation of a flow-dependent background error correlation length scale formulation in the NEMOVAR OSTIA system

    NASA Astrophysics Data System (ADS)

    Fiedler, Emma; Mao, Chongyuan; Good, Simon; Waters, Jennifer; Martin, Matthew

    2017-04-01

    OSTIA is the Met Office's Operational Sea Surface Temperature (SST) and Ice Analysis system, which produces L4 (globally complete, gridded) analyses on a daily basis. Work is currently being undertaken to replace the original OI (Optimal Interpolation) data assimilation scheme with NEMOVAR, a 3D-Var data assimilation method developed for use with the NEMO ocean model. A dual background error correlation length scale formulation is used for SST in OSTIA, as implemented in NEMOVAR. Short and long length scales are combined according to the ratio of the decomposition of the background error variances into short and long spatial correlations. The pre-defined background error variances vary spatially and seasonally, but not on shorter time-scales. If the derived length scales applied to the daily analysis are too long, SST features may be smoothed out. Therefore a flow-dependent component to determining the effective length scale has also been developed. The total horizontal gradient of the background SST field is used to identify regions where the length scale should be shortened. These methods together have led to an improvement in the resolution of SST features compared to the previous OI analysis system, without the introduction of spurious noise. This presentation will show validation results for feature resolution in OSTIA using the OI scheme, the dual length scale NEMOVAR scheme, and the flow-dependent implementation.

  8. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  9. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
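    The filter-based construction of a white-plus-power-law data covariance can be sketched as follows, using the standard fractional-differencing recursion for power-law noise; the amplitudes and spectral index are arbitrary illustrative values, and this is a sketch of the general technique rather than the paper's implementation:

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """Fractional-differencing (Hosking/Kasdin-style) filter coefficients
    whose output has a 1/f^alpha power spectrum."""
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

n, alpha = 50, 1.5                   # index between flicker (1) and random walk (2)
h = powerlaw_filter(alpha, n)

# Lower-triangular Toeplitz filter matrix F: colored noise = F @ white noise.
F = np.zeros((n, n))
for k in range(n):
    F += np.diag(np.full(n - k, h[k]), -k)

# Adding the processes (rather than combining in quadrature) gives
# C = s_w^2 I + s_pl^2 F F^T, which stays invertible without being Toeplitz.
C = 1.0**2 * np.eye(n) + 0.5**2 * F @ F.T
print(bool(np.all(np.linalg.eigvalsh(C) > 0)))
```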

  10. On the use of the covariance matrix to fit correlated data

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
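    The effect is easy to reproduce numerically. In this sketch (with assumed numbers, in the spirit of the paper's argument), two consistent measurements sharing a 20% normalization error are averaged by a χ2 fit with the full covariance matrix built by linearized error propagation, and the best-fit constant falls below both data points:

```python
import numpy as np

y = np.array([1.5, 1.0])                           # two measurements of one quantity
stat = 0.10 * y                                    # independent 10% statistical errors
f_norm = 0.20                                      # common 20% normalization error

# Linearized covariance: statistical variances plus fully correlated
# normalization term proportional to the *measured* values.
C = np.diag(stat**2) + f_norm**2 * np.outer(y, y)
Cinv = np.linalg.inv(C)
one = np.ones(2)
mu = (one @ Cinv @ y) / (one @ Cinv @ one)         # chi^2-minimizing constant
print(round(float(mu), 3))                         # → 0.882, below both points
```

The bias disappears if the normalization term is propagated from the fitted value rather than from the individual measurements, which is the remedy the linearization argument above suggests.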

  11. On the regularity of the covariance matrix of a discretized scalar field on the sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilbao-Ahedo, J.D.; Barreiro, R.B.; Herranz, D.

    2017-02-01

    We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
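    The rank constraint has a simple linear-algebra core: a covariance built from a finite set of modes, C = AAᵀ, cannot exceed the rank of A, and adding a noise term regularizes it. A toy sketch with a random matrix standing in for the sampled spherical harmonics (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
npix, nmodes = 100, 36                     # e.g. harmonics up to some l_max
A = rng.normal(size=(npix, nmodes))        # stand-in for harmonics sampled at pixels
C = A @ A.T                                # pixel-space covariance

# Rank is capped by the number of modes, so C is singular when nmodes < npix;
# a noise (ridge) term restores full rank and makes C invertible.
r = np.linalg.matrix_rank(C)
C_reg = C + 1e-3 * np.trace(C) / npix * np.eye(npix)
print(r, np.linalg.matrix_rank(C_reg))
```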

  12. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
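    A hedged sketch of the general idea (not the paper's estimator): strip the estimated common factors via principal components, then threshold small off-diagonal entries of the residual covariance so the idiosyncratic part is sparse. The threshold constant and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, k = 500, 20, 2
B = rng.normal(size=(p, k))                  # factor loadings
f = rng.normal(size=(n, k))                  # common factors
eps = rng.normal(size=(n, p))                # idiosyncratic errors (diagonal cov)
X = f @ B.T + eps

# Remove estimated factors via the top-k principal components.
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n
vals, vecs = np.linalg.eigh(S)
V = vecs[:, -k:]                             # leading eigenvectors
resid = Xc - Xc @ V @ V.T                    # data with factor content stripped
Su = resid.T @ resid / n                     # idiosyncratic sample covariance

# Hard-threshold small off-diagonal entries; keep the variances intact.
tau = 2.0 * np.sqrt(np.log(p) / n)           # universal-style threshold level
Su_thr = np.where(np.abs(Su) >= tau, Su, 0.0)
np.fill_diagonal(Su_thr, np.diag(Su))
print(np.count_nonzero(Su_thr), np.count_nonzero(Su))
```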

  13. Constant covariance in local vertical coordinates for near-circular orbits

    NASA Technical Reports Server (NTRS)

    Shepperd, Stanley W.

    1991-01-01

    A method is presented for devising a covariance matrix that either remains constant or grows in keeping with the presence of a period error in a rotating local-vertical coordinate system. The solution presented may prove useful in the initialization of simulation covariance matrices for near-circular-orbit problems. Use is made of the Clohessy-Wiltshire equations and the travelling-ellipse formulation.
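    A sketch of covariance propagation in the rotating local-vertical frame, using the standard in-plane Clohessy-Wiltshire state transition matrix; the state ordering, mean motion, and initial variances are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def cw_stm(n, t):
    """In-plane Clohessy-Wiltshire state transition matrix for state
    [x, y, vx, vy]: x radial, y along-track, n the mean motion (rad/s)."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0.0, s / n,          2*(1 - c) / n    ],
        [6*(s - n*t),  1.0, 2*(c - 1) / n,  (4*s - 3*n*t) / n],
        [3*n*s,        0.0, c,              2*s              ],
        [6*n*(c - 1),  0.0, -2*s,           4*c - 3          ],
    ])

n = 0.001                                   # mean motion (roughly LEO scale)
P0 = np.diag([100.0, 100.0, 0.01, 0.01])    # initial position/velocity variances
Phi = cw_stm(n, 3600.0)
P = Phi @ P0 @ Phi.T                        # covariance one hour later
print(bool(P[1, 1] > P0[1, 1]))             # along-track variance grows secularly
```

The secular 6(sin nt − nt) coupling of radial error into along-track drift is exactly the period-error growth the abstract refers to; a covariance initialized to suppress that coupling can remain constant in the rotating frame.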

  14. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  15. Variations of cosmic large-scale structure covariance matrices across parameter space

    NASA Astrophysics Data System (ADS)

    Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte

    2017-03-01

    The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.

  16. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    PubMed

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  17. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790

  18. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
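
    A minimal sketch of the POET idea: keep K principal components of the sample covariance, then hard-threshold the principal orthogonal complement. The universal threshold and the simulated factor model are simplifying assumptions (the paper uses adaptive thresholds).

```python
import numpy as np

# POET-style estimator: low-rank part from K leading eigenpairs plus a
# thresholded (sparse) residual covariance.
rng = np.random.default_rng(2)
n, p, K = 400, 40, 3

B = rng.standard_normal((p, K))              # factor loadings (simulated)
f = rng.standard_normal((n, K))              # latent factors
X = f @ B.T + rng.standard_normal((n, p))    # approximate factor model data

S = np.cov(X, rowvar=False)
w, V = np.linalg.eigh(S)
idx = np.argsort(w)[::-1][:K]                # K leading eigenpairs
low_rank = (V[:, idx] * w[idx]) @ V[:, idx].T

resid = S - low_rank                         # principal orthogonal complement
tau = 2 * np.sqrt(np.log(p) / n)             # simple universal threshold (assumption)
resid_t = resid * (np.abs(resid) >= tau)
np.fill_diagonal(resid_t, np.diag(resid))    # never threshold the variances

S_poet = low_rank + resid_t                  # POET-style estimator
```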

  19. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  20. Kernel Equating Under the Non-Equivalent Groups With Covariates Design

    PubMed Central

    Bränberg, Kenny

    2015-01-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests. PMID:29881012

  1. Kernel Equating Under the Non-Equivalent Groups With Covariates Design.

    PubMed

    Wiberg, Marie; Bränberg, Kenny

    2015-07-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests.

  2. Comparison of bias-corrected covariance estimators for MMRM analysis in longitudinal data with dropouts.

    PubMed

    Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori

    2017-10-01

    In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, using the MMRM analysis with the UN covariance could lead to convergence problems for numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in small-sample settings. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type I error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure under unadjustment of the degrees of freedom, which is frequently used in applications
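
    For independent data, the Kauermann-Carroll and Mancl-DeRouen corrections reduce to the leverage-based HC2 and HC3 adjustments of the OLS sandwich estimator. The sketch below illustrates that simplified analogue (an assumption for illustration, not the MMRM setting itself).

```python
import numpy as np

# Sandwich (robust) covariance for OLS with leverage-based small-sample
# corrections: HC2 (Kauermann-Carroll style) and HC3 (Mancl-DeRouen style).
rng = np.random.default_rng(3)
n = 60
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n) * (1 + 0.5 * np.abs(X[:, 1]))

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
r = y - X @ beta
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages (hat-matrix diagonal)

def sandwich(weights):
    # bread * meat * bread, with residuals inflated by the given weights
    meat = X.T @ (X * (weights * r**2)[:, None])
    return XtX_inv @ meat @ XtX_inv

V_hc0 = sandwich(np.ones(n))        # uncorrected sandwich
V_hc2 = sandwich(1 / (1 - h))       # KC-style correction
V_hc3 = sandwich(1 / (1 - h)**2)    # MD-style correction (most conservative)
```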

  3. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088

  4. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

    While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to a approximately 45% increase in the expected errors in dark energy parameters.
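
    The effect of off-diagonal covariance on inferred errors can be illustrated with the error of a simple mean under pairwise-correlated measurements (the numbers below are illustrative assumptions, not values from the paper): with correlation, the error floors near sigma*sqrt(rho) instead of shrinking like 1/sqrt(N).

```python
import numpy as np

# Error of the mean of N measurements, with and without a common
# pairwise correlation rho in the covariance matrix.
N, sigma, rho = 50, 0.15, 0.1
C_diag = sigma**2 * np.eye(N)
C_corr = sigma**2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))

w = np.ones(N)
def mean_error(C):
    return np.sqrt(w @ C @ w) / N    # error of the simple (unweighted) mean

err_uncorr = mean_error(C_diag)      # shrinks like sigma/sqrt(N)
err_corr = mean_error(C_corr)        # floors near sigma*sqrt(rho) for large N
```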

  6. The impact of different background errors in the assimilation of satellite radiances and in-situ observational data using WRFDA for three rainfall events over Iran

    NASA Astrophysics Data System (ADS)

    Zakeri, Zeinab; Azadi, Majid; Ghader, Sarmad

    2018-01-01

    Satellite radiances and in-situ observations are assimilated through the Weather Research and Forecasting Data Assimilation (WRFDA) system into the Advanced Research WRF (ARW) model over Iran and its neighboring area. A domain-specific background error, based on the x and y components of wind speed (UV) as control variables, is calculated for the WRFDA system, and some sensitivity experiments are carried out to compare the impact of the global background error and the domain-specific background errors on both the precipitation and 2-m temperature forecasts over Iran. Three precipitation events that occurred over the country during January, September and October 2014 are simulated in three different experiments, and the results for precipitation and 2-m temperature are verified against the verifying surface observations. Results show that using the domain-specific background error improves 2-m temperature and 24-h accumulated precipitation forecasts consistently, while the global background error may even degrade the forecasts compared to the experiments without data assimilation. The improvement in 2-m temperature is more evident during the first forecast hours and decreases significantly as the forecast length increases.

  7. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type-I errors. When corrected for this error, the method showed the least power of the four. This outcome was clearly the result of the…

  8. Quantifying Carbon Flux Estimation Errors

    NASA Astrophysics Data System (ADS)

    Wesloh, D.

    2017-12-01

    Atmospheric Bayesian inversions have been used to estimate surface carbon dioxide (CO2) fluxes from global to sub-continental scales using atmospheric mixing ratio measurements. These inversions use an atmospheric transport model, coupled to a set of fluxes, in order to simulate mixing ratios that can then be compared to the observations. The comparison is then used to update the fluxes to better match the observations in a manner consistent with the uncertainties prescribed for each. However, inversion studies disagree with each other at continental scales, prompting further investigations to examine the causes of these differences. Inter-comparison studies have shown that the errors resulting from atmospheric transport inaccuracies are comparable to those from the errors in the prior fluxes. However, less effort has gone into studying the origins of the errors induced by the transport than of those induced by the prior distribution. This study uses a mesoscale transport model to evaluate the effects of representation errors in the observations and of incorrect descriptions of the transport. To obtain realizations of these errors, we performed a set of Observing System Simulation Experiments (OSSEs), with the transport model used for the inversion operating at two resolutions, one typical of a global inversion and the other of a mesoscale inversion, and with various prior flux distributions. Transport error covariances are inferred from an ensemble of perturbed mesoscale simulations while flux error covariances are computed using prescribed distributions and magnitudes. We examine how these errors can be diagnosed in the inversion process using aircraft, ground-based, and satellite observations of meteorological variables and CO2.

  9. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
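
    The diagonal-averaging step can be sketched as follows for a uniform line array in a snapshot-starved scenario (illustrative sizes; the maximum-entropy extrapolation step of the paper is omitted).

```python
import numpy as np

# Toeplitz-constrained covariance by averaging the subdiagonals of a
# sample covariance from few snapshots on a uniform line array.
rng = np.random.default_rng(4)
M, L = 16, 8                          # sensors, snapshots (L < M: rank-deficient)
x = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
R = x @ x.conj().T / L                # sample covariance, rank <= L

# Average each diagonal, then rebuild a Hermitian Toeplitz matrix.
r = np.array([np.mean(np.diagonal(R, k)) for k in range(M)])
R_toep = np.empty((M, M), dtype=complex)
for i in range(M):
    for j in range(M):
        R_toep[i, j] = r[j - i] if j >= i else np.conj(r[i - j])
```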

  10. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
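
    A short numpy illustration of both points in the abstract: parameter SEs are the square roots of the covariance-matrix diagonal, and a propagated error can be read off directly by redefining the fit so the target quantity is itself a parameter (here, the fitted value at a point x0, obtained by shifting the origin; the data are simulated).

```python
import numpy as np

# SEs from the least-squares covariance matrix via np.polyfit(cov=True).
rng = np.random.default_rng(5)
x = np.linspace(0, 10, 25)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

coef, cov = np.polyfit(x, y, 1, cov=True)
se = np.sqrt(np.diag(cov))              # SEs of slope and intercept

# Error propagation "for free": shift the origin so the intercept of the
# refit IS the fitted value at x0, and its SE is the propagated error.
x0 = 5.0
coef0, cov0 = np.polyfit(x - x0, y, 1, cov=True)
se_y_at_x0 = np.sqrt(np.diag(cov0))[1]  # SE of the fitted value at x0
```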

  11. Refractive errors in students from Middle Eastern backgrounds living and undertaking schooling in Australia.

    PubMed

    Azizoglu, Serap; Junghans, Barbara M; Barutchu, Ayla; Crewther, Sheila G

    2011-01-01

      Environmental factors associated with schooling systems in various countries have been implicated in the rising prevalence of myopia, making the comparison of prevalence of refractive errors in migrant populations of interest. This study aims to determine the prevalence of refractive errors in children of Middle Eastern descent, raised and living in urban Australia but actively maintaining strong ties to their ethnic culture, and to compare them with those in the Middle East where myopia prevalence is generally low.   A total of 354 out of a possible 384 late primary/early secondary schoolchildren attending a private school attracting children of Middle Eastern background in Melbourne were assessed for refractive error and visual acuity. A Shin Nippon open-field NVision-K5001 autorefractor was used to carry out non-cycloplegic autorefraction while viewing a distant target. For statistical analyses students were divided into three age groups: 10-11 years (n = 93); 12-13 years (n = 158); and 14-15 years (n = 102).   All children were bilingual and classified as of Middle Eastern (96.3 per cent) or Egyptian (3.7 per cent) origin. Ages ranged from 10 to 15 years, with a mean of 13.17 ± 0.8 (SEM) years. Mean spherical equivalent refraction (SER) for the right eye was +0.09 ± 0.07 D (SEM) with a range from -7.77 D to +5.85 D. The prevalence of myopia, defined as a spherical equivalent refraction 0.50 D or more of myopia, was 14.7 per cent. The prevalence of hyperopia, defined as a spherical equivalent refraction of +0.75 D or greater, was 16.4 per cent, while hyperopia of +1.50 D or greater was 5.4 per cent. A significant difference in SER was seen as a function of age; however, no significant gender difference was seen.   This is the first study to report the prevalence of refractive errors for second-generation Australian schoolchildren coming from a predominantly Lebanese Middle Eastern Arabic background, who endeavour to maintain their ethnic ties. The

  12. Genetic variation and co-variation for fitness between intra-population and inter-population backgrounds in the red flour beetle, Tribolium castaneum

    PubMed Central

    Drury, Douglas W.; Wade, Michael J.

    2010-01-01

    Hybrids from crosses between populations of the flour beetle, Tribolium castaneum, express varying degrees of inviability and morphological abnormalities. The proportion of allopatric population hybrids exhibiting these negative hybrid phenotypes varies widely, from 3% to 100%, depending upon the pair of populations crossed. We crossed three populations and measured two fitness components, fertility and adult offspring numbers from successful crosses, to determine how genes segregating within populations interact in inter-population hybrids to cause the negative phenotypes. With data from crosses of 40 sires from each of three populations to groups of 5 dams from their own and two divergent populations, we estimated the genetic variance and covariance for breeding value of fitness between the intra- and inter-population backgrounds and the sire × dam-population interaction variance. The latter component of the variance in breeding values estimates the change in genic effects between backgrounds owing to epistasis. Interacting genes with a positive effect, prior to fixation, in the sympatric background but a negative effect in the hybrid background cause reproductive incompatibility in the Dobzhansky-Muller speciation model. Thus, the sire × dam-population interaction provides a way to measure the progress toward speciation of genetically differentiating populations on a trait by trait basis using inter-population hybrids. PMID:21044199

  13. Modeling uncertainty of evapotranspiration measurements from multiple eddy covariance towers over a crop canopy

    USDA-ARS?s Scientific Manuscript database

    All measurements have random error associated with them. With fluxes in an eddy covariance system, measurement error can be modelled in several ways, often involving a statistical description of turbulence at its core. Using a field experiment with four towers, we generated four replicates of meas...

  14. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  15. Covariant relativistic hydrodynamics of multispecies plasma and generalized Ohm's law

    NASA Astrophysics Data System (ADS)

    Gedalin, Michael

    1996-04-01

    Fully covariant hydrodynamical equations for a multispecies relativistic plasma in an external electromagnetic field are derived. The derived multifluid description takes into account binary Coulomb collisions, annihilation, and interaction with the photon background in terms of the invariant collision cross sections. A generalized Ohm's law is derived in a manifestly covariant form. Particular attention is devoted to the relativistic electron-positron plasma.

  16. Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.

    PubMed

    George, Brandon; Aban, Inmaculada

    2015-01-15

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
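
    A separable structure means the full covariance is the Kronecker product of a temporal and a spatial correlation matrix. A sketch with exponential spatial and AR(1) temporal correlation follows (the sites and parameter values are illustrative assumptions).

```python
import numpy as np

# Separable spatiotemporal covariance: sigma^2 * (temporal ⊗ spatial).
def exp_corr(dist, range_):
    return np.exp(-dist / range_)               # exponential spatial correlation

def ar1_corr(n_times, rho):
    t = np.arange(n_times)
    return rho ** np.abs(t[:, None] - t[None, :])  # AR(1) temporal correlation

sites = np.array([0.0, 1.0, 2.5])               # 1-D spatial locations
D = np.abs(sites[:, None] - sites[None, :])     # pairwise distances

R_space = exp_corr(D, range_=2.0)
R_time = ar1_corr(4, rho=0.6)

sigma2 = 1.5
V = sigma2 * np.kron(R_time, R_space)           # full (3*4) x (3*4) covariance
```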

  17. Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data

    PubMed Central

    George, Brandon; Aban, Inmaculada

    2014-01-01

    Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient’s body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361

  18. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  19. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  20. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-10-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal
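    The time-lag bias this record describes can be illustrated with a small synthetic simulation (all signals and parameters below are invented, not the authors' data): picking the lag that maximizes the cross-covariance can, by construction, never yield a smaller flux magnitude than using the prescribed lag, so on noisy data the max-search estimate is biased high.

```python
import numpy as np

def cross_cov(w, c, lag):
    """Sample covariance of w(t) with c(t + lag), lag in samples."""
    if lag > 0:
        w, c = w[:-lag], c[lag:]
    elif lag < 0:
        w, c = w[-lag:], c[:lag]
    return np.mean((w - w.mean()) * (c - c.mean()))

rng = np.random.default_rng(0)
n, true_lag = 20000, 5
w = rng.normal(size=n)                    # vertical wind (synthetic)
c = rng.normal(scale=2.0, size=n)         # analyser signal: mostly noise
c[true_lag:] += 0.3 * w[:-true_lag]       # true flux signal at a known lag

flux_fixed = cross_cov(w, c, true_lag)               # prescribed time lag
covs = [cross_cov(w, c, lag) for lag in range(-50, 51)]
flux_search = max(covs, key=abs)          # automated maximum search

# The searched maximum includes the prescribed-lag value among candidates,
# so its magnitude can only be equal or larger: a systematic positive bias.
assert abs(flux_search) >= abs(flux_fixed)
```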

  1. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-03-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal

  2. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  3. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  4. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filters work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only

  5. Inadequacy of internal covariance estimation for super-sample covariance

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Kunz, Martin

    2017-08-01

    We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been previously presented in the literature because it is effectively a rescaling of the covariance of the subsample area. However, we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts that allows the bias of each covariance estimator to be exactly predicted. We find significant effects for a small-scale area or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on data itself or a single simulation, leaving analytical prediction and simulations suites as possible SSC predictors.

  6. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  7. The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products.

    PubMed

    Bijma, Fetsje; de Munck, Jan C; Heethaar, Rob M

    2005-08-15

    The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KP is estimated such that it approximates the spatiotemporal sample covariance best in matrix norm. Contrary to the single KP, this extension allows for describing multiple, independent phenomena in the ongoing background activity. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum of KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the sample covariance. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal nonstationarities due to trial-to-trial variations are not clearly visible in the first two terms, and, hence, play only a minor role in the sample covariance matrix in terms of matrix power. Considering the dipole localization, the single KP model appears to describe around 80% of the noise and seems therefore adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
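    A best-in-matrix-norm sum of Kronecker products can be computed with the Van Loan-Pitsianis rearrangement, under which each Kronecker term becomes a rank-1 term, so a truncated SVD gives the optimal Frobenius-norm fit. This is a generic sketch of that technique (dimensions and matrices are illustrative, not MEG data, and the paper's own estimation procedure may differ in detail):

```python
import numpy as np

def rearrange(C, t, s):
    """Map C (ts x ts) to R (t^2 x s^2) so that kron(T, S) -> vec(T) vec(S)^T."""
    R = np.empty((t * t, s * s))
    for i in range(t):
        for j in range(t):
            R[i * t + j] = C[i * s:(i + 1) * s, j * s:(j + 1) * s].ravel()
    return R

def sum_of_kp(C, t, s, terms):
    """Best Frobenius-norm approximation of C by a sum of `terms` KPs."""
    U, sv, Vt = np.linalg.svd(rearrange(C, t, s), full_matrices=False)
    approx = np.zeros_like(C)
    for k in range(terms):
        T_k = (np.sqrt(sv[k]) * U[:, k]).reshape(t, t)
        S_k = (np.sqrt(sv[k]) * Vt[k]).reshape(s, s)
        approx += np.kron(T_k, S_k)
    return approx

# An exactly separable covariance is recovered by a single term.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); T = A @ A.T    # temporal factor (3 x 3)
B = rng.normal(size=(4, 4)); S = B @ B.T    # spatial factor (4 x 4)
C = np.kron(T, S)
assert np.allclose(sum_of_kp(C, 3, 4, terms=1), C)
```

    For a real sample covariance the leading singular values of the rearranged matrix indicate how much each Kronecker term explains, which is how a "first two terms describe 67-93%" figure can be read.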

  8. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
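    The data covariance such MLE methods must invert can be sketched with the standard white-plus-power-law construction via a fractional-integration filter (this follows the widely used formulation of Langbein [2004] and Hosking's recursion, not necessarily the new algorithm described above; parameter values are illustrative):

```python
import numpy as np

def powerlaw_filter(n_obs, spectral_index):
    # Fractional-integration impulse response (Hosking recursion):
    # h_0 = 1, h_k = h_{k-1} * (k - 1 + spectral_index / 2) / k
    h = np.ones(n_obs)
    for k in range(1, n_obs):
        h[k] = h[k - 1] * (k - 1 + spectral_index / 2.0) / k
    return h

def noise_covariance(n_obs, sigma_white, sigma_pl, spectral_index):
    # C = sigma_w^2 I + sigma_pl^2 F F^T, with F the lower-triangular
    # Toeplitz matrix built from the filter coefficients h.
    h = powerlaw_filter(n_obs, spectral_index)
    F = np.zeros((n_obs, n_obs))
    for k in range(n_obs):
        F += np.diag(np.full(n_obs - k, h[k]), -k)
    return sigma_white**2 * np.eye(n_obs) + sigma_pl**2 * F @ F.T

# spectral_index = 2 is random-walk noise: the variance grows with time,
# which is exactly why ignoring it biases velocity uncertainties low.
C = noise_covariance(200, sigma_white=1.0, sigma_pl=0.5, spectral_index=2.0)
assert C[-1, -1] > C[0, 0]
```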

  9. Bispectrum supersample covariance

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Moradinezhad Dizgah, Azadeh; Noreña, Jorge

    2018-02-01

    Modes with wavelengths larger than the survey window can have significant impact on the covariance within the survey window. The supersample covariance has been recognized as an important source of covariance for the power spectrum on small scales, and it can potentially be important for the bispectrum covariance as well. In this paper, using the response function formalism, we model the supersample covariance contributions to the bispectrum covariance and the cross-covariance between the power spectrum and the bispectrum. The supersample covariances due to the long-wavelength density and tidal perturbations are investigated, and the tidal contribution is a few orders of magnitude smaller than the density one because in configuration space the bispectrum estimator involves angular averaging and the tidal response function is anisotropic. The impact of the super-survey modes is quantified using numerical measurements with periodic box and sub-box setups. For the matter bispectrum, the ratio between the supersample covariance correction and the small-scale covariance—which can be computed using a periodic box—is roughly an order of magnitude smaller than that for the matter power spectrum. This is because for the bispectrum, the small-scale non-Gaussian covariance is significantly larger than that for the power spectrum. For the cross-covariance, the supersample covariance is as important as for the power spectrum covariance. The supersample covariance prediction with the halo model response function is in good agreement with numerical results.

  10. Real-time probabilistic covariance tracking with efficient model update.

    PubMed

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.

  11. OD Covariance in Conjunction Assessment: Introduction and Issues

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.; Duncan, M.

    2015-01-01

    The primary and secondary covariances are combined and projected into the conjunction plane (the plane perpendicular to the relative velocity vector at TCA). The primary object is placed on the x-axis at (miss distance, 0) and is represented by a circle whose radius equals the sum of the two spacecraft circumscribing radii. The z-axis is perpendicular to the x-axis in the conjunction plane. Pc is the portion of the combined error ellipsoid that falls within the hard-body radius circle.
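    This Pc construction can be illustrated numerically: integrate the combined zero-mean 2-D Gaussian over the hard-body circle centred at (miss distance, 0) in the conjunction plane. The covariance, miss distance, and hard-body radius below are invented values, and the brute-force grid quadrature stands in for the more careful integrators used operationally.

```python
import numpy as np

def collision_probability(miss, hbr, cov, n=401):
    """Pc: mass of the zero-mean 2-D Gaussian with conjunction-plane
    covariance `cov` inside the hard-body circle at (miss, 0)."""
    xs = np.linspace(miss - hbr, miss + hbr, n)
    zs = np.linspace(-hbr, hbr, n)
    X, Z = np.meshgrid(xs, zs)
    inside = (X - miss) ** 2 + Z ** 2 <= hbr ** 2
    Ci = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    pdf = norm * np.exp(-0.5 * (Ci[0, 0] * X**2
                                + 2.0 * Ci[0, 1] * X * Z
                                + Ci[1, 1] * Z**2))
    cell = (xs[1] - xs[0]) * (zs[1] - zs[0])
    return float(pdf[inside].sum() * cell)

cov = np.array([[200.0**2, 0.0], [0.0, 100.0**2]])  # combined covariance, m^2
pc_near = collision_probability(miss=50.0, hbr=20.0, cov=cov)
pc_far = collision_probability(miss=2000.0, hbr=20.0, cov=cov)
# A larger miss distance relative to the covariance gives a smaller Pc.
assert pc_near > pc_far > 0.0
```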

  12. Short-time windowed covariance: A metric for identifying non-stationary, event-related covariant cortical sites

    PubMed Central

    Blakely, Timothy; Ojemann, Jeffrey G.; Rao, Rajesh P.N.

    2014-01-01

    Background: Electrocorticography (ECoG) signals can provide high spatio-temporal resolution and high signal-to-noise ratio recordings of local neural activity from the surface of the brain. Previous studies have shown that broad-band, spatially focal, high-frequency increases in ECoG signals are highly correlated with movement and other cognitive tasks and can be volitionally modulated. However, significant additional information may be present in inter-electrode interactions, but adding additional higher order inter-electrode interactions can be impractical from a computational aspect, if not impossible. New method: In this paper we present a new method of calculating high frequency interactions between electrodes called Short-Time Windowed Covariance (STWC) that builds on mathematical techniques currently used in neural signal analysis, along with an implementation that accelerates the algorithm by orders of magnitude by leveraging commodity, off-the-shelf graphics processing unit (GPU) hardware. Results: Using the hardware-accelerated implementation of STWC, we identify many types of event-related inter-electrode interactions from human ECoG recordings on global and local scales that have not been identified by previous methods. Unique temporal patterns are observed for digit flexion in both low- (10 mm spacing) and high-resolution (3 mm spacing) electrode arrays. Comparison with existing methods: Covariance is a commonly used metric for identifying correlated signals, but the standard covariance calculations do not allow for temporally varying covariance. In contrast, STWC allows and identifies event-driven changes in covariance without identifying spurious noise correlations. Conclusions: STWC can be used to identify event-related neural interactions whose high computational load is well suited to GPU capabilities. PMID:24211499
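    The core STWC idea, a covariance recomputed in a short sliding window so that event-driven changes become visible, can be sketched in a few lines. This toy version uses synthetic signals and omits the GPU acceleration and ECoG specifics:

```python
import numpy as np

def stwc(x, y, win):
    """Short-time windowed covariance: sample covariance of x and y in a
    sliding window of `win` samples, one value per window position."""
    out = np.empty(len(x) - win + 1)
    for i in range(len(out)):
        xs, ys = x[i:i + win], y[i:i + win]
        out[i] = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    return out

rng = np.random.default_rng(2)
n, win = 2000, 200
x = rng.normal(size=n)                # "electrode 1"
y = rng.normal(size=n)                # "electrode 2"
y[1000:1300] += x[1000:1300]          # transient coupling: the "event"
c = stwc(x, y, win)

# The windowed covariance sits near zero outside the event and near
# var(x) ~ 1 inside it, so the event stands out in time.
assert c[1050] > abs(c[200])
```

    A global covariance over the full record would dilute the 300-sample event; the windowed estimate localizes it, which is the non-stationarity the title refers to.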

  13. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  14. GRAVSAT/GEOPAUSE covariance analysis including geopotential aliasing

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1975-01-01

    A conventional covariance analysis for the GRAVSAT/GEOPAUSE mission is described in which the uncertainties of approximately 200 parameters, including the geopotential coefficients to degree and order 12, are estimated over three different tracking intervals. The estimated orbital uncertainties for both GRAVSAT and GEOPAUSE reach levels more accurate than presently available. The adjusted measurement bias errors approach the mission goal. Survey errors in the low centimeter range are achieved after ten days of tracking. The ability of the mission to obtain accuracies of geopotential terms to (12, 12) one to two orders of magnitude superior to present accuracy levels is clearly shown. A unique feature of this report is that the aliasing structure of this (12, 12) field is examined. It is shown that uncertainties for unadjusted terms to (12, 12) still exert a degrading effect upon the adjusted error of an arbitrarily selected term of lower degree and order. Finally, the distribution of the aliasing from the unestimated uncertainty of a particular high degree and order geopotential term upon the errors of all remaining adjusted terms is listed in detail.

  15. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  16. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  17. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  18. Non-perturbative background field calculations

    NASA Astrophysics Data System (ADS)

    Stephens, C. R.

    1988-01-01

    New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation—perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation.

  19. Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors

    NASA Astrophysics Data System (ADS)

    Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.

    2007-12-01

    Deep ice cores extracted from Antarctica or Greenland recorded a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches are based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently to respect the full set of available age markers. We describe in this paper a new inverse method that takes into account the model uncertainty in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses on two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions on both. We highlight two major benefits brought by this new method: first, the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positive constraint on the searched correction functions, we assume lognormal probability distributions both for the background errors and for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and we assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. The confidence intervals, based on the posterior covariance matrix calculation, are estimated for the correction functions and, for the first time, for the overall output chronologies.

  20. The GEOS Ozone Data Assimilation System: Specification of Error Statistics

    NASA Technical Reports Server (NTRS)

    Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.

    2000-01-01

    A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. Total Ozone Mapping Spectrometer (TOMS) total ozone and Solar Backscatter Ultraviolet (SBUV, or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme and, in particular, of the forecast and observation error covariance models is given. A new global anisotropic horizontal forecast error correlation model accounts for the varying distribution of observations with latitude; correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance is modeled as proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation, and the error covariance models are validated using chi-squared statistics. The analyzed ozone fields for winter 1992 are validated against independent observations from ozonesondes and the Halogen Occultation Experiment (HALOE). Mean HALOE and analysis fields agree to better than 10% between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS-observed and forecast values is less than 4%, and the global RMS difference between SBUV-observed and analyzed ozone between 50 and 3 hPa is less than 15%.

  1. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
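
The covariance-forming and propagation steps described above can be sketched as follows. This is a hypothetical illustration with synthetic burnout-error samples and simple linear dynamics, not the actual STEP implementation; the dimensions and time step are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a covariance matrix from a small sample of observed burnout-state
# errors, then propagate it in time with a state transition matrix Phi:
# P(t) = Phi @ P0 @ Phi.T
errors = rng.normal(size=(50, 6))      # ~50 flights x 6 state errors (synthetic)
P0 = np.cov(errors, rowvar=False)      # sample covariance at stage burnout

dt = 10.0                              # propagation step in seconds (assumed)
Phi = np.eye(6)
Phi[:3, 3:] = dt * np.eye(3)           # position error grows as velocity error * dt

P = Phi @ P0 @ Phi.T                   # propagated covariance
assert np.allclose(P, P.T)             # a covariance stays symmetric under this map
```

The propagated `P` is the kind of object that a downstream routine would map into orbit-parameter accuracies (apogee, perigee, inclination).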

  2. Multilevel Multidimensional Item Response Model with a Multilevel Latent Covariate

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Bottge, Brian A.

    2015-01-01

    In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling for pre-test scores and other related covariates while estimating the intervention effect at post-test. In many applications in education, the total post-test and pre-test scores that ignore measurement error in the…

  3. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials and leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline-covariate-adjusted cluster-level analysis and linear mixed model analysis under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and the baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted and baseline-covariate-adjusted cluster-level analyses give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanism and there is no interaction between baseline covariate and intervention group. The linear mixed model and multiple imputation give unbiased estimates under all four scenarios considered, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials; we show that it gives unbiased estimates only when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885

  4. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials and leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline-covariate-adjusted cluster-level analysis and linear mixed model analysis under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and the baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted and baseline-covariate-adjusted cluster-level analyses give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanism and there is no interaction between baseline covariate and intervention group. The linear mixed model and multiple imputation give unbiased estimates under all four scenarios considered, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials; we show that it gives unbiased estimates only when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.

  5. Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Ghrist, Richard W.; Plakalovic, Dragan

    2012-01-01

    An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. The limitation arising from representing the error volume in a Cartesian reference frame is corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples drawn from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
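
The Monte Carlo approach to Pc can be sketched as follows: relative positions at TCA are sampled from the combined position covariance and the fraction falling inside the combined hard-body radius is counted. The miss distance, covariance, and hard-body radius below are assumed synthetic numbers, not the events analyzed in the paper, and the sketch samples in Cartesian rather than equinoctial elements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic conjunction geometry (all values assumed for illustration).
mean_miss = np.array([50.0, 0.0, 0.0])    # mean relative position at TCA, m
P = np.diag([40.0**2, 20.0**2, 20.0**2])  # combined position covariance, m^2
hbr = 20.0                                # combined hard-body radius, m

# Sample relative positions and count those inside the hard-body sphere.
samples = rng.multivariate_normal(mean_miss, P, size=200_000)
pc = np.mean(np.linalg.norm(samples, axis=1) < hbr)
print(f"Monte Carlo Pc ~ {pc:.4f}")
```

Sampling in equinoctial elements, as the paper does, additionally captures the curvature of the orbital dynamics that a Cartesian Gaussian misses.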

  6. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, with large eigenvalues biased upward and small eigenvalues biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show that using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
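
The eigenvalue overdispersion at the heart of this record is easy to reproduce: even when the true covariance is the identity, so that every true eigenvalue equals 1, the eigenvalues of a sample covariance matrix spread well away from 1. A minimal sketch with assumed dimensions (20 traits, 100 observations), not the paper's genetic models:

```python
import numpy as np

rng = np.random.default_rng(3)

# True covariance is the identity: all population eigenvalues equal 1.
p, n = 20, 100                        # traits x observations (assumed)
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)           # sample covariance matrix
eig = np.linalg.eigvalsh(S)

# Sampling error alone pushes the largest eigenvalue well above 1 and the
# smallest well below 1 (the overdispersion described in the abstract).
print(eig.min(), eig.max())
```

The Tracy-Widom distribution describes the fluctuations of the largest of these eigenvalues after appropriate centering and scaling.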

  7. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or consider only limit-of-detection censoring. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model, using the non-parametric estimate of the covariate distribution or the semi-parametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations and compare its operating characteristics to those of the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
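
Once the M imputed datasets are analyzed, the per-imputation estimates are combined with Rubin's rules, which is the standard pooling step for any multiple imputation procedure like the one above. A minimal sketch with synthetic numbers; the estimates and variances below are illustrative, not results from the Alzheimer's study:

```python
import numpy as np

# Per-imputation point estimates q and within-imputation variances u
# (synthetic numbers for M = 5 imputations).
q = np.array([0.52, 0.48, 0.55, 0.50, 0.47])
u = np.array([0.04, 0.05, 0.04, 0.05, 0.04])

M = len(q)
q_bar = q.mean()                      # pooled point estimate
u_bar = u.mean()                      # average within-imputation variance
b = q.var(ddof=1)                     # between-imputation variance
t = u_bar + (1 + 1 / M) * b           # total variance (Rubin's rules)

print(q_bar, np.sqrt(t))              # pooled estimate and its standard error
```

The between-imputation term `b` is what carries the extra uncertainty due to the censored covariate being imputed rather than observed.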

  8. Coincidence and covariance data acquisition in photoelectron and -ion spectroscopy. II. Analysis and applications

    NASA Astrophysics Data System (ADS)

    Mikosch, Jochen; Patchkovskii, Serguei

    2013-10-01

    We use an analytical theory of noisy Poisson processes, developed in the preceding companion publication, to compare coincidence and covariance measurement approaches in photoelectron and -ion spectroscopy. For non-unit detection efficiencies, coincidence data acquisition (DAQ) suffers from false coincidences. The rate of false coincidences grows quadratically with the rate of elementary ionization events. To minimize false coincidences for rare event outcomes, very low event rates may hence be required. Coincidence measurements exhibit high tolerance to noise introduced by unstable experimental conditions. Covariance DAQ on the other hand is free of systematic errors as long as stable experimental conditions are maintained. In the presence of noise, all channels in a covariance measurement become correlated. Under favourable conditions, covariance DAQ may allow orders of magnitude reduction in measurement times. Finally, we use experimental data for strong-field ionization of 1,3-butadiene to illustrate how fluctuations in experimental conditions can contaminate a covariance measurement, and how such contamination can be detected.
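
The quadratic growth of false coincidences follows directly from Poisson statistics: the probability of two or more independent ionization events in one cycle is 1 - e^(-mu)(1 + mu), which is approximately mu^2/2 for small mean event rate mu. A quick numerical check with assumed rates (not the paper's butadiene data):

```python
import numpy as np

# Probability of >= 2 events per cycle for a Poisson process with mean mu.
# For small mu this behaves like mu**2 / 2: quadratic in the event rate.
for mu in (0.01, 0.1, 0.5):
    p_false = 1.0 - np.exp(-mu) * (1.0 + mu)
    print(mu, p_false, mu ** 2 / 2)
```

This is why suppressing false coincidences for rare channels forces very low event rates, and hence long acquisition times, in coincidence mode.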

  9. Adaptive framework to better characterize errors of apriori fluxes and observational residuals in a Bayesian setup for the urban flux inversions.

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.

    2017-12-01

    The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore/Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric GHG observations coupled to an atmospheric transport and dispersion model. The magnitude of the update depends on the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model; these errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of these matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We

  10. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study.

    PubMed

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular, but neglecting the existence of other concomitant covariates in the models leads to a loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of the four new approaches, namely zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), in terms of estimation bias, mean square error (MSE), and 95% coverage probability of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches across all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach, and the results suggest the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.

  11. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor is described which calculates the auto-covariance and cross-covariance functions for a turbulent flow field from Poisson-sampled measurements in time made by a laser velocimeter. The device processes a block of data up to 4096 points in length and returns a 512-point covariance function with 48-bit resolution, along with a 512-point histogram of the interarrival times that is used to normalize the covariance function. The device is designed to interface with, and be controlled by, a minicomputer, from which the data are received and to which the results are returned. A typical 4096-point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
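
The processor's normalization of lag bins by an interarrival-time histogram corresponds to what is now commonly called slotted auto-covariance for randomly sampled data: products of sample pairs are accumulated into lag "slots" and divided by the pair count per slot. A hypothetical software sketch on a synthetic Poisson-sampled signal; the slot width and slot count are assumed, not the hardware's 512-point configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Poisson-sampled signal: a 5 Hz sine observed at random times.
t = np.cumsum(rng.exponential(0.01, size=2000))   # mean rate 100 samples/s
u = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size)
u = u - u.mean()

dt_slot, n_slots = 0.005, 40          # slot width (s) and number of slots (assumed)
acov = np.zeros(n_slots)
count = np.zeros(n_slots, dtype=int)
for i in range(t.size):
    lags = t[i:] - t[i]               # nonnegative lags from sample i
    idx = (lags / dt_slot).astype(int)
    ok = idx < n_slots
    np.add.at(acov, idx[ok], u[i] * u[i:][ok])    # accumulate lag products
    np.add.at(count, idx[ok], 1)                  # pair count per slot
acov /= np.maximum(count, 1)          # normalize by pair counts, as the histogram does

print(acov[0])                        # lag-0 slot is close to the signal variance
```

Dividing by the pair-count histogram removes the bias that the nonuniform Poisson sampling would otherwise introduce into the covariance estimate.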

  12. Covariant electrodynamics in linear media: Optical metric

    NASA Astrophysics Data System (ADS)

    Thompson, Robert T.

    2018-03-01

    While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.

  13. An improved error assessment for the GEM-T1 gravitational model

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.

    1988-01-01

    Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.

  14. Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Anderson, D. J.; Amen, J. L.

    2007-12-01

    received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on theory of the method (including state of methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non- traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.
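
At its core, the eddy covariance method computes the flux as the time-averaged covariance of vertical wind and scalar fluctuations over an averaging period, F = mean(w'c'). A minimal sketch on synthetic data; the sampling rate, noise levels, and the coupling between w and c are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 30-minute record at 20 Hz (a typical averaging period / rate).
n = 36000
w = rng.normal(0.0, 0.3, size=n)                  # vertical wind, m/s
c = 400.0 + 5.0 * rng.normal(size=n) + 2.0 * w    # scalar correlated with w

# Reynolds decomposition: subtract the period means, then average the product.
w_p = w - w.mean()
c_p = c - c.mean()
flux = np.mean(w_p * c_p)                         # eddy covariance flux

print(flux)
```

Real workflows add the corrections the abstract alludes to (coordinate rotation, despiking, density corrections, spectral corrections) before and after this covariance step.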

  15. DISSCO: direct imputation of summary statistics allowing covariates

    PubMed Central

    Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun

    2015-01-01

    Background: Imputation of individual-level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual-level genotype data are not available. Two methods (DIST and ImpG-Summary/LD) that assume a multivariate Gaussian distribution for the association summary statistics have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. Methods: We analytically show that in the absence of covariates, the correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus providing a theoretical justification for the recently proposed methods. We further prove that in the presence of covariates, the correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). Results: We consider two real-life scenarios where the correlation and partial correlation likely make a practical difference: (i) association studies in admixed populations; (ii) association studies in the presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9–15.2% for variants with minor allele frequency <5%. Availability and implementation: http://www.unc.edu/∼yunmli/DISSCO. Contact: yunli
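
The distinction the abstract draws, marginal correlation of genotypes versus their partial correlation controlling for a covariate, can be demonstrated on synthetic data. The confounder structure below (two variables both loading on a shared "ancestry" component) is assumed for illustration and is not the DISSCO implementation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two variables that both load on a shared confounder z (e.g. an ancestry
# component): they are marginally correlated, but conditionally independent.
n = 50_000
z = rng.normal(size=n)
g1 = z + rng.normal(size=n)
g2 = z + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    # Residualize a and b on c with a linear fit, then correlate the residuals.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(ra, rb)

print(corr(g1, g2), partial_corr(g1, g2, z))   # marginal ~0.5 vs partial ~0
```

This gap between the two quantities is exactly why correlation-based summary-statistic imputation can go wrong when confounding covariates are present.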

  16. A stochastic multiple imputation algorithm for missing covariate data in tree-structured survival analysis.

    PubMed

    Wallace, Meredith L; Anderson, Stewart J; Mazumdar, Sati

    2010-12-20

    Missing covariate data present a challenge to tree-structured methodology due to the fact that a single tree model, as opposed to an estimated parameter value, may be desired for use in a clinical setting. To address this problem, we suggest a multiple imputation algorithm that adds draws of stochastic error to a tree-based single imputation method presented by Conversano and Siciliano (Technical Report, University of Naples, 2003). Unlike previously proposed techniques for accommodating missing covariate data in tree-structured analyses, our methodology allows the modeling of complex and nonlinear covariate structures while still resulting in a single tree model. We perform a simulation study to evaluate our stochastic multiple imputation algorithm when covariate data are missing at random and compare it to other currently used methods. Our algorithm is advantageous for identifying the true underlying covariate structure when complex data and larger percentages of missing covariate observations are present. It is competitive with other current methods with respect to prediction accuracy. To illustrate our algorithm, we create a tree-structured survival model for predicting time to treatment response in older, depressed adults. Copyright © 2010 John Wiley & Sons, Ltd.

  17. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population-average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  18. Functional mapping of reaction norms to multiple environmental signals through nonparametric covariance estimation

    PubMed Central

    2011-01-01

    Background The identification of genes or quantitative trait loci that are expressed in response to different environmental factors such as temperature and light, through functional mapping, critically relies on precise modeling of the covariance structure. Previous work used separable parametric covariance structures, such as a Kronecker product of autoregressive one [AR(1)] matrices, that do not account for interaction effects of different environmental factors. Results We implement a more robust nonparametric covariance estimator to model these interactions within the framework of functional mapping of reaction norms to two signals. Our results from Monte Carlo simulations show that this estimator can be useful in modeling interactions that exist between two environmental signals. The interactions are simulated using nonseparable covariance models with spatio-temporal structural forms that mimic interaction effects. Conclusions The nonparametric covariance estimator has an advantage over separable parametric covariance estimators in the detection of QTL location, thus extending the breadth of use of functional mapping in practical settings. PMID:21269481
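
The separable baseline mentioned in the Background, a Kronecker product of AR(1) matrices (one per environmental signal), can be sketched directly; by construction it contains no signal-by-signal interaction terms, which is what motivates the nonparametric estimator. The dimensions and correlation parameters below are assumed for illustration:

```python
import numpy as np

def ar1(n, rho):
    # AR(1) correlation matrix: entry (i, j) equals rho^|i - j|.
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Separable covariance over two environmental signals (e.g. 4 temperature
# levels x 3 light levels): a Kronecker product of the two AR(1) matrices.
K = np.kron(ar1(4, 0.7), ar1(3, 0.5))   # 12 x 12 covariance

assert K.shape == (12, 12)
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > 0  # positive definite
```

Every entry of `K` factors as a product of a temperature-lag term and a light-lag term, so no interaction between the two signals can be represented, whereas a nonseparable (nonparametric) estimate is free of that restriction.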

  19. Signs of depth-luminance covariance in 3-D cluttered scenes.

    PubMed

    Scaccia, Milena; Langer, Michael S

    2018-03-01

    In three-dimensional (3-D) cluttered scenes such as foliage, deeper surfaces often are more shadowed and hence darker, and so depth and luminance often have negative covariance. We examined whether the sign of depth-luminance covariance plays a role in depth perception in 3-D clutter. We compared scenes rendered with negative and positive depth-luminance covariance where positive covariance means that deeper surfaces are brighter and negative covariance means deeper surfaces are darker. For each scene, the sign of the depth-luminance covariance was given by occlusion cues. We tested whether subjects could use this sign information to judge the depth order of two target surfaces embedded in 3-D clutter. The clutter consisted of distractor surfaces that were randomly distributed in a 3-D volume. We tested three independent variables: the sign of the depth-luminance covariance, the colors of the targets and distractors, and the background luminance. An analysis of variance showed two main effects: Subjects performed better when the deeper surfaces were darker and when the color of the target surfaces was the same as the color of the distractors. There was also a strong interaction: Subjects performed better under a negative depth-luminance covariance condition when targets and distractors had different colors than when they had the same color. Our results are consistent with a "dark means deep" rule, but the use of this rule depends on the similarity between the color of the targets and color of the 3-D clutter.

  20. Parameter constraints from weak-lensing tomography of galaxy shapes and cosmic microwave background fluctuations

    NASA Astrophysics Data System (ADS)

    Merkel, Philipp M.; Schäfer, Björn Malte

    2017-08-01

    Recently, it has been shown that cross-correlating cosmic microwave background (CMB) lensing and three-dimensional (3D) cosmic shear allows cosmological parameter constraints to be tightened considerably. We investigate whether similar improvement can be achieved in a conventional tomographic setup. We present Fisher parameter forecasts for a Euclid-like galaxy survey in combination with different ongoing and forthcoming CMB experiments. In contrast to a fully 3D analysis, we find only marginal improvement. Assuming Planck-like CMB data, we show that including the full covariance of the combined CMB and cosmic shear data improves the dark energy figure of merit (FOM) by only 3 per cent. The marginalized error on the sum of neutrino masses is reduced at the same level. For a next generation CMB satellite mission such as Prism, the predicted improvement of the dark energy FOM amounts to approximately 25 per cent. Furthermore, we show that the small improvement is contrasted by an increased bias in the dark energy parameters when the intrinsic alignment of galaxies is not correctly accounted for in the full covariance matrix.

  1. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented for satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is done using linearized Lagrangian perturbation theory. Secular and second-order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.
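
The covariance propagation step mentioned in this abstract has a simple generic form: when the estimated quantities depend (to first order) linearly on the potential coefficients, their covariance follows from the coefficient covariance by a congruence transform. A minimal sketch, in which the design matrix `A` is a hypothetical stand-in for the actual linearized altimetry relations:

```python
import numpy as np

def propagate_covariance(A, P):
    """Linear covariance propagation: if y = A @ x and Cov(x) = P,
    then Cov(y) = A @ P @ A.T."""
    return A @ P @ A.T

# toy example: two derived quantities from two potential coefficients
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
P = np.diag([1.0, 4.0])        # coefficient error covariance
P_y = propagate_covariance(A, P)
```

The diagonal of the result gives the error variances of the derived quantities (e.g. radial distances or geoid undulations) implied by the full coefficient covariance.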

  2. Super-sample covariance approximations and partial sky coverage

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Lima, Marcos; Aguena, Michel

    2018-04-01

    Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% level, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e., redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and found that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.

  3. Phenotypic covariance at species’ borders

    PubMed Central

    2013-01-01

    Background Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species’ borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Results Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres, than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Conclusions Neither the degree of morphological integration, nor ranks of P, indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species’ borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future. PMID:23714580

  4. A class of covariate-dependent spatiotemporal covariance functions

    PubMed Central

    Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.

    2014-01-01

    In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and methods to assess their dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199

  5. An algorithm for propagating the square-root covariance matrix in triangular form

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1976-01-01

    A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
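
The propagation step described above can be sketched with a standard QR-based square-root scheme that keeps the factor lower triangular. This is a generic modern formulation under the paper's problem setup, not necessarily the specific algorithm of Tapley and Choe:

```python
import numpy as np

def propagate_sqrt_cov(S, Phi, Q_sqrt):
    """Propagate a lower-triangular covariance square root S (with P = S @ S.T)
    through the state transition Phi, adding process noise with square root
    Q_sqrt, and return a new lower-triangular factor."""
    A = np.hstack([Phi @ S, Q_sqrt])   # A @ A.T equals the predicted covariance
    R = np.linalg.qr(A.T, mode='r')    # economy QR: R is n x n upper-triangular
    S_new = R.T                        # lower-triangular square root
    signs = np.sign(np.diag(S_new))
    signs[signs == 0] = 1.0
    return S_new * signs               # fix column signs for a unique factor
```

Propagating the factor instead of the full matrix preserves symmetry and positive semi-definiteness by construction, which is the numerical advantage square-root filters are used for.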

  6. Evaluation of subset matching methods and forms of covariate balance.

    PubMed

    de Los Angeles Resa, María; Zubizarreta, José R

    2016-11-30

    This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong balance requirements are imposed, specifically fine balance or strength-k balance plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference-in-means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.
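
The 0.1-standard-deviation balance rule the abstract refers to is straightforward to compute. A minimal sketch, assuming the common pooled-standard-deviation convention for the denominator:

```python
import numpy as np

def standardized_mean_differences(X_treat, X_control):
    """Absolute difference in covariate means between groups, in units of
    the pooled standard deviation (a common balance diagnostic)."""
    mean_t = X_treat.mean(axis=0)
    mean_c = X_control.mean(axis=0)
    # pooled SD across the two groups
    sd = np.sqrt((X_treat.var(axis=0, ddof=1) + X_control.var(axis=0, ddof=1)) / 2)
    return np.abs(mean_t - mean_c) / sd

def is_balanced(X_treat, X_control, threshold=0.1):
    """The conventional rule discussed in the abstract: every covariate's
    standardized mean difference must fall below 0.1 SD."""
    return bool(np.all(standardized_mean_differences(X_treat, X_control) < threshold))
```

The paper's point is that passing this check is necessary but not sufficient: stronger forms of balance (fine balance, balance of low-dimensional joints) may still be violated.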

  7. New method for propagating the square root covariance matrix in triangular form. [using Kalman-Bucy filter

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.

  8. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
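
The pipeline in this abstract (regularize a rank-deficient covariance so it can be inverted, then read partial correlations off the precision matrix) can be sketched as follows. Note the fixed shrinkage weight `alpha` here is a simplified stand-in for the analytic Ledoit-Wolf intensity the paper uses:

```python
import numpy as np

def shrunk_partial_correlation(ts, alpha=0.1):
    """Partial correlations between regions from a regularized covariance.
    ts: (timepoints x regions) time series. alpha is a fixed shrinkage
    weight toward a scaled identity target."""
    emp = np.cov(ts, rowvar=False)           # possibly rank-deficient
    p = emp.shape[0]
    target = np.trace(emp) / p * np.eye(p)   # scaled identity target
    cov = (1 - alpha) * emp + alpha * target # shrunk covariance: invertible
    prec = np.linalg.inv(cov)                # precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)           # standardize, negate off-diagonal
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

Even when there are fewer timepoints than regions (the rank-deficient case the abstract describes), the shrunk covariance is positive definite, so the inversion succeeds.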

  10. Survival analysis with functional covariates for partial follow-up studies.

    PubMed

    Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming

    2016-12-01

    Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients whom the treatment may benefit the most. Although various time-dependent covariate models are available, such models require that covariates be followed over the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. This paper is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. The absolute lymphocyte cell counts were collected serially during hospitalization. Patients who are alive after hospitalization continue to be followed up, but their absolute lymphocyte cell counts can no longer be measured. Another complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional method using the Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on each single observation obviously underutilizes available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem in which serial multiple biomarkers are available up to a certain time point, which is shorter than the total length of follow-up. We therefore propose a solution for the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of those functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors. 
Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, instead of

  11. Covariate-free and Covariate-dependent Reliability.

    PubMed

    Bentler, Peter M

    2016-12-01

    Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.

  12. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Summary Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928

  13. Noisy covariance matrices and portfolio optimization II

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2003-03-01

    Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r = n/T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. 
In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
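
The abstract's central quantity, the ratio r = n/T, is easy to illustrate numerically: for the same record length T, a larger portfolio suffers a proportionally noisier covariance estimate. A toy sketch with a true covariance equal to the identity:

```python
import numpy as np

def sample_cov_error(n, T, rng):
    """Relative Frobenius error of the sample covariance of n i.i.d.
    standard-normal assets estimated from T observations (true cov = I).
    The error level is governed by r = n/T, roughly sqrt(r)."""
    X = rng.normal(size=(T, n))
    C = np.cov(X, rowvar=False)
    I = np.eye(n)
    return np.linalg.norm(C - I) / np.linalg.norm(I)

rng = np.random.default_rng(42)
err_small_r = sample_cov_error(60, 300, rng)   # r = 0.2
err_large_r = sample_cov_error(180, 300, rng)  # r = 0.6
```

With r = 0.6 the relative error is markedly larger than with r = 0.2, consistent with the paper's conclusion that the usable portfolio size is bounded by the available record length.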

  14. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests, against a 60% minimum passing threshold, a quite satisfactory and useful result.
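
The core test described here, comparing the empirical distribution of squared Mahalanobis distances against the 3-DoF chi-squared parent, can be sketched with only the standard library and NumPy. The Kolmogorov-Smirnov distance below is a generic GOF statistic, used here as a stand-in for the paper's ECDF-based assessment:

```python
import math
import numpy as np

def chi2_3dof_cdf(x):
    """Closed-form CDF of the chi-squared distribution with 3 degrees of
    freedom, the hypothesized parent of squared Mahalanobis distances of a
    3-D position error."""
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def mahalanobis_sq(errors, cov):
    """Squared Mahalanobis distances of observed errors under covariance cov."""
    inv = np.linalg.inv(cov)
    return np.einsum('ij,jk,ik->i', errors, inv, errors)

def ks_statistic_vs_chi2_3(d2):
    """Kolmogorov-Smirnov distance between the ECDF of the squared
    Mahalanobis distances and the 3-DoF chi-squared CDF."""
    d2 = np.sort(d2)
    n = len(d2)
    cdf = np.array([chi2_3dof_cdf(x) for x in d2])
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))
```

An undersized covariance inflates the Mahalanobis distances, so the KS statistic grows sharply; in the paper's scheme, process noise is added until such a statistic passes its significance threshold.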

  15. Consistent compactification of double field theory on non-geometric flux backgrounds

    NASA Astrophysics Data System (ADS)

    Hassler, Falk; Lüst, Dieter

    2014-05-01

    In this paper, we construct non-trivial solutions to the 2D-dimensional field equations of Double Field Theory (DFT) by using a consistent Scherk-Schwarz ansatz. The ansatz identifies 2(D - d) internal directions with a twist U^M_N which is directly connected to the covariant fluxes F_ABC. It exhibits 2(D - d) linearly independent generalized Killing vectors K_I^J and gives rise to a gauged supergravity in d dimensions. We analyze the covariant fluxes and the corresponding gauged supergravity with a Minkowski vacuum. We calculate fluctuations around such vacua and show how they give rise to massive scalar fields and vector fields with a non-abelian gauge algebra. Because DFT is a background-independent theory, these fields should directly correspond to the string excitations in the corresponding background. For (D - d) = 3 we perform a complete scan of all allowed covariant fluxes and find two different kinds of backgrounds: the single and the double elliptic case. The latter is not T-dual to a geometric background and cannot be transformed to a geometric setting by a field redefinition either. While this background fulfills the strong constraint, it is still consistent with the Killing vectors depending on the coordinates and the winding coordinates, thereby giving a non-geometric patching. This background can therefore not be described in Supergravity or Generalized Geometry.

  16. Using indirect covariance spectra to identify artifact responses in unsymmetrical indirect covariance calculated spectra.

    PubMed

    Martin, Gary E; Hilton, Bruce D; Blinov, Kirill A; Williams, Antony J

    2008-02-01

    Several groups of authors have reported studies in the areas of indirect and unsymmetrical indirect covariance NMR processing methods. Efforts have recently focused on the use of unsymmetrical indirect covariance processing methods to combine various discrete two-dimensional NMR spectra to afford the equivalent of the much less sensitive hyphenated 2D NMR experiments, for example indirect covariance (icv)-heteronuclear single quantum coherence (HSQC)-COSY and icv-HSQC-nuclear Overhauser effect spectroscopy (NOESY). Alternatively, unsymmetrical indirect covariance processing methods can be used to combine multiple heteronuclear 2D spectra to afford icv-13C-15N HSQC-HMBC correlation spectra. We now report the use of responses contained in indirect covariance processed HSQC spectra as a means for the identification of artifacts in both indirect covariance and unsymmetrical indirect covariance processed 2D NMR spectra. Copyright (c) 2007 John Wiley & Sons, Ltd.
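
At its core, unsymmetrical indirect covariance processing combines two 2D spectra that share a direct dimension via a matrix product over that dimension. A toy sketch (real spectra additionally require thresholding and normalization, which are omitted here):

```python
import numpy as np

def unsymmetrical_indirect_covariance(spec_a, spec_b):
    """Combine two 2D spectra (rows: indirect dimension, columns: shared
    direct dimension) into a spectrum correlating the two indirect
    dimensions, e.g. a 13C x 15N map from 13C-HSQC and 15N-HMBC data."""
    return spec_a @ spec_b.T

# toy spectra: one peak each, sharing direct-dimension position 5
hsqc = np.zeros((4, 8)); hsqc[2, 5] = 1.0   # indirect index 2
hmbc = np.zeros((6, 8)); hmbc[3, 5] = 1.0   # indirect index 3
combined = unsymmetrical_indirect_covariance(hsqc, hmbc)  # peak at (2, 3)
```

Because any two peaks that overlap in the direct dimension are combined, accidental overlaps produce the artifact responses the abstract describes; that is what the authors' cross-check against indirect covariance HSQC spectra is designed to flag.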

  17. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
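
The selection rule in this abstract, take the first singular value that "approaches zero" in the singular value plot, needs an operational definition of "approaches zero"; the relative threshold below is a hypothetical choice for illustration, not the paper's criterion:

```python
import numpy as np

def pick_regularization(G, threshold_ratio=1e-3):
    """Choose a regularization parameter from the singular-value plot of the
    linearized system G: scanning from large to small, return the first
    singular value that falls below threshold_ratio times the largest one
    (falling back to the smallest singular value if none does)."""
    s = np.linalg.svd(G, compute_uv=False)   # sorted in decreasing order
    small = s[s < threshold_ratio * s[0]]
    return small[0] if small.size else s[-1]
```

Damping at this level suppresses the noise-dominated directions (large model covariance) while retaining the well-resolved ones, which is the resolution/covariance trade-off the paper formalizes.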

  18. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output error and frequency-domain equation error methods to demonstrate the effectiveness of the approach.

  19. Association between split selection instability and predictive error in survival trees.

    PubMed

    Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T

    2006-01-01

    To evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.

  20. Resampling-based Methods in Single and Multiple Testing for Equality of Covariance/Correlation Matrices

    PubMed Central

    Yang, Yang; DeGruttola, Victor

    2016-01-01

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
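
The standardization step this abstract builds on, centering each group by its sample mean and whitening by its own sample covariance, can be sketched directly; the robust-moment variant the paper proposes would swap in robust estimates for the mean and covariance:

```python
import numpy as np

def standardized_residuals(groups):
    """Center each group by its sample mean and whiten by the inverse
    Cholesky factor of its sample covariance. The resulting residuals share
    identity second moments regardless of the group covariances, which is
    what makes them resampleable under the homogeneity null."""
    out = []
    for X in groups:
        R = X - X.mean(axis=0)
        L = np.linalg.cholesky(np.cov(X, rowvar=False))
        out.append(R @ np.linalg.inv(L).T)
    return out
```

By contrast, plain (unstandardized) residuals keep each group's own covariance, so pooling them across groups is invalid when the null is false, the problem the abstract identifies for multiple testing.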

  2. Predicting the geographic distribution of a species from presence-only data subject to detection errors

    USGS Publications Warehouse

    Dorazio, Robert M.

    2012-01-01

    Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.

  3. Covariant effective action for a Galilean invariant quantum Hall system

    NASA Astrophysics Data System (ADS)

    Geracie, Michael; Prabhu, Kartik; Roberts, Matthew M.

    2016-09-01

    We construct effective field theories for gapped quantum Hall systems coupled to background geometries with local Galilean invariance, i.e., Bargmann spacetimes. Along with an electromagnetic field, these backgrounds include the effects of curved Galilean spacetimes, including torsion and a gravitational field, allowing us to study charge, energy, stress and mass currents within a unified framework. A shift symmetry specific to single-constituent theories constrains the effective action to couple to an effective background gauge field and spin connection that is solved for by a self-consistent equation, providing a manifestly covariant extension of Hoyos and Son's improvement terms to arbitrary order in m.

  4. Covariant effective action for a Galilean invariant quantum Hall system

    DOE PAGES

    Geracie, Michael; Prabhu, Kartik; Roberts, Matthew M.

    2016-09-16

    Here, we construct effective field theories for gapped quantum Hall systems coupled to background geometries with local Galilean invariance, i.e., Bargmann spacetimes. Along with an electromagnetic field, these backgrounds include the effects of curved Galilean spacetimes, including torsion and a gravitational field, allowing us to study charge, energy, stress and mass currents within a unified framework. A shift symmetry specific to single-constituent theories constrains the effective action to couple to an effective background gauge field and spin connection that is solved for by a self-consistent equation, providing a manifestly covariant extension of Hoyos and Son's improvement terms to arbitrary order in m.

  5. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With the heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to the transfer resistance. The new model fits the observed measurement errors better and yields superior inversion and uncertainty estimates in synthetic examples. It is robust because it groups errors based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data-weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
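
    The linear error model mentioned above can be illustrated with synthetic normal-reciprocal measurements. All numbers here are hypothetical, and the sketch omits the paper's additional grouping by electrode number:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic transfer resistances and a linear error model std = a + b*|R|
# (illustrative values only, not from the surveys in the paper)
a_true, b_true = 0.01, 0.02
R = rng.lognormal(mean=0.0, sigma=1.0, size=2000)
noise_std = a_true + b_true * R
R_normal = R + rng.normal(0.0, noise_std)   # normal measurement
R_recip = R + rng.normal(0.0, noise_std)    # reciprocal measurement

# bin the normal-reciprocal differences by mean resistance; each difference
# has standard deviation sqrt(2) * (a + b*R)
d = R_normal - R_recip
R_mean = 0.5 * (R_normal + R_recip)
edges = np.quantile(R_mean, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(R_mean, edges) - 1, 0, 9)
bin_std = np.array([d[idx == i].std() for i in range(10)])
bin_mid = np.array([R_mean[idx == i].mean() for i in range(10)])
b_fit, a_fit = np.polyfit(bin_mid, bin_std, 1)
print(a_fit / np.sqrt(2), b_fit / np.sqrt(2))  # roughly recovers a_true, b_true
```

The fitted intercept and slope (after dividing out the sqrt(2) from differencing two noisy measurements) feed directly into a diagonal data-weighting matrix or a data covariance matrix.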

  6. Differential Age-Related Changes in Structural Covariance Networks of Human Anterior and Posterior Hippocampus.

    PubMed

    Li, Xinwei; Li, Qiongling; Wang, Xuetong; Li, Deyu; Li, Shuyu

    2018-01-01

    The hippocampus plays an important role in memory function, relying on information interaction between distributed brain areas. The hippocampus can be divided into anterior and posterior sections, with different structure and function along its long axis. The aim of this study is to investigate the effects of normal aging on the structural covariance of the anterior hippocampus (aHPC) and the posterior hippocampus (pHPC). In this study, 240 healthy subjects aged 18-89 years were selected and subdivided into young (18-23 years), middle-aged (30-58 years), and older (61-89 years) groups. The aHPC and pHPC were delineated based on the location of the uncal apex in the MNI space. Then, the structural covariance networks were constructed by examining their covariance in gray matter volumes with other brain regions. Finally, the influence of age on the structural covariance of these hippocampal sections was explored. We found that the aHPC and pHPC had different structural covariance patterns, but both of them were associated with the medial temporal lobe and insula. Moreover, with age, both increased and decreased covariance was found for the aHPC, but only increased covariance for the pHPC (p < 0.05, family-wise error corrected). The decreased connections occurred within the default mode network, while the increased connectivity mainly occurred in memory systems other than the hippocampus. This study reveals different age-related influences on the structural networks of the aHPC and pHPC, providing an essential insight into the mechanisms of the hippocampus in normal aging.
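
    Structural covariance networks of this kind are built by correlating a seed region's gray-matter volume with every other region's volume across subjects. A minimal synthetic sketch (the region values, the hypothetical aHPC seed, and the r > 0.3 threshold are all arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(7)

n_subj, n_regions = 240, 90
gm = rng.standard_normal((n_subj, n_regions))   # synthetic GM volumes
seed = gm[:, 0] + rng.standard_normal(n_subj)   # hypothetical aHPC volume
# make a few regions genuinely covary with the seed region
gm[:, 1] += 0.8 * seed
gm[:, 2] += 0.8 * seed

# structural covariance: correlation of the seed volume with each region,
# computed across subjects (not across time, as in functional connectivity)
r = np.array([np.corrcoef(seed, gm[:, j])[0, 1] for j in range(n_regions)])
network = np.flatnonzero(np.abs(r) > 0.3)
print(network)
```

Comparing such networks between age groups, with family-wise error correction over regions, is the kind of analysis the study performs.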

  7. Asteroid approach covariance analysis for the Clementine mission

    NASA Technical Reports Server (NTRS)

    Ionasescu, Rodica; Sonnabend, David

    1993-01-01

    The Clementine mission is designed to test Strategic Defense Initiative Organization (SDIO) technology, the Brilliant Pebbles and Brilliant Eyes sensors, by mapping the lunar surface and flying by the asteroid Geographos. The capability of two of the instruments available on board the spacecraft, the lidar (laser radar) and the UV/Visible camera, is used in the covariance analysis to obtain the spacecraft delivery uncertainties at the asteroid. These uncertainties are due primarily to asteroid ephemeris uncertainties. On-board optical navigation reduces the uncertainty in the knowledge of the spacecraft position in the direction perpendicular to the incoming asymptote to a one-sigma value of under 1 km, at the closest approach distance of 100 km. The uncertainty in the knowledge of the encounter time is about 0.1 seconds for a flyby velocity of 10.85 km/s. The magnitude of these uncertainties is due largely to Center Finding Errors (CFE). These systematic errors represent the accuracy expected in locating the center of the asteroid in the optical navigation images, in the absence of a topographic model for the asteroid. The direction of the incoming asymptote cannot be estimated accurately until minutes before the asteroid flyby, and correcting for it would require autonomous navigation. Orbit determination errors dominate over maneuver execution errors, and the final delivery accuracy attained is basically the orbit determination uncertainty before the final maneuver.

  8. Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances

    PubMed Central

    Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.

    2017-01-01

    As the largest ellipsoid (LE) data fusion algorithm can only be applied to two-sensor systems, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate-pairing methods are presented and analyzed. In order to assess the influence of fusion structure on fusion performance, two fusion performance assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates the fuser's actual fused accuracy and its sensitivity to the sensor order, as well as its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve these properties, but also achieve better consistency and computational efficiency. The presented multisensor LE fusers generally have better accuracy than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
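
    The covariance intersection (CI) baseline that the paper compares against handles unknown cross-covariances by taking a convex combination of the inverse covariances. A minimal two-estimate version with the determinant criterion (this is plain CI, not the LE algorithm itself) can be sketched as:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates with unknown cross-covariance (CI, det criterion)."""
    def fused_det(w):
        Pinv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        return np.linalg.det(np.linalg.inv(Pinv))
    w = minimize_scalar(fused_det, bounds=(0.0, 1.0), method="bounded").x
    Pinv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
    P = np.linalg.inv(Pinv)
    x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
    return x, P

# two estimates with complementary uncertainty directions
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([0.0, 1.0]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))  # by symmetry w = 0.5: x = [0.8, 0.8], diag(P) = [1.6, 1.6]
```

CI is consistent for any true cross-covariance, which is why it serves as the reference for fusers, like the LE family, that aim for tighter (less conservative) fused ellipsoids.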

  9. Comparing Consider-Covariance Analysis with Sigma-Point Consider Filter and Linear-Theory Consider Filter Formulations

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.

    2007-01-01

    Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called "unscented") formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems (cf. [1, 2, 3]). Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation that does not require derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (cf. [5]). While in [4] the SPCF and the linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to

  10. Systematic Error Study for ALICE charged-jet v2 Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinz, M.; Soltz, R.

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
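
    Recasting separate statistical and fully correlated systematic uncertainties into a single covariance matrix, and evaluating a χ2 against a null result, can be sketched as follows. The numbers below are hypothetical placeholders, not the ALICE values:

```python
import numpy as np
from scipy.stats import chi2

# hypothetical v2 points with statistical and correlated systematic errors
v2 = np.array([0.040, 0.050, 0.045, 0.030])
stat = np.array([0.010, 0.012, 0.011, 0.015])
sys_corr = np.array([0.005, 0.006, 0.005, 0.004])  # 100% bin-to-bin correlated

# equivalent covariance matrix: diagonal statistical part plus a rank-one
# block for the fully correlated systematic component
C = np.diag(stat**2) + np.outer(sys_corr, sys_corr)

# chi-square of the data relative to a null (zero) result
resid = v2 - 0.0
q = resid @ np.linalg.solve(C, resid)
p_null = chi2.sf(q, df=len(v2))
print(q, p_null)
```

Shape-type systematics would add further off-diagonal structure to C; the χ2 formula itself is unchanged, which is the equivalence the study demonstrates.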

  11. Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.

    ERIC Educational Resources Information Center

    Marcoulides, George A.

    1995-01-01

    A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)

  12. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied with different spatial resolutions, including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) across all cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS
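
    Localization of the kind used in local analysis typically damps spurious long-range sample covariances by a Schur (element-wise) product with a compactly supported taper; the Gaspari-Cohn function is a common choice. A toy sketch (the length scales, grid, and ensemble size below are arbitrary, not the W3RA/GRACE configuration):

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order compactly supported correlation function."""
    r = np.abs(r)
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    taper[m1] = 1 - (5/3)*x**2 + (5/8)*x**3 + 0.5*x**4 - 0.25*x**5
    x = r[m2]
    taper[m2] = (4 - 5*x + (5/3)*x**2 + (5/8)*x**3
                 - 0.5*x**4 + (1/12)*x**5 - 2/(3*x))
    return taper

rng = np.random.default_rng(2)
n = 40
x = np.arange(n, dtype=float)
# true covariance: correlations decaying with grid distance
true_C = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
ens = rng.multivariate_normal(np.zeros(n), true_C, size=20)  # small ensemble
B = np.cov(ens, rowvar=False)              # noisy sample covariance
dist = np.abs(x[:, None] - x[None, :])
taper = gaspari_cohn(dist / 10.0)          # support cut-off at distance 20
B_loc = taper * B                          # Schur-product localization
err_raw = np.linalg.norm(B - true_C)
err_loc = np.linalg.norm(B_loc - true_C)
print(err_raw, err_loc)  # localization usually reduces the error
```

Because the taper is a valid correlation function, the localized matrix stays positive semi-definite, which keeps the ensemble filter update stable.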

  13. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
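
    A simple covariance map correlates intensities at all pairs of spectral positions over many laser shots; peaks that fluctuate together, such as fragments of one explosion channel, light up off the diagonal. A synthetic sketch (the spectra and peak positions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

n_shots, n_bins = 5000, 64
# synthetic time-of-flight spectra: two ion peaks whose intensities are
# correlated shot-to-shot, as for fragments of the same Coulomb explosion
shots = np.zeros((n_shots, n_bins))
common = rng.poisson(3.0, n_shots)                # shared fluctuation
shots[:, 20] = common + rng.poisson(1.0, n_shots)
shots[:, 45] = common + rng.poisson(1.0, n_shots)
shots[:, 10] = rng.poisson(4.0, n_shots)          # uncorrelated peak

# covariance map: <S(x)S(y)> - <S(x)><S(y)> over shots
C = (shots.T @ shots) / n_shots - np.outer(shots.mean(0), shots.mean(0))

print(C[20, 45], C[10, 20])  # correlated pair large, uncorrelated pair ~ 0
```

Partial covariance subtracts, in addition, the component of each signal correlated with a monitored fluctuating parameter (e.g. pulse energy), which is the variant useful at free electron lasers.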

  14. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  15. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    ERIC Educational Resources Information Center

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  16. Quantum corrections for the cubic Galileon in the covariant language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saltas, Ippocratis D.; Vitagliano, Vincenzo, E-mail: isaltas@fc.ul.pt, E-mail: vincenzo.vitagliano@ist.utl.pt

    We present for the first time an explicit exposition of quantum corrections within the cubic Galileon theory, including the effect of quantum gravity, in a background- and gauge-invariant manner, employing the field-reparametrisation approach of the covariant effective action at 1-loop. We show that the consideration of gravitational effects in combination with the non-linear derivative structure of the theory reveals new interactions at the perturbative level, which manifest themselves as higher-order operators in the associated effective action, whose relevance is controlled by appropriate ratios of the cosmological vacuum and the Galileon mass scale. The significance and concept of the covariant approach in this context is discussed, while all calculations are explicitly presented.

  17. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show asymptotic sign consistency in this setup. These results are established in the high-dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, more precisely the n^(1/2-d)-consistency, of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups, in the latter setup, the
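
    The basic p > n Lasso setting studied here can be illustrated with i.i.d. errors; the long-memory and measurement-error variants of the dissertation change the noise and design, not this basic recovery mechanics. The design, sparsity pattern, and alpha below are arbitrary choices for the sketch:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 100, 300                            # high-dimensional: p > n
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]     # sparse truth: 5 active covariates
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)      # i.i.d. Gaussian errors

fit = Lasso(alpha=0.2).fit(X, y)
support = np.flatnonzero(fit.coef_)
print(support)  # should contain the true support {0, ..., 4}
```

Sign consistency means that, with a suitably chosen penalty, the estimated support and coefficient signs match the truth with probability tending to one; long-memory errors slow the attainable rate to n^(1/2-d).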

  18. Dark matter statistics for large galaxy catalogs: power spectra and covariance matrices

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco

    2018-06-01

    Large-scale surveys of galaxies require accurate theoretical predictions of the dark matter clustering for thousands of mock galaxy catalogs. We demonstrate that this goal can be achieved with the new Parallel Particle-Mesh (PM) N-body code GLAM at a very low computational cost. We run ~22,000 simulations with ~2 billion particles that provide ~1% accuracy of the dark matter power spectra P(k) for wave-numbers up to k ~ 1 h Mpc^-1. Using this large dataset we study the power spectrum covariance matrix. In contrast to many previous analytical and numerical results, we find that the covariance matrix normalised to the power spectrum, C(k, k')/P(k)P(k'), has a complex structure of non-diagonal components: an upturn at small k, followed by a minimum at k ≈ 0.1-0.2 h Mpc^-1, and a maximum at k ≈ 0.5-0.6 h Mpc^-1. The normalised covariance matrix strongly evolves with redshift: C(k, k') ∝ δ^α(t) P(k)P(k'), where δ is the linear growth factor and α ≈ 1-1.25, which indicates that the covariance matrix depends on cosmological parameters. We also show that waves longer than 1 h^-1 Gpc have very little impact on the power spectrum and covariance matrix. This significantly reduces the computational costs and complexity of theoretical predictions: relatively small volume ~(1 h^-1 Gpc)^3 simulations capture the necessary properties of dark matter clustering statistics. As our results also indicate, achieving ~1% errors in the covariance matrix for k < 0.50 h Mpc^-1 requires a resolution better than ε ~ 0.5 h^-1 Mpc.
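
    The normalized covariance matrix C(k, k')/P(k)P(k') can be estimated from an ensemble of power spectra; a toy model with one common amplitude mode shows how correlated fluctuations produce non-diagonal structure. All numbers are illustrative, not GLAM results:

```python
import numpy as np

rng = np.random.default_rng(5)

n_real, n_k = 2000, 8
# toy ensemble of "measured" power spectra: independent Gaussian scatter
# around a smooth P(k), plus a common amplitude mode coupling the bins
P_true = 1.0 / (1.0 + np.arange(n_k))
amp = 1.0 + 0.05 * rng.standard_normal((n_real, 1))     # correlated part
P = amp * P_true * (1.0 + 0.1 * rng.standard_normal((n_real, n_k)))

C = np.cov(P, rowvar=False)                # covariance matrix of P(k)
norm = C / np.outer(P_true, P_true)        # C(k, k') / P(k)P(k')
print(np.round(norm, 3))
```

In this toy model the diagonal of the normalized matrix is roughly 0.05^2 + 0.1^2 while the off-diagonal entries are roughly 0.05^2; the common mode alone sets the off-diagonal level, mimicking how mode coupling in the real covariance matrix departs from the Gaussian (diagonal) expectation.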

  19. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross
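
    The decorrelation misfit D = 1 - CC is straightforward to compute, and unlike an ℓp norm it is insensitive to amplitude scaling, one reason it behaves robustly under theory error. A small sketch on synthetic waveforms (signals and the shift are invented for illustration):

```python
import numpy as np

def decorrelation(obs, syn):
    """Misfit D = 1 - CC, with CC the zero-lag normalized cross-correlation."""
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

t = np.linspace(0, 10, 500)
obs = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.2 * t)       # "observed" wave
syn_good = 0.8 * obs + 0.05 * np.cos(2 * np.pi * 1.3 * t)  # close model
syn_bad = np.roll(obs, 60)                                 # phase-shifted model

print(decorrelation(obs, syn_good), decorrelation(obs, syn_bad))
```

Because D for a well-fitting model concentrates near zero and its noise is found to be approximately log-normal, an empirical likelihood for Bayesian source inversion can be formulated directly on D.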

  20. Natural abundance deuterium and 18-oxygen effects on the precision of the doubly labeled water method

    NASA Technical Reports Server (NTRS)

    Horvitz, M. A.; Schoeller, D. A.

    2001-01-01

    The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations. These were a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk and frequent urine samples from 14 subjects during round-trip travel to a locale ≥500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of δ2H vs. δ18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.
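
    The covariation quantified by the study is the slope of δ2H against δ18O background variation; a quick synthetic regression shows how such a slope, and hence a dose ratio, is estimated. The values below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(6)

# hypothetical background variations in body water (per mil, vs. a baseline):
# delta-18O drifts, and delta-2H covaries with a slope of ~6 plus noise
d18O = rng.normal(0.0, 0.3, 40)
d2H = 6.0 * d18O + rng.normal(0.0, 0.5, 40)

slope, intercept = np.polyfit(d18O, d2H, 1)
print(slope)  # recovers a slope near 6
```

If the final 2H:18O enrichments are dosed in the same ratio as this slope, correlated background shifts in the two isotopes largely cancel in the energy expenditure calculation, which is the reasoning behind the recommended 6:1 ratio of final enrichments.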

  1. Natural abundance deuterium and 18-oxygen effects on the precision of the doubly labeled water method.

    PubMed

    Horvitz, M A; Schoeller, D A

    2001-06-01

    The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations. These were a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk and frequent urine samples from 14 subjects during round-trip travel to a locale ≥500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of δ2H vs. δ18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.

  2. First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  3. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.
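The conditional-on-true-baseline idea can be illustrated with a toy shrinkage calculation (a sketch of the measurement-error effect only, not the authors' mixed-effects machinery; all numbers are made up):

```python
import numpy as np

# Toy sketch of the baseline measurement-error effect (made-up numbers):
# the expected true value given a noisy observed baseline shrinks toward
# the group mean, E[T | X = x] = mu + rho * (x - mu), where
# rho = var_true / (var_true + var_error) is the baseline's reliability.
mu = 500.0                                  # group mean of the baseline
var_true, var_err = 100.0**2, 60.0**2       # true and error variances
rho = var_true / (var_true + var_err)       # reliability of the baseline

observed = np.array([300.0, 500.0, 700.0])
expected_true = mu + rho * (observed - mu)  # shrunken baseline values
```

Extreme observed baselines are pulled toward the mean, which is why conditioning on the raw observed value alone distorts between-group comparisons of change.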

  4. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data is required for collision risk assessments, but publicly accessible two line element (TLE) data does not provide orbital error information. This paper compared historical TLE data and GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error spatial variations with longitude and latitude were calculated to analyze error characteristics and distribution. The results indicate that TLE orbit data are systematically biased because of the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.
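A minimal sketch of the comparison step, using made-up position vectors rather than real CHAMP or TLE data: difference TLE-derived and precise positions at common epochs, then summarize the error.

```python
import numpy as np

# Sketch of the accuracy assessment with invented positions (km, common
# frame), not real CHAMP/TLE data: difference TLE-derived and precise
# ephemeris positions at common epochs and summarize the error.
tle_pos = np.array([[7000.0,    0.5, 0.0],
                    [   0.0, 7000.8, 0.2],
                    [-7000.3,    0.0, 0.4]])
gps_pos = np.array([[7000.0,    0.0, 0.0],
                    [   0.0, 7000.0, 0.0],
                    [-7000.0,    0.0, 0.0]])

err = tle_pos - gps_pos                  # per-epoch error vectors
err_mag = np.linalg.norm(err, axis=1)    # error magnitudes
rms = np.sqrt(np.mean(err_mag**2))       # overall RMS position error
bias = err.mean(axis=0)                  # systematic (mean) offset per axis
```

In a real study the error vectors would also be rotated into radial/along-track/cross-track axes and binned by longitude and latitude to expose the systematic structure the abstract describes.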

  5. Covariance Bell inequalities

    NASA Astrophysics Data System (ADS)

    Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas

    2017-12-01

    We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.

  6. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y is described using the source-receptor-sensitivity (SRS) matrix M and the unknown source term x. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. There are different types of regularization arising for different choices of matrices R and B; for example, Tikhonov regularization assumes covariance matrix B as the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as unknown R and B. We assume the prior on x to be a Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood R is also unknown. We consider two potential choices of the structure of the matrix R. The first is a diagonal matrix and the second is a locally correlated structure using information on topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated on an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
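For fixed R and B the quadratic objective has the closed-form minimizer x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y; the sketch below evaluates it on toy data (matrix values are ours, and the variational Bayes treatment of unknown R and B is not reproduced):

```python
import numpy as np

# Sketch of the regularized inversion described above: minimize
# (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x for fixed R and B, whose
# minimizer is x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y.
def source_term(M, y, R, B):
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    A = M.T @ Ri @ M + Bi              # posterior precision of x
    return np.linalg.solve(A, M.T @ Ri @ y)

M = np.array([[1.0, 0.5],              # toy SRS matrix (values invented)
              [0.2, 1.0],
              [0.3, 0.3]])
y = np.array([1.0, 1.2, 0.6])          # toy observations
R = 0.01 * np.eye(3)                   # measurement-error covariance
B = 10.0 * np.eye(2)                   # prior (Tikhonov-like) covariance
x_hat = source_term(M, y, R, B)        # estimated source term
```

Tikhonov regularization corresponds to B proportional to the identity, as the abstract notes; the Bayesian extension instead infers the diagonal of B (and R) jointly with x.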

  7. Structural Covariance Networks in Children with Autism or ADHD

    PubMed Central

    Romero-Garcia, R.; Mak, E.; Bullmore, E. T.; Baron-Cohen, S.

    2017-01-01

    Abstract Background While autism and attention-deficit/hyperactivity disorder (ADHD) are considered distinct conditions from a diagnostic perspective, clinically they share some phenotypic features and have high comorbidity. Nevertheless, most studies have focused on only one condition, with considerable heterogeneity in their results. Taking a dual-condition approach might help elucidate shared and distinct neural characteristics. Method Graph theory was used to analyse topological properties of structural covariance networks across both conditions and relative to a neurotypical (NT; n = 87) group using data from the ABIDE (autism; n = 62) and ADHD-200 datasets (ADHD; n = 69). Regional cortical thickness was used to construct the structural covariance networks. This was analysed in a theoretical framework examining potential differences in long and short-range connectivity, with a specific focus on the relation between central graph measures and cortical thickness. Results We found convergence between autism and ADHD, where both conditions show an overall decrease in CT covariance with increased Euclidean distance between centroids compared with a NT population. The 2 conditions also show divergence. Namely, there is less modular overlap between the 2 conditions than there is between each condition and the NT group. The ADHD group also showed reduced cortical thickness and lower degree in hub regions than the autism group. Lastly, the ADHD group also showed reduced wiring costs compared with the autism group. Conclusions Our results indicate a need for taking an integrated approach when considering highly comorbid conditions such as autism and ADHD. Furthermore, autism and ADHD both showed alterations in the relation between inter-regional covariance and centroid distance, where both groups show a steeper decline in covariance as a function of distance. The 2 groups also diverge on modular organization, cortical thickness of hub regions and wiring cost of the
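A minimal sketch of how a structural covariance network is typically built from regional cortical thickness (synthetic data; the paper's preprocessing, distance analysis and modularity steps are not reproduced):

```python
import numpy as np

# Sketch: correlate regional cortical thickness across subjects, then
# threshold to obtain a binary structural covariance graph. Data are
# synthetic; the threshold value is an arbitrary illustration.
rng = np.random.default_rng(0)
thickness = rng.normal(2.5, 0.2, size=(40, 6))     # 40 subjects x 6 regions
corr = np.corrcoef(thickness, rowvar=False)        # region-by-region network
adj = (np.abs(corr) > 0.3) & ~np.eye(6, dtype=bool)  # threshold, no self-loops
degree = adj.sum(axis=0)                           # a simple graph measure
```

Graph measures such as degree, modularity, or wiring cost (edge weight times inter-centroid distance) are then computed on `adj` and compared across groups.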

  8. Covariance Manipulation for Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
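A toy Monte Carlo sketch of why covariance scaling matters for Pc (our own 2-D illustration with invented numbers, not an operational Pc algorithm): for a fixed miss vector, Pc is non-monotone in the covariance scale, which is what motivates searching over scales for a maximum Pc.

```python
import numpy as np

# Toy 2-D Monte Carlo: the probability that the relative position falls
# within the hard-body radius is non-monotone in the covariance scale.
# All numbers are invented for illustration.
rng = np.random.default_rng(1)

def pc(miss, cov, radius, n=200_000):
    pts = rng.multivariate_normal(miss, cov, size=n)
    return np.mean(np.linalg.norm(pts, axis=1) < radius)

miss = np.array([2.0, 0.0])            # km, conjunction-plane miss vector
base = 0.25 * np.eye(2)                # km^2, baseline covariance
radius = 0.5                           # km, combined hard-body radius
pcs = [pc(miss, s * base, radius) for s in (0.5, 8.0, 200.0)]
# Pc peaks at an intermediate scale, motivating a "maximum Pc" search.
```

Very tight covariances put almost no probability mass at the miss vector, and very inflated ones dilute the density everywhere, so a PcMax construct searches the scale in between.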

  9. Bayes linear covariance matrix adjustment

    NASA Astrophysics Data System (ADS)

    Wilkinson, Darren J.

    1995-12-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.

  10. Comparison of Flow-Dependent and Static Error Correlation Models in the DAO Ozone Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Wargan, K.; Stajner, I.; Pawson, S.

    2003-01-01

    In a data assimilation system the forecast error covariance matrix governs the way in which the data information is spread throughout the model grid. Implementation of a correct method of assigning covariances is expected to have an impact on the analysis results. The simplest models assume that correlations are constant in time and isotropic or nearly isotropic. In such models the analysis depends on the dynamics only through assumed error standard deviations. In applications to atmospheric tracer data assimilation this may lead to inaccuracies, especially in regions with strong wind shears or high gradients of potential vorticity, as well as in areas where no data are available. In order to overcome this problem we have developed a flow-dependent covariance model that is based on short-term evolution of error correlations. The presentation compares performance of a static and a flow-dependent model applied to a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. We will present some results of validation against WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow results in a positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres and in the troposphere. We will also discuss statistical characteristics of both models; in particular we will argue that including evolution of error correlations leads to stronger internal consistency of a data assimilation system.
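A minimal sketch of the static, (nearly) isotropic baseline such models start from, assuming a Gaussian correlation function on a 1-D grid (our illustration; the flow-dependent evolution is not reproduced):

```python
import numpy as np

# Sketch of a static, isotropic correlation model: Gaussian correlations
# C_ij = exp(-d_ij^2 / (2 L^2)) on a 1-D grid, scaled by error standard
# deviations to form a background/forecast error covariance matrix.
# Grid, length scale, and sigmas are invented for illustration.
x = np.linspace(0.0, 1000.0, 11)           # grid points (km)
L = 200.0                                  # correlation length scale (km)
d = np.abs(x[:, None] - x[None, :])        # pairwise distances
C = np.exp(-d**2 / (2 * L**2))             # isotropic correlation matrix
sigma = np.full(len(x), 1.5)               # error standard deviations
B = np.outer(sigma, sigma) * C             # static error covariance
```

In a static scheme only `sigma` ever changes with the flow; a flow-dependent scheme instead evolves the correlations `C` themselves over a short forecast window.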

  11. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    NASA Astrophysics Data System (ADS)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from point clouds to find candidate correspondences, we then construct a non-cooperative game to isolate mutual compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computational costs for these models are about 288 s, 184 s and 903 s, respectively. Besides, our registration framework using ACOV descriptors and a game theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
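The core of a covariance descriptor can be sketched in a few lines (a generic per-neighborhood covariance; the ACOV adaptive neighborhood selection and the game-theoretic matching are not reproduced):

```python
import numpy as np

# Sketch of a 3x3 covariance descriptor for a point's neighborhood: the
# sample covariance of the neighboring coordinates. Its eigenvalues
# summarize local shape (planar, linear, or scattered).
def covariance_descriptor(neighbors):
    centered = neighbors - neighbors.mean(axis=0)
    return centered.T @ centered / (len(neighbors) - 1)

# A flat, noise-free patch: variance concentrates in the x-y plane.
patch = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
C = covariance_descriptor(patch)
evals = np.sort(np.linalg.eigvalsh(C))   # planar patch: smallest eigenvalue 0
```

Because the covariance is built from relative coordinates, it is unchanged by rigid rotation up to a similarity transform and by translation exactly, which is the property the abstract's invariance claim rests on.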

  12. Constructing statistically unbiased cortical surface templates using feature-space covariance

    NASA Astrophysics Data System (ADS)

    Parvathaneni, Prasanna; Lyu, Ilwoo; Huo, Yuankai; Blaber, Justin; Hainline, Allison E.; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.

    2018-03-01

    The choice of surface template plays an important role in cross-sectional subject analyses involving cortical brain surfaces because there is a tendency toward registration bias given variations in inter-individual and inter-group sulcal and gyral patterns. In order to account for the bias and spatial smoothing, we propose a feature-based unbiased average template surface. In contrast to prior approaches, we factor in the sample population covariance and assign weights based on feature information to minimize the influence of covariance in the sampled population. The mean surface is computed by applying the weights obtained from an inverse covariance matrix, which guarantees that multiple representations from similar groups (e.g., involving imaging, demographic, diagnosis information) are down-weighted to yield an unbiased mean in feature space. Results are validated by applying this approach in two different applications. For evaluation, the proposed unbiased weighted surface mean is compared with unweighted means both qualitatively and quantitatively (mean squared error and absolute relative distance of both means with respect to baseline). In the first application, we validated the stability of the proposed optimal mean on a scan-rescan reproducibility dataset by incrementally adding duplicate subjects. In the second application, we used clinical research data to evaluate the difference between the weighted and unweighted mean when different numbers of subjects were included in control versus schizophrenia groups. In both cases, the proposed method achieved greater stability, indicating reduced impact of sampling bias. The weighted mean is built based on covariance information in feature space as opposed to spatial location, thus making this a generic approach applicable to any feature of interest.
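The inverse-covariance weighting can be sketched for scalar features: with w = Sigma^{-1} 1 / (1^T Sigma^{-1} 1), near-duplicate (highly covarying) samples share weight, so they cannot dominate the mean (toy numbers, our simplification of the surface-template setting):

```python
import numpy as np

# Sketch of an inverse-covariance weighted mean: correlated, duplicated
# representations get down-weighted relative to independent ones.
def weighted_mean(samples, cov):
    n = len(samples)
    w = np.linalg.solve(cov, np.ones(n))  # Sigma^{-1} 1
    w /= w.sum()                          # w = Sigma^{-1}1 / (1^T Sigma^{-1} 1)
    return w @ samples, w

# Three samples; the first two are near-duplicates (correlation 0.9).
cov = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
samples = np.array([1.0, 1.1, 2.0])
mean, w = weighted_mean(samples, cov)     # duplicates share one "vote"
```

The two correlated samples together receive roughly the weight of the single independent one, which is the down-weighting behavior the abstract describes, applied there per-vertex in feature space.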

  13. An alternative covariance estimator to investigate genetic heterogeneity in populations.

    PubMed

    Heslot, Nicolas; Jannink, Jean-Luc

    2015-11-26

    For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model; parameters are estimated by REML, and, in extreme cases, the model can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases, it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. 
This alternative

  14. Covariance of fluid-turbulence theory.

    PubMed

    Ariki, Taketo

    2015-05-01

    Covariance of physical quantities in fluid-turbulence theory and their governing equations under generalized coordinate transformation is discussed. It is shown that the velocity fluctuation and its governing law have a covariance under a far wider group of coordinate transformations than conventional Euclidean invariance, and, as a natural consequence, various correlations and their governing laws are shown to be formulated in a covariant manner under this wider transformation group. In addition, it is also shown that the covariance of the Reynolds stress is tightly connected to the objectivity of the mean flow.

  15. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
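For scalar per-point weights (covariances that are multiples of the identity), the heteroscedastic GLS fit reduces to weighted least squares on the affine parameters; a sketch with invented points and weights:

```python
import numpy as np

# Toy sketch of point-based registration with per-point (heteroscedastic)
# scalar weights: fit an affine map (A, t) minimizing
# sum_i w_i * ||A p_i + t - q_i||^2 via weighted least squares.
def weighted_affine(P, Q, w):
    X = np.hstack([P, np.ones((len(P), 1))])   # homogeneous coordinates
    W = np.diag(w)
    # weighted normal equations, one column of beta per output dimension
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ Q)
    return beta[:-1].T, beta[-1]               # A (2x2) and t (2,)

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Q = 2.0 * P + np.array([0.5, -0.25])           # known ground-truth map
w = np.array([1.0, 0.5, 2.0, 1.0])             # e.g. photon-count weights
A, t = weighted_affine(P, Q, w)                # recovers A = 2I exactly
```

In the paper's setting the weights come from the localization covariances of the control points (brighter molecules localize better), which is how photon counts enter the TRE and LRE variances.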

  16. Covariant Uniform Acceleration

    NASA Astrophysics Data System (ADS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-04-01

    We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation F = dp/dt, where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron's notion of "off-shell." We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object. 
Every rest point of K' is uniformly accelerated, and

  17. Do current cosmological observations rule out all covariant Galileons?

    NASA Astrophysics Data System (ADS)

    Peirone, Simone; Frusciante, Noemi; Hu, Bin; Raveri, Marco; Silvestri, Alessandra

    2018-03-01

    We revisit the cosmology of covariant Galileon gravity in view of the most recent cosmological data sets, including weak lensing. As a higher derivative theory, covariant Galileon models do not have a ΛCDM limit and predict a very different structure formation pattern compared with the standard ΛCDM scenario. Previous cosmological analyses suggest that this model is marginally disfavored, yet cannot be completely ruled out. In this work we use a more recent and extended combination of data, and we allow for more freedom in the cosmology, by including a massive neutrino sector with three different mass hierarchies. We use the Planck measurements of cosmic microwave background temperature and polarization; baryonic acoustic oscillations measurements by BOSS DR12; local measurements of H0; the joint light-curve analysis supernovae sample; and, for the first time, weak gravitational lensing from the KiDS Collaboration. We find that, in order to provide a reasonable fit, a nonzero neutrino mass is indeed necessary, but we do not report any sizable difference among the three neutrino hierarchies. Finally, the comparison of the Bayesian evidence to the ΛCDM one shows that in all the cases considered, covariant Galileon models are statistically ruled out by cosmological data.

  18. Covariance Manipulation for Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    The use of the probability of collision (Pc) has brought sophistication to CA, made possible by the JSpOC precision catalogue because it provides covariances; Pc has essentially replaced miss distance as the basic CA parameter. The embrace of Pc has elevated methods that 'manipulate' the covariance to enable or improve CA calculations. Two such methods are examined here: compensation for absent or unreliable covariances through 'maximum Pc' calculation constructs, and projection (not propagation) of epoch covariances forward in time to try to enable better risk assessments. Two questions are answered about each: the situations to which such approaches are properly applicable, and the amount of utility that such methods offer.

  19. Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?

    PubMed Central

    Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Background Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We used linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. Results In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive

  20. Neural networks: further insights into error function, generalized weights and others

    PubMed Central

    2016-01-01

    This article is a continuation of a previous one providing further insights into the structure of neural network (NN). Key concepts of NN including activation function, error function, learning rate and generalized weights are introduced. NN topology can be visualized with generic plot() function by passing a “nn” class object. Generalized weights assist interpretation of NN model with respect to the independent effect of individual input variables. A large variance of generalized weights for a covariate indicates non-linearity of its independent effect. If generalized weights of a covariate are approximately zero, the covariate is considered to have no effect on outcome. Finally, prediction of new observations can be performed using compute() function. Make sure that the feature variables passed to the compute() function are in the same order as in the training NN. PMID:27668220
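The generalized-weight idea can be sketched with a toy model: the generalized weight of covariate i at input x is the partial derivative of the log-odds of the prediction with respect to x_i, so a purely linear logit gives constant weights (zero variance), while large variance across observations signals non-linearity. This Python stand-in uses a linear logit rather than a trained neuralnet model:

```python
import numpy as np

# Toy sketch of generalized weights: gw_i(x) = d logit(yhat) / d x_i,
# estimated by finite differences. With a linear logit the weights are
# constant across observations, so their variance is (numerically) zero.
beta = np.array([0.8, -0.5])           # invented coefficients

def logit(x):                          # log-odds of this toy model
    return x @ beta

def generalized_weights(X, eps=1e-5):
    gw = np.empty_like(X)
    for i in range(X.shape[1]):        # central difference per covariate
        dx = np.zeros(X.shape[1]); dx[i] = eps
        gw[:, i] = (logit(X + dx) - logit(X - dx)) / (2 * eps)
    return gw

X = np.array([[0.0, 1.0], [2.0, -1.0], [-1.0, 0.5]])
gw = generalized_weights(X)            # each column is constant here
```

For a real NN the logit is non-linear in x, and the spread of each column of `gw` across observations is exactly the variance criterion the abstract describes.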

  1. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
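The bias/variance/covariance split of the mean squared error mentioned above follows the standard decomposition MSE = (mean_m - mean_o)^2 + (sd_m - sd_o)^2 + 2*sd_m*sd_o*(1 - r); a sketch with made-up model and observation series:

```python
import numpy as np

# Sketch of the bias/variance/covariance decomposition of the MSE between
# model (m) and observed (o) series:
#   MSE = (mean_m - mean_o)^2 + (sd_m - sd_o)^2 + 2*sd_m*sd_o*(1 - r)
# using population standard deviations so the three parts sum exactly.
def mse_parts(m, o):
    bias2 = (m.mean() - o.mean())**2                   # systematic offset
    var = (m.std() - o.std())**2                       # amplitude mismatch
    cov = 2 * m.std() * o.std() * (1 - np.corrcoef(m, o)[0, 1])  # phase
    return bias2, var, cov

o = np.array([1.0, 2.0, 3.0, 4.0, 5.0])                # toy observations
m = 1.2 * o + 0.3                                      # biased, over-dispersed model
bias2, var, cov = mse_parts(m, o)
mse = np.mean((m - o)**2)                              # parts sum to this
```

Here the perfectly correlated toy model puts all its error into the bias and variance terms; in the AQMEII analysis each part is further attributed to a timescale (long-term, synoptic, diurnal, intra-day) by spectral decomposition of the series.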

  2. Chemical Plume Detection with an Iterative Background Estimation Technique

    DTIC Science & Technology

    2016-05-17

    schemes because of contamination of background statistics by the plume. To mitigate the effects of plume contamination, a first pass of the detector can be used to create a background mask. However, large diffuse plumes are typically not removed by a single pass. If the covariance matrix is estimated using plume pixels, it is contaminated and detection performance may be significantly reduced.

  3. Structural covariance in the hallucinating brain: a voxel-based morphometry study

    PubMed Central

    Modinos, Gemma; Vercammen, Ans; Mechelli, Andrea; Knegtering, Henderikus; McGuire, Philip K.; Aleman, André

    2009-01-01

    Background Neuroimaging studies have indicated that a number of cortical regions express altered patterns of structural covariance in schizophrenia. The relation between these alterations and specific psychotic symptoms is yet to be investigated. We used voxel-based morphometry to examine regional grey matter volumes and structural covariance associated with severity of auditory verbal hallucinations. Methods We applied optimized voxel-based morphometry to volumetric magnetic resonance imaging data from 26 patients with medication-resistant auditory verbal hallucinations (AVHs); statistical inferences were made at p < 0.05 after correction for multiple comparisons. Results Grey matter volume in the left inferior frontal gyrus was positively correlated with severity of AVHs. Hallucination severity influenced the pattern of structural covariance between this region and the left superior/middle temporal gyri, the right inferior frontal gyrus and hippocampus, and the insula bilaterally. Limitations The results are based on self-reported severity of auditory hallucinations. Complementing this with a clinician-based instrument could have made the findings more compelling. Future studies would benefit from including a measure to control for other symptoms that may covary with AVHs and for the effects of antipsychotic medication. Conclusion The results revealed that overall severity of AVHs modulated cortical intercorrelations between frontotemporal regions involved in language production and verbal monitoring, supporting the critical role of this network in the pathophysiology of hallucinations. PMID:19949723

  4. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
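    The linearized propagation step described above has a compact form: if Φ is the state-transition matrix of the dynamics, the element covariance maps as P → Φ P Φᵀ. A minimal numerical sketch (the matrices below are illustrative toy values, not real orbital dynamics):

```python
import numpy as np

def propagate_covariance(P, Phi):
    """Propagate a covariance matrix through a linearized dynamics map.

    P   -- covariance of the state (e.g. orbital elements) at the initial epoch
    Phi -- state-transition (Jacobian) matrix of the dynamics
    Returns the covariance at the new epoch, Phi @ P @ Phi.T.
    """
    return Phi @ P @ Phi.T

# Toy 2-state example: uncertainty grows along the sensitive direction.
P0 = np.diag([1e-6, 1e-8])          # initial element variances
Phi = np.array([[1.0, 50.0],        # strong sensitivity to the second element
                [0.0, 1.0]])
P1 = propagate_covariance(P0, Phi)
```

From P1 one can read off the positional uncertainty ellipsoid at the new epoch as the level sets of the propagated Gaussian.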

  5. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.

  6. Application of seemingly unrelated regression in medical data with intermittently observed time-dependent covariates.

    PubMed

    Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam

    2012-01-01

    BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we fitted seemingly unrelated regression (SUR)-based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sample-size properties of the estimated coefficients for different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was similar to that of MRMs using classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRMs in the case of intermittently observed time-dependent covariates, and can therefore be used as an alternative to MRMs.

  7. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    PubMed

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When the data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or completely distinct. We seek methodology that allows borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
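    The Cholesky parametrization referred to above factors a longitudinal covariance matrix into generalized autoregressive coefficients (the current measurement regressed on the preceding ones) and innovation variances, the quantities that such priors shrink. A minimal numpy sketch of the decomposition (function name is ours):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Decompose Sigma = L @ D @ L.T with L unit lower triangular.

    The sub-diagonal entries of L are the generalized autoregressive
    coefficients and diag(D) holds the innovation variances.
    """
    C = np.linalg.cholesky(Sigma)   # ordinary lower-triangular factor
    d = np.diag(C)
    L = C / d                       # scale columns so diag(L) == 1
    D = np.diag(d ** 2)             # innovation variances
    return L, D

Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])
L, D = modified_cholesky(Sigma)
# Sigma is recovered exactly as L @ D @ L.T
```

Shrinking the off-diagonal entries of L toward zero pushes the implied covariance toward a lower-dimensional (more nearly independent-increment) structure.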

  8. Spatial-temporal-covariance-based modeling, analysis, and simulation of aero-optics wavefront aberrations.

    PubMed

    Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J

    2014-07-01

    We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^(5/3) power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
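    The d^(5/3) fitting-error power law mentioned above has the familiar adaptive-optics form σ² = a_F (d/r0)^(5/3). A small sketch; note that the fitting coefficient a_F depends on the deformable-mirror influence functions, and the value used here is only a typical assumed number, not one taken from this paper:

```python
def fitting_error_variance(d, r0, a_F=0.28):
    """Residual phase variance (rad^2) after deformable-mirror fitting.

    d   -- actuator separation
    r0  -- Fried parameter of the (locally Kolmogorov) aberration
    a_F -- fitting coefficient; 0.28 is a commonly quoted value,
           assumed here purely for illustration.
    """
    return a_F * (d / r0) ** (5.0 / 3.0)

# Halving the actuator separation reduces the variance by 2**(5/3), about 3.17x.
```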

  9. A cautionary note on the use of the Analysis of Covariance (ANCOVA) in classification designs with and without within-subject factors

    PubMed Central

    Schneider, Bruce A.; Avivi-Reich, Meital; Mozuraitis, Mindaugas

    2015-01-01

    A number of statistical textbooks recommend using an analysis of covariance (ANCOVA) to control for the effects of extraneous factors that might influence the dependent measure of interest. However, it is not generally recognized that serious problems of interpretation can arise when the design contains comparisons of participants sampled from different populations (classification designs). Designs that include a comparison of younger and older adults, or a comparison of musicians and non-musicians, are examples of classification designs. In such cases, estimates of differences among groups can be contaminated by differences in the covariate population means across groups. A second problem of interpretation will arise if the experimenter fails to center the covariate measures (subtracting the mean covariate score from each covariate score) whenever the design contains within-subject factors. Unless the covariate measures on the participants are centered, estimates of within-subject factors are distorted, and significant increases in Type I error rates, and/or losses in power, can occur when evaluating the effects of within-subject factors. This paper: (1) alerts potential users of ANCOVA to the need to center the covariate measures when the design contains within-subject factors, and (2) indicates how they can avoid biases when one cannot assume that the expected value of the covariate measure is the same for all of the groups in a classification design. PMID:25954230
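    The centering step the authors recommend is just a subtraction of the mean covariate score before fitting; a toy illustration (the variable names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# A hypothetical covariate, e.g. a hearing-threshold score per participant.
covariate = rng.normal(loc=50.0, scale=10.0, size=30)

# Grand-mean centering: each score minus the overall mean.
centered = covariate - covariate.mean()

# After centering, the covariate has (numerically) zero mean, so estimates
# of within-subject effects are no longer distorted by the arbitrary
# location of the covariate scale.
```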

  10. Covariation of Peptide Abundances Accurately Reflects Protein Concentration Differences*

    PubMed Central

    Pirmoradian, Mohammad

    2017-01-01

    Most implementations of mass spectrometry-based proteomics involve enzymatic digestion of proteins, expanding the analysis to multiple proteolytic peptides for each protein. Currently, there is no consensus on how to summarize peptide abundances to protein concentrations, and such efforts are complicated by the fact that error control is normally applied to the identification process and does not directly control errors linking peptide abundance measures to protein concentration. Peptides resulting from suboptimal digestion or being partially modified are not representative of the protein concentration. Without a mechanism to remove such unrepresentative peptides, their abundance adversely impacts the estimation of their protein's concentration. Here, we present a relative quantification approach, Diffacto, that applies factor analysis to extract the covariation of peptide abundances. The method enables a weighted geometric-average summarization and automatic elimination of incoherent peptides. We demonstrate, based on a set of controlled label-free experiments using standard mixtures of proteins, that the covariation structure extracted by the factor analysis accurately reflects protein concentrations. In the 1% peptide-spectrum match-level FDR data set, as many as 11% of the peptides have abundance differences incoherent with the other peptides attributed to the same protein. If not controlled, such contradictory peptide abundances have a severe impact on protein quantification. When adding the quantities of each protein's three most abundant peptides, as many as 14% of the proteins are estimated as having a negative correlation with their actual concentration differences between samples. Diffacto reduced the fraction of such obviously incorrectly quantified proteins to 1.6%. Furthermore, by analyzing clinical data sets from two breast cancer studies, our method revealed the persistent proteomic signatures linked to three subtypes of breast cancer
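    The weighted geometric-average summarization mentioned above can be sketched as follows. In Diffacto the weights come from the factor-analysis loadings; here they are arbitrary illustrative values:

```python
import numpy as np

def weighted_geometric_mean(abundances, weights):
    """Summarize peptide abundances into one protein-level value.

    Computed in log space as exp(sum(w_i * log(a_i)) / sum(w_i)), so that
    coherent peptides dominate and down-weighted (incoherent) peptides
    contribute little.
    """
    a = np.asarray(abundances, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.exp(np.sum(w * np.log(a)) / np.sum(w)))

# Three peptides; the third is down-weighted as incoherent with the others.
value = weighted_geometric_mean([100.0, 120.0, 5.0], [1.0, 1.0, 0.05])
# `value` stays near the coherent pair rather than being dragged down.
```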

  11. Covariant diagrams for one-loop matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengkang

    Here, we present a diagrammatic formulation of recently revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We also show how such a derivation can be done in a more concise manner than in the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  12. Covariant diagrams for one-loop matching

    DOE PAGES

    Zhang, Zhengkang

    2017-05-30

    Here, we present a diagrammatic formulation of recently revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We also show how such a derivation can be done in a more concise manner than in the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  13. Hamiltonian approach to GR - Part 1: covariant theory of classical gravity

    NASA Astrophysics Data System (ADS)

    Cremaschini, Claudio; Tessarotto, Massimo

    2017-05-01

    A challenging issue in General Relativity concerns the determination of the manifestly covariant continuum Hamiltonian structure underlying the Einstein field equations and the related formulation of the corresponding covariant Hamilton-Jacobi theory. The task is achieved by adopting a synchronous variational principle requiring a distinction between the prescribed deterministic metric tensor ĝ(r) ≡ {ĝ_μν(r)}, the solution of the Einstein field equations which determines the geometry of the background space-time, and suitable variational fields x ≡ {g, π} obeying an appropriate set of continuum Hamilton equations, referred to here as GR-Hamilton equations. It is shown that a prerequisite for reaching such a goal is that of casting the same equations in evolutionary form by means of a Lagrangian parametrization for a suitably reduced canonical state. As a result, the corresponding Hamilton-Jacobi theory is established in manifestly covariant form. Physical implications of the theory are discussed. These include the investigation of the structural stability of the GR-Hamilton equations with respect to vacuum solutions of the Einstein equations, assuming that wave-like perturbations are governed by the canonical evolution equations.

  14. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    NASA Astrophysics Data System (ADS)

    Vachálek, Ján

    2011-12-01

    The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The observed quantities in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a selected band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
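    For orientation, exponential forgetting in recursive least squares discounts old information in the covariance matrix so that time-varying parameters can still be tracked. Below is a minimal sketch of the standard update only; the regularized and directional variants compared in the paper modify the covariance step in ways not shown here:

```python
import numpy as np

def rls_forgetting_step(theta, P, x, y, lam=0.95):
    """One recursive-least-squares update with forgetting factor lam.

    theta -- current parameter estimate
    P     -- current covariance matrix of the estimate
    x     -- regressor vector, y -- new scalar measurement
    """
    Px = P @ x
    k = Px / (lam + x @ Px)             # gain vector
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam     # discount old information
    return theta, P

# Track a slowly drifting scalar gain b in y = b * x.
rng = np.random.default_rng(1)
theta, P = np.zeros(1), np.eye(1) * 100.0
b = 1.0
for t in range(400):
    b += 0.005                          # the true parameter drifts
    x = np.array([rng.normal()])
    y = b * x[0] + 0.01 * rng.normal()
    theta, P = rls_forgetting_step(theta, P, x, y)
# theta now lags the drifting b only slightly.
```

With lam = 1 (no forgetting) the covariance collapses and the estimate freezes, which is why the eigenvalues of P are a natural diagnostic.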

  15. Massive graviton on arbitrary background: derivation, syzygies, applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Laura; Deffayet, Cédric; IHES, Institut des Hautes Études Scientifiques,Le Bois-Marie, 35 route de Chartres, F-91440 Bures-sur-Yvette

    2015-06-23

    We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost-free massive gravities and, using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a "reference metric" which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained, including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton field, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  16. Massive graviton on arbitrary background: derivation, syzygies, applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Laura; Deffayet, Cédric; Strauss, Mikael von, E-mail: bernard@iap.fr, E-mail: deffayet@iap.fr, E-mail: strauss@iap.fr

    2015-06-01

    We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost-free massive gravities and, using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a "reference metric" which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained, including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton field, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  17. Construction of Covariance Functions with Variable Length Fields

    NASA Technical Reports Server (NTRS)

    Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven

    2005-01-01

    This article focuses on the construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application, where a multi-level covariance with a large troposphere-to-stratosphere length-field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise-rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single- and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than the baseline GEOS-4 system.
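    As a toy illustration of the idea of a level-dependent length field, here is a first-order autoregressive (FOAR) correlation exp(-r/L) evaluated with a length L that grows from troposphere to stratosphere. The actual multi-level constructions in the article are more elaborate and are built so that the resulting covariances are valid (positive definite); this sketch only shows the qualitative effect of the length-field gradient:

```python
import numpy as np

def foar_correlation(r, L):
    """First-order autoregressive correlation, exp(-r/L)."""
    return np.exp(-r / L)

# A tunable length field: short lengths low down, long lengths higher up,
# so stratospheric increments spread further in the vertical.
levels = np.linspace(0.0, 1.0, 5)   # 0 = lower troposphere, 1 = stratosphere
L = 0.1 + 0.4 * levels              # illustrative length field values
r = 0.2                             # fixed separation
correlations = foar_correlation(r, L)
# At the same separation r, correlation increases with height because L does.
```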

  18. Space-Time Modelling of Groundwater Level Using Spartan Covariance Function

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil; Hristopulos, Dionissios

    2014-05-01

    groundwater level increase during the wet period of 2003-2004 and a considerable drop during the dry period of 2005-2006. Both periods are associated with significant annual changes in the precipitation compared to the basin average, i.e., a 40% increase and 65% decrease, respectively. We use STRK to 'predict' the groundwater level for the two selected hydrological periods (wet period of 2003-2004 and dry period of 2005-2006) at each sampling station. The predictions are validated using the respective measured values. The novel Spartan spatiotemporal covariance function gives a mean absolute relative prediction error of 12%. This is 45% lower than the respective value obtained with the commonly used product-sum covariance function, and 31% lower than the respective value obtained with a non-separable function based on the diffusion equation (Kolovos et al. 2010). The advantage of the Spartan space-time covariance model is confirmed with statistical measures such as the root mean square standardized error (RMSSE), the modified coefficient of model efficiency, E' (Legates and McCabe, 1999), and the modified Index of Agreement, IoA' (Janssen and Heuberger, 1995). Hristopulos, D. T. and Elogne, S. N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs random fields. IEEE Transactions on Information Theory, 53, 4667-4467. Janssen, P.H.M. and Heuberger P.S.C. 1995. Calibration of process-oriented models. Ecological Modelling, 83, 55-66. Kolovos, A., Christakos, G., Hristopulos, D. T. and Serre, M. L. 2004. Methods for generating non-separable spatiotemporal covariance models with potential environmental applications. Advances in Water Resources, 27 (8), 815-830. Legates, D.R. and McCabe Jr., G.J. 1999. Evaluating the use of 'goodness-of-fit' measures in hydrologic and hydroclimatic model validation. Water Resources Research, 35, 233-241. Varouchakis, E. A. and Hristopulos, D. T. 2013. 
Improvement of groundwater level prediction in sparsely gauged

  19. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
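    The Taylor-series analysis described above amounts to first-order error propagation: Var(Q) ≈ gᵀ Σ g, with g the gradient of the discharge formula with respect to the measured quantities and Σ their error covariance. A sketch using a simplified Manning-type stand-in for the slope-area formula (the real formula has more terms; the input values and error magnitudes below are invented for illustration):

```python
import numpy as np

def discharge(n, A, R, S):
    """Simplified Manning-type discharge: Q = (1/n) * A * R**(2/3) * sqrt(S)."""
    return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S)

def propagated_variance(f, x, Sigma, h=1e-6):
    """First-order (Taylor) variance of f at x, given input covariance Sigma."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for i in range(x.size):                     # central-difference gradient
        dx = np.zeros_like(x)
        dx[i] = h
        g[i] = (f(*(x + dx)) - f(*(x - dx))) / (2 * h)
    return float(g @ Sigma @ g)                 # weighted sum of (co)variances

x = [0.035, 120.0, 2.5, 0.001]                  # n, area, hydraulic radius, fall/slope
Sigma = np.diag([0.005**2, 5.0**2, 0.1**2, 0.0002**2])  # independent input errors
varQ = propagated_variance(discharge, x, Sigma)
```

With correlated inputs one would fill in the off-diagonal entries of Sigma, reproducing the "weighted sum of covariances" structure noted in the abstract.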

  20. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
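    The cluster-bootstrap idea, resampling whole clusters with replacement and recomputing the statistic each time, can be sketched as follows (here for the SE of a simple mean rather than a Cox coefficient, and with invented data):

```python
import numpy as np

def cluster_bootstrap_se(values, cluster_ids, stat=np.mean, n_boot=500, seed=0):
    """Bootstrap standard error of `stat`, resampling clusters with replacement."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    cluster_ids = np.asarray(cluster_ids)
    clusters = np.unique(cluster_ids)
    reps = []
    for _ in range(n_boot):
        chosen = rng.choice(clusters, size=clusters.size, replace=True)
        resampled = np.concatenate([values[cluster_ids == c] for c in chosen])
        reps.append(stat(resampled))
    return float(np.std(reps, ddof=1))

# Ten clusters of five strongly correlated observations each.
rng = np.random.default_rng(42)
cluster_effect = rng.normal(0.0, 1.0, 10)
vals = np.concatenate([e + rng.normal(0.0, 0.1, 5) for e in cluster_effect])
ids = np.repeat(np.arange(10), 5)
se = cluster_bootstrap_se(vals, ids)
# se exceeds the naive SE that ignores clustering, as expected.
```

The two-step variant would add an inner resampling of individuals within each chosen cluster before computing the statistic.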

  1. Impact of Flow-Dependent Error Correlations and Tropospheric Chemistry on Assimilated Ozone

    NASA Technical Reports Server (NTRS)

    Wargan, K.; Stajner, I.; Hayashi, H.; Pawson, S.; Jones, D. B. A.

    2003-01-01

    The presentation compares different versions of a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. The Solar Backscatter Ultraviolet/2 (SBUV/2) total and partial ozone column retrievals are the sole data assimilated in all of the experiments presented. We study the impact of changing the forecast error covariance model from a version assuming static correlations to one that captures a short-term Lagrangian evolution of those correlations. This is further combined with a study of the impact of neglecting the tropospheric ozone production, loss and dry deposition rates, which are obtained from the Harvard GEOS-CHEM model. We compare statistical characteristics of the assimilated data and the results of validation against independent observations obtained from WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow has a positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres. On the other hand, the main sensitivity to tropospheric chemistry is in the Tropics and sub-Tropics. The best agreement between the assimilated ozone and the in-situ sonde data is in the experiment using both flow-dependent error covariances and tropospheric chemistry.

  2. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2009-03-01

    enough data was collected to have any statistical significance or determine impact on latent error in the process of blood transfusion. Bedside...of adverse drug events. JAMA 1995; 274: 35-43. Leape, L.L., Brennan, T.A., & Laird, N.M. (1991) The nature of adverse events in hospitalized...Background Medical errors are a significant cause of morbidity and mortality among hospitalized patients (Kohn, Corrigan and Donaldson, 2000; Leape, Brennan

  3. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. 
Although
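    The hybrid (third) method described above can be sketched as a standard covariance update: propagate to the burn, apply the nominal maneuver through the state-transition matrix, add a process-noise matrix for the execution uncertainty, then propagate to the final epoch. All matrices below are toy stand-ins, not real orbit dynamics:

```python
import numpy as np

def prop_burn_prop_covariance(P0, Phi1, Phi_burn, Phi2, Q_burn):
    """Hybrid covariance propagation for a prop-burn-prop sequence.

    The nominal maneuver is modeled inside Phi_burn; Q_burn accounts for
    the execution (magnitude) uncertainty of the burn.
    """
    P = Phi1 @ P0 @ Phi1.T                    # coast to the burn
    P = Phi_burn @ P @ Phi_burn.T + Q_burn    # burn plus execution noise
    return Phi2 @ P @ Phi2.T                  # coast to the final epoch

P0 = np.diag([1.0, 0.01])                     # position/velocity variances (toy)
Phi1 = np.array([[1.0, 10.0], [0.0, 1.0]])    # coast STM
Phi_burn = np.eye(2)                          # impulsive burn: toy identity map
Phi2 = np.array([[1.0, 10.0], [0.0, 1.0]])
Q_burn = np.diag([0.0, 1e-4])                 # magnitude uncertainty on velocity
Pf = prop_burn_prop_covariance(P0, Phi1, Phi_burn, Phi2, Q_burn)
```

Setting Q_burn to zero recovers the first method (no execution uncertainty); dropping the maneuver from the STM while keeping Q_burn mimics the second.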

  4. Cosmology from Cosmic Microwave Background and large- scale structure

    NASA Astrophysics Data System (ADS)

    Xu, Yongzhong

    2003-10-01

    This dissertation consists of a series of studies, constituting four published papers, involving the Cosmic Microwave Background and large-scale structure, which help constrain cosmological parameters and potential systematic errors. First, we present a method for comparing and combining maps with different resolutions and beam shapes, and apply it to the Saskatoon, QMAP and COBE/DMR data sets. Although the Saskatoon and QMAP maps detect signal at the 21σ and 40σ levels, respectively, their difference is consistent with pure noise, placing strong limits on possible systematic errors. In particular, we obtain quantitative upper limits on relative calibration and pointing errors. Splitting the combined data by frequency shows similar consistency between the Ka- and Q-bands, placing limits on foreground contamination. The visual agreement between the maps is equally striking. Our combined QMAP+Saskatoon map, nicknamed QMASK, is publicly available at www.hep.upenn.edu/~xuyz/qmask.html together with its 6495 x 6495 noise covariance matrix. This thoroughly tested data set covers a large enough area (648 square degrees, at the time the largest degree-scale map available) to allow a statistical comparison with COBE/DMR, showing good agreement. By band-pass-filtering the QMAP and Saskatoon maps, we are also able to spatially compare them scale-by-scale to check for beam- and pointing-related systematic errors. Using the QMASK map, we then measure the cosmic microwave background (CMB) power spectrum on angular scales ℓ ~ 30-200 (1°-6°), and we test it for non-Gaussianity using morphological statistics known as Minkowski functionals. We conclude that the QMASK map is neither a very typical nor a very exceptional realization of a Gaussian random field. At least about 20% of the 1000 Gaussian Monte Carlo maps differ more than the QMASK map from the mean morphological parameters of the Gaussian fields. Finally, we compute the real-space power spectrum and the

  5. Covariance descriptor fusion for target detection

    NASA Astrophysics Data System (ADS)

    Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih

    2016-05-01

    Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection involves various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor offers many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing the covariance descriptor results from the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
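
    A region covariance descriptor, in its usual form, is simply the covariance of per-pixel feature vectors over an image region; the band-cluster fusion described above can then keep one descriptor per cluster instead of one for all bands. The sketch below uses made-up data and an illustrative two-cluster split; it is not the paper's clustering or fusion rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "hyperspectral" patch: H x W pixels, Bands spectral bands.
H, W, Bands = 16, 16, 6
patch = rng.random((H, W, Bands))

def covariance_descriptor(region):
    """Region covariance descriptor: stack each pixel's feature vector
    (here, its spectral values) and take the covariance over the region."""
    feats = region.reshape(-1, region.shape[-1])   # (num_pixels, num_features)
    return np.cov(feats, rowvar=False)

# Band-cluster fusion (sketch): split the bands into groups assumed to be
# internally correlated, and keep one descriptor per group.
clusters = [(0, 3), (3, 6)]
descriptors = [covariance_descriptor(patch[:, :, a:b]) for a, b in clusters]
for D in descriptors:
    print(D.shape)
```

    In a detector, each cluster's descriptor would be compared against a target descriptor and the per-cluster scores fused.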

  6. Improving Factor Score Estimation Through the Use of Observed Background Characteristics

    PubMed Central

    Curran, Patrick J.; Cole, Veronica; Bauer, Daniel J.; Hussong, Andrea M.; Gottfredson, Nisha

    2016-01-01

    A challenge facing nearly all studies in the psychological sciences is how to best combine multiple items into a valid and reliable score to be used in subsequent modelling. The most ubiquitous method is to compute a mean of items, but more contemporary approaches use various forms of latent score estimation. Regardless of approach, outside of large-scale testing applications, scoring models rarely include background characteristics to improve score quality. The current paper used a Monte Carlo simulation design to study score quality for different psychometric models that did and did not include covariates across levels of sample size, number of items, and degree of measurement invariance. The inclusion of covariates improved score quality for nearly all design factors, and in no case did the covariates degrade score quality relative to not considering the influences at all. Results suggest that the inclusion of observed covariates can improve factor score estimation. PMID:28757790

  7. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
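
    A minimal sketch of the FAST idea as described above, with an illustrative toy trajectory: states drawn from a moving time window along a single model integration are treated as ensemble members, and their sample covariance stands in for the background error covariance. All names and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single model "trajectory": a toy 2-variable oscillation plus noise,
# standing in for consecutive model states along one integration.
t = np.arange(200)
trajectory = np.stack([np.sin(0.1 * t), np.cos(0.1 * t)])
trajectory += 0.05 * rng.standard_normal((2, t.size))

def fast_covariance(traj, t_now, window=20):
    """FAST-style estimate: treat the states inside a moving window
    ending at t_now as an ensemble and return their sample covariance."""
    ens = traj[:, t_now - window:t_now]           # "members" sampled in time
    anom = ens - ens.mean(axis=1, keepdims=True)  # anomalies about window mean
    return anom @ anom.T / (window - 1)

B = fast_covariance(trajectory, t_now=150)
print(B)
```

    The window length plays the role the ensemble size plays in an EnKF; it trades sampling noise against how quickly the statistics adapt to the flow.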

  8. Testing the equivalence of modern human cranial covariance structure: Implications for bioarchaeological applications.

    PubMed

    von Cramon-Taubadel, Noreen; Schroeder, Lauren

    2016-10-01

    Estimation of the variance-covariance (V/CV) structure of fragmentary bioarchaeological populations requires the use of proxy extant V/CV parameters. However, it is currently unclear whether extant human populations exhibit equivalent V/CV structures. Random skewers (RS) and hierarchical analyses of common principal components (CPC) were applied to a modern human cranial dataset. Cranial V/CV similarity was assessed globally for samples of individual populations (jackknifed method) and for pairwise population sample contrasts. The results were examined in light of potential explanatory factors for covariance difference, such as geographic region, among-group distance, and sample size. RS analyses showed that population samples exhibited highly correlated multivariate responses to selection, and that differences in RS results were primarily a consequence of differences in sample size. The CPC method yielded mixed results, depending upon the statistical criterion used to evaluate the hierarchy. The hypothesis-testing (step-up) approach was deemed problematic due to sensitivity to low statistical power and elevated Type I errors. In contrast, the model-fitting (lowest AIC) approach suggested that V/CV matrices were proportional and/or shared a large number of CPCs. Pairwise population sample CPC results were correlated with cranial distance, suggesting that population history explains some of the variability in V/CV structure among groups. The results indicate that patterns of covariance in human craniometric samples are broadly similar but not identical. These findings have important implications for choosing extant covariance matrices to use as proxy V/CV parameters in evolutionary analyses of past populations. © 2016 Wiley Periodicals, Inc.
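
    Random skewers, as commonly defined, compares two variance-covariance matrices by applying the same random unit-length selection gradients to both and correlating the predicted responses dz = V @ beta. The sketch below follows that standard definition; matrices and sizes are illustrative, not the cranial data set.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_skewers(V1, V2, n_skewers=1000):
    """Random-skewers similarity of two covariance matrices: mean vector
    correlation of their responses to shared random selection gradients."""
    p = V1.shape[0]
    betas = rng.standard_normal((p, n_skewers))
    betas /= np.linalg.norm(betas, axis=0)        # unit-length skewers
    r1, r2 = V1 @ betas, V2 @ betas               # predicted responses
    cos = np.sum(r1 * r2, axis=0) / (
        np.linalg.norm(r1, axis=0) * np.linalg.norm(r2, axis=0))
    return cos.mean()

A = rng.standard_normal((4, 4))
V = A @ A.T                                       # an arbitrary V/CV matrix
print(random_skewers(V, V))                       # identical matrices -> 1.0
print(random_skewers(V, np.eye(4)))               # vs identity: lower
```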

  9. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    PubMed

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed-effect model fits is currently recommended for covariate identification, whereas individual empirical Bayes estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for the LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches reported previously, and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate acting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause of the decrease in power or the inflated false positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice than the LRT for statistical tests in PPK covariate analysis. We propose a three-step covariate modeling approach for population PK analysis that exploits the advantages of EBEs while overcoming their shortcomings, which not only markedly reduces the run time for population PK analysis but also provides more accurate covariate tests.

  10. Computation of transform domain covariance matrices

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1975-01-01

    It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
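
    The identity underlying such computations can be illustrated with the Walsh-Hadamard transform, whose recursive Kronecker definition is exactly the kind of structure the fast covariance recursions exploit. The code below is a sketch of the identity, not the paper's recursive algorithm: it forms the transform-domain covariance directly as H C H^T for an orthonormal H.

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix of size 2**n, built from its
    recursive (Kronecker) definition."""
    H = np.array([[1.0]])
    core = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    for _ in range(n):
        H = np.kron(core, H)
    return H

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
Cx = A @ A.T                 # an arbitrary covariance matrix
H = hadamard(3)

# Covariance of the transformed process y = H x:
Cy = H @ Cx @ H.T
print(np.round(Cy, 3))
```

    The fast methods in the paper avoid the dense triple product by reusing the recursion that defines H; the result is the same Cy.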

  11. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    PubMed

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on the correlation and covariance between features in a data set, is proposed to provide predictive guidance for the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of the normalized values of the anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods compared for estimating the effect of the trainer.
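
    The abstract does not give the CCSNM formula itself, but three of the baseline normalizations it is compared against are standard and can be sketched (the toy feature values are illustrative):

```python
import numpy as np

x = np.array([120.0, 340.0, 75.0, 510.0, 260.0])  # toy EMG feature values

def min_max(v):
    """Rescale to [0, 1]."""
    return (v - v.min()) / (v.max() - v.min())

def z_score(v):
    """Zero mean, unit standard deviation."""
    return (v - v.mean()) / v.std()

def decimal_scaling(v):
    """Divide by the smallest power of 10 mapping all values into [-1, 1]."""
    j = int(np.ceil(np.log10(np.abs(v).max())))
    return v / 10.0 ** j

print(min_max(x))
print(z_score(x))
print(decimal_scaling(x))
```

    Unlike these, the CCSNM additionally uses the correlation and covariance between features; its exact form is given in the paper, not here.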

  12. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    PubMed

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.

  13. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to an analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, of the adjusted measurements, and of the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
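
    A minimal illustration of the multiplicative error model in its standard form y_i = mu_i (1 + eps_i): the noise standard deviation scales with the signal, so a weighted fit with weights 1/mu_i^2 matches the error structure, while ordinary LS implicitly assumes equal additive variances. Numbers are illustrative; this is not the paper's adjustment scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Multiplicative error model: y_i = mu_i * (1 + eps_i).
mu = np.linspace(10.0, 1000.0, 200)              # true values
sigma_rel = 0.05                                 # 5% relative error
y = mu * (1.0 + sigma_rel * rng.standard_normal(mu.size))

# Estimate the common scale a in y ~ a * mu.
a_ols = np.sum(mu * y) / np.sum(mu * mu)         # ordinary LS
w = 1.0 / mu**2                                  # weights matched to the
a_wls = np.sum(w * mu * y) / np.sum(w * mu * mu) # multiplicative structure
print(a_ols, a_wls)
```

    Both estimates are close to 1 here, but their error propagation differs: OLS is dominated by the large (and thus noisy) measurements, which is the kind of effect the paper's variance-of-unit-weight estimators quantify.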

  14. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    PubMed

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  15. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  16. A study on characteristics of retrospective optimal interpolation with WRF testbed

    NASA Astrophysics Data System (ADS)

    Kim, S.; Noh, N.; Lim, G.

    2012-12-01

    This study presents the application of retrospective optimal interpolation (ROI) with the Weather Research and Forecasting (WRF) model. Song et al. (2009) proposed the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The ROI method assimilates data at post-analysis times using a perturbation method (Errico and Raeder, 1999) without an adjoint model. In this study, the ROI method is applied to the WRF model to validate the algorithm and to investigate its capability. The computational cost of ROI can be reduced owing to the eigen-decomposition of the background error covariance. Using the background error covariance in eigen-space, a single-profile assimilation experiment is performed. The difference between forecast errors with and without assimilation grows over time, indicating that assimilation improves the forecast. The characteristics and strengths/weaknesses of the ROI method are investigated by conducting experiments with other data assimilation methods.
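
    The optimal-interpolation analysis step that ROI builds on can be sketched in a few lines, assuming the textbook update x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^-1. The state size, observation operator, and covariances below are illustrative, not the WRF configuration.

```python
import numpy as np

n, m = 5, 2
x_b = np.zeros(n)                        # background state
# Gaussian-shaped background error correlations between grid points:
B = np.exp(-0.5 * (np.subtract.outer(np.arange(n), np.arange(n)) / 1.5) ** 2)
H = np.zeros((m, n))
H[0, 1] = 1.0                            # observe state components 1 and 3
H[1, 3] = 1.0
R = 0.1 * np.eye(m)                      # observation error covariance
y = np.array([1.0, -0.5])                # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # OI gain
x_a = x_b + K @ (y - H @ x_b)                  # analysis
print(np.round(x_a, 3))
```

    The off-diagonal structure of B is what spreads the observation increments to unobserved grid points; ROI's eigen-decomposition of B reduces the cost of forming this update repeatedly.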

  17. Improvement of structural models using covariance analysis and nonlinear generalized least squares

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.; Kuo, C. P.; Wada, B. K.

    1992-01-01

    The next generation of large, flexible space structures will be too light to support their own weight, requiring a system of structural supports for ground testing. The authors have proposed multiple boundary-condition testing (MBCT), using more than one support condition to reduce uncertainties associated with the supports. MBCT would revise the mass and stiffness matrix, analytically qualifying the structure for operation in space. The same procedure is applicable to other common test conditions, such as empty/loaded tanks and subsystem/system level tests. This paper examines three techniques for constructing the covariance matrix required by nonlinear generalized least squares (NGLS) to update structural models based on modal test data. The methods range from a complicated approach used to generate the simulation data (i.e., the correct answer) to a diagonal matrix based on only two constants. The results show that NGLS is very insensitive to assumptions about the covariance matrix, suggesting that a workable NGLS procedure is possible. The examples also indicate that the multiple boundary condition procedure more accurately reduces errors than individual boundary condition tests alone.

  18. Lorentz covariance of loop quantum gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rovelli, Carlo; Speziale, Simone

    2011-05-15

    The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the projected spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction on SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This clarifies how the SL(2,C) spinfoam formalism yields an SU(2) theory on the boundary. These structures define a tidy Lorentz-covariant formalism for loop gravity.

  19. Levy Matrices and Financial Covariances

    NASA Astrophysics Data System (ADS)

    Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail

    2003-10-01

    In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competing mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix, and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. Mechanism (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations and are likely candidates for the localized states. We introduce a simple model for price fluctuations which captures the behavior of the SP500 covariances. It may be of importance for asset diversification.
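
    The inverse participation ratio used above has a standard definition: for a normalized eigenvector v, IPR(v) = sum_i v_i^4, which is close to 1 for a fully localized state and close to 1/N for a uniformly delocalized one. A sketch on an i.i.d. stand-in for return data (not the actual SP500 series):

```python
import numpy as np

rng = np.random.default_rng(5)

def ipr(vec):
    """Inverse participation ratio of an eigenvector (normalized first)."""
    v = vec / np.linalg.norm(vec)
    return np.sum(v**4)

N, T = 50, 400
returns = rng.standard_normal((T, N))       # i.i.d. stand-in for stock returns
C = np.corrcoef(returns, rowvar=False)      # empirical correlation matrix
vals, vecs = np.linalg.eigh(C)

iprs = np.array([ipr(vecs[:, k]) for k in range(N)])
print(iprs.min(), iprs.max(), 1.0 / N)
```

    For fat-tailed (Lévy-like) entries, IPR values near the spectrum's edges rise well above 1/N, which is the localization signature the paper discusses.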

  20. Structural Analysis of Covariance and Correlation Matrices.

    ERIC Educational Resources Information Center

    Joreskog, Karl G.

    1978-01-01

    A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…

  1. Large-region acoustic source mapping using a movable array and sparse covariance fitting.

    PubMed

    Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2017-01-01

    Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
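
    A hedged sketch of the vectorized covariance-fitting idea, not the paper's full multiple-position algorithm: for a power grid, vec(R) is linear in the source powers through Khatri-Rao (columnwise Kronecker) columns conj(a_k) ⊗ a_k, and nonnegative least squares can recover the powers. The array geometry, grid, and source placement below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Toy narrowband setup: M-sensor half-wavelength line array, K-point grid.
M, K = 8, 60
grid = np.linspace(-np.pi / 2, np.pi / 2, K)
A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(grid)))  # steering

true_idx, true_pow = [15, 42], [4.0, 1.0]       # two on-grid sources
R = sum(p * np.outer(A[:, k], A[:, k].conj())
        for k, p in zip(true_idx, true_pow))
R = R + 0.01 * np.eye(M)                        # small noise floor

# Khatri-Rao model: vec(R) = B @ powers, with column-stacked vec().
B = np.stack([np.kron(A[:, k].conj(), A[:, k]) for k in range(K)], axis=1)
b = R.flatten(order="F")

# Stack real and imaginary parts so nonnegative least squares applies.
B_ri = np.vstack([B.real, B.imag])
b_ri = np.concatenate([b.real, b.imag])
powers, _ = nnls(B_ri, b_ri)

print(np.argsort(powers)[-2:])   # indices of the two strongest grid points
```

    The paper's sparse-constrained reconstruction replaces this plain NNLS with a sparsity-promoting solver and builds R from the incoherent virtual array across measurement positions.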

  2. A New Approach to Extract Forest Water Use Efficiency from Eddy Covariance Data

    NASA Astrophysics Data System (ADS)

    Scanlon, T. M.; Sulman, B. N.

    2016-12-01

    Determination of forest water use efficiency (WUE) from eddy covariance data typically involves the following steps: (a) estimating gross primary productivity (GPP) from direct measurements of net ecosystem exchange (NEE) by extrapolating nighttime ecosystem respiration (ER) to daytime conditions, and (b) assuming direct evaporation (E) is minimal several days after rainfall, meaning that direct measurements of evapotranspiration (ET) are identical to transpiration (T). Both of these steps could lead to errors in the estimation of forest WUE. Here, we present a theoretical approach for estimating WUE through the analysis of standard eddy covariance data, which circumvents these steps. Only five statistics are needed from the high-frequency time series to extract WUE: CO2 flux, water vapor flux, standard deviation in CO2 concentration, standard deviation in water vapor concentration, and the correlation coefficient between CO2 and water vapor concentration for each half-hour period. The approach is based on the assumption that stomatal fluxes (i.e. photosynthesis and transpiration) lead to perfectly negative correlations and non-stomatal fluxes (i.e. ecosystem respiration and direct evaporation) lead to perfectly positive correlations within the CO2 and water vapor high frequency time series measured above forest canopies. A mathematical framework is presented, followed by a proof of concept using eddy covariance data and leaf-level measurements of WUE.
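
    The five per-period statistics the approach needs are standard eddy-covariance quantities and are easy to compute from high-frequency series. The sketch below uses synthetic 10 Hz data with made-up magnitudes; it computes the statistics only, not the partitioning itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 10 Hz half-hour series (18000 samples): vertical wind w,
# CO2 concentration c, water vapor q. Purely illustrative numbers.
n = 18000
w = rng.standard_normal(n)
c = 400.0 - 0.5 * w + 0.2 * rng.standard_normal(n)  # CO2 drawn down by updrafts
q = 10.0 + 0.3 * w + 0.1 * rng.standard_normal(n)   # vapor carried up by updrafts

def half_hour_stats(w, c, q):
    """The five statistics per averaging period: the eddy fluxes w'c' and
    w'q', the two scalar standard deviations, and the c-q correlation."""
    wp, cp, qp = w - w.mean(), c - c.mean(), q - q.mean()
    return {
        "Fc": np.mean(wp * cp),    # CO2 flux
        "Fq": np.mean(wp * qp),    # water vapor flux
        "sigma_c": cp.std(),
        "sigma_q": qp.std(),
        "r_cq": np.corrcoef(c, q)[0, 1],
    }

stats = half_hour_stats(w, c, q)
print(stats)
```

    In the paper's framework, the sign structure of r_cq, together with the fluxes and standard deviations, separates the stomatal (perfectly anti-correlated) from the non-stomatal (perfectly correlated) contributions.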

  3. Dangers in Using Analysis of Covariance Procedures.

    ERIC Educational Resources Information Center

    Campbell, Kathleen T.

    Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…

  4. On the Possibility of Ill-Conditioned Covariance Matrices in the First-Order Two-Step Estimator

    NASA Technical Reports Server (NTRS)

    Garrison, James L.; Axelrod, Penina; Kasdin, N. Jeremy

    1997-01-01

    The first-order two-step nonlinear estimator, when applied to a problem of orbital navigation, is found to occasionally produce first step covariance matrices with very low eigenvalues at certain trajectory points. This anomaly is the result of the linear approximation to the first step covariance propagation. The study of this anomaly begins with expressing the propagation of the first and second step covariance matrices in terms of a single matrix. This matrix is shown to have a rank equal to the difference between the number of first step states and the number of second step states. Furthermore, under some simplifying assumptions, it is found that the basis of the column space of this matrix remains fixed once the filter has removed the large initial state error. A test matrix containing the basis of this column space and the partial derivative matrix relating first and second step states is derived. This square test matrix, which has dimensions equal to the number of first step states, numerically drops rank at the same locations that the first step covariance does. It is formulated in terms of a set of constant vectors (the basis) and a matrix which can be computed from a reference trajectory (the partial derivative matrix). A simple example problem, involving dynamics described by two states and a range measurement, illustrates the cause of this anomaly and the application of the aforementioned numerical test in more detail.

  5. Three Cs in Measurement Models: Causal Indicators, Composite Indicators, and Covariates

    PubMed Central

    Bollen, Kenneth A.; Bauldry, Shawn

    2013-01-01

    In the last two decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that we can classify indicators into two categories, effect (reflective) indicators and causal (formative) indicators. This paper argues that the dichotomous view is too simple. Instead, there are effect indicators and three types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the “three Cs”). Causal indicators have conceptual unity and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variable(s). Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects and composites are a matter of convenience. The failure to distinguish the “three Cs” has led to confusion and questions such as: are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points. PMID:21767021

  6. Three Cs in measurement models: causal indicators, composite indicators, and covariates.

    PubMed

    Bollen, Kenneth A; Bauldry, Shawn

    2011-09-01

    In the last 2 decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that one can classify indicators into 2 categories: effect (reflective) indicators and causal (formative) indicators. We argue that the dichotomous view is too simple. Instead, there are effect indicators and 3 types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the "Three Cs"). Causal indicators have conceptual unity, and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variables. Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects, and composites are a matter of convenience. The failure to distinguish the Three Cs has led to confusion and questions, such as, Are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points.

  7. Covariant harmonic oscillators: 1973 revisited

    NASA Technical Reports Server (NTRS)

    Noz, M. E.

    1993-01-01

    Using the relativistic harmonic oscillator, a physical basis is given to the phenomenological wave function of Yukawa which is covariant and normalizable. It is shown that this wave function can be interpreted in terms of the unitary irreducible representations of the Poincare group. The transformation properties of these covariant wave functions are also demonstrated.

  8. Smooth individual level covariates adjustment in disease mapping.

    PubMed

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured at both the individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and the outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension of the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects, in which the two sets of effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
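    A minimal numpy sketch of the penalized-spline device used above for a nonlinear individual-level effect (synthetic data and a simple truncated-line basis; the actual smooth-indiCAR algorithm additionally handles group-level covariates and spatial CAR random effects):

```python
import numpy as np

# Ridge-penalized spline fit of a nonlinear covariate effect (illustrative).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, 200)  # nonlinear truth + noise

knots = np.linspace(0.0, 1.0, 12)[1:-1]                  # interior knots
X = np.column_stack([np.ones_like(x), x] +
                    [np.clip(x - k, 0.0, None) for k in knots])

lam = 1.0                                   # smoothing parameter
P = np.eye(X.shape[1]); P[:2, :2] = 0.0     # penalize knot coefficients only
beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
fit = X @ beta

# For comparison: a purely linear fit misses the curvature entirely.
beta_lin = np.linalg.lstsq(X[:, :2], y, rcond=None)[0]
mse_spline = np.mean((fit - np.sin(2.0 * np.pi * x)) ** 2)
mse_linear = np.mean((X[:, :2] @ beta_lin - np.sin(2.0 * np.pi * x)) ** 2)
```

    The penalty keeps the knot coefficients small, so the fit is smooth yet tracks the nonlinear truth far better than the linear model.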

  9. Nonrelativistic trace and diffeomorphism anomalies in particle number background

    NASA Astrophysics Data System (ADS)

    Auzzi, Roberto; Baiguera, Stefano; Nardelli, Giuseppe

    2018-04-01

    Using the heat kernel method, we compute nonrelativistic trace anomalies for Schrödinger theories in flat spacetime, with a generic background gauge field for the particle number symmetry, both for a free scalar and a free fermion. The result is genuinely nonrelativistic, and it has no counterpart in the relativistic case. Contrary to naive expectations, the anomaly is not gauge invariant; this is similar to the nongauge covariance of the non-Abelian relativistic anomaly. We also show that, in the same background, the gravitational anomaly for a nonrelativistic scalar vanishes.

  10. A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data

    NASA Astrophysics Data System (ADS)

    Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana

    2016-09-01

    A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated into the variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization. The result is the minimum error and the optimum number of levels. The results obtained at the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV whereas the optimum number of levels (N0) is the reverse. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, provided that a radiosounding samples at least N0 uniform vertical levels, depending on the water
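    The PWV integral at the heart of such error analyses can be sketched as follows (a synthetic humidity profile with made-up parameters; real determinations use the sonde's measured levels together with the uncertainty terms described above):

```python
import numpy as np

# PWV = (1 / (g * rho_w)) * integral of specific humidity q over pressure.
g, rho_w = 9.81, 1000.0                       # m s^-2, kg m^-3

p = np.linspace(101325.0, 20000.0, 50)        # pressure levels, Pa (surface -> top)
q = 0.010 * np.exp((p - p[0]) / 40000.0)      # specific humidity, kg/kg (synthetic)

# Trapezoidal integration; p decreases along the array, so take the magnitude.
integral = abs(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(p)))
pwv_mm = integral / (g * rho_w) * 1000.0      # mm of liquid water
```

    Sub-sampling the profile to fewer levels makes the same integral pick up a sampling error, which is the term the proposed method minimizes.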

  11. Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations

    NASA Astrophysics Data System (ADS)

    Carter, Jeffrey R.; Simon, Wayne E.

    1990-08-01

    Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE I-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
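    For the stated class covariances (I and 4I, equal priors), the Bayes rule has a closed form, a threshold on the squared radius, which can be checked by simulation (a sketch of the problem setup, not the REM network itself):

```python
import numpy as np

# I-4I problem: class 1 ~ N(0, I), class 2 ~ N(0, 4I), equal priors.
# Comparing the two Gaussian densities reduces to a radius test:
# choose class 1 iff ||x||^2 < t, with t = (4 d / 3) * ln 4.
rng = np.random.default_rng(1)
d, n = 4, 200_000
t = (4.0 * d / 3.0) * np.log(4.0)

x1 = rng.normal(0.0, 1.0, (n, d))            # class 1 (covariance I)
x2 = rng.normal(0.0, 2.0, (n, d))            # class 2 (std 2 => covariance 4I)
err1 = np.mean(np.sum(x1**2, axis=1) >= t)   # class-1 points misclassified
err2 = np.mean(np.sum(x2**2, axis=1) < t)    # class-2 points misclassified
bayes_err = 0.5 * (err1 + err2)              # nonzero: the classes fully overlap
```

    Any trained classifier's error on this problem can be compared against this simulated Bayes floor.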

  12. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    PubMed

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly considers spatial variability as the single source of uncertainty. In reality, however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate than laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory-measured and vis-NIR- and MIR-inferred topsoil and subsoil soil carbon data is available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable to filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
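    The key modelling move, folding a known measurement-error variance into the spatial covariance as a nugget term, can be sketched in a 1-D simple-kriging toy (a Matérn 3/2 function with made-up parameters, not the fitted Hunter Valley model):

```python
import numpy as np

def matern32(h, sill=1.0, range_par=2.0):
    """Matern covariance with smoothness 3/2."""
    a = np.sqrt(3.0) * h / range_par
    return sill * (1.0 + a) * np.exp(-a)

rng = np.random.default_rng(2)
xs = rng.uniform(0.0, 10.0, 60)                    # site coordinates
h = np.abs(xs[:, None] - xs[None, :])              # pairwise distances
sigma_me2 = 0.2                                    # known measurement-error variance

K = matern32(h) + sigma_me2 * np.eye(len(xs))      # data covariance incl. error
z = rng.multivariate_normal(np.zeros(len(xs)), K)  # synthetic noisy observations

k0 = matern32(np.abs(xs - 5.0))                    # covariances to target site x0 = 5
w = np.linalg.solve(K, k0)                         # simple-kriging weights
pred = w @ z                                       # prediction of the error-free field
pred_var = 1.0 - k0 @ w                            # kriging variance at x0 (sill = 1)
```

    Because the nugget appears only on the data covariance and not in the cross-covariances, the prediction targets the underlying (error-free) soil carbon surface.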

  13. Covariance hypotheses for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Decell, H. P.; Peters, C.

    1983-01-01

    Two covariance hypotheses are considered for LANDSAT data acquired by sampling fields, one an autoregressive covariance structure and the other the hypothesis of exchangeability. A minimum entropy approximation of the first structure by the second is derived and shown to have desirable properties for incorporation into a mixture density estimation procedure. Results of a rough test of the exchangeability hypothesis are presented.
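    The two hypotheses correspond to concrete covariance forms; for a strip of n pixels (illustrative parameter values):

```python
import numpy as np

def ar1_cov(n, sigma2=1.0, rho=0.6):
    """Autoregressive hypothesis: correlation decays as rho^|i-j|."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def exchangeable_cov(n, sigma2=1.0, rho=0.6):
    """Exchangeability hypothesis: every pair of pixels shares one correlation."""
    return sigma2 * ((1.0 - rho) * np.eye(n) + rho * np.ones((n, n)))

A, E = ar1_cov(5), exchangeable_cov(5)
# Both are valid (positive definite) but differ beyond the first off-diagonal.
min_eig = min(np.linalg.eigvalsh(A).min(), np.linalg.eigvalsh(E).min())
```

    The exchangeable form has only two free parameters regardless of n, which is what makes it attractive as an approximation inside a mixture density estimation procedure.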

  14. A Formalism for Covariant Polarized Radiative Transport by Ray Tracing

    NASA Astrophysics Data System (ADS)

    Gammie, Charles F.; Leung, Po Kin

    2012-06-01

    We write down a covariant formalism for polarized radiative transfer appropriate for ray tracing through a turbulent plasma. The polarized radiation field is represented by the polarization tensor (coherency matrix) N^{αβ} ≡ ⟨a^α_k a^{*β}_k⟩, where a_k is a Fourier coefficient for the vector potential. Using Maxwell's equations, the Liouville-Vlasov equation, and the WKB approximation, we show that the transport equation in vacuo is k^μ ∇_μ N^{αβ} = 0. We show that this is equivalent to Broderick & Blandford's formalism based on invariant Stokes parameters and a rotation coefficient, and suggest a modification that may reduce truncation error in some situations. Finally, we write down several alternative approaches to integrating the transfer equation.

  15. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    NASA Astrophysics Data System (ADS)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    Combined with digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to acquire the multichannel SAR processing filter. This novel method improves processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
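    The reconstruction step can be sketched for a generic uniform linear array (simulated snapshots rather than multichannel SAR data): estimate the Capon spectrum, then integrate P(θ) a(θ)a^H(θ) over the ambiguous sector.

```python
import numpy as np

rng = np.random.default_rng(3)
M, n_snap = 8, 500
d = np.arange(M)

def steer(theta_rad):
    """Half-wavelength ULA steering vector."""
    return np.exp(1j * np.pi * d * np.sin(theta_rad))

# Snapshots: one strong source at 30 degrees plus unit-power white noise.
s = 2.0 * (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap))
noise = (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap))) / np.sqrt(2)
X = np.outer(steer(np.deg2rad(30.0)), s) + noise
R = X @ X.conj().T / n_snap                        # sample covariance
Rinv = np.linalg.inv(R)

# Capon spatial spectrum over the sector, then covariance reconstruction.
grid = np.deg2rad(np.linspace(10.0, 50.0, 81))
spec = np.array([1.0 / np.real(steer(t).conj() @ Rinv @ steer(t)) for t in grid])
R_rec = sum(p * np.outer(steer(t), steer(t).conj())
            for p, t in zip(spec, grid)) / len(grid)

peak_deg = np.rad2deg(grid[np.argmax(spec)])       # should sit near the true 30 deg
herm_err = np.abs(R_rec - R_rec.conj().T).max()    # reconstruction stays Hermitian
```

    The reconstructed matrix depends on the assumed steering vectors rather than the raw sample covariance, which is what gives the method its robustness to nonuniform scattering.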

  16. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    PubMed

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence-that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    NASA Astrophysics Data System (ADS)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue of improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. Interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to produce better estimates than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05.
The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and

  18. A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes

    NASA Technical Reports Server (NTRS)

    Carpenter, Russell; Lee, Taesul

    2008-01-01

    Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
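    The stability argument is easy to see in one dimension: a random-walk clock state has unbounded variance, while a first-order Gauss-Markov state saturates (a sketch of the motivation with made-up parameters; the paper's model couples first- and second-order processes):

```python
import numpy as np

dt, tau, q = 1.0, 100.0, 1e-4        # step (s), GM time constant (s), noise level
phi = np.exp(-dt / tau)              # GM transition factor

n = 5000
var_rw, var_gm = np.zeros(n), np.zeros(n)
for k in range(1, n):
    var_rw[k] = var_rw[k - 1] + q * dt            # random walk: grows without bound
    var_gm[k] = phi**2 * var_gm[k - 1] + q * dt   # Gauss-Markov: saturates

gm_limit = q * dt / (1.0 - phi**2)   # steady-state GM variance
```

    Over short horizons (k much smaller than tau/dt) the two recursions nearly agree, which is why a Gauss-Markov model can approximate the existing random-walk models while keeping the filter covariance bounded through long outages.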

  19. Observations of geographically correlated orbit errors for TOPEX/Poseidon using the global positioning system

    NASA Technical Reports Server (NTRS)

    Christensen, E. J.; Haines, B. J.; Mccoll, K. C.; Nerem, R. S.

    1994-01-01

    We have compared Global Positioning System (GPS)-based dynamic and reduced-dynamic TOPEX/Poseidon orbits over three 10-day repeat cycles of the ground-track. The results suggest that the prelaunch joint gravity model (JGM-1) introduces geographically correlated errors (GCEs) which have a strong meridional dependence. The global distribution and magnitude of these GCEs are consistent with a prelaunch covariance analysis, with estimated and predicted global rms error statistics of 2.3 and 2.4 cm rms, respectively. Repeating the analysis with the post-launch joint gravity model (JGM-2) suggests that a portion of the meridional dependence observed in JGM-1 still remains, with global rms error of 1.2 cm.

  20. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
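    The expansion of the precision matrix around the model covariance is the Neumann series for (A + B)^{-1}; a quick numerical check with random stand-in matrices (not survey covariances):

```python
import numpy as np

# C = A + B with A known exactly and B a small correction. Then
# C^{-1} = A^{-1} - A^{-1} B A^{-1} + A^{-1} B A^{-1} B A^{-1} - ...
rng = np.random.default_rng(4)
n = 20
A = np.eye(n)                                 # e.g. the analytically known part
M = 0.05 * rng.normal(size=(n, n))
B = M @ M.T                                   # small positive-definite correction

Ainv = np.linalg.inv(A)
exact = np.linalg.inv(A + B)
order1 = Ainv - Ainv @ B @ Ainv
order2 = order1 + Ainv @ B @ Ainv @ B @ Ainv

err1 = np.linalg.norm(exact - order1) / np.linalg.norm(exact)
err2 = np.linalg.norm(exact - order2) / np.linalg.norm(exact)
```

    Each retained order shrinks the residual, so only the low-order terms need to be estimated from simulations, which is the source of the large reduction in the required number of realizations.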

  1. Covariates of intravenous paracetamol pharmacokinetics in adults

    PubMed Central

    2014-01-01

    Background Pharmacokinetic estimates for intravenous paracetamol in individual adult cohorts differ to some extent, and understanding the covariates of these differences may guide dose individualization. In order to assess covariate effects of intravenous paracetamol disposition in adults, pharmacokinetic data from discrete studies were pooled. Methods This pooled analysis was based on 7 studies, resulting in 2755 time-concentration observations in 189 adults (mean age 46 SD 23 years; weight 73 SD 13 kg) given intravenous paracetamol. The effects of size, age, pregnancy and other clinical settings (intensive care, high dependency, orthopaedic or abdominal surgery) on clearance and volume of distribution were explored using non-linear mixed effects models. Results Paracetamol disposition was best described using normal fat mass (NFM) with allometric scaling as a size descriptor. A three-compartment linear disposition model revealed that the population parameter estimates (between-subject variability, %) were central volume (V1) 24.6 (55.5%) L/70 kg with peripheral volumes of distribution V2 23.1 (49.6%) L/70 kg and V3 30.6 (78.9%) L/70 kg. Clearance (CL) was 16.7 (24.6%) L/h/70 kg and inter-compartment clearances were Q2 67.3 (25.7%) L/h/70 kg and Q3 2.04 (71.3%) L/h/70 kg. Clearance and V2 decreased only slightly with age. Sex differences in clearance were minor and of no significance. Clearance, relative to median values, was increased during pregnancy (FPREG = 1.14) and decreased during abdominal surgery (FABDCL = 0.715). Patients undergoing orthopaedic surgery had a reduced V2 (FORTHOV = 0.649), while those in intensive care had increased V2 (FICV = 1.51). Conclusions Size and age are important covariates for paracetamol pharmacokinetics explaining approximately 40% of clearance and V2 variability. Dose individualization in adult subpopulations would achieve little benefit in the scenarios explored. PMID:25342929
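    The reported size scaling can be applied directly. A hypothetical helper using total body weight with the standard 3/4-power allometric exponent (the paper's model actually scales on normal fat mass, so this is a deliberate simplification):

```python
# Allometric scaling of the reported population clearance (16.7 L/h per 70 kg).
def scaled_clearance(weight_kg, cl_std=16.7, wt_std=70.0, exponent=0.75):
    """Predicted clearance (L/h) at a given body weight."""
    return cl_std * (weight_kg / wt_std) ** exponent

cl_50 = scaled_clearance(50.0)    # lighter adult: lower absolute clearance
cl_100 = scaled_clearance(100.0)  # heavier adult: higher, but less than linearly
```

    The sublinear exponent means a 100 kg adult clears less than 100/70 times the standard rate, which is the practical content of "allometric scaling" here.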

  2. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    PubMed

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical, inverse-covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
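    Why the inverse covariance: its rescaled off-diagonal entries are partial correlations, which retain direct connections and suppress indirect ones. A toy three-variable chain (not brain data) makes the point:

```python
import numpy as np

# Chain a -> b -> c: a and c are marginally correlated but not directly connected.
rng = np.random.default_rng(5)
n = 100_000
a = rng.normal(size=n)
b = 0.8 * a + 0.6 * rng.normal(size=n)
c = 0.8 * b + 0.6 * rng.normal(size=n)

X = np.vstack([a, b, c])
P = np.linalg.inv(np.cov(X))               # precision (inverse covariance) matrix
D = np.sqrt(np.diag(P))
partial = -P / np.outer(D, D)              # partial correlation matrix
np.fill_diagonal(partial, 1.0)

marg_ac = np.corrcoef(a, c)[0, 1]          # marginal a-c correlation: large
part_ac = partial[0, 2]                    # partial a-c correlation: near zero
```

    The hierarchical models above estimate exactly this kind of direct-connectivity structure, but jointly across subjects, which is what shrinks the measurement error.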

  3. The search for causal inferences: using propensity scores post hoc to reduce estimation error with nonexperimental research.

    PubMed

    Tumlinson, Samuel E; Sass, Daniel A; Cano, Stephanie M

    2014-03-01

    While experimental designs are regarded as the gold standard for establishing causal relationships, such designs are usually impractical owing to common methodological limitations. The objective of this article is to illustrate how propensity score matching (PSM) and using propensity scores (PS) as a covariate are viable alternatives to reduce estimation error when experimental designs cannot be implemented. To mimic common pediatric research practices, data from 140 simulated participants were used to resemble an experimental and nonexperimental design that assessed the effect of treatment status on participant weight loss for diabetes. Pretreatment participant characteristics (age, gender, physical activity, etc.) were then used to generate PS for use in the various statistical approaches. Results demonstrate how PSM and using the PS as a covariate can be used to reduce estimation error and improve statistical inferences. References for issues related to the implementation of these procedures are provided to assist researchers.
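    A minimal numpy sketch of nearest-neighbour matching on the propensity score (scores here are simulated directly; in practice they come from a logistic regression of treatment on the pretreatment covariates, and the data are synthetic rather than the article's):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4000
ps = rng.uniform(0.1, 0.9, n)                 # propensity scores (known here)
treated = rng.uniform(size=n) < ps            # assignment depends on the score
effect = 2.0                                  # true treatment effect
y = 1.5 * ps + effect * treated + rng.normal(0.0, 0.5, n)  # ps confounds outcome

# Naive difference in means is biased upward: treated units have higher ps.
naive = y[treated].mean() - y[~treated].mean()

# Match each treated unit to the nearest control on the propensity score.
ps_c, y_c = ps[~treated], y[~treated]
nearest = np.abs(ps[treated][:, None] - ps_c[None, :]).argmin(axis=1)
matched = (y[treated] - y_c[nearest]).mean()

bias_naive = abs(naive - effect)
bias_matched = abs(matched - effect)
```

    Matching balances the score distribution across groups, so the confounded part of the outcome cancels in the matched differences.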

  4. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element by element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
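    The Hadamard-product localization can be sketched in one dimension (a simple linear taper in place of the paper's three-dimensional canonical correlation function):

```python
import numpy as np

# Small ensembles produce spurious long-range sample covariances; an
# elementwise product with a compactly supported correlation damps them.
rng = np.random.default_rng(7)
n_state, n_ens = 200, 20
grid = np.arange(n_state)

dist = np.abs(grid[:, None] - grid[None, :])
C_true = np.exp(-dist / 5.0)                       # true short-range covariance
ens = rng.multivariate_normal(np.zeros(n_state), C_true, size=n_ens)
C_ens = np.cov(ens, rowvar=False)                  # noisy sample covariance

loc = np.clip(1.0 - dist / 20.0, 0.0, None)        # compact-support taper
C_loc = C_ens * loc                                # Hadamard product

far = dist > 40                                    # truly uncorrelated pairs
spurious_raw = np.abs(C_ens[far]).mean()
spurious_loc = np.abs(C_loc[far]).mean()
```

    Beyond the taper's support the localized covariance is exactly zero, which is what makes the regionalized, distributed update possible without long-range communication.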

  5. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
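    The underlying operation, tapering the sample covariance with a banded Toeplitz weight matrix, can be sketched as follows (fixed weights and a chosen bandwidth for illustration; the convex estimator selects them by optimization):

```python
import numpy as np

rng = np.random.default_rng(8)
p, n = 30, 40
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # true, nearly banded covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)                          # noisy sample covariance

bw = 5                                               # chosen bandwidth
W = np.clip(1.0 - np.abs(idx[:, None] - idx[None, :]) / bw, 0.0, None)
S_band = S * W                                       # banded, tapered estimate

err_raw = np.linalg.norm(S - Sigma)                  # Frobenius errors
err_band = np.linalg.norm(S_band - Sigma)
```

    Zeroing the noisy far-off-diagonal entries trades a small bias near the diagonal for a large variance reduction, the same trade the adaptive convex estimator makes automatically.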

  6. Form of the manifestly covariant Lagrangian

    NASA Astrophysics Data System (ADS)

    Johns, Oliver Davis

    1985-10-01

    The preferred form for the manifestly covariant Lagrangian function of a single, charged particle in a given electromagnetic field is the subject of some disagreement in the textbooks. Some authors use a "homogeneous" Lagrangian and others use a "modified" form in which the covariant Hamiltonian function is made to be nonzero. We argue in favor of the "homogeneous" form. We show that the covariant Lagrangian theories can be understood only if one is careful to distinguish quantities evaluated on the varied (in the sense of the calculus of variations) world lines from quantities evaluated on the unvaried world lines. By making this distinction, we are able to derive the Hamilton-Jacobi and Klein-Gordon equations from the "homogeneous" Lagrangian, even though the covariant Hamiltonian function is identically zero on all world lines. The derivation of the Klein-Gordon equation in particular gives Lagrangian theoretical support to the derivations found in standard quantum texts, and is also shown to be consistent with the Feynman path-integral method. We conclude that the "homogeneous" Lagrangian is a completely adequate basis for covariant Lagrangian theory both in classical and quantum mechanics. The article also explores the analogy with the Fermat theorem of optics, and illustrates a simple invariant notation for the Lagrangian and other four-vector equations.
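    For concreteness, the "homogeneous" Lagrangian at issue can be written (in one common convention: metric signature (+,-,-,-), Gaussian units, and an arbitrary monotonic world-line parameter):

        L(x, \dot{x}) = -mc\,\sqrt{\dot{x}^{\mu}\dot{x}_{\mu}} \;-\; \frac{q}{c}\,A_{\mu}(x)\,\dot{x}^{\mu}

    Because L is homogeneous of degree one in \dot{x}^{\mu}, Euler's theorem gives H \equiv \dot{x}^{\mu}\,\partial L/\partial\dot{x}^{\mu} - L = 0 identically on every world line, which is precisely the property the article defends as unproblematic.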

  7. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189

  8. Cross-population myelination covariance of human cerebral cortex.

    PubMed

    Ma, Zhiwei; Zhang, Nanyin

    2017-09-01

    Cross-population covariance of brain morphometric quantities provides a measure of interareal connectivity, as it is believed to be determined by the coordinated neurodevelopment of connected brain regions. Although useful, structural covariance analysis predominantly employed bulky morphological measures with mixed compartments, whereas studies of the structural covariance of any specific subdivisions such as myelin are rare. Characterizing myelination covariance is of interest, as it will reveal connectivity patterns determined by coordinated development of myeloarchitecture between brain regions. Using myelin content MRI maps from the Human Connectome Project, here we showed that the cortical myelination covariance was highly reproducible, and exhibited a brain organization similar to that previously revealed by other connectivity measures. Additionally, the myelination covariance network shared common topological features of human brain networks such as small-worldness. Furthermore, we found that the correlation between myelination covariance and resting-state functional connectivity (RSFC) was uniform within each resting-state network (RSN), but could considerably vary across RSNs. Interestingly, this myelination covariance-RSFC correlation was appreciably stronger in sensory and motor networks than cognitive and polymodal association networks, possibly due to their different circuitry structures. This study has established a new brain connectivity measure specifically related to axons, and this measure can be valuable to investigating coordinated myeloarchitecture development. Hum Brain Mapp 38:4730-4743, 2017. © 2017 Wiley Periodicals, Inc.

  9. New challenges and opportunities in the eddy-covariance methodology for long-term monitoring networks

    NASA Astrophysics Data System (ADS)

    Papale, Dario; Fratini, Gerardo

    2013-04-01

    Eddy-covariance is the most direct and most commonly applied methodology for measuring exchange fluxes of mass and energy between ecosystems and the atmosphere. In recent years, the number of environmental monitoring stations deploying eddy-covariance systems increased dramatically at the global level, exceeding 500 sites worldwide and covering most climatic and ecological regions. Several long-term environmental research infrastructures such as ICOS, NEON and AmeriFlux selected eddy-covariance as the method to monitor GHG fluxes and are currently collaboratively working towards defining common measurement standards, data processing approaches, QA/QC procedures and uncertainty estimation strategies, with the aim of increasing the defensibility of resulting fluxes and the intra- and inter-comparability of flux databases. In the meanwhile, the eddy-covariance research community keeps identifying technical and methodological flaws that, in some cases, can introduce - and may have introduced to date - significant biases in measured fluxes or increase their uncertainty. Among those, we identify three issues of presumably greater concern, namely: (1) strong underestimation of water vapour fluxes in closed-path systems, and its dependency on relative humidity; (2) flux biases induced by erroneous measurement of absolute gas concentrations; (3) and systematic errors due to underestimation of vertical wind variance in non-orthogonal anemometers. If not properly addressed, these issues can reduce the quality and reliability of the method, especially as a standard methodology in long-term monitoring networks. In this work, we review the state of the art regarding such problems, and present new evidence based on field experiments as well as numerical simulations. Our analyses confirm the potential relevance of these issues but also hint at possible coping approaches, to minimize problems during setup design, data collection and post-field flux correction. Corrections are under
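    The flux estimate at the core of the method is simply the covariance of vertical wind fluctuations w' with scalar concentration fluctuations c' over an averaging period (synthetic 10 Hz series below, not field data):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10 * 60 * 30                                  # 30 min of samples at 10 Hz
w = rng.normal(0.0, 0.3, n)                       # vertical wind, m s^-1
c = 400.0 + 0.5 * w + rng.normal(0.0, 0.2, n)     # scalar correlated with w

flux = np.mean((w - w.mean()) * (c - c.mean()))   # eddy-covariance flux estimate
# By construction the expected value is 0.5 * var(w), i.e. about 0.045.
```

    The three issues listed above all act on the inputs to this covariance: biased w' variance, biased absolute concentrations, or humidity-dependent attenuation of c'.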

  10. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method of estimating the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
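The covariance-propagation step this abstract describes (input pixel errors mapped through the triangulation geometry to a 3D location covariance) can be sketched with first-order (Jacobian) propagation for a rectified stereo pair. The focal length, baseline, and pixel coordinates below are illustrative assumptions, and the numerical Jacobian stands in for the paper's analytic midpoint-method derivation.

```python
import numpy as np

f, b = 800.0, 0.12                      # focal length [px], baseline [m] (assumed)

def triangulate(p):
    """Rectified-stereo triangulation: pixel pair (uL, uR) -> point (X, Z)."""
    uL, uR = p
    d = uL - uR                         # disparity
    Z = f * b / d                       # depth from disparity
    X = uL * Z / f                      # lateral position
    return np.array([X, Z])

def numerical_jacobian(fun, p, eps=1e-6):
    """Central-difference Jacobian of `fun` at `p`."""
    p = np.asarray(p, float)
    cols = []
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        cols.append((fun(p + dp) - fun(p - dp)) / (2 * eps))
    return np.column_stack(cols)

p0 = np.array([420.0, 380.0])           # matched pixel pair (assumed)
Sigma_px = np.eye(2) * 0.5**2           # 0.5 px measurement noise (input error)

J = numerical_jacobian(triangulate, p0)
Sigma_pt = J @ Sigma_px @ J.T           # first-order 3D location covariance
print(np.sqrt(np.diag(Sigma_pt)))       # standard deviations of X and Z [m]
```

The depth uncertainty grows quadratically with range here (dZ/du ∝ Z²), which is why loose bounds from pixel field-of-view intersection matter most for distant points.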

  11. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  12. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
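The comparison this abstract describes can be illustrated with a minimal sketch: compare the covariance eigenvalue spectrum of a delay-embedded deterministic signal against that of an embedded Gaussian random process of the same length. The embedding dimension, delay, and signals are illustrative assumptions, and the simple eigenvalue comparison below stands in for the paper's full statistical test.

```python
import numpy as np

def embed(x, dim, tau=1):
    """Delay-embed a 1-D signal into vectors of length `dim`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def cov_eigvals(data):
    """Eigenvalues of the sample covariance of the embedded vectors, descending."""
    c = np.cov(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(c))[::-1]

rng = np.random.default_rng(0)
t = np.linspace(0, 40 * np.pi, 4000)
signal = np.sin(t)                      # low-dimensional deterministic signal

dim = 5
sig_eigs = cov_eigvals(embed(signal, dim))
gauss_eigs = cov_eigvals(embed(rng.standard_normal(4000), dim))

# For the sine, variance concentrates in ~2 eigen-directions (its delay
# vectors span a 2-D subspace); for the Gaussian process all eigenvalues
# are comparable.
print(sig_eigs / sig_eigs.sum())
print(gauss_eigs / gauss_eigs.sum())
```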

  13. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
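The goal of a condition-number-constrained estimator can be illustrated with a simplified sketch: clip the sample covariance eigenvalues so the ratio of largest to smallest is bounded. This plain clipping rule is a stand-in for the paper's maximum-likelihood truncation estimator, and the dimensions and bound below are illustrative assumptions.

```python
import numpy as np

def clip_condition(S, kappa_max):
    """Return a covariance estimate whose condition number is <= kappa_max,
    obtained by clipping eigenvalues of S into [lam_max / kappa_max, lam_max].
    (A simplified stand-in for the paper's ML truncation estimator.)"""
    vals, vecs = np.linalg.eigh(S)
    lo = vals.max() / kappa_max
    clipped = np.clip(vals, lo, None)
    return vecs @ np.diag(clipped) @ vecs.T

rng = np.random.default_rng(1)
p, n = 30, 20                       # "large p, small n": sample covariance is singular
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)

S_reg = clip_condition(S, kappa_max=100.0)
vals = np.linalg.eigvalsh(S_reg)
print(vals.max() / vals.min())      # condition number after regularization
```

The regularized matrix is invertible and well-conditioned by construction, which is exactly the property the "large p, small n" applications require.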

  14. Parametric Covariance Model for Horizon-Based Optical Navigation

    NASA Technical Reports Server (NTRS)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  15. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  16. On the background independence of two-dimensional topological gravity

    NASA Astrophysics Data System (ADS)

    Imbimbo, Camillo

    1995-04-01

    We formulate two-dimensional topological gravity in a background covariant Lagrangian framework. We derive the Ward identities which characterize the dependence of physical correlators on the background world-sheet metric defining the gauge-slice. We point out the existence of an "anomaly" in Ward identities involving correlators of observables with higher ghost number. This "anomaly" represents an obstruction for physical correlators to be globally defined forms on moduli space which could be integrated in a background independent way. Starting from the anomalous Ward identities, we derive "descent" equations whose solutions are cocycles of the Lie algebra of the diffeomorphism group with values in the space of local forms on the moduli space. We solve the descent equations and provide explicit formulas for the cocycles, which allow for the definition of background independent integrals of physical correlators on the moduli space.

  17. On the role of covariance information for GRACE K-band observations in the Celestial Mechanics Approach

    NASA Astrophysics Data System (ADS)

    Bentel, Katrin; Meyer, Ulrich; Arnold, Daniel; Jean, Yoomin; Jäggi, Adrian

    2017-04-01

    The Astronomical Institute at the University of Bern (AIUB) derives static and time-variable gravity fields by means of the Celestial Mechanics Approach (CMA) from GRACE (level 1B) data. This approach makes use of the close link between orbit and gravity field determination. GPS-derived kinematic GRACE orbit positions, inter-satellite K-band observations, which are the core observations of GRACE, and accelerometer data are combined to rigorously estimate orbit and spherical harmonic gravity field coefficients in one adjustment step. Pseudo-stochastic orbit parameters are set up to absorb unmodeled noise. The K-band range measurements in along-track direction lead to a much higher correlation of the observations in this direction compared to the other directions and thus to north-south stripes in the unconstrained gravity field solutions, so-called correlated errors. By using a full covariance matrix for the K-band observations, the correlation can be taken into account. One possibility is to derive correlation information from post-processing K-band residuals. This is then used in a second iteration step to derive an improved gravity field solution. We study the effects of pre-defined covariance matrices and residual-derived covariance matrices on the final gravity field product with the CMA.
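The residual-derived covariance idea can be sketched in a few lines: estimate the autocovariance of the post-fit residuals and assemble a stationary (Toeplitz) covariance matrix from it. The AR(1) synthetic residuals, series length, and lag cutoff below are illustrative assumptions, not actual K-band data or the CMA implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 500, 0.8
# Synthetic stand-in for post-fit K-band residuals: AR(1)-correlated noise
e = np.zeros(n)
w = rng.standard_normal(n)
for k in range(1, n):
    e[k] = phi * e[k - 1] + w[k]

# Empirical autocovariance up to lag L (correlation beyond L is ignored)
L = 50
acov = np.array([np.mean(e[: n - k] * e[k:]) for k in range(L)])

# Stationary (Toeplitz) covariance built from the autocovariance; in practice
# it would be tapered/regularized to guarantee positive definiteness before
# being inverted into an observation weight matrix.
col = np.concatenate([acov, np.zeros(n - L)])
idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C = col[idx]
print(C.shape, round(acov[1] / acov[0], 2))
```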

  18. Errors in Focus? Native and Non-Native Perceptions of Error Salience in Hong Kong Student English - A Case Study.

    ERIC Educational Resources Information Center

    Newbrook, Mark

    1990-01-01

    A study compared the perceptions of two experts from different cultural backgrounds concerning salience of a variety of errors typical of the English written by Hong Kong secondary and college students. A book on English error types written by a Hong-Kong born, fluent Chinese-English bilingual linguist was analyzed for its emphases, and a list of…

  19. Students’ Covariational Reasoning in Solving Integrals’ Problems

    NASA Astrophysics Data System (ADS)

    Harini, N. V.; Fuad, Y.; Ekawati, R.

    2018-01-01

    Covariational reasoning plays an important role in understanding how quantities vary in learning calculus. This study investigates students' covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students' reasoning while solving covariational problems. The results emphasize that undergraduate students were able to construct the relation between dependent variables that change in tandem with the independent variable. However, students faced difficulty in forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that the learning of calculus should place increased emphasis on coordinating images of two quantities changing in tandem, on instantaneous rates of change, and on promoting conceptual knowledge of integral techniques.

  20. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1977-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UD(transpose of U), where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
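The square-root-free factorization this record is built on can be sketched directly: factor a symmetric positive definite P into U D Uᵀ with U unit upper triangular and D diagonal. This shows the factorization step only, not the Gram-Schmidt time-update itself; the test matrix is an arbitrary assumed example.

```python
import numpy as np

def ud_factor(P):
    """Square-root-free factorization P = U @ D @ U.T with U unit upper
    triangular and D diagonal (the U-D form whose propagation the paper
    studies). Processes columns from right to left, downdating P."""
    n = P.shape[0]
    P = P.astype(float).copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Remove column j's contribution from the leading submatrix
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, np.diag(d)

# Verify on a random symmetric positive definite matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)
U, D = ud_factor(P)
print(np.allclose(U @ D @ U.T, P))   # True
```

Because D carries the "squared" magnitudes, the factors can be propagated without square roots, which is the numerical advantage over Cholesky-based square-root filters.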

  1. Gram-Schmidt algorithms for covariance propagation

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1975-01-01

    This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UD(transpose of U), where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.

  2. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  3. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  4. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design: EPIC Case Study

    PubMed Central

    Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model. PMID:25402487

  5. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
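A common covariance realism test statistic is the squared Mahalanobis distance of the state error against the propagated covariance: if the covariance is realistic, these distances follow a chi-square distribution with as many degrees of freedom as the state dimension. The sketch below illustrates that check on simulated errors; the dimensions, covariance, and sample size are assumptions, not the EOS procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3                                    # dimension of the position error
P = np.diag([4.0, 1.0, 0.25])            # propagated (claimed) covariance

# Simulated definitive-minus-predicted errors drawn from P itself,
# i.e. the case of a perfectly realistic covariance.
errors = rng.multivariate_normal(np.zeros(k), P, size=2000)

# Squared Mahalanobis distances: chi-square with k dof if P is realistic,
# so their sample mean should be near k and their variance near 2k.
m2 = np.einsum('ij,jk,ik->i', errors, np.linalg.inv(P), errors)
print(round(m2.mean(), 2), round(m2.var(), 2))
```

A systematically larger (smaller) mean than k would indicate an over-optimistic (over-conservative) covariance.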

  6. Some unexamined aspects of analysis of covariance in pretest-posttest studies.

    PubMed

    Ganju, Jitendra

    2004-09-01

    The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then it implies that the change from baseline model as a data-generating mechanism is inappropriate. An alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
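Point (b) can be checked with a small simulation: with equal pretest and posttest error variances and pre/post correlation ρ, the change-score error variance is 2(1-ρ) while the ANCOVA residual variance is 1-ρ², so the standard-error ratio is √(2/(1+ρ)), which is below √2. The sample size and ρ below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho = 5000, 0.6                       # subjects, pre/post correlation (assumed)
cov = [[1.0, rho], [rho, 1.0]]           # equal pre- and post-test variances
pre, post = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Change-from-baseline analysis: error variance of (post - pre); theory 2*(1 - rho)
var_change = np.var(post - pre, ddof=1)

# ANCOVA: residual variance after regressing post on pre; theory 1 - rho**2
slope = np.cov(pre, post)[0, 1] / np.var(pre, ddof=1)
var_ancova = np.var(post - slope * pre, ddof=2)

# Standard-error ratio; theory sqrt(2/(1 + rho)), always below sqrt(2)
ratio = np.sqrt(var_change / var_ancova)
print(round(ratio, 3), round(np.sqrt(2 / (1 + rho)), 3))
```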

  7. Threshold regression to accommodate a censored covariate.

    PubMed

    Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A

    2018-06-22

    In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.

  8. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGES

    Friedrich, O.; Seitz, S.; Eifler, T. F.; ...

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
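The delete-one jackknife covariance estimator mentioned here has a compact generic form: recompute the statistic with each sub-sample removed and scale the scatter of the replicates by (n-1)/n. The data and the mean statistic below are illustrative assumptions (real cosmic shear analyses jackknife sky patches, not rows).

```python
import numpy as np

def jackknife_cov(samples, stat):
    """Delete-one jackknife covariance of a vector statistic.
    `samples`: (n_sub, ...) array of sub-samples; `stat` maps samples -> vector."""
    n = len(samples)
    reps = np.array([stat(np.delete(samples, i, axis=0)) for i in range(n)])
    mean = reps.mean(axis=0)
    return (n - 1) / n * (reps - mean).T @ (reps - mean)

rng = np.random.default_rng(5)
data = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 2.0]], size=200)

# For the sample mean, the jackknife reproduces sample covariance / n exactly.
C_jk = jackknife_cov(data, lambda d: d.mean(axis=0))
print(C_jk)
```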

  9. Cultural background shapes spatial reference frame proclivity

    PubMed Central

    Goeke, Caspar; Kornpetpanee, Suchada; Köster, Moritz; Fernández-Revelles, Andrés B.; Gramann, Klaus; König, Peter

    2015-01-01

    Spatial navigation is an essential human skill that is influenced by several factors. The present study investigates how gender, age, and cultural background account for differences in reference frame proclivity and performance in a virtual navigation task. Using an online navigation study, we recorded reaction times, error rates (confusion of turning axis), and reference frame proclivity (egocentric vs. allocentric reference frame) of 1823 participants. Reaction times significantly varied with gender and age, but were only marginally influenced by the cultural background of participants. Error rates were in line with these results and exhibited a significant influence of gender and culture, but not age. Participants’ cultural background significantly influenced reference frame selection; the majority of North-Americans preferred an allocentric strategy, while Latin-Americans preferred an egocentric navigation strategy. European and Asian groups were in between these two extremes. Neither the factor of age nor the factor of gender had a direct impact on participants’ navigation strategies. The strong effects of cultural background on navigation strategies without the influence of gender or age underlines the importance of socialized spatial cognitive processes and argues for socio-economic analysis in studies investigating human navigation. PMID:26073656

  10. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  11. An Adaptive Low-Cost INS/GNSS Tightly-Coupled Integration Architecture Based on Redundant Measurement Noise Covariance Estimation.

    PubMed

    Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan

    2017-09-05

    The main objective of the introduced study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers when facing heavy multipath and other harsh situations. Both semi-physical simulation and field experiments were conducted to evaluate the performance of the proposed architecture and were compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in the measurement noise covariance estimation and the proposed architecture can improve the accuracy and reliability of the INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes.
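The core idea behind difference-sequence noise estimation can be sketched simply: for white measurement noise on a slowly varying quantity, successive measurement differences d_k = z_k - z_{k-1} have covariance ≈ 2R, so R can be recovered as half the sample covariance of the differences. The noise levels and trajectory below are assumptions for illustration, not the paper's full RMNCE theorem.

```python
import numpy as np

rng = np.random.default_rng(6)
R_true = np.diag([0.04, 0.09])          # assumed true measurement noise covariance

# Slowly varying truth plus white measurement noise
t = np.linspace(0.0, 1.0, 2000)
truth = np.column_stack([t, 0.5 * t])
z = truth + rng.multivariate_normal([0.0, 0.0], R_true, size=len(t))

# Difference sequence: the truth increment per step is tiny, so
# d_k ≈ v_k - v_{k-1} and Cov(d) ≈ 2 R for white noise v_k.
d = np.diff(z, axis=0)
R_est = np.cov(d, rowvar=False) / 2.0
print(np.round(R_est, 3))
```

Differencing cancels the (slowly varying) signal, which is what lets the noise statistics be estimated without knowing the underlying state.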

  12. An Adaptive Low-Cost INS/GNSS Tightly-Coupled Integration Architecture Based on Redundant Measurement Noise Covariance Estimation

    PubMed Central

    Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan

    2017-01-01

    The main objective of the introduced study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers when facing heavy multipath and other harsh situations. Both semi-physical simulation and field experiments were conducted to evaluate the performance of the proposed architecture and were compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in the measurement noise covariance estimation and the proposed architecture can improve the accuracy and reliability of the INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes. PMID:28872629

  13. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  14. AFCI-2.0 Library of Neutron Cross Section Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Oblozinsky, P.

    2011-06-26

    A neutron cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to the users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for {sup 238,240}Pu and {sup 241}Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.

  15. Alterations in Anatomical Covariance in the Prematurely Born

    PubMed Central

    Scheinost, Dustin; Kwon, Soo Hyun; Lacadie, Cheryl; Vohr, Betty R.; Schneider, Karen C.; Papademetris, Xenophon; Constable, R. Todd; Ment, Laura R.

    2017-01-01

    Preterm (PT) birth results in long-term alterations in functional and structural connectivity, but the related changes in anatomical covariance are just beginning to be explored. To test the hypothesis that PT birth alters patterns of anatomical covariance, we investigated brain volumes of 25 PTs and 22 terms at young adulthood using magnetic resonance imaging. Using regional volumetrics, seed-based analyses, and whole brain graphs, we show that PT birth is associated with reduced volume in bilateral temporal and inferior frontal lobes, left caudate, left fusiform, and posterior cingulate for prematurely born subjects at young adulthood. Seed-based analyses demonstrate altered patterns of anatomical covariance for PTs compared with terms. PTs exhibit reduced covariance with R Brodmann area (BA) 47, Broca's area, and L BA 21, Wernicke's area, and white matter volume in the left prefrontal lobe, but increased covariance with R BA 47 and left cerebellum. Graph theory analyses demonstrate that measures of network complexity are significantly less robust in PTs compared with term controls. Volumes in regions showing group differences are significantly correlated with phonological awareness, the fundamental basis for reading acquisition, for the PTs. These data suggest both long-lasting and clinically significant alterations in the covariance in the PTs at young adulthood. PMID:26494796

  16. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. The patterns of genomic variances and covariances across genome for milk production traits between Chinese and Nordic Holstein populations.

    PubMed

    Li, Xiujin; Lund, Mogens Sandø; Janss, Luc; Wang, Chonglong; Ding, Xiangdong; Zhang, Qin; Su, Guosheng

    2017-03-15

    With the development of SNP chips, SNP information provides an efficient approach to further disentangle different patterns of genomic variances and covariances across the genome for traits of interest. Due to the interaction between genotype and environment as well as possible differences in genetic background, it is reasonable to treat the performances of a biological trait in different populations as different but genetically correlated traits. In the present study, we investigated the patterns of region-specific genomic variances, covariances and correlations between Chinese and Nordic Holstein populations for three milk production traits. Variances and covariances between Chinese and Nordic Holstein populations were estimated for genomic regions at three levels of granularity (all SNPs as one region, each chromosome as one region, and every 100 SNPs as one region) using a novel multi-trait random regression model which uses latent variables to model heterogeneous variance and covariance. In the scenario of the whole genome as one region, the genomic variances, covariances and correlations obtained from the new multi-trait Bayesian method were comparable to those obtained from a multi-trait GBLUP for all three milk production traits. In the scenario of each chromosome as one region, BTA 14 and BTA 5 accounted for very large genomic variance, covariance and correlation for milk yield and fat yield, whereas no specific chromosome showed very large genomic variance, covariance and correlation for protein yield. In the scenario of every 100 SNPs as one region, most regions explained <0.50% of genomic variance and covariance for milk yield and fat yield, and explained <0.30% for protein yield, while some regions could present large variance and covariance. Although overall correlations between two populations for the three traits were positive and high, a few regions still showed weakly positive or highly negative genomic correlations for

  18. Algebra of constraints for a string in curved background

    NASA Astrophysics Data System (ADS)

    Wess, Julius

    1990-06-01

    A string field theory with curved background develops anomalies and Schwinger terms in the conformal algebra. It is generally believed that these Schwinger terms and anomalies are expressible in terms of the curvature tensor of the background metric [1] and that, therefore, they are covariant under a change of coordinates in the target space. As far as I know, all the relevant computations have been done in special gauges, i.e. in Riemann normal coordinates. The question remains whether this is true in any gauge. We have tried to investigate this problem in a Hamiltonian formulation of the model. A classical Lagrangian serves to define the canonical variables and the classical constraints. They are expressed in terms of the canonical variables and, classically, they are first class. When quantized, an ordering prescription has to be imposed, which leads to anomalies and Schwinger terms. We then try to redefine the constraints in such a way that the Schwinger terms depend on the curvature tensor only. The redefinition of the constraints is limited by the requirement that it should be local and that the energy momentum tensor should be conserved. In our approach, it is natural that the constraints are improved, order by order, in the number of derivatives: we find that, up to third order in the derivatives, Schwinger terms and anomalies are expressible in terms of the curvature tensor. In the fourth order of the derivatives, however, we find a contribution to the Schwinger terms that cannot be removed by a redefinition and that cannot be cast in a covariant form. The anomaly, on the other hand, is fully expressible in terms of the curvature scalar. The energy momentum tensor ceases to be symmetric, which indicates a Lorentz anomaly as well. The question remains whether the Schwinger terms take a covariant form if we allow Einstein anomalies as well [2].

  19. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  20. Bayes Factor Covariance Testing in Item Response Models.

    PubMed

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced by integrating out the latent variable or the random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., the assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of the common covariance components is obtained in closed form by transforming the latent responses with an orthogonal (Helmert) matrix. This posterior distribution is a shifted inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on this, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
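
    The compound symmetry structure and its Helmert diagonalization can be sketched numerically. The sketch below assumes a simple parameterization (variance sigma2 plus a common covariance tau, not necessarily the paper's exact one) and shows that an orthonormal Helmert transform reduces such a covariance matrix to independent components:

```python
import numpy as np

def compound_symmetry(p, sigma2, tau):
    # Sigma = sigma2 * I + tau * J: equal variances, equal covariance tau
    # between any two components (compound symmetry).
    return sigma2 * np.eye(p) + tau * np.ones((p, p))

def helmert(p):
    # Orthonormal Helmert matrix: first row is the normalized mean contrast,
    # remaining rows are successive difference contrasts.
    H = np.zeros((p, p))
    H[0] = 1.0 / np.sqrt(p)
    for i in range(1, p):
        H[i, :i] = 1.0 / np.sqrt(i * (i + 1))
        H[i, i] = -i / np.sqrt(i * (i + 1))
    return H

p, sigma2, tau = 5, 1.0, 0.3
S = compound_symmetry(p, sigma2, tau)
H = helmert(p)
D = H @ S @ H.T  # diagonal: one eigenvalue sigma2 + p*tau, the rest sigma2
```

    Because the transformed components are independent, their variance parameters have tractable (inverse-gamma-type) posteriors, which is the mechanism the abstract alludes to.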

  1. Alterations in Anatomical Covariance in the Prematurely Born.

    PubMed

    Scheinost, Dustin; Kwon, Soo Hyun; Lacadie, Cheryl; Vohr, Betty R; Schneider, Karen C; Papademetris, Xenophon; Constable, R Todd; Ment, Laura R

    2017-01-01

    Preterm (PT) birth results in long-term alterations in functional and structural connectivity, but the related changes in anatomical covariance are just beginning to be explored. To test the hypothesis that PT birth alters patterns of anatomical covariance, we investigated brain volumes of 25 PTs and 22 terms at young adulthood using magnetic resonance imaging. Using regional volumetrics, seed-based analyses, and whole brain graphs, we show that PT birth is associated with reduced volume in bilateral temporal and inferior frontal lobes, left caudate, left fusiform, and posterior cingulate for prematurely born subjects at young adulthood. Seed-based analyses demonstrate altered patterns of anatomical covariance for PTs compared with terms. PTs exhibit reduced covariance with R Brodmann area (BA) 47, Broca's area, and L BA 21, Wernicke's area, and white matter volume in the left prefrontal lobe, but increased covariance with R BA 47 and left cerebellum. Graph theory analyses demonstrate that measures of network complexity are significantly less robust in PTs compared with term controls. Volumes in regions showing group differences are significantly correlated with phonological awareness, the fundamental basis for reading acquisition, for the PTs. These data suggest both long-lasting and clinically significant alterations in the covariance in the PTs at young adulthood. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³, making an otherwise intractable analysis feasible.

  3. Hawking radiation, covariant boundary conditions, and vacuum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Rabin; Kulkarni, Shailesh

    2009-04-15

    The basic characteristics of the covariant chiral current and the covariant chiral energy-momentum tensor are obtained from a chiral effective action. These results are used to justify the covariant boundary condition used in recent approaches of computing the Hawking flux from chiral gauge and gravitational anomalies. We also discuss a connection of our results with the conventional calculation of nonchiral currents and stress tensors in different (Unruh, Hartle-Hawking and Boulware) states.

  4. Eddy covariance flux measurements of gaseous elemental mercury using cavity ring-down spectroscopy.

    PubMed

    Pierce, Ashley M; Moore, Christopher W; Wohlfahrt, Georg; Hörtnagl, Lukas; Kljun, Natascha; Obrist, Daniel

    2015-02-03

    A newly developed pulsed cavity ring-down spectroscopy (CRDS) system for measuring atmospheric gaseous elemental mercury (GEM) concentrations at high temporal resolution (25 Hz) was used to conduct the first successful eddy covariance (EC) flux measurements of GEM. GEM is the main gaseous atmospheric form of mercury, and quantification of its bidirectional exchange between the Earth's surface and the atmosphere is important because this exchange is significant on a global scale. For example, surface GEM emissions from natural sources, legacy emissions, and re-emission of previously deposited anthropogenic pollution may exceed direct primary anthropogenic emissions. Using the EC technique for flux measurements requires subsecond measurements, which so far had not been feasible because of the slow time response of available instrumentation. The CRDS system measured GEM fluxes, which were compared to fluxes measured with the modified Bowen ratio (MBR) method and a dynamic flux chamber (DFC). Measurements took place near Reno, NV, in September and October 2012, encompassing natural, low-mercury (Hg) background soils and Hg-enriched soils. During nine days of measurements with deployment of Hg-enriched soil in boxes within 60 m upwind of the EC tower, the covariance of GEM concentration and vertical wind speed was measured, showing that EC fluxes over an Hg-enriched area were detectable. During three separate days of flux measurements over background soils (without Hg-enriched soils), no covariance was detected, indicating fluxes below the detection limit. When fluxes were measurable, they strongly correlated with wind direction; the highest fluxes occurred when winds originated from the Hg-enriched area. Comparisons among the three methods showed good agreement in direction (e.g., emission or deposition) and magnitude, especially when measured fluxes originated within the Hg-enriched soil area.
EC fluxes averaged 849 ng m⁻² h⁻¹, compared to DFC fluxes of 1105 ng m⁻² h⁻¹ and MBR fluxes
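
    The core of the EC technique is simple: the turbulent flux is the covariance between fluctuations of vertical wind speed and gas concentration over an averaging period. A minimal sketch on synthetic 25 Hz data (all numbers illustrative, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 25 Hz series over a 30-minute averaging period (assumed setup).
n = 25 * 60 * 30
w = rng.normal(0.0, 0.4, n)                    # vertical wind speed (m/s)
c = 1.5 + 0.05 * w + rng.normal(0.0, 0.1, n)   # GEM concentration, correlated with w

def ec_flux(w, c):
    # Eddy covariance flux: mean product of the fluctuations about the
    # period means (the covariance of w and c).
    wp = w - w.mean()
    cp = c - c.mean()
    return np.mean(wp * cp)

F = ec_flux(w, c)   # positive F indicates upward transport (emission)
```

    The requirement for subsecond sampling that the abstract mentions comes from the fact that the flux-carrying eddies fluctuate faster than 1 Hz, so slow sensors cannot resolve the covariance.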

  5. Continuous Covariate Imbalance and Conditional Power for Clinical Trial Interim Analyses

    PubMed Central

    Ciolino, Jody D.; Martin, Renee' H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    Oftentimes valid statistical analyses for clinical trials involve adjustment for known influential covariates, regardless of imbalance observed in these covariates at baseline across treatment groups. Thus, it must be the case that valid interim analyses also properly adjust for these covariates. There are situations, however, in which covariate adjustment is not possible, not planned, or simply carries less merit as it makes inferences less generalizable and less intuitive. In this case, covariate imbalance between treatment groups can have a substantial effect on both interim and final primary outcome analyses. This paper illustrates the effect of influential continuous baseline covariate imbalance on unadjusted conditional power (CP), and thus, on trial decisions based on futility stopping bounds. The robustness of the relationship is illustrated for normal, skewed, and bimodal continuous baseline covariates that are related to a normally distributed primary outcome. Results suggest that unadjusted CP calculations in the presence of influential covariate imbalance require careful interpretation and evaluation. PMID:24607294

  6. Remediating Non-Positive Definite State Covariances for Collision Probability Estimation

    NASA Technical Reports Server (NTRS)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis team estimates the probability of collision (Pc) for a set of Earth-orbiting satellites. The Pc estimation software processes satellite position+velocity states and their associated covariance matrices. On occasion, the software encounters non-positive definite (NPD) state covariances, which can adversely affect or prevent the Pc estimation process. Interpolation inaccuracies appear to account for the majority of such covariances, although other mechanisms contribute also. This paper investigates the origin of NPD state covariance matrices, three different methods for remediating these covariances when and if necessary, and the associated effects on the Pc estimation process.
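
    One common remediation of this kind, eigenvalue clipping, is sketched below. This is a standard technique and not necessarily one of the three methods the paper compares; the 2x2 input matrix is a hypothetical interpolation artifact:

```python
import numpy as np

def remediate_npd(C, floor=0.0):
    # Symmetrize, clip negative eigenvalues to a non-negative floor,
    # and rebuild the matrix from the repaired spectrum.
    C = 0.5 * (C + C.T)
    vals, vecs = np.linalg.eigh(C)
    vals = np.maximum(vals, floor)
    return vecs @ np.diag(vals) @ vecs.T

# A "covariance" with a negative eigenvalue (eigenvalues are 3 and -1).
C = np.array([[1.0, 2.0],
              [2.0, 1.0]])
C_fixed = remediate_npd(C, floor=1e-12)
```

    A drawback worth noting: clipping changes the matrix, so downstream Pc values computed from the repaired covariance differ from those implied by the raw data.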

  7. Frame covariant nonminimal multifield inflation

    NASA Astrophysics Data System (ADS)

    Karamitsos, Sotirios; Pilaftsis, Apostolos

    2018-02-01

    We introduce a frame-covariant formalism for inflation of scalar-curvature theories by adopting a differential geometric approach which treats the scalar fields as coordinates living on a field-space manifold. This ensures that our description of inflation is both conformally and reparameterization covariant. Our formulation gives rise to extensions of the usual Hubble and potential slow-roll parameters to generalized fully frame-covariant forms, which allow us to provide manifestly frame-invariant predictions for cosmological observables, such as the tensor-to-scalar ratio r, the spectral indices nR and nT, their runnings αR and αT, the non-Gaussianity parameter fNL, and the isocurvature fraction βiso. We examine the role of the field space curvature in the generation and transfer of isocurvature modes, and we investigate the effect of boundary conditions for the scalar fields at the end of inflation on the observable inflationary quantities. We explore the stability of the trajectories with respect to the boundary conditions by using a suitable sensitivity parameter. To illustrate our approach, we first analyze a simple minimal two-field scenario before studying a more realistic nonminimal model inspired by Higgs inflation. We find that isocurvature effects are greatly enhanced in the latter scenario and must be taken into account for certain values in the parameter space such that the model is properly normalized to the observed scalar power spectrum PR. Finally, we outline how our frame-covariant approach may be extended beyond the tree-level approximation through the Vilkovisky-De Witt formalism, which we generalize to take into account conformal transformations, thereby leading to a fully frame-invariant effective action at the one-loop level.

  8. DISSCO: direct imputation of summary statistics allowing covariates.

    PubMed

    Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun

    2015-08-01

    Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD), that assume a multivariate Gaussian distribution for the association summary statistics, have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We continue to prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). We consider two real-life scenarios where the correlation and partial correlation likely make practical difference: (i) association studies in admixed populations; (ii) association studies in presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9-15.2% for variants with minor allele frequency <5%. © The Author 2015. Published by Oxford University Press. All rights reserved. 
For Permissions, please e-mail: journals.permissions@oup.com.
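
    The key analytical result, that correlations among summary statistics become partial correlations of the genotypes given the covariates, can be illustrated with residual-based partial correlation. The data and variable names below are synthetic and hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)               # confounding covariate (e.g., an ancestry PC)
g1 = 0.8 * z + rng.normal(size=n)    # two genotypes correlated only through z
g2 = 0.8 * z + rng.normal(size=n)

def partial_corr(x, y, covs):
    # Partial correlation of x and y controlling for covariates:
    # correlate the residuals after projecting out [1, covs].
    X = np.column_stack([np.ones_like(x), covs])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

r_raw = np.corrcoef(g1, g2)[0, 1]    # inflated by the shared covariate
r_partial = partial_corr(g1, g2, z)  # near zero once z is controlled for
```

    Using r_raw where r_partial is the correct quantity is exactly the mismatch that DISSCO is designed to avoid.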

  9. A three domain covariance framework for EEG/MEG data.

    PubMed

    Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C

    2015-10-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
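
    A Kronecker-structured covariance of this kind can be sketched as follows (illustrative factor sizes, not the paper's data). The inverse and log-determinant factorize over the space, time, and trial components, which is what makes maximum likelihood estimation tractable:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(k):
    # Random symmetric positive definite factor.
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)

S, T, R = random_spd(3), random_spd(4), random_spd(2)  # space, time, trials

# Full covariance of the vectorized trials-by-time-by-space data.
C = np.kron(R, np.kron(T, S))

# The inverse is the Kronecker product of the factor inverses.
Cinv = np.kron(np.linalg.inv(R), np.kron(np.linalg.inv(T), np.linalg.inv(S)))

# The log-determinant is each factor's log-det scaled by the product of
# the other factors' dimensions: 12*logdet(R) + 6*logdet(T) + 8*logdet(S).
_, ld = np.linalg.slogdet(C)
```

    Only the small factors (here 3x3, 4x4, 2x2) ever need to be inverted, rather than the full 24x24 matrix, and the same saving applies at realistic EEG/MEG dimensions.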

  10. Multiple feature fusion via covariance matrix for visual tracking

    NASA Astrophysics Data System (ADS)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

    To address the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. In the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse the color, edge, and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence speed and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the fast covariance intersection algorithm are used to improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm can not only achieve fast and robust tracking but also effectively handle interference from occlusion, rotation, deformation, motion blur, and so on.
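
    The region covariance descriptor itself is compact: a feature vector is built per pixel and the covariance is taken over the region. A minimal sketch with an illustrative feature set (coordinates, intensity, gradient magnitudes; the paper's actual feature set differs):

```python
import numpy as np

def region_covariance(patch):
    # Region covariance descriptor: covariance of per-pixel feature vectors.
    # Features here: x, y, intensity, |dI/dy|, |dI/dx| (illustrative choice).
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                  np.abs(gy).ravel(), np.abs(gx).ravel()])
    return np.cov(F)  # 5x5, regardless of the patch size

rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(32, 40))
C = region_covariance(patch)
```

    The descriptor's size depends only on the number of features, not the region size, which is why matching against many candidate regions stays cheap.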

  11. Covariance Matrix Evaluations for Independent Mass Fission Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yields variance-covariance matrix will be presented and discussed on physical grounds for the cases of the {sup 235}U(n{sub th}, f) and {sup 239}Pu(n{sub th}, f) reactions.

  12. Prediction of lethal/effective concentration/dose in the presence of multiple auxiliary covariates and components of variance

    USGS Publications Warehouse

    Gutreuter, S.; Boogaard, M.A.

    2007-01-01

    Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods lag behind for such cases. A conventional two-stage approach consists of multiple bivariate predictions of, say, median lethal concentration followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study regarding the effects of pH and additions of variable quantities of 2′,5′-dichloro-4′-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions, and root-mean-squared errors (RMSEs) of prediction for the exposure required to kill 50 and 99.9% of some population were 29 to 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. © 2007 SETAC.

  13. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over the traditional approaches using simulations and OMICS data analysis.
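
    The flavor of regularized covariance estimation can be sketched with a simple fixed-weight shrinkage toward a diagonal target. This illustrates the general idea of stabilizing the sample covariance when samples are few and variables many; it is not the paper's Bayesian hierarchical estimator:

```python
import numpy as np

def shrink_covariance(X, alpha=0.2):
    # Shrink the sample covariance toward a diagonal target.
    # alpha is a fixed, hand-chosen weight in this sketch; principled
    # methods estimate the weight (or a full prior) from the data.
    S = np.cov(X, rowvar=False)
    target = np.diag(np.diag(S))
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 200))   # few samples, many variables: overfitting regime
S_shrunk = shrink_covariance(X, alpha=0.5)
```

    With 50 samples and 200 variables the raw sample covariance is singular, while the shrunken estimate is positive definite and therefore invertible by downstream methods.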

  14. Perturbative approach to covariance matrix of the matter power spectrum

    NASA Astrophysics Data System (ADS)

    Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir

    2017-04-01

    We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (supersample variance) and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ∼ 1 h Mpc⁻¹. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc⁻¹), regardless of the value of the wave vectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k, it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.

  15. Cox model with interval-censored covariate in cohort studies.

    PubMed

    Ahn, Soohyun; Lim, Johan; Paik, Myunghee Cho; Sacco, Ralph L; Elkind, Mitchell S

    2018-05-18

    In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between visits at which the status has changed, and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is partial since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and the existing missing-covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that remain valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing

    PubMed Central

    2018-01-01

    Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that remain applicable in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In particular, the determination of the covariance matrix is considered, which is required for stochastic modeling. The main contribution is a new error model of straight lines in closed form for calculating the covariance matrix quickly and reliably, dependent on just a few comprehensible and easily obtainable parameters. The model can be applied widely whenever a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches is shown. PMID:29673205
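A minimal sketch of the kind of line model discussed above, not the paper's closed-form result: a total-least-squares fit in Hessian normal form x·cos(phi) + y·sin(phi) = r, with a first-order covariance of (phi, r) under the simplifying assumption of isotropic point noise with known standard deviation sigma.

```python
import numpy as np

# Total-least-squares line fit (Hessian normal form) with a first-order
# parameter covariance. Assumes isotropic, independent point noise `sigma`;
# the paper's model handles full per-point covariances, which this does not.
def fit_line_with_covariance(x, y, sigma):
    xm, ym = x.mean(), y.mean()
    dx, dy = x - xm, y - ym
    phi = 0.5 * np.arctan2(-2.0 * np.sum(dx * dy), np.sum(dy**2 - dx**2))
    r = xm * np.cos(phi) + ym * np.sin(phi)
    if r < 0:                                   # canonical form: r >= 0
        r = -r
        phi = phi + np.pi if phi < 0 else phi - np.pi
    t = -dx * np.sin(phi) + dy * np.cos(phi)    # point positions along the line
    var_phi = sigma**2 / np.sum(t**2)
    c = -xm * np.sin(phi) + ym * np.cos(phi)    # centroid position along the line
    cov = np.array([[var_phi, c * var_phi],
                    [c * var_phi, sigma**2 / x.size + c**2 * var_phi]])
    return phi, r, cov

# Noise-free horizontal line y = 2: the fit should recover phi = pi/2, r = 2.
x = np.linspace(0.0, 10.0, 20)
phi, r, cov = fit_line_with_covariance(x, np.full_like(x, 2.0), sigma=0.01)
```

The off-diagonal term reflects the coupling between angle and distance when the centroid is offset along the line, one of the effects a closed-form error model has to capture.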

  17. Interspecific analysis of covariance structure in the masticatory apparatus of galagos.

    PubMed

    Vinyard, Christopher J

    2007-01-01

    The primate masticatory apparatus (MA) is a functionally integrated set of features, each of which performs important functions in biting, ingestive, and chewing behaviors. A comparison of morphological covariance structure among species for these MA features will help us to further understand the evolutionary history of this region. In this exploratory analysis, the covariance structure of the MA is compared across seven galago species to investigate 1) whether there are differences in covariance structure in this region, and 2) if so, how this covariation has changed with respect to size, MA form, diet, and/or phylogeny. Ten measurements of the MA functionally related to bite force production and load resistance were obtained from 218 adults of seven galago species. Correlation matrices were generated for these 10 dimensions and compared among species via matrix correlations and Mantel tests. Subsequently, pairwise covariance disparity in the MA was estimated as a measure of difference in covariance structure between species. Covariance disparity estimates were correlated with pairwise distances related to differences in body size, MA size and shape, genetic distance (based on cytochrome-b sequences) and percentage of dietary foods to determine whether one or more of these factors is linked to differences in covariance structure. Galagos differ in MA covariance structure. Body size appears to be a major factor correlated with differences in covariance structure among galagos. The largest galago species, Otolemur crassicaudatus, exhibits large differences in body mass and covariance structure relative to other galagos, and thus plays a primary role in creating this association. MA size and shape do not correlate with covariance structure when body mass is held constant. Diet also shows no association. Genetic distance is significantly negatively correlated with covariance disparity when body mass is held constant, but this correlation appears to be a function of
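A Mantel test of the kind used above compares two distance (or dissimilarity) matrices by correlating their off-diagonal entries and assessing significance by permuting the row/column labels of one matrix. This is an illustrative sketch with synthetic matrices, not galago data:

```python
import numpy as np

# Simple permutation Mantel test: correlation between the upper triangles of
# two square dissimilarity matrices, with a permutation-based p-value.
def mantel_test(A, B, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)            # off-diagonal entries
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(A.shape[0])          # relabel rows/columns of A
        count += np.corrcoef(A[p][:, p][iu], B[iu])[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)     # one-sided p-value

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))                      # 8 hypothetical "species"
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
r, p = mantel_test(D, 2.0 * D + 0.1)             # affine copy: maximal agreement
```

Permuting labels rather than entries preserves the non-independence of distances sharing a row or column, which is why the Mantel test is preferred over a naive correlation test for matrix data.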

  18. Covariant fields on anti-de Sitter spacetimes

    NASA Astrophysics Data System (ADS)

    Cotăescu, Ion I.

    2018-02-01

    The covariant free fields of any spin on anti-de Sitter (AdS) spacetimes are studied, pointing out that these transform under isometries according to covariant representations (CRs) of the AdS isometry group, induced by those of the Lorentz group. Applying the method of ladder operators, it is shown that the CRs with unique spin are equivalent to discrete unitary irreducible representations (UIRs) of positive energy of the universal covering group of the isometry group. The action of the Casimir operators is studied, finding how the weights of these representations may depend on the mass and spin of the covariant field. The conclusion is that on AdS spacetime one cannot formulate a universal mass condition as in special relativity.

  19. Tonic and phasic co-variation of peripheral arousal indices in infants

    PubMed Central

    Wass, S.V.; de Barbaro, K.; Clackson, K.

    2015-01-01

    Tonic and phasic differences in peripheral autonomic nervous system (ANS) indicators strongly predict differences in attention and emotion regulation in developmental populations. However, virtually all previous research has been based on individual ANS measures, which poses a variety of conceptual and methodological challenges to comparing results across studies. Here we recorded heart rate, electrodermal activity (EDA), pupil size, head movement velocity and peripheral accelerometry concurrently while a cohort of 37 typical 12-month-old infants completed a mixed assessment battery lasting approximately 20 min per participant. We analysed covariation of these autonomic indices in three ways: first, tonic (baseline) arousal; second, co-variation in spontaneous (phasic) changes during testing; third, phasic co-variation relative to an external stimulus event. We found that heart rate, head velocity and peripheral accelerometry showed strong positive co-variation across all three analyses. EDA showed no co-variation in tonic activity levels but did show phasic positive co-variation with other measures, which appeared limited to sections of high but not low general arousal. Tonic pupil size showed significant positive covariation, but phasic pupil changes were inconsistent. We conclude: (i) there is high covariation between autonomic indices in infants, but EDA may only be sensitive at extreme arousal levels; (ii) tonic pupil size covaries with other indices, but does not show the predicted patterns of phasic change; and (iii) motor activity appears to be a good proxy measure of ANS activity. The strongest patterns of covariation were observed using epoch durations of 40 s, although significant covariation between indices was also observed using shorter epochs (1 and 5 s). PMID:26316360

  20. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    PubMed

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups, or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results, but using models with a correct but more complicated than necessary covariance matrix incurred little cost.

  1. Perturbative approach to covariance matrix of the matter power spectrum

    DOE PAGES

    Mohammed, Irshad; Seljak, Uros; Vlah, Zvonimir

    2016-12-14

    Here, we evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10% level up to k ~ 1 h Mpc^-1. We also show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc^-1), regardless of the value of the wavevectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k it is dominated by a single eigenmode. Furthermore, the full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.

  2. A fully covariant information-theoretic ultraviolet cutoff for scalar fields in expanding Friedmann Robertson Walker spacetimes

    NASA Astrophysics Data System (ADS)

    Kempf, A.; Chatwin-Davies, A.; Martin, R. T. W.

    2013-02-01

    While a natural ultraviolet cutoff, presumably at the Planck length, is widely assumed to exist in nature, it is nontrivial to implement a minimum length scale covariantly. This is because the presence of a fixed minimum length needs to be reconciled with the ability of Lorentz transformations to contract lengths. In this paper, we implement a fully covariant Planck scale cutoff by cutting off the spectrum of the d'Alembertian. In this scenario, consistent with Lorentz contractions, wavelengths that are arbitrarily smaller than the Planck length continue to exist. However, the dynamics of modes of wavelengths that are significantly smaller than the Planck length possess a very small bandwidth. This has the effect of freezing the dynamics of such modes. While both wavelengths and bandwidths are frame dependent, Lorentz contraction and time dilation conspire to make the freezing of modes of trans-Planckian wavelengths covariant. In particular, we show that this ultraviolet cutoff can be implemented covariantly also in curved spacetimes. We focus on Friedmann Robertson Walker spacetimes and their much-discussed trans-Planckian question: The physical wavelength of each comoving mode was smaller than the Planck scale at sufficiently early times. What was the mode's dynamics then? Here, we show that in the presence of the covariant UV cutoff, the dynamical bandwidth of a comoving mode is essentially zero up until its physical wavelength starts exceeding the Planck length. In particular, we show that under general assumptions, the number of dynamical degrees of freedom of each comoving mode all the way up to some arbitrary finite time is actually finite. Our results also open the way to calculating the impact of this natural UV cutoff on inflationary predictions for the cosmic microwave background.

  3. Covariance Function for Nearshore Wave Assimilation Systems

    DTIC Science & Technology

    2018-01-30

    covariance can be modeled by a parameterized Gaussian function, for nearshore wave assimilation applications, the covariance function depends primarily on...case of missing values at the compiled time series, the gaps were filled by weighted interpolation. The weights depend on the number of the...averaging, in order to create the continuous time series, filters out the dependency on the instantaneous meteorological and oceanographic conditions

  4. Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigeti, David Edward; Williams, Brian J.; Parsons, D. Kent

    2016-10-18

    Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
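The random-sampling step described above can be sketched in a few lines of numpy. This is a hedged illustration with synthetic numbers: eigenvalue clipping stands in for the report's constraint-repair algebra, and the mean vector and covariance below are hypothetical, not ENDF data.

```python
import numpy as np

# Clip tiny negative eigenvalues so the (symmetrized) covariance is positive
# semi-definite before sampling. This is a generic repair, not the report's
# constraint-preserving modification algorithm.
def nearest_psd(C):
    C = 0.5 * (C + C.T)
    w, V = np.linalg.eigh(C)
    return (V * np.clip(w, 0.0, None)) @ V.T

mean = np.array([1.0, 2.0, 3.0])              # hypothetical mean cross sections
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, -0.02],
              [0.00, -0.02, 0.16]])           # hypothetical covariance
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, nearest_psd(C), size=5000)
sample_cov = np.cov(samples, rowvar=False)    # should approach C for large n
```

A Latin-Hypercube variant would replace the plain normal draws with stratified quantiles pushed through the Cholesky factor of the covariance, retaining the same means and covariances.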

  5. Reconstruction of primordial tensor power spectra from B -mode polarization of the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao

    2018-06-01

    Given observations of the B-mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct power spectra of primordial tensor modes from the early Universe without assuming a functional form such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6 × 10^-4 and 5 × 10^-3 Mpc^-1, which correspond to the "reionization bump" at ℓ ≲ 6 and the "recombination bump" at ℓ ≈ 80 of the CMB B-mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes, including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences in these models. We make our calculation tool available online.

  6. Multilevel Models for Intensive Longitudinal Data with Heterogeneous Autoregressive Errors: The Effect of Misspecification and Correction with Cholesky Transformation

    PubMed Central

    Jahng, Seungmin; Wood, Phillip K.

    2017-01-01

    Intensive longitudinal studies, such as ecological momentary assessment studies using electronic diaries, are gaining popularity across many areas of psychology. Multilevel models (MLMs) are the most widely used analytical tools for intensive longitudinal data (ILD). Although ILD often have individually distinct patterns of serial correlation of measures over time, inferences about the fixed effects and random components in MLMs are made under the assumption that all variance and autocovariance components are homogeneous across individuals. In the present study, we introduce a multilevel model with Cholesky transformation to model ILD with individually heterogeneous covariance structure. In addition, the performance of the transformation method and the effects of misspecification of heterogeneous covariance structure are investigated through a Monte Carlo simulation. We found that, if individually heterogeneous covariances are incorrectly assumed to be homogeneous independent or homogeneous autoregressive, MLMs produce highly biased estimates of the variance of random intercepts and of the standard errors of the fixed intercept and the fixed effect of a level-2 covariate when the average autocorrelation is high. For intensive longitudinal data with individual-specific residual covariance, the suggested transformation method showed lower bias in those estimates than the misspecified models when the number of repeated observations within individuals is 50 or more. PMID:28286490
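The core of a Cholesky transformation can be sketched briefly (my toy setup, not the authors' exact model): if an individual's residuals have AR(1) covariance V = L L' (Cholesky factorization), premultiplying the series by inv(L) yields approximately uncorrelated, unit-variance residuals that a standard MLM can then handle.

```python
import numpy as np

# AR(1) covariance for n equally spaced observations with autocorrelation rho.
def ar1_covariance(n, rho, sigma2=1.0):
    t = np.arange(n)
    return sigma2 * rho ** np.abs(t[:, None] - t[None, :])

rng = np.random.default_rng(0)
V = ar1_covariance(25, rho=0.7)
L = np.linalg.cholesky(V)
y = L @ rng.normal(size=(25, 4000))           # 4000 AR(1)-correlated series
z = np.linalg.solve(L, y)                     # Cholesky-transformed residuals

lag1 = np.corrcoef(z[:-1].ravel(), z[1:].ravel())[0, 1]        # ~ 0 after
lag1_raw = np.corrcoef(y[:-1].ravel(), y[1:].ravel())[0, 1]    # ~ 0.7 before
```

In the heterogeneous case each individual gets their own V_i (and hence L_i), which is what distinguishes this from assuming one homogeneous autoregressive structure for everyone.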

  7. Eliciting Systematic Rule Use in Covariation Judgment [the Early Years].

    ERIC Educational Resources Information Center

    Shaklee, Harriet; Paszek, Donald

    Related research suggests that children may show some simple understanding of event covariations by the early elementary school years. The present experiments use a rule analysis methodology to investigate covariation judgments of children in this age range. In Experiment 1, children in second, third and fourth grade judged covariations on 12…

  8. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
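The classification step above can be sketched with synthetic feature vectors (not MRA data): compute Mahalanobis distances of a candidate sample to the artery and vein ROI means using a pooled sample covariance matrix, and assign the label with the smaller distance.

```python
import numpy as np

# Pooled sample covariance of two groups (standard two-sample pooling).
def pooled_covariance(X1, X2):
    n1, n2 = len(X1), len(X2)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    return ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)

# Mahalanobis distance of x to a class mean under covariance `cov`.
def mahalanobis(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(0)
artery = rng.normal([1.0, 0.0], 0.1, size=(200, 2))  # e.g. (arrival time, peak)
vein = rng.normal([0.0, 1.0], 0.1, size=(200, 2))    # hypothetical features
S = pooled_covariance(artery, vein)

x = np.array([0.9, 0.1])                             # arterial-looking sample
d_art = mahalanobis(x, artery.mean(axis=0), S)
d_vein = mahalanobis(x, vein.mean(axis=0), S)        # expect d_art < d_vein
```

Using the pooled covariance rather than per-class covariances makes the decision boundary linear, which is robust when the ROIs contain relatively few voxels.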

  9. Robust infrared targets tracking with covariance matrix representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian

    2009-07-01

    Robust infrared target tracking is an important and challenging research topic in many military and security applications, such as infrared imaging guidance, infrared reconnaissance, and scene surveillance. To effectively tackle the nonlinear and non-Gaussian state estimation problems, particle filtering is introduced to construct the theoretical framework of infrared target tracking. Under this framework, the observation probabilistic model is one of the main factors determining infrared target tracking performance. In order to improve the tracking performance, covariance matrices are introduced to represent infrared targets with multiple features. The observation probabilistic model can be constructed by computing the distance between the reference target's and the target samples' covariance matrices. Because the covariance matrix provides a natural tool for integrating multiple features, and is scale and illumination independent, target representation with covariance matrices offers strong discriminating ability and robustness. Two experimental results demonstrate that the proposed method is effective and robust for different infrared target tracking scenarios, such as sensor ego-motion and sea-clutter scenes.
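The abstract does not spell out which matrix distance is used; a common choice in the covariance-tracking literature (an assumption here) is the log-eigenvalue metric d(C1, C2) = sqrt(Σ_i ln² λ_i), with λ_i the generalized eigenvalues of (C2, C1). A sketch for small SPD feature covariances:

```python
import numpy as np

# Log-eigenvalue distance between two symmetric positive-definite matrices.
# Generalized eigenvalues of (C2, C1) are computed via a Cholesky whitening
# of C1, which keeps everything in symmetric eigenproblems.
def covariance_distance(C1, C2):
    L = np.linalg.cholesky(C1)
    Linv = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Linv @ C2 @ Linv.T)   # generalized eigenvalues
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

C_ref = np.eye(2)
C_scaled = 4.0 * np.eye(2)                  # same shape, larger spread
d_same = covariance_distance(C_ref, C_ref)              # 0
d_diff = covariance_distance(C_ref, C_scaled)           # sqrt(2) * ln 4
```

The metric is symmetric (eigenvalues invert, their squared logs do not change) and invariant to joint scaling and illumination-like affine changes of the features, which is the property the abstract appeals to.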

  10. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error, as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be better estimated when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.

  11. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on the local linear (LL) model are put forward. The LL model has been extensively researched and successfully applied to fitting and forecasting chaotic signals in many fields; here, its modeling capacity is substantially enlarged. Firstly, the short-term chaotic signal is predicted and the fitting error obtained from the LL model. The frequencies are then detected from the fitting error by periodogram; a property of the fitting error is proposed that has not been addressed before, and this property ensures that the detected frequencies are similar to those of the harmonic signal. Secondly, a two-layer LL model is established to estimate the deterministic harmonic signal in a strong chaotic background. To perform this estimation simply and effectively, an efficient backfitting algorithm is developed to select and optimize the parameters, which are hard to search for exhaustively. In the method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the developed backfitting algorithm converges within 3-5 repetitions.

  12. Model-based influences on humans’ choices and striatal prediction errors

    PubMed Central

    Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.

    2011-01-01

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563

  13. Model-based influences on humans' choices and striatal prediction errors.

    PubMed

    Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J

    2011-03-24

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Double gauge invariance and covariantly-constant vector fields in Weyl geometry

    NASA Astrophysics Data System (ADS)

    Kassandrov, Vladimir V.; Rizcallah, Joseph A.

    2014-08-01

    The wave equation and equations of covariantly-constant vector fields (CCVF) in spaces with Weyl nonmetricity turn out to possess, in addition to the canonical conformal-gauge, a gauge invariance of another type. On a Minkowski metric background, the CCVF system alone allows us to pin down the Weyl 4-metricity vector, identified herein with the electromagnetic potential. The fundamental solution is given by the ordinary Liénard-Wiechert field, in particular, by the Coulomb distribution for a charge at rest. Unlike the latter, however, the magnitude of charge is necessarily unity, "elementary", and charges of opposite signs correspond to retarded and advanced potentials respectively, thus establishing a direct connection between the particle/antiparticle asymmetry and the "arrow of time".

  15. Empirical Performance of Covariates in Education Observational Studies

    ERIC Educational Resources Information Center

    Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate

    2017-01-01

    This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…

  16. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  17. Advanced error diagnostics of the CMAQ and Chimere modelling systems within the AQMEII3 model evaluation framework

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano

    2017-09-01

    The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescales and fields to which the two models are most sensitive. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ˜ 1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone
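The bias/variance/covariance split referred to above is Theil's exact decomposition of the mean square error: MSE = (m̄ − ō)² + (σ_m − σ_o)² + 2σ_mσ_o(1 − r). A sketch with synthetic stand-in series (not AQMEII3 data):

```python
import numpy as np

# Theil's decomposition of the mean square error between a model series and
# an observed series; the three terms sum exactly to the MSE when population
# standard deviations (ddof=0) are used.
def mse_decomposition(model, obs):
    bias2 = (model.mean() - obs.mean()) ** 2
    sm, so = model.std(), obs.std()
    r = np.corrcoef(model, obs)[0, 1]
    return bias2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 12.0 * np.pi, 720)            # ~30 days of hourly values
obs = 40.0 + 10.0 * np.sin(t) + rng.normal(0.0, 2.0, t.size)
model = 0.8 * obs + 5.0 + rng.normal(0.0, 3.0, t.size)  # biased, damped "model"

mse = np.mean((model - obs) ** 2)
b2, var_term, cov_term = mse_decomposition(model, obs)   # b2+var+cov == mse
```

Applying the same decomposition to spectrally filtered series (slow, diurnal, fast components) is what lets the timescale of each error term be isolated.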

  18. Conditional Covariance Theory and Detect for Polytomous Items

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2007-01-01

    This paper extends the theory of conditional covariances to polytomous items. It has been proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, given an appropriately chosen composite is positive if, and only if, the two items…

  19. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    PubMed

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CL_pop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CL_pop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CL_pop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CL_pop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with median weight, a weight outside the observed range is used, the RSE of the CL_pop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CL_TV) at a relevant weight to evaluate the precision of CL predictions.
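A tiny numerical sketch of the point that predictions are invariant to the normalization weight (all values hypothetical, with an assumed allometric exponent of 0.75): re-normalizing moves the CL_pop estimate to a different reference weight, but the typical clearance at any given weight is unchanged.

```python
# Allometric weight model: CL = CL_pop * (WT / WT_norm) ** exponent.
# CL_pop is the typical clearance of a subject weighing exactly WT_norm.
def typical_cl(wt, cl_pop, wt_norm, exponent=0.75):
    return cl_pop * (wt / wt_norm) ** exponent

wt_median = 2.7                                # median neonatal weight, kg
cl_pop_70 = 5.0                                # hypothetical CL_pop at 70 kg
cl_at_median_70 = typical_cl(wt_median, cl_pop_70, wt_norm=70.0)

# Same model re-expressed with median-weight normalization: CL_pop now
# "lives" at 2.7 kg, and predictions at 2.7 kg are identical.
cl_pop_median = cl_at_median_70
cl_at_median_med = typical_cl(wt_median, cl_pop_median, wt_norm=wt_median)
```

What does change is the uncertainty of CL_pop itself: when the reference weight sits far outside the observed range (70 kg for neonates), the parameter describes an extrapolated quantity, inflating its RSE.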

  20. The Impact of a Patient Safety Program on Medical Error Reporting

    DTIC Science & Technology

    2005-05-01

    307 The Impact of a Patient Safety Program on Medical Error Reporting Donald R. Woolever Abstract Background: In response to the occurrence of...a sentinel event—a medical error with serious consequences—Eglin U.S. Air Force (USAF) Regional Hospital developed and implemented a patient safety...communication, teamwork, and reporting. Objective: To determine the impact of a patient safety program on patterns of medical error reporting. Methods: This

  1. Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model

    NASA Astrophysics Data System (ADS)

    Vira, J.; Sofiev, M.

    2014-08-01

    This paper describes assimilation of trace gas observations into the chemistry transport model SILAM using the 3D-Var method. Assimilation results for year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the Airbase observation database, which provides the observational dataset used in this study. Attention is paid to the background and observation error covariance matrices, which are obtained primarily by iterative application of a posteriori diagnostics. The diagnostics are computed separately for two months representing summer and winter conditions, and further disaggregated by time of day. This allows deriving background and observation error covariance definitions which include both seasonal and diurnal variation. The consistency of the obtained covariance matrices is verified using χ2 diagnostics. The analysis scores are computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values is improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
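
    The χ2 consistency diagnostic used above to verify the covariance matrices can be sketched on synthetic data; the state dimension, observation operator, and (diagonal) covariances below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 20, 10, 2000

# Assumed (diagonal) background and observation error covariances.
B = 0.5 * np.eye(n)
R = 0.2 * np.eye(m)
H = np.zeros((m, n))
H[np.arange(m), np.arange(m)] = 1.0   # observe the first m state components

S = H @ B @ H.T + R                   # innovation covariance implied by B and R
chi2 = []
for _ in range(trials):
    eb = rng.multivariate_normal(np.zeros(n), B)  # background error
    eo = rng.multivariate_normal(np.zeros(m), R)  # observation error
    d = eo - H @ eb                               # innovation y - H x_b
    chi2.append(d @ np.linalg.solve(S, d))

# Consistency: E[d^T S^-1 d] = m when B and R are correctly specified.
print(np.mean(chi2))
```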

  2. Space shuttle launch era spacecraft injection errors and DSN initial acquisition

    NASA Technical Reports Server (NTRS)

    Khatib, A. R.; Berman, A. L.; Wackley, J. A.

    1981-01-01

    The initial acquisition of a spacecraft by the Deep Space Network (DSN) is a critical mission event, because the health and trajectory of a spacecraft must be evaluated rapidly in case immediate corrective action is required. Further, the DSN initial acquisition is always complicated by the most extreme tracking rates of the mission. The DSN initial acquisition characteristics will change considerably in the upcoming space shuttle launch era. This paper addresses how given injection errors at spacecraft separation from the upper stage launch vehicle (carried into orbit by the space shuttle) impact the DSN initial acquisition, and how this information can be factored into injection accuracy requirements to be levied on the Space Transportation System (STS). The approach developed begins with the DSN initial acquisition parameters, generates a covariance matrix, and maps this covariance matrix backward to the spacecraft injection, thereby greatly simplifying the task of levying accuracy requirements on the STS by providing such requirements in a format both familiar and convenient to the STS.
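
    The backward mapping of a covariance matrix described above can be sketched with a toy two-state linear system; the state-transition matrix and the acquisition-accuracy numbers are hypothetical, whereas the paper's actual mapping comes from a full trajectory linearization:

```python
import numpy as np

# Hypothetical linearized mapping Phi from the injection state (position,
# velocity) to the state at DSN initial acquisition.
Phi = np.array([[1.0, 300.0],   # position picks up velocity error over ~300 s
                [0.0, 1.0]])

# Acceptable covariance at initial acquisition (illustrative units).
P_acq = np.diag([100.0**2, 0.5**2])

# Map the requirement backward to injection: P_inj = Phi^-1 P_acq Phi^-T.
Phi_inv = np.linalg.inv(Phi)
P_inj = Phi_inv @ P_acq @ Phi_inv.T

# Sanity check: mapping P_inj forward recovers the acquisition covariance.
P_fwd = Phi @ P_inj @ Phi.T
print(np.allclose(P_fwd, P_acq))
```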

  3. Phenotypic covariance at species' borders.

    PubMed

    Caley, M Julian; Cripps, Edward; Game, Edward T

    2013-05-28

    Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species' borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres, than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Neither the degree of morphological integration, nor ranks of P, indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species' borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future.

  4. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

    Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
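
    A sketch of how one common prior family for a covariance matrix, the inverse-Wishart, behaves for a bivariate random-effects model (the hyperparameters below are illustrative, not those examined in the simulation study):

```python
import numpy as np
from scipy.stats import invwishart

p = 2            # two endpoints in the multivariate meta-analysis
nu = 10          # prior degrees of freedom (illustrative)
Psi = np.eye(p)  # prior scale matrix (illustrative)

prior = invwishart(df=nu, scale=Psi)
draws = prior.rvs(size=5000, random_state=0)   # (5000, 2, 2) covariance draws

# Every draw is a valid covariance matrix (symmetric positive definite) ...
eigmin = np.linalg.eigvalsh(draws).min()
# ... and the Monte Carlo mean approaches Psi / (nu - p - 1).
mc_mean = draws.mean(axis=0)
print(eigmin > 0, mc_mean)
```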

  5. Solving the Integral of Quadratic Forms of Covariance Matrices for Applications in Polarimetric Radar Imagery

    NASA Astrophysics Data System (ADS)

    Marino, Armando; Hajnsek, Irena

    2015-04-01

    In this work, the solution of quadratic forms with special application to polarimetric and interferometric covariance matrices is investigated. An analytical solution for the integral of a single quadratic form is derived. Additionally, the integral of the Pol-InSAR coherence (expressed as a combination of quadratic forms) is investigated. An approximation for this integral is proposed and defined as the Trace coherence. The approximation is tested on real data to verify that the error is acceptable. The Trace coherence can be used to tackle problems related to change detection. Moreover, the use of the Trace coherence in model inversion (as in the RVoG three-stage inversion) will be investigated in the future.

  6. The Performance Analysis Based on SAR Sample Covariance Matrix

    PubMed Central

    Erten, Esra

    2012-01-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a form simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
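
    A minimal sketch of the quantity analyzed above: the maximum eigenvalue of a sample covariance matrix formed from a limited number of looks of zero-mean circular complex Gaussian data (the channel count, look count, and true covariance are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 25   # 3 channels, 25 looks (samples)

# Assumed true multi-channel covariance (Hermitian positive definite).
C = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 0.5]], dtype=complex)

# Zero-mean circular complex Gaussian samples with covariance C.
L = np.linalg.cholesky(C)
z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
x = L @ z

# Sample covariance; n * Sigma follows a complex Wishart distribution.
Sigma = x @ x.conj().T / n

# Maximum eigenvalue, used e.g. for target/change detection.
lam_max = np.linalg.eigvalsh(Sigma).max()
print(lam_max)
```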

  7. A consistent covariant quantization of the Brink-Schwarz superparticle

    NASA Astrophysics Data System (ADS)

    Eisenberg, Yeshayahu

    1992-02-01

    We perform the covariant quantization of the ten-dimensional Brink-Schwarz superparticle by reducing it to a system whose constraints are all first class, covariant and have only two levels of reducibility. Research supported by the Rothschild Fellowship.

  8. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.
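
    A simplified, fully conjugate special case of the setting above (known zero mean, a single inverse-Wishart prior, so no fixed-point iteration is needed) illustrates how a MAP covariance estimate follows from conjugacy; all dimensions and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 12

# Inverse-Wishart prior on the covariance: Sigma ~ IW(nu, Psi).
nu, Psi = p + 2, np.eye(p)

# Zero-mean Gaussian data with an assumed true covariance.
true_Sigma = np.diag([3.0, 2.0, 1.0, 0.5])
x = rng.multivariate_normal(np.zeros(p), true_Sigma, size=n)
S = x.T @ x   # scatter matrix

# Conjugacy: the posterior is IW(nu + n, Psi + S); its mode is the MAP estimate.
Sigma_map = (Psi + S) / (nu + n + p + 1)

# The MAP estimate is better conditioned than the plain sample covariance.
cond_map = np.linalg.cond(Sigma_map)
cond_mle = np.linalg.cond(S / n)
print(cond_map, cond_mle)
```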

  9. Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins

    NASA Astrophysics Data System (ADS)

    Tolwinski-Ward, S. E.; Wang, D.

    2015-12-01

    Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for answers to such questions as:

    - Does the relationship between different categories of TCs differ statistically by basin?
    - Which climatic predictors have significant relationships with TC activity in each basin?
    - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability?
    - How can a portfolio of insured property be optimized across space to minimize risk?

    Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.

  10. Using machine learning to assess covariate balance in matching studies.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    In order to assess the effectiveness of matching approaches in observational studies, investigators typically present summary statistics for each observed pre-intervention covariate, with the objective of showing that matching reduces the difference in means (or proportions) between groups to as close to zero as possible. In this paper, we introduce a new approach to distinguish between study groups based on their distributions of the covariates using a machine-learning algorithm called optimal discriminant analysis (ODA). Assessing covariate balance using ODA as compared with the conventional method has several key advantages: the ability to ascertain how individuals self-select based on optimal (maximum-accuracy) cut-points on the covariates; the application to any variable metric and number of groups; its insensitivity to skewed data or outliers; and the use of accuracy measures that can be widely applied to all analyses. Moreover, ODA accepts analytic weights, thereby extending the assessment of covariate balance to any study design where weights are used for covariate adjustment. By comparing the two approaches using empirical data, we are able to demonstrate that using measures of classification accuracy as balance diagnostics produces results highly consistent with those obtained via the conventional approach (in our matched-pairs example, ODA revealed a weak statistically significant relationship not detected by the conventional approach). Thus, investigators should consider ODA as a robust complement, or perhaps alternative, to the conventional approach for assessing covariate balance in matching studies. © 2016 John Wiley & Sons, Ltd.
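
    A one-covariate, two-group stand-in for the ODA idea, a maximum-accuracy cutpoint search, can be sketched as follows (this is not the ODA software itself, just the core notion that classification accuracy near chance indicates balance):

```python
import numpy as np

def max_accuracy_cutpoint(x, group):
    """Find the cutpoint on covariate x that best separates two groups,
    searching both directions (a one-variable sketch of the ODA idea)."""
    best_acc, best_cut = 0.0, None
    for cut in np.unique(x):
        for direction in (1, -1):
            pred = (direction * x > direction * cut).astype(int)
            acc = (pred == group).mean()
            if acc > best_acc:
                best_acc, best_cut = acc, cut
    return best_cut, best_acc

rng = np.random.default_rng(3)
# A well-balanced covariate: identical distributions in treated and control.
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 1, 200)])
g = np.array([0] * 200 + [1] * 200)
cut, acc = max_accuracy_cutpoint(x, g)
print(acc)   # near chance (0.5): the covariate cannot discriminate the groups
```

Near-chance accuracy is the diagnostic of balance; a well-separated cutpoint with high accuracy would flag imbalance.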

  11. Cox regression analysis with missing covariates via nonparametric multiple imputation.

    PubMed

    Hsu, Chiu-Hsieh; Yu, Mandi

    2018-01-01

    We consider the situation of estimating Cox regression in which some covariates are subject to missingness, and there exists additional information (including observed event time, censoring indicator and fully observed covariates) which may be predictive of the missing covariates. We propose to use two working regression models: one for predicting the missing covariates and the other for predicting the missing probabilities. For each missing covariate observation, these two working models are used to define a nearest neighbor imputing set. This set is then used to non-parametrically impute covariate values for the missing observation. Upon the completion of imputation, Cox regression is performed on the multiply imputed datasets to estimate the regression coefficients. In a simulation study, we compare the nonparametric multiple imputation approach with the augmented inverse probability weighted (AIPW) method, which directly incorporates the two working models into estimation of Cox regression, and the predictive mean matching imputation (PMM) method. We show that all approaches can reduce bias due to a non-ignorable missing mechanism. The proposed nonparametric imputation method is robust to misspecification of either one of the two working models and robust to misspecification of the link function of the two working models. In contrast, the PMM method is sensitive to misspecification of the covariates included in imputation. The AIPW method is sensitive to the selection probability. We apply the approaches to a breast cancer dataset from Surveillance, Epidemiology and End Results (SEER) Program.
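
    The nearest-neighbor imputing-set construction can be sketched as follows; the two working-model scores and the covariate model below are invented stand-ins for the fitted regression models in the paper:

```python
import numpy as np

def nn_imputing_set(scores_obs, score_miss, k=5):
    """Indices of the k observed cases closest to the missing case in
    working-model score space (here: 2 scores, Euclidean distance)."""
    d = np.linalg.norm(scores_obs - score_miss, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(4)
n = 100
# Two working-model scores per observed subject: predicted covariate value
# and predicted missingness probability (both illustrative).
scores_obs = rng.uniform(0, 1, size=(n, 2))
x_obs = 2.0 * scores_obs[:, 0] + rng.normal(0, 0.1, n)  # observed covariates

# Impute one missing covariate by sampling from its nearest-neighbor set.
score_miss = np.array([0.5, 0.5])
nbrs = nn_imputing_set(scores_obs, score_miss, k=5)
imputed = rng.choice(x_obs[nbrs])   # one draw; repeat per imputed dataset
print(sorted(nbrs), imputed)
```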

  12. A mesoscale hybrid data assimilation system based on the JMA nonhydrostatic model

    NASA Astrophysics Data System (ADS)

    Ito, K.; Kunii, M.; Kawabata, T. T.; Saito, K. K.; Duc, L. L.

    2015-12-01

    This work evaluates the potential of a hybrid ensemble Kalman filter and four-dimensional variational (4D-Var) data assimilation system for predicting severe weather events from a deterministic point of view. This hybrid system is an adjoint-based 4D-Var system using a background error covariance matrix constructed from a mixture of the so-called NMC method and perturbations in a local ensemble transform Kalman filter data assimilation system, both of which are based on the Japan Meteorological Agency nonhydrostatic model. To construct the background error covariance matrix, we investigated two types of schemes: one is a spatial localization scheme and the other is a neighboring ensemble approach, which regards the result at a horizontally shifted point in each ensemble member as that obtained from a different realization of the ensemble simulation. Assimilation of a pseudo single observation located to the north of a tropical cyclone (TC) yielded an analysis increment of wind and temperature physically consistent with what is expected for a mature TC in both hybrid systems, whereas the analysis increment in a 4D-Var system using a static background error covariance distorted the structure of the mature TC. Real data assimilation experiments applied to 4 TCs and 3 local heavy rainfall events showed that the hybrid systems and EnKF provided better initial conditions than the NMC-based 4D-Var, both for the intensity and track forecasts of TCs and for the location and amount of local heavy rainfall.

  13. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences in interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptations. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  14. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
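
    The linear error model fit to transfer resistance can be sketched on synthetic direct/reciprocal pairs (the error parameters and binning choices are illustrative; the paper's new model additionally groups errors by the electrodes used):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Synthetic transfer resistances spanning three decades, with a
# proportional-plus-constant reciprocal error (the classic linear model).
R = 10 ** rng.uniform(-1, 2, n)              # transfer resistance, ohms
a_true, b_true = 0.01, 0.03
e = rng.normal(0.0, a_true + b_true * R)     # direct-minus-reciprocal error
R_fwd, R_rev = R + e / 2, R - e / 2

# Bin by mean resistance; fit mean |error| = a + b * |R| to the bin statistics.
Rm, err = (R_fwd + R_rev) / 2, np.abs(R_fwd - R_rev)
edges = np.quantile(Rm, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(Rm, edges) - 1, 0, 9)
centers = np.array([Rm[idx == i].mean() for i in range(10)])
mean_err = np.array([err[idx == i].mean() for i in range(10)])
b_hat, a_hat = np.polyfit(centers, mean_err, 1)
print(a_hat, b_hat)   # the proportional term b_hat should be clearly positive
```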

  15. Covariance and correlation estimation in electron-density maps.

    PubMed

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is to verify whether the electron density at one point of the map is amplified or depressed as an effect of the electron density at one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty in measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  16. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    PubMed

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as high as 1 in 5 times, when weight was the covariate used in the data generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.

  17. A study of perturbations in scalar-tensor theory using 1 + 3 covariant approach

    NASA Astrophysics Data System (ADS)

    Ntahompagaze, Joseph; Abebe, Amare; Mbonye, Manasse

    This work discusses scalar-tensor theories of gravity, with a focus on the Brans-Dicke sub-class, and one that also takes note of the latter’s equivalence with f(R) gravitation theories. A 1 + 3 covariant formalism is used in this case to discuss covariant perturbations on a background Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime. Linear perturbation equations are developed based on gauge-invariant gradient variables. Both scalar and harmonic decompositions are applied to obtain second-order equations. These equations can then be used for further analysis of the behavior of the perturbation quantities in such a scalar-tensor theory of gravitation. Energy density perturbations are studied for two systems, namely for a scalar fluid-radiation system and for a scalar fluid-dust system, for R^n models. For the matter-dominated era, it is shown that the dust energy density perturbations grow exponentially, a result which agrees with those already existing in the literature. In the radiation-dominated era, it is found that the behavior of the radiation energy-density perturbations is oscillatory, with growing amplitudes for n > 1, and with decaying amplitudes for 0 < n < 1. This is a new result.

  18. Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties

    PubMed Central

    Chi, Eric C.; Lange, Kenneth

    2014-01-01

    Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
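
    A plain linear-shrinkage sketch conveys the core behavior described above, shrinking the sample covariance toward a stable target to restore invertibility; this is a stand-in for, not an implementation of, the nuclear-norm-penalized MAP estimator:

```python
import numpy as np

def shrunk_covariance(S, target, alpha):
    """Linear shrinkage of the sample covariance S toward a stable target
    (a simple stand-in for the regularized MAP estimator)."""
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(6)
p, n = 10, 8                           # more covariates than samples
x = rng.standard_normal((n, p))
S = x.T @ x / n                        # rank-deficient: not invertible

target = np.trace(S) / p * np.eye(p)   # scaled-identity target
Sigma = shrunk_covariance(S, target, alpha=0.3)

# Shrinkage restores invertibility and improves conditioning.
print(np.linalg.eigvalsh(S).min(), np.linalg.eigvalsh(Sigma).min())
```

As the number of samples grows relative to p, alpha would be driven toward zero, so the estimate transitions back to the sample covariance, mirroring the consistency property described in the abstract.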

  19. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^-(d-1) error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  1. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions as well as their uncertainties in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
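
    A prior spatial covariance from the Matern class (here the nu = 3/2 member, with invented variance and length-scale parameters on a toy grid) can be sketched as:

```python
import numpy as np

def matern32(coords, sigma2=1.0, ell=10.0):
    """Matern (nu = 3/2) spatial covariance for prior flux residuals:
    C(d) = sigma2 * (1 + sqrt(3) d / ell) * exp(-sqrt(3) d / ell)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    s = np.sqrt(3.0) * d / ell
    return sigma2 * (1.0 + s) * np.exp(-s)

# Grid-cell centers of a toy 5 x 5 urban emission domain (km, illustrative).
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
coords = np.column_stack([xs.ravel(), ys.ravel()])

B = matern32(coords, sigma2=2.0, ell=3.0)   # prior covariance matrix
# Valid prior: symmetric, diagonal equal to sigma2, positive definite.
print(B.shape, np.linalg.eigvalsh(B).min())
```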

  2. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
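    The profiled-out stratum intercepts admit a closed form, so the likelihood can be maximized over the exposure coefficient alone without estimating any stratum parameters. The sketch below assumes a log-linear relative rate exp(beta * dose); the grid search, variable names and simulation settings are illustrative, not the authors' software:

```python
import numpy as np

def profile_loglik(beta, y, pt, dose, stratum):
    """Poisson log-likelihood for lambda_i = exp(alpha_s(i) + beta * dose_i), with
    each stratum intercept alpha_s profiled out in closed form. `stratum` must be
    integer labels 0..S-1; `pt` is person-time, `y` the case counts."""
    mu = pt * np.exp(beta * dose)                 # rate up to the stratum factor
    d_s = np.bincount(stratum, weights=y)         # cases per stratum
    m_s = np.bincount(stratum, weights=mu)        # expected (unscaled) per stratum
    return float(np.sum(y * np.log(mu)) - np.sum(d_s * np.log(m_s)))

def fit_beta(y, pt, dose, stratum, lo=-2.0, hi=2.0, n=4001):
    """Maximize the profile likelihood over a grid; no stratum coefficients appear."""
    grid = np.linspace(lo, hi, n)
    lls = [profile_loglik(b, y, pt, dose, stratum) for b in grid]
    return grid[int(np.argmax(lls))]
```

    The profiled intercept is alpha_hat_s = log(D_s / sum_i T_i RR_i), and substituting it back gives the 'conditional' form above, matching the abstract's point that the stratum-specific parameters never need to be estimated explicitly.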

  3. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
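    A minimal ensemble illustration of a parametric number statistic: count the levels in a fixed window at two nearby parameter values and estimate their covariance across the ensemble. This sketch uses a two-matrix GOE family rather than the kicked rotor, and all sizes and parameter values are illustrative assumptions:

```python
import numpy as np

def goe(n, rng):
    """A Gaussian-orthogonal-ensemble-style real symmetric matrix."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2

def number_covariance(dim=60, n_ens=200, delta=0.1, window=2.0, seed=0):
    """Ensemble estimate of cov[n(x), n(x + delta)] for the parametric family
    H(x) = cos(x) H1 + sin(x) H2, where n(x) counts levels with |E| < window."""
    rng = np.random.default_rng(seed)
    n0, n1 = [], []
    for _ in range(n_ens):
        h1, h2 = goe(dim, rng), goe(dim, rng)
        for x, out in ((0.0, n0), (delta, n1)):
            ev = np.linalg.eigvalsh(np.cos(x) * h1 + np.sin(x) * h2)
            out.append(np.sum(np.abs(ev) < window))
    return np.cov(np.array(n0), np.array(n1))[0, 1]
```

    At delta = 0 this reduces to the number variance; as delta grows the covariance decays, which is the parametric correlation the measure is designed to capture.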

  4. Cortisol Covariation Within Parents of Young Children: Moderation by Relationship Aggression

    PubMed Central

    Saxbe, Darby E.; Adam, Emma K.; Dunkel Schetter, Christine; Guardino, Christine M.; Simon, Clarissa; McKinney, Chelsea O.; Shalowitz, Madeleine U.

    2015-01-01

    Covariation in diurnal cortisol has been observed in several studies of cohabiting couples. In two such studies (Liu et al., 2013; Saxbe & Repetti, 2010), relationship distress was associated with stronger within-couple correlations, suggesting that couples’ physiological linkage with each other may indicate problematic dyadic functioning. Although intimate partner aggression has been associated with dysregulation in women’s diurnal cortisol, it has not yet been tested as a moderator of within-couple covariation. This study reports on a diverse sample of 122 parents who sampled salivary cortisol on matched days for two years following the birth of an infant. Partners showed strong positive cortisol covariation. In couples with higher levels of partner-perpetrated aggression reported by women at one year postpartum, both women and men had a flatter diurnal decrease in cortisol and stronger correlations with partners’ cortisol sampled at the same timepoints. In other words, relationship aggression was linked both with indices of suboptimal cortisol rhythms in both members of the couples and with stronger within-couple covariation coefficients. These results persisted when relationship satisfaction and demographic covariates were included in the model. During some of the sampling days, some women were pregnant with a subsequent child, but pregnancy did not significantly moderate cortisol levels or within-couple covariation. The findings suggest that couples experiencing relationship aggression have both suboptimal neuroendocrine profiles and stronger covariation. Cortisol covariation is an understudied phenomenon with potential implications for couples’ relationship functioning and physical health. PMID:26298691

  5. Marginalized zero-inflated Poisson models with missing covariates.

    PubMed

    Benecha, Habtamu K; Preisser, John S; Divaris, Kimon; Herring, Amy H; Das, Kalyan

    2018-05-11

    Unlike zero-inflated Poisson regression, marginalized zero-inflated Poisson (MZIP) models for counts with excess zeros provide estimates with direct interpretations for the overall effects of covariates on the marginal mean. In the presence of missing covariates, MZIP and many other count data models are ordinarily fitted using complete case analysis methods due to lack of appropriate statistical methods and software. This article presents an estimation method for MZIP models with missing covariates. The method, which is applicable to other missing data problems, is illustrated and compared with complete case analysis by using simulations and dental data on the caries preventive effects of a school-based fluoride mouthrinse program. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A multi-pixel InSAR time series analysis method: Simultaneous estimation of atmospheric noise, orbital errors and deformation

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2016-12-01

    InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests
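    The Fourier-domain trick above (applying an isotropic exponential covariance to a field without forming the dense matrix) can be sketched on a regular grid as follows; the function name, padding scheme and grid sizes are illustrative assumptions:

```python
import numpy as np

def cov_times_vector_fft(v2d, sigma2, L, dx=1.0):
    """Multiply an exponential covariance (sigma2 * exp(-d / L)) by a 2D field
    via FFT convolution on a regular grid, avoiding the dense covariance matrix.
    Padding to twice the grid size makes the circular convolution exact (linear)."""
    ny, nx = v2d.shape
    py, px = 2 * ny, 2 * nx
    # kernel sampled at wrap-around (torus) distances on the padded grid
    yy = np.minimum(np.arange(py), py - np.arange(py))[:, None] * dx
    xx = np.minimum(np.arange(px), px - np.arange(px))[None, :] * dx
    kern = sigma2 * np.exp(-np.hypot(yy, xx) / L)
    pad = np.zeros((py, px))
    pad[:ny, :nx] = v2d
    out = np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(kern)).real
    return out[:ny, :nx]
```

    For an N-pixel grid this costs O(N log N) per application instead of the O(N^2) of a dense matrix-vector product, which is what makes the all-pixels-at-once solver tractable.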

  7. Galaxy two-point covariance matrix estimation for next generation surveys

    NASA Astrophysics Data System (ADS)

    Howlett, Cullan; Percival, Will J.

    2017-12-01

    We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.

  8. Digital simulation of hybrid loop operation in RFI backgrounds.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.

    1972-01-01

    A digital computer model for Monte-Carlo simulation of an imperfect second-order hybrid phase-locked loop (PLL) operating in radio-frequency interference (RFI) and Gaussian noise backgrounds has been developed. Characterization of hybrid loop performance in terms of cycle slipping statistics and phase error variance, through computer simulation, indicates that the hybrid loop has desirable performance characteristics in RFI backgrounds compared with the conventional PLL or the Costas loop.

  9. Study of continuous blood pressure estimation based on pulse transit time, heart rate and photoplethysmography-derived hemodynamic covariates.

    PubMed

    Feng, Jingjie; Huang, Zhongyi; Zhou, Congcong; Ye, Xuesong

    2018-06-01

    It is widely recognized that pulse transit time (PTT) can track blood pressure (BP) over short periods of time, and hemodynamic covariates such as heart rate and stiffness index may also contribute to BP monitoring. In this paper, we derived a proportional relationship between BP and PTT^-2 and proposed an improved method adopting hemodynamic covariates in addition to PTT for continuous BP estimation. We divided 28 subjects from the Multiparameter Intelligent Monitoring in Intensive Care database into two groups (with/without cardiovascular diseases) and utilized a machine learning strategy based on regularized linear regression (RLR) to construct BP models with different covariates for the corresponding groups. RLR was performed for individuals as the initial calibration, while a recursive least squares algorithm was employed for the re-calibration. The results showed that errors of BP estimation by our method stayed within the Association for the Advancement of Medical Instrumentation limits (-0.98 ± 6.00 mmHg @ SBP, 0.02 ± 4.98 mmHg @ DBP) when the calibration interval was extended to 1200-beat cardiac cycles. In comparison with two other representative studies, Chen's method remained accurate (0.32 ± 6.74 mmHg @ SBP, 0.94 ± 5.37 mmHg @ DBP) using a 400-beat calibration interval, while Poon's method failed (-1.97 ± 10.59 mmHg @ SBP, 0.70 ± 4.10 mmHg @ DBP) when using a 200-beat calibration interval. With additional hemodynamic covariates utilized, our method improved the accuracy of PTT-based BP estimation, decreased the calibration frequency and has the potential for better continuous BP estimation.
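    A minimal sketch of the regularized linear regression step, assuming a design matrix with PTT^-2 plus heart rate and a stiffness index as covariates. Feature names, scaling and the penalty value are illustrative assumptions, and the recursive least squares re-calibration stage is omitted:

```python
import numpy as np

def fit_bp_model(pttm2, hr, si, bp, lam=1.0):
    """Ridge (regularized linear) regression of BP on PTT^-2, heart rate and a
    stiffness index. Note the penalty is applied to all coefficients, including
    the intercept, for brevity."""
    X = np.column_stack([np.ones_like(pttm2), pttm2, hr, si])
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ bp)
    return w

def predict_bp(w, pttm2, hr, si):
    """Predict BP from a fitted coefficient vector."""
    X = np.column_stack([np.ones_like(pttm2), pttm2, hr, si])
    return X @ w
```

    Re-fitting `w` on a sliding window of recent beats plays the role of the per-subject calibration the abstract describes; the extra covariates are what allow the calibration interval to be stretched.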

  10. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  11. Altered Cerebral Blood Flow Covariance Network in Schizophrenia.

    PubMed

    Liu, Feng; Zhuo, Chuanjun; Yu, Chunshui

    2016-01-01

    Many studies have shown abnormal cerebral blood flow (CBF) in schizophrenia; however, it remains unclear how topological properties of CBF network are altered in this disorder. Here, arterial spin labeling (ASL) MRI was employed to measure resting-state CBF in 96 schizophrenia patients and 91 healthy controls. CBF covariance network of each group was constructed by calculating across-subject CBF covariance between 90 brain regions. Graph theory was used to compare intergroup differences in global and nodal topological measures of the network. Both schizophrenia patients and healthy controls had small-world topology in CBF covariance networks, implying an optimal balance between functional segregation and integration. Compared with healthy controls, schizophrenia patients showed reduced small-worldness, normalized clustering coefficient and local efficiency of the network, suggesting a shift toward randomized network topology in schizophrenia. Furthermore, schizophrenia patients exhibited altered nodal centrality in the perceptual-, affective-, language-, and spatial-related regions, indicating functional disturbance of these systems in schizophrenia. This study demonstrated for the first time that schizophrenia patients have disrupted topological properties in CBF covariance network, which provides a new perspective (efficiency of blood flow distribution between brain regions) for understanding neural mechanisms of schizophrenia.
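    A compact sketch of building a binary covariance network from across-subject correlations and computing one graph-theoretic measure. The edge-density threshold, array shapes and the restriction to the clustering coefficient are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def covariance_network(cbf, density=0.15):
    """Binary covariance network: across-subject correlation between regions
    (cbf is subjects x regions), keeping the strongest `density` fraction of edges."""
    r = np.corrcoef(cbf.T)                      # regions x regions
    np.fill_diagonal(r, -np.inf)                # no self-loops
    n = r.shape[0]
    k = int(density * n * (n - 1) / 2)
    thr = np.sort(r[np.triu_indices(n, 1)])[-k]
    a = (r >= thr).astype(float)
    return np.maximum(a, a.T)

def clustering_coefficient(a):
    """Mean clustering coefficient of a binary undirected graph: closed triangles
    per node divided by the number of neighbour pairs."""
    deg = a.sum(1)
    tri = np.diag(a @ a @ a) / 2.0
    denom = deg * (deg - 1) / 2.0
    valid = denom > 0
    return float(np.mean(tri[valid] / denom[valid]))
```

    Comparing this coefficient (and a path-length measure) against degree-matched random graphs is what yields the small-worldness statistics reported in the abstract.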

  12. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  13. Covariant n²-plet mass formulas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, A.

    Using a generalized internal symmetry group analogous to the Lorentz group, we have constructed a covariant n²-plet mass operator. This operator is built as a scalar matrix in the (n;n*) representation, and its SU(n) breaking parameters are identified as intrinsic boost ones. Its basic properties are: covariance, Hermiticity, positivity, charge conjugation, quark contents, and a self-consistent n²-1, 1 mixing. The GMO and the Okubo formulas are obtained by considering two different limits of the same generalized mass formula.

  14. Extraction of wind and temperature information from hybrid 4D-Var assimilation of stratospheric ozone using NAVGEM

    NASA Astrophysics Data System (ADS)

    Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.

    2018-03-01

    Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid four-dimensional variational (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ~2 m s-1 and ~1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient is an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When perfect global

  15. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1×10⁻⁶. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for
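    The first relationship (the maximum possible Pc for a given covariance and hard-body size) can be illustrated numerically by integrating an encounter-plane Gaussian over the hard-body disk; for a fixed covariance the integral is largest at zero miss distance. The 2D set-up, grid resolution and function names below are illustrative assumptions:

```python
import numpy as np

def collision_probability(miss, r_hb, sx, sy, n=801):
    """Pc: integral of a zero-mean 2D Gaussian (stds sx, sy in the encounter
    plane) over a disk of radius r_hb centred at distance `miss` on the x axis."""
    x = np.linspace(miss - r_hb, miss + r_hb, n)
    y = np.linspace(-r_hb, r_hb, n)
    X, Y = np.meshgrid(x, y)
    inside = (X - miss) ** 2 + Y ** 2 <= r_hb ** 2
    pdf = np.exp(-0.5 * ((X / sx) ** 2 + (Y / sy) ** 2)) / (2 * np.pi * sx * sy)
    return float(np.sum(pdf * inside) * (x[1] - x[0]) * (y[1] - y[0]))

def max_pc(r_hb, sx, sy):
    """Pre-filter bound: Pc is maximised at zero miss for a fixed covariance,
    approaching r_hb**2 / (2 sx sy) when r_hb is much smaller than sx, sy."""
    return collision_probability(0.0, r_hb, sx, sy)
```

    If `max_pc` falls below the tolerance (e.g. 1e-6), the pair can be screened out of conjunction analysis regardless of the actual miss distance, which is the pre-filter idea described above.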

  16. Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.

    2013-01-01

    The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6-yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20°S-20°N.
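    In the optimal estimation formalism, the smoothing error covariance follows from the averaging kernel matrix A and the natural-variability covariance Sa as Ss = (A - I) Sa (A - I)^T. The sketch below uses toy matrices, not SBUV kernels, and also shows why merging layers into one thick column can shrink the smoothing error:

```python
import numpy as np

def smoothing_error_cov(A, Sa):
    """Rodgers-style smoothing error covariance: Ss = (A - I) Sa (A - I)^T,
    where A is the averaging-kernel matrix and Sa the a priori (here,
    inter-annual ozone) variability covariance."""
    I = np.eye(A.shape[0])
    return (A - I) @ Sa @ (A - I).T

def merged_layer_variance(Ss, w):
    """Smoothing-error variance of a merged (weighted-sum) layer: w^T Ss w."""
    return float(w @ Ss @ w)
```

    For a coarse kernel that preserves the total column (each column of A sums to 1), merging all layers with a weight vector of ones gives zero smoothing error in this toy case, mirroring the layer-combination recommendation above.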

  17. Cortisol covariation within parents of young children: Moderation by relationship aggression.

    PubMed

    Saxbe, Darby E; Adam, Emma K; Schetter, Christine Dunkel; Guardino, Christine M; Simon, Clarissa; McKinney, Chelsea O; Shalowitz, Madeleine U

    2015-12-01

    Covariation in diurnal cortisol has been observed in several studies of cohabiting couples. In two such studies (Liu et al., 2013; Saxbe and Repetti, 2010), relationship distress was associated with stronger within-couple correlations, suggesting that couples' physiological linkage with each other may indicate problematic dyadic functioning. Although intimate partner aggression has been associated with dysregulation in women's diurnal cortisol, it has not yet been tested as a moderator of within-couple covariation. This study reports on a diverse sample of 122 parents who sampled salivary cortisol on matched days for two years following the birth of an infant. Partners showed strong positive cortisol covariation. In couples with higher levels of partner-perpetrated aggression reported by women at one year postpartum, both women and men had a flatter diurnal decrease in cortisol and stronger correlations with partners' cortisol sampled at the same timepoints. In other words, relationship aggression was linked both with indices of suboptimal cortisol rhythms in both members of the couples and with stronger within-couple covariation coefficients. These results persisted when relationship satisfaction and demographic covariates were included in the model. During some of the sampling days, some women were pregnant with a subsequent child, but pregnancy did not significantly moderate cortisol levels or within-couple covariation. The findings suggest that couples experiencing relationship aggression have both suboptimal neuroendocrine profiles and stronger covariation. Cortisol covariation is an understudied phenomenon with potential implications for couples' relationship functioning and physical health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    PubMed

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, this is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed that extracts medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of “error-related items, their different levels, and the GA-BPNN model” was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  19. Bayesian semiparametric estimation of covariate-dependent ROC curves

    PubMed Central

    Rodríguez, Abel; Martínez, Julissa C.

    2014-01-01

    Receiver operating characteristic (ROC) curves are widely used to measure the discriminating power of medical tests and other classification procedures. In many practical applications, the performance of these procedures can depend on covariates such as age, naturally leading to a collection of curves associated with different covariate levels. This paper develops a Bayesian heteroscedastic semiparametric regression model and applies it to the estimation of covariate-dependent ROC curves. More specifically, our approach uses Gaussian process priors to model the conditional mean and conditional variance of the biomarker of interest for each of the populations under study. The model is illustrated through an application to the evaluation of prostate-specific antigen for the diagnosis of prostate cancer, which contrasts the performance of our model against alternative models. PMID:24174579

  20. Hawking radiation and covariant anomalies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Rabin; Kulkarni, Shailesh

    2008-01-15

    Generalizing the method of Wilczek and collaborators we provide a derivation of Hawking radiation from charged black holes using only covariant gauge and gravitational anomalies. The reliability and universality of the anomaly cancellation approach to Hawking radiation is also discussed.

  1. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  2. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  3. An Empirical Point Error Model for TLS Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6 cc and σ_α = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by an empirically developed formula, varying as σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
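
    The variance-covariance propagation and principal-components steps described above can be sketched as follows. The spherical-to-Cartesian convention and the example point are assumptions; only the quoted precisions come from the abstract:

```python
import numpy as np

# Angular precisions in cc (1 cc = 1/10000 gon); range precision in metres.
CC_TO_RAD = np.pi / 2_000_000
sigma_theta = 36.6 * CC_TO_RAD         # horizontal-angle precision
sigma_alpha = 17.8 * CC_TO_RAD         # vertical-angle precision
sigma_rho = 0.005                      # range precision [m], within the ±2-12 mm band

rho, theta, alpha = 20.0, 0.3, 0.1     # one example point (m, rad, rad)

# Jacobian of (x, y, z) = (rho cos a cos t, rho cos a sin t, rho sin a)
J = np.array([
    [np.cos(alpha)*np.cos(theta), -rho*np.cos(alpha)*np.sin(theta), -rho*np.sin(alpha)*np.cos(theta)],
    [np.cos(alpha)*np.sin(theta),  rho*np.cos(alpha)*np.cos(theta), -rho*np.sin(alpha)*np.sin(theta)],
    [np.sin(alpha),                0.0,                              rho*np.cos(alpha)],
])
Sigma_obs = np.diag([sigma_rho**2, sigma_theta**2, sigma_alpha**2])
Sigma_xyz = J @ Sigma_obs @ J.T        # propagated 3x3 point covariance

# Error-ellipsoid axes via eigen-decomposition (principal components)
evals, evecs = np.linalg.eigh(Sigma_xyz)
semi_axes = np.sqrt(evals)             # 1-sigma ellipsoid semi-axes
```

    The eigenvectors give the ellipsoid orientation and the square roots of the eigenvalues its semi-axes, exactly the direction-and-size decomposition the abstract refers to.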

  4. Covariant balance laws in continua with microstructure

    NASA Astrophysics Data System (ADS)

    Yavari, Arash; Marsden, Jerrold E.

    2009-02-01

    The purpose of this paper is to extend the Green-Naghdi-Rivlin balance of energy method to continua with microstructure. The key idea is to replace the group of Galilean transformations with the group of diffeomorphisms of the ambient space. A key advantage is that one obtains in a natural way all the needed balance laws on both the macro and micro levels along with two Doyle-Ericksen formulas. We model a structured continuum as a triplet of Riemannian manifolds: a material manifold, the ambient space manifold of material particles and a director field manifold. The Green-Naghdi-Rivlin theorem and its extensions for structured continua are critically reviewed. We show that when the ambient space is Euclidean and when the microstructure manifold is the tangent space of the ambient space manifold, postulating a single balance of energy law and its invariance under time-dependent isometries of the ambient space, one obtains conservation of mass and balances of linear and angular momenta, but not a separate balance of micro-linear momentum. We develop a covariant elasticity theory for structured continua by postulating that energy balance is invariant under time-dependent spatial diffeomorphisms of the ambient space, which in this case is the product of two Riemannian manifolds. We then introduce two types of constrained continua in which the microstructure manifold is linked to the reference and ambient space manifolds. In the case when at every material point, the microstructure manifold is the tangent space of the ambient space manifold at the image of the material point, we show that the assumption of covariance leads to balances of linear and angular momenta with contributions from both forces and micro-forces along with two Doyle-Ericksen formulas. We show that generalized covariance leads to two balances of linear momentum and a single coupled balance of angular momentum. Using this theory, we covariantly obtain the balance laws for two specific examples, namely elastic

  5. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    NASA Astrophysics Data System (ADS)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing a data assimilation methodology. One of the critical components that determines the amount of observation information entering the analysis, and how it propagates, is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over Karnataka, a southern state in India. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gauge observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation has improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one which used global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, are simulated more realistically using regional BES than global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.
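
    Background error statistics of this kind are commonly estimated from lagged forecast differences (the NMC method mentioned at the top of this collection). A minimal sketch with synthetic stand-in differences, not the operational BE-generation machinery, looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# NMC-style estimate of B: accumulate differences between 48 h and 24 h
# forecasts valid at the same time over many cases.  The differences here are
# synthetic placeholders; the 0.5 scaling is a conventional tuning factor.
n_state, n_cases = 50, 400
diffs = rng.standard_normal((n_cases, n_state))   # stand-in forecast differences
diffs = diffs - diffs.mean(axis=0)                # remove the sample mean

B = 0.5 * (diffs.T @ diffs) / (n_cases - 1)       # background error covariance
```

    Whether such a B is computed from global or regional forecast pairs is precisely what distinguishes the two BES used in the experiments above.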

  6. Structural and Maturational Covariance in Early Childhood Brain Development.

    PubMed

    Geng, Xiujuan; Li, Gang; Lu, Zhaohua; Gao, Wei; Wang, Li; Shen, Dinggang; Zhu, Hongtu; Gilmore, John H

    2017-03-01

    Brain structural covariance networks (SCNs) composed of regions with correlated variation are altered in neuropsychiatric disease and change with age. Little is known about the development of SCNs in early childhood, a period of rapid cortical growth. We investigated the development of structural and maturational covariance networks, including default, dorsal attention, primary visual and sensorimotor networks, in a longitudinal population of 118 children from birth to 2 years of age and compared them with intrinsic functional connectivity networks. We found that the structural covariance of all networks exhibits strong correlations mostly limited to their seed regions. By age 2, the default and dorsal attention structural networks are much less distributed compared with their functional maps. The maturational covariance maps, however, revealed significant couplings in rates of change between distributed regions, which partially recapitulate their functional networks. The structural and maturational covariance of the primary visual and sensorimotor networks shows patterns similar to the corresponding functional networks. Results indicate that functional networks are in place prior to structural networks, that correlated structural patterns in adults may arise in part from coordinated cortical maturation, and that regional co-activation in functional networks may guide and refine the maturation of SCNs over childhood development.

  7. Vacuum fluctuations of the supersymmetric field in curved background

    NASA Astrophysics Data System (ADS)

    Bilić, Neven; Domazet, Silvije; Guberina, Branko

    2012-01-01

    We study a supersymmetric model in curved background spacetime. We calculate the effective action and the vacuum expectation value of the energy momentum tensor using a covariant regularization procedure. A soft supersymmetry breaking induces a nonzero contribution to the vacuum energy density and pressure. Assuming the presence of a cosmic fluid in addition to the vacuum fluctuations of the supersymmetric field an effective equation of state is derived in a self-consistent approach at one loop order. The net effect of the vacuum fluctuations of the supersymmetric fields in the leading adiabatic order is a renormalization of the Newton and cosmological constants.

  8. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix that we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
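
    The AR half of this construction, the modified Cholesky decomposition, can be sketched directly: it finds a unit lower-triangular T and diagonal D with T Σ Tᵀ = D, where row j of T holds the negated coefficients of regressing outcome j on its predecessors. This is a generic sketch of the decomposition, not the paper's ARMACD fitting procedure:

```python
import numpy as np

def modified_cholesky(Sigma):
    """Unit lower-triangular T and diagonal D with T @ Sigma @ T.T = D.

    Rows of T hold the negated autoregressive coefficients of each outcome
    on its predecessors; the diagonal of D holds the innovation variances.
    """
    p = Sigma.shape[0]
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Sigma[0, 0]
    for j in range(1, p):
        phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])  # AR coefficients
        T[j, :j] = -phi
        d[j] = Sigma[j, j] - phi @ Sigma[:j, j]              # innovation variance
    return T, np.diag(d)

# Example: AR(1)-like covariance of 4 repeated outcomes, correlation 0.6
rho = 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
T, D = modified_cholesky(Sigma)
```

    For an AR(1) covariance the decomposition recovers exactly one nonzero lag coefficient per row, which is the parsimony the AR parameterization buys.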

  9. Worldline construction of a covariant chiral kinetic theory

    DOE PAGES

    Mueller, Niklas; Venugopalan, Raju

    2017-07-27

    Here, we discuss a novel worldline framework for computations of the chiral magnetic effect (CME) in ultrarelativistic heavy-ion collisions. Starting from the fermion determinant in the QCD effective action, we show explicitly how its real part can be expressed as a supersymmetric worldline action of spinning, colored, Grassmannian particles in background fields. Restricting ourselves for simplicity to spinning particles, we demonstrate how their constrained Hamiltonian dynamics arises for both massless and massive particles. In a semiclassical limit, this gives rise to the covariant generalization of the Bargmann-Michel-Telegdi equation; the derivation of the corresponding Wong equations for colored particles is straightforward. In a previous paper [N. Mueller and R. Venugopalan, arXiv:1701.03331.], we outlined how Berry’s phase arises in a nonrelativistic adiabatic limit for massive particles. We extend the discussion here to systems with a finite chemical potential. We discuss a path integral formulation of the relative phase in the fermion determinant that places it on the same footing as the real part. We construct the corresponding anomalous worldline axial-vector current and show in detail how the chiral anomaly appears. Our work provides a systematic framework for a relativistic kinetic theory of chiral fermions in the fluctuating topological backgrounds that generate the CME in a deconfined quark-gluon plasma. Finally, we outline some further applications of this framework in many-body systems.

  11. Covariant conserved currents for scalar-tensor Horndeski theory

    NASA Astrophysics Data System (ADS)

    Schmidt, J.; Bičák, J.

    2018-04-01

    Scalar-tensor theories have recently become popular, in particular in connection with attempts to explain the present accelerated expansion of the universe, but they were considered a natural extension of general relativity long ago. The Horndeski scalar-tensor theory, involving four invariantly defined Lagrangians, is a natural choice since it implies field equations involving at most second derivatives. Following the formalisms for defining covariant global quantities and conservation laws for perturbations of spacetimes in standard general relativity, we extend these methods to the general Horndeski theory and find the covariant conserved currents for all four Lagrangians. The current is also constructed in the case of linear perturbations involving both metric and scalar fields. As a specific illustration, we derive a superpotential that leads to the covariantly conserved current in the Brans-Dicke theory.

  12. Earth Observing System Covariance Realism Updates

    NASA Technical Reports Server (NTRS)

    Ojeda Romero, Juan A.; Miguel, Fred

    2017-01-01

    This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.

  13. A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu

    2007-01-01

    Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…

  14. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 1: Error analysis methodology

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.; Thurman, S. W.

    1992-01-01

    An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
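
    The core covariance-analysis computation can be sketched with a toy weighted least-squares problem. The elevation-dependent noise model below (σ ∝ 1/sin E) is an assumption standing in for the paper's elevation-dependent weighting functions, and the partials matrix is invented for illustration:

```python
import numpy as np

# Formal covariance of a weighted least-squares estimate: P = (A^T W A)^{-1}.
# De-weighting low-elevation Doppler points reduces the sensitivity of the
# estimate to troposphere/ionosphere calibration errors, which are largest
# near the horizon.
n = 120
elev = np.linspace(np.radians(10), np.radians(80), n)   # pass elevation profile
A = np.column_stack([np.ones(n), np.sin(elev), np.cos(elev)])  # toy partials
sigma = 0.1 / np.sin(elev)               # assumed elevation-dependent noise
W = np.diag(1.0 / sigma**2)              # data weights

P = np.linalg.inv(A.T @ W @ A)           # formal estimate covariance
```

    A full analysis of the paper's kind would add consider-parameter terms for the unestimated media and station-location errors on top of this formal covariance.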

  15. Drug Administration Errors in an Institution for Individuals with Intellectual Disability: An Observational Study

    ERIC Educational Resources Information Center

    van den Bemt, P. M. L. A.; Robertz, R.; de Jong, A. L.; van Roon, E. N.; Leufkens, H. G. M.

    2007-01-01

    Background: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form the final barrier to prevention of errors.…

  16. An Analysis of Lexical Errors of Korean Language Learners: Some American College Learners' Case

    ERIC Educational Resources Information Center

    Kang, Manjin

    2014-01-01

    There has been a huge amount of research on errors of language learners. However, most of them have focused on syntactic errors and those about lexical errors are not found easily despite the importance of lexical learning for the language learners. The case is even rarer for Korean language. In line with this background, this study was designed…

  17. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    PubMed

    Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler Intelligence Scale (FSIQ) and Raven's Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either the Wechsler FSIQ or the RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e., covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and

  18. Assessing the Impact of Pre-gpm Microwave Precipitation Observations in the Goddard WRF Ensemble Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Chambon, Philippe; Zhang, Sara Q.; Hou, Arthur Y.; Zupanski, Milija; Cheung, Samson

    2013-01-01

    The forthcoming Global Precipitation Measurement (GPM) Mission will provide next generation precipitation observations from a constellation of satellites. Since precipitation by nature has large variability and low predictability at cloud-resolving scales, the impact of precipitation data on the skills of mesoscale numerical weather prediction (NWP) is largely affected by the characterization of background and observation errors and the representation of nonlinear cloud/precipitation physics in an NWP data assimilation system. We present a data impact study on the assimilation of precipitation-affected microwave (MW) radiances from a pre-GPM satellite constellation using the Goddard WRF Ensemble Data Assimilation System (Goddard WRF-EDAS). A series of assimilation experiments are carried out in a Weather Research and Forecasting (WRF) model domain of 9 km resolution in western Europe. Sensitivities to observation error specifications, background error covariance estimated from ensemble forecasts with different ensemble sizes, and MW channel selections are examined through single-observation assimilation experiments. An empirical bias correction for precipitation-affected MW radiances is developed based on the statistics of radiance innovations in rainy areas. The data impact is assessed by full data assimilation cycling experiments for a storm event that occurred in France in September 2010. Results show that the assimilation of MW precipitation observations from a satellite constellation mimicking GPM has a positive impact on the accumulated rain forecasts verified with surface radar rain estimates. The case study on a convective storm also reveals that the accuracy of ensemble-based background error covariance is limited by sampling errors and model errors such as precipitation displacement and unresolved convective scale instability.

  19. Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*

    PubMed Central

    Katsevich, E.; Katsevich, A.; Singer, A.

    2015-01-01

    In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
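
    The spectral idea, that with finitely many heterogeneity classes the covariance spectrum reveals their number, can be illustrated in an idealized setting that omits the projection step (so no projection covariance transform is needed here):

```python
import numpy as np

# With C discrete conformational states, the population covariance of the
# (vectorized) structures is a weighted scatter of the class means about the
# overall mean, and therefore has rank C - 1.  Counting the significant
# eigenvalues recovers the number of classes.  All values are synthetic.
d = 30                                             # length of vectorized volume
probs = np.array([0.5, 0.3, 0.2])                  # 3 heterogeneity classes
rng = np.random.default_rng(3)
means = rng.normal(size=(3, d))                    # one mean volume per class

mu = probs @ means                                 # population mean
Sigma = sum(p * np.outer(m - mu, m - mu) for p, m in zip(probs, means))

evals = np.linalg.eigvalsh(Sigma)
n_significant = int((evals > 1e-10 * evals.max()).sum())   # expect C - 1 = 2
```

    The hard part of the paper is estimating this Σ when only noisy 2D projections of the samples are observed; the rank principle, however, is the same.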

  20. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  1. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions. PMID:22409837

  2. Video based object representation and classification using multiple covariance matrices.

    PubMed

    Zhang, Yurong; Liu, Quan

    2017-01-01

    Video based object recognition and classification has been widely studied in computer vision and image processing. One main issue of this task is to develop an effective representation for video. This problem can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
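
    The covariance-descriptor idea at the core of MCDL can be sketched as follows. The NMF clustering and discriminative-learning stages are omitted, and the log-Euclidean distance shown is one common way to compare covariance descriptors, not necessarily the paper's choice:

```python
import numpy as np

def cov_descriptor(features):
    """Covariance descriptor of an image cluster (rows = images, columns =
    features); a small ridge keeps the matrix positive definite."""
    C = np.cov(features, rowvar=False)
    return C + 1e-6 * np.eye(C.shape[0])

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    """Log-Euclidean distance between two covariance descriptors."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), "fro")

rng = np.random.default_rng(5)
cluster_a = rng.normal(size=(200, 5))                         # one image cluster
cluster_b = rng.normal(size=(200, 5)) @ np.diag([3, 1, 1, 1, 1])  # different spread
CA, CB = cov_descriptor(cluster_a), cov_descriptor(cluster_b)
d_ab = log_euclidean_dist(CA, CB)
d_aa = log_euclidean_dist(CA, CA)
```

    An image set is then summarized by one such descriptor per NMF cluster, and classification compares the resulting collections of matrices.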

  3. Merging Multi-model CMIP5/PMIP3 Past-1000 Ensemble Simulations with Tree Ring Proxy Data by Optimal Interpolation Approach

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Luo, Yong; Xing, Pei; Nie, Suping; Tian, Qinhua

    2015-04-01

    Two sets of gridded annual mean surface air temperature over the Northern Hemisphere in the past millennium were constructed employing the optimal interpolation (OI) method, so as to merge tree ring proxy records with simulations from CMIP5 (the fifth phase of the Climate Model Intercomparison Project). Uncertainties in both the proxy reconstructions and the model simulations can be taken into account in the OI algorithm. To better preserve physically coordinated features and the spatial-temporal completeness of climate variability in the 7 model simulations, we perform an Empirical Orthogonal Function (EOF) analysis to truncate the ensemble mean field as the first guess (background field) for OI. 681 temperature-sensitive tree-ring chronologies were collected and screened from the International Tree Ring Data Bank (ITRDB) and the Past Global Changes (PAGES-2k) project. First, two methods (variance matching and linear regression) are employed to calibrate the tree-ring chronologies individually against instrumental data (CRUTEM4v). In addition, we also remove the bias of both the background field and the proxy records relative to the instrumental dataset. Second, a time-varying background error covariance matrix (B) and a static "observation" error covariance matrix (R) are calculated for the OI frame. In our scheme, the matrix B is calculated locally, and "observation" error covariances are partially considered in the R matrix (the covariance between pairs of tree-ring sites that are very close to each other is counted), which differs from the traditional assumption that the R matrix should be diagonal. Comparing our results, it turns out that the regional averaged series are not sensitive to the choice of calibration method. Quantile-quantile plots indicate that regional climatologies based on both methods tend to agree better with the regional PAGES-2k reconstruction during the 20th-century warming period than during the Little Ice Age (LIA). A larger volcanic cooling response over Asia
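
    The OI analysis step this merging scheme is built on can be sketched generically. The grid, covariances, and proxy values below are invented for illustration; the paper's B is time-varying and computed locally:

```python
import numpy as np

# Optimal interpolation update: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)
n_grid, n_obs = 8, 3

# Background (model ensemble mean) and its error covariance B
x_b = np.zeros(n_grid)
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
B = np.exp(-dist / 2.0)                      # spatially correlated background errors

# Proxy "observations" y, observation operator H, and error covariance R
H = np.zeros((n_obs, n_grid))
H[0, 1] = H[1, 4] = H[2, 6] = 1.0            # tree-ring sites at 3 grid points
y = np.array([1.0, 0.5, -0.2])               # calibrated proxy anomalies
R = 0.25 * np.eye(n_obs)                     # "observation" (proxy) error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R) # OI gain
x_a = x_b + K @ (y - H @ x_b)                # analysis (merged field)

# Analysis error covariance shrinks relative to the background
A = (np.eye(n_grid) - K @ H) @ B
```

    Because B carries spatial correlations, the proxy information spreads from the three sites to neighboring grid points, which is how the merged field stays spatially complete.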

  4. The utility of covariance of combining ability in plant breeding.

    PubMed

    Arunachalam, V

    1976-11-01

    The definition of covariances of half- and full sibs, and hence that of variances of general and specific combining ability with regard to a quantitative character, is extended to take into account the respective covariances between a pair of characters. The interpretation of the dispersion and correlation matrices of general and specific combining ability is discussed by considering a set of single, three- and four-way crosses, made using diallel and line × tester mating systems in Pennisetum typhoides. The general implications of the concept of covariance of combining ability in plant breeding are discussed.

  5. Bayesian Analysis of Structural Equation Models with Nonlinear Covariates and Latent Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2006-01-01

    In this article, we formulate a nonlinear structural equation model (SEM) that can accommodate covariates in the measurement equation and nonlinear terms of covariates and exogenous latent variables in the structural equation. The covariates can come from continuous or discrete distributions. A Bayesian approach is developed to analyze the…

  6. To Covary or Not to Covary, That is the Question

    NASA Astrophysics Data System (ADS)

    Oehlert, A. M.; Swart, P. K.

    2016-12-01

    The meaning of covariation between the δ13C values of carbonate carbon and that of organic material is classically interpreted as reflecting original variations in the δ13C values of the dissolved inorganic carbon in the depositional environment. However, it has recently been shown by the examination of a core from Great Bahama Bank (Clino) that during exposure not only do the rocks become altered, acquiring a negative δ13C value, but at the same time terrestrial vegetation adds organic carbon to the system, masking the original marine values. These processes yield a strong positive covariation between δ13Corg and δ13Ccar values even though the signals are clearly not original and are unrelated to the marine δ13C values. Examining the correlation between the organic and inorganic systems in a stratigraphic sense at Clino and in a second, more proximally located core (Unda) using a windowed correlation coefficient technique reveals that the correlation is even more complex. Changes in the slope and the magnitude of the correlation are associated with exposure surfaces, facies changes, dolomitized bodies, and non-depositional surfaces. Finally, other isotopic systems, such as the δ13C values of specific organic compounds as well as δ15N values of bulk and individual compounds, can provide additional information. In the case of δ15N values, decreases reflect a change in the influence of terrestrial organic material and an increased contribution of organic material from the platform surface, where the main source of nitrogen is derived from the activities of cyanobacteria.

  7. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    PubMed

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision-making. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges of high dimensionality, little work exists in the literature on estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating the high-dimensional covariance matrix itself. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines, based on the sample size, the dimension, and the correlation of the data set, for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy for assessing the performance of the covariance matrix estimation itself.
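    A minimal illustration of why the problem is hard and why regularized estimators help: when the dimension p is comparable to the sample size n, the log-determinant of the sample covariance is severely biased downward, and even a crude linear shrinkage (illustrative target and intensity, not one of the paper's eight methods) moves it toward the truth:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 40
X = rng.normal(size=(n, p))          # true covariance = identity, log-det = 0

S = np.cov(X, rowvar=False)          # sample covariance, poorly conditioned for p ~ n

# Illustrative linear shrinkage toward a scaled identity target.
alpha = 0.3
S_shrunk = (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)

sign_s, logdet_s = np.linalg.slogdet(S)        # strongly negative (biased)
sign_k, logdet_k = np.linalg.slogdet(S_shrunk) # closer to the true value 0
```

    The bias comes from the smallest sample eigenvalues, which are driven toward zero when p/n is large; shrinkage lifts them off zero, which is exactly why the choice of estimator matters for the determinant.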

  8. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  9. On the Error State Selection for Stationary SINS Alignment and Calibration Kalman Filters—Part II: Observability/Estimability Analysis

    PubMed Central

    Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.

    2017-01-01

    This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494

  10. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
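    The two ingredients of the estimator, soft-thresholding for sparsity and an eigenvalue constraint for positive definiteness, can be sketched with a naive alternating scheme. The paper's actual algorithm interleaves these operations with dual updates via ADMM, so the following is only illustrative:

```python
import numpy as np

def soft_threshold(A, lam):
    """Soft-threshold off-diagonal entries, keeping the diagonal intact."""
    T = np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
    np.fill_diagonal(T, np.diag(A))
    return T

def eig_floor(A, eps):
    """Project a symmetric matrix onto {eigenvalues >= eps}."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

def sparse_psd_cov(S, lam=0.1, eps=1e-3, iters=50):
    """Naive alternating projections between sparsity and positive
    definiteness (a sketch of the two constraints, not the paper's ADMM)."""
    X = S.copy()
    for _ in range(iters):
        X = eig_floor(soft_threshold(X, lam), eps)
    return X

rng = np.random.default_rng(2)
Z = rng.normal(size=(200, 5))
S = np.cov(Z, rowvar=False)
Sigma = sparse_psd_cov(S, lam=0.05)   # sparse-ish and strictly positive definite
```

    Thresholding alone can destroy positive definiteness, which is why the explicit eigenvalue constraint is the distinctive part of the proposal.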

  11. Competing risks models and time-dependent covariates

    PubMed Central

    Barnett, Adrian; Graves, Nick

    2008-01-01

    New statistical models for analysing survival data in an intensive care unit context have recently been developed. Two models that offer significant advantages over standard survival analyses are competing risks models and multistate models. Wolkewitz and colleagues used a competing risks model to examine survival times for nosocomial pneumonia and mortality. Their model was able to incorporate time-dependent covariates and so examine how risk factors that changed with time affected the chances of infection or death. We briefly explain how an alternative modelling technique (using logistic regression) can more fully exploit time-dependent covariates for this type of data. PMID:18423067

  12. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  13. Covariant deformed oscillator algebras

    NASA Technical Reports Server (NTRS)

    Quesne, Christiane

    1995-01-01

    The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU(sub q)(n) x SU(sub q)(m)-covariant q-bosonic algebra is discussed in some detail.

  14. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  15. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ 1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
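    The computational advantage of the low-rank-plus-diagonal assumption can be seen directly: by the Woodbury identity, inverting Ω = UUᵀ + diag(d) costs O(pk²) for rank k instead of O(p³). A sketch of just this inversion step (not the COP algorithm itself, which greedily learns the rank-one components):

```python
import numpy as np

def lowrank_diag_inverse(U, d):
    """Invert  Omega = U U^T + diag(d)  via the Woodbury identity:
    Omega^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1},
    where only a k x k system is solved (k = rank of the low-rank part)."""
    Dinv = 1.0 / d                            # diagonal inverse as a vector
    DinvU = Dinv[:, None] * U                 # D^{-1} U, shape (p, k)
    core = np.eye(U.shape[1]) + U.T @ DinvU   # k x k core matrix
    return np.diag(Dinv) - DinvU @ np.linalg.solve(core, DinvU.T)

rng = np.random.default_rng(3)
p, k = 500, 3
U = rng.normal(size=(p, k))
d = rng.uniform(1.0, 2.0, size=p)
Omega_inv = lowrank_diag_inverse(U, d)        # inverse of the structured matrix
```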

  16. WAIS-IV subtest covariance structure: conceptual and statistical considerations.

    PubMed

    Ward, L Charles; Bergman, Maria A; Hebert, Katina R

    2012-06-01

    D. Wechsler (2008b) reported confirmatory factor analyses (CFAs) with standardization data (ages 16-69 years) for 10 core and 5 supplemental subtests from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Analyses of the 15 subtests supported 4 hypothesized oblique factors (Verbal Comprehension, Working Memory, Perceptual Reasoning, and Processing Speed) but also revealed unexplained covariance between Block Design and Visual Puzzles (Perceptual Reasoning subtests). That covariance was not included in the final models. Instead, a path was added from Working Memory to Figure Weights (Perceptual Reasoning subtest) to improve fit and achieve a desired factor pattern. The present research with the same data (N = 1,800) showed that the path from Working Memory to Figure Weights increases the association between Working Memory and Matrix Reasoning. Specifying both paths improves model fit and largely eliminates unexplained covariance between Block Design and Visual Puzzles but with the undesirable consequence that Figure Weights and Matrix Reasoning are equally determined by Perceptual Reasoning and Working Memory. An alternative 4-factor model was proposed that explained theory-implied covariance between Block Design and Visual Puzzles and between Arithmetic and Figure Weights while maintaining compatibility with WAIS-IV Index structure. The proposed model compared favorably with a 5-factor model based on Cattell-Horn-Carroll theory. The present findings emphasize that covariance model comparisons should involve considerations of conceptual coherence and theoretical adherence in addition to statistical fit. (c) 2012 APA, all rights reserved

  17. Explicitly covariant dispersion relations and self-induced transparency

    NASA Astrophysics Data System (ADS)

    Mahajan, S. M.; Asenjo, Felipe A.

    2017-02-01

    Explicitly covariant dispersion relations for a variety of plasma waves in unmagnetized and magnetized plasmas are derived in a systematic manner from a fully covariant plasma formulation. One needs to invoke relatively little known invariant combinations constructed from the ambient electromagnetic fields and the wave vector to accomplish the program. The implication of this work applied to the self-induced transparency effect is discussed. Some problems arising from the inconsistent use of relativity are pointed out.

  18. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  19. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A comparison of phenotypic variation and covariation patterns and the role of phylogeny, ecology, and ontogeny during cranial evolution of new world monkeys.

    PubMed

    Marroig, G; Cheverud, J M

    2001-12-01

    Similarity of genetic and phenotypic variation patterns among populations is important for making quantitative inferences about past evolutionary forces acting to differentiate populations and for evaluating the evolution of relationships among traits in response to new functional and developmental relationships. Here, phenotypic covariance and correlation structure is compared among Platyrrhine Neotropical primates. Comparisons range from among species within a genus to the superfamily level. Matrix correlation followed by Mantel's test and vector correlation among responses to random natural selection vectors (random skewers) were used to compare correlation and variance/covariance matrices of 39 skull traits. Sampling errors involved in matrix estimates were taken into account in comparisons using matrix repeatability to set upper limits for each pairwise comparison. Results indicate that covariance structure is not strictly constant but that the amount of variance pattern divergence observed among taxa is generally low and not associated with taxonomic distance. Specific instances of divergence are identified. There is no correlation between the amount of divergence in covariance patterns among the 16 genera and their phylogenetic distance derived from a conjoint analysis of four already published nuclear gene datasets. In contrast, there is a significant correlation between phylogenetic distance and morphological distance (Mahalanobis distance among genus centroids). This result indicates that while the phenotypic means were evolving during the last 30 million years of New World monkey evolution, phenotypic covariance structures of Neotropical primate skulls have remained relatively consistent. Neotropical primates can be divided into four major groups based on their feeding habits (fruit-leaves, seed-fruits, insect-fruits, and gum-insect-fruits). Differences in phenotypic covariance structure are correlated with differences in feeding habits, indicating

  1. Revealing hidden covariation detection: evidence for implicit abstraction at study.

    PubMed

    Rossnagel, C S

    2001-09-01

    Four experiments in the brain scans paradigm (P. Lewicki, T. Hill, & I. Sasaki, 1989) investigated hidden covariation detection (HCD). In Experiment 1 HCD was found in an implicit- but not in an explicit-instruction group. In Experiment 2 HCD was impaired by nonholistic perception of stimuli but not by divided attention. In Experiment 3 HCD was eliminated by interspersing stimuli that deviated from the critical covariation. In Experiment 4 a transfer procedure was used. HCD was found with dissimilar test stimuli that preserved the covariation but was almost eliminated with similar stimuli that were neutral as to the covariation. Awareness was assessed both by objective and subjective tests in all experiments. Results suggest that HCD is an effect of implicit rule abstraction and that similarity processing plays only a minor role. HCD might be suppressed by intentional search strategies that induce inappropriate aggregation of stimulus information.

  2. Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model

    NASA Astrophysics Data System (ADS)

    Vira, J.; Sofiev, M.

    2015-02-01

    This paper describes the assimilation of trace gas observations into the chemistry transport model SILAM (System for Integrated modeLling of Atmospheric coMposition) using the 3D-Var method. Assimilation results for the year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the AirBase observation database, which provides the observational data set used in this study. Attention was paid to the background and observation error covariance matrices, which were obtained primarily by the iterative application of a posteriori diagnostics. The diagnostics were computed separately for 2 months representing summer and winter conditions, and further disaggregated by time of day. This enabled the derivation of background and observation error covariance definitions, which included both seasonal and diurnal variation. The consistency of the obtained covariance matrices was verified using χ2 diagnostics. The analysis scores were computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values was improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
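    The χ² diagnostics mentioned above rest on the innovation consistency relation: if B and R are correctly specified, the statistic dᵀ(HBHᵀ + R)⁻¹d for an innovation d = y - Hx_b has expectation equal to the number of observations m. A synthetic check of that relation (all dimensions and covariances hypothetical, not SILAM's):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, trials = 30, 12, 2000

H = np.zeros((m, n))
H[np.arange(m), np.arange(m)] = 1.0    # observe the first m grid points

dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.4 * np.exp(-dist / 3.0)          # correlated background errors
R = 0.1 * np.eye(m)                    # uncorrelated observation errors

S = H @ B @ H.T + R                    # innovation covariance
S_inv = np.linalg.inv(S)
L_b = np.linalg.cholesky(B)

chi2 = []
for _ in range(trials):
    eps_b = L_b @ rng.normal(size=n)              # background error ~ N(0, B)
    eps_o = np.sqrt(0.1) * rng.normal(size=m)     # obs error ~ N(0, R)
    d = H @ eps_b + eps_o                         # innovation y - H x_b
    chi2.append(d @ S_inv @ d)

mean_chi2 = np.mean(chi2)   # close to m when B and R are consistent
```

    In practice the diagnostics run the other way: a mean χ² far from m signals mis-specified covariances, and B and R are rescaled iteratively, which is the "iterative application of a posteriori diagnostics" described in the abstract.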

  3. Covariation Neglect among Novice Investors

    ERIC Educational Resources Information Center

    Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy

    2006-01-01

    In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…

  4. Structural Covariance of the Default Network in Healthy and Pathological Aging

    PubMed Central

    Turner, Gary R.

    2013-01-01

    Significant progress has been made uncovering functional brain networks, yet little is known about the corresponding structural covariance networks. The default network's functional architecture has been shown to change over the course of healthy and pathological aging. We examined cross-sectional and longitudinal datasets to reveal the structural covariance of the human default network across the adult lifespan and through the progression of Alzheimer's disease (AD). We used a novel approach to identify the structural covariance of the default network and derive individual participant scores that reflect the covariance pattern in each brain image. A seed-based multivariate analysis was conducted on structural images in the cross-sectional OASIS (N = 414) and longitudinal Alzheimer's Disease Neuroimaging Initiative (N = 434) datasets. We reproduced the distributed topology of the default network, based on a posterior cingulate cortex seed, consistent with prior reports of this intrinsic connectivity network. Structural covariance of the default network scores declined in healthy and pathological aging. Decline was greatest in the AD cohort and in those who progressed from mild cognitive impairment to AD. Structural covariance of the default network scores were positively associated with general cognitive status, reduced in APOEε4 carriers versus noncarriers, and associated with CSF biomarkers of AD. These findings identify the structural covariance of the default network and characterize changes to the network's gray matter integrity across the lifespan and through the progression of AD. The findings provide evidence for the large-scale network model of neurodegenerative disease, in which neurodegeneration spreads through intrinsically connected brain networks in a disease specific manner. PMID:24048852
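    In its simplest form, a seed-based structural covariance map of the kind described above correlates gray-matter values at a seed region with every other region across subjects (the study's actual analysis is multivariate and voxelwise). A synthetic sketch with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(8)
n_subj, n_regions, seed = 400, 90, 0

# Synthetic "gray matter" values: regions 0-9 share a common network factor,
# mimicking a structural covariance network that includes the seed.
factor = rng.normal(size=n_subj)
gm = rng.normal(size=(n_subj, n_regions))
gm[:, :10] += 0.8 * factor[:, None]

# Seed-based structural covariance map: Pearson correlation of the seed
# region with every region, computed across subjects.
gm_z = (gm - gm.mean(axis=0)) / gm.std(axis=0)
cov_map = gm_z.T @ gm_z[:, seed] / n_subj
```

    Regions sharing the latent factor show high correlation with the seed while the rest hover near zero, which is how a distributed network topology emerges from structural images alone.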

  5. Covariance Applications in Criticality Safety, Light Water Reactor Analysis, and Spent Fuel Characterization

    DOE PAGES

    Williams, M. L.; Wiarda, D.; Ilas, G.; ...

    2014-06-15

    Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.

  6. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimizing the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
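    The sampling core of such a procedure can be sketched as follows: check positive semi-definiteness, repair by eigenvalue clipping if needed, factorize, and draw correlated samples. All parameter values below are hypothetical, not taken from an actual ENDF-6 file:

```python
import numpy as np

def sample_correlated(mean, cov, n_samples, rng, clip=0.0):
    """Draw correlated Gaussian samples; if cov is not positive
    semi-definite (a common inconsistency in evaluated covariance data),
    repair it by clipping negative eigenvalues before factorizing."""
    w, V = np.linalg.eigh(np.asarray(cov, float))
    if w.min() < -1e-10:
        w = np.maximum(w, clip)              # PSD repair by eigenvalue clipping
    L = V * np.sqrt(np.maximum(w, 0.0))      # cov = L L^T
    z = rng.normal(size=(n_samples, len(mean)))
    return np.asarray(mean) + z @ L.T

rng = np.random.default_rng(5)
mean = np.array([2.0, 5.0, 1.0])             # e.g. three resonance parameters
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.01]])
samples = sample_correlated(mean, cov, 100_000, rng)
```

    Statistical consistency of the samples can then be validated exactly as the abstract describes, by comparing the empirical mean and covariance of the samples against the input matrices. (For inherently positive parameters, a transformed, e.g. lognormal, sampling scheme would be needed; that refinement is omitted here.)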

  7. Eddy Covariance Measurements of Methane Flux Using an Open-Path Gas Analyzer

    NASA Astrophysics Data System (ADS)

    Burba, G.; Anderson, T.; Zona, D.; Schedlbauer, J.; Anderson, D.; Eckles, R.; Hastings, S.; Ikawa, H.; McDermitt, D.; Oberbauer, S.; Oechel, W.; Riensche, B.; Starr, G.; Sturtevant, C.; Xu, L.

    2008-12-01

    Methane is an important greenhouse gas with a warming potential of about 23 times that of carbon dioxide over a 100-year cycle (Houghton et al., 2001). Measurements of methane fluxes from the terrestrial biosphere have mostly been made using flux chambers, which have many advantages, but are discrete in time and space and may disturb surface integrity and air pressure. Open-path analyzers offer a number of advantages for measuring methane fluxes, including undisturbed in-situ flux measurements, spatial integration using the Eddy Covariance approach, zero frequency response errors due to tube attenuation, confident water and thermal density terms from co-located fast measurements of water and sonic temperature, and remote deployment due to lower power demands in the absence of a pump. The prototype open-path methane analyzer is a VCSEL (vertical-cavity surface-emitting laser)-based instrument. It employs an open Herriott cell and measures levels of methane with RMS noise below 6 ppb at 10 Hz sampling in a controlled laboratory environment. Field maintenance is minimized by a self-cleaning mechanism that keeps the lower mirror free of contamination. Eddy Covariance measurements of methane flux using the prototype open-path methane analyzer are presented for the period between 2006 and 2008 in three ecosystems with contrasting weather and moisture conditions: (1) Fluxes over a short-hydroperiod sawgrass wetland in the Florida Everglades were measured in a warm and humid environment with temperatures often exceeding 25°C, variable winds, and frequent heavy dew at night; (2) Fluxes over coastal wetlands in an Arctic tundra were measured in an environment with frequent sub-zero temperatures, moderate winds, and ocean mist; (3) Fluxes over Pacific mangroves in Mexico were measured in an environment with moderate air temperatures, high winds, and sea spray. Presented eddy covariance flux data were collected from a co-located prototype open-path methane analyzer, LI-7500, and
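    At its core, the eddy covariance flux is the time-averaged product of the fluctuations of vertical wind speed and gas density (Reynolds decomposition). A toy sketch with synthetic 10 Hz data, omitting the density (WPL) and frequency-response corrections that a real processing chain applies; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
fs, minutes = 10, 30                 # 10 Hz sampling, 30-minute averaging block
n = fs * 60 * minutes

# Synthetic correlated series: vertical wind w (m/s) and CH4 density c (mg/m^3).
turb = rng.normal(size=n)                             # shared turbulent signal
w = 0.3 * turb + 0.1 * rng.normal(size=n)             # vertical wind
c = 1.2 + 0.05 * turb + 0.02 * rng.normal(size=n)     # methane density

# Reynolds decomposition: flux = mean of the product of fluctuations, <w'c'>.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)    # eddy covariance flux, here ~0.015
```

    Upward transport shows up as a positive covariance: parcels moving up (w' > 0) carry higher-than-average methane density (c' > 0), so the averaged product is positive.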

  8. Spatially covariant theories of gravity: disformal transformation, cosmological perturbations and the Einstein frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, Tomohiro; Gao, Xian; Yokoyama, Jun'ichi, E-mail: tomofuji@stanford.edu, E-mail: gao@th.phys.titech.ac.jp, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp

    We investigate the cosmological background evolution and perturbations in a general class of spatially covariant theories of gravity, which propagates two tensor modes and one scalar mode. We show that the structure of the theory is preserved under the disformal transformation. We also evaluate the primordial spectra for both the gravitational waves and the curvature perturbation, which are invariant under the disformal transformation. Due to the existence of higher spatial derivatives, the quadratic Lagrangian for the tensor modes itself cannot be transformed to the form in the Einstein frame. Nevertheless, there exists a one-parameter family of frames in which the spectrum of the gravitational waves takes the standard form in the Einstein frame.

  9. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    PubMed

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
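
    The sketch below shows the generic SIMEX machinery in its classical covariate-error setting, where the attenuation bias is easy to verify; the paper's contribution extends this same simulate-then-extrapolate recipe to error in the event time itself. All data and parameter values here are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Classical covariate measurement error: we observe w = x + u instead of x,
# so the naive OLS slope of y on w is attenuated. SIMEX adds *extra*
# simulated error at inflation levels lambda, tracks how the naive estimate
# degrades, and extrapolates back to lambda = -1 (no error).
n, b1, su = 20000, 0.5, 0.5            # su: sd of the measurement error
x = rng.normal(size=n)
y = b1 * x + 0.2 * rng.normal(size=n)
w = x + su * rng.normal(size=n)        # error-prone version of x

def slope(y, w):
    """Naive OLS slope of y on the error-prone covariate w."""
    return np.cov(y, w)[0, 1] / np.var(w, ddof=1)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50                                 # simulation replicates per lambda
means = [np.mean([slope(y, w + np.sqrt(lam) * su * rng.normal(size=n))
                  for _ in range(B)]) for lam in lambdas]

# Quadratic extrapolation of the lambda -> estimate curve to lambda = -1.
simex = np.polyval(np.polyfit(lambdas, means, 2), -1.0)
print(f"naive={means[0]:.3f}  SIMEX={simex:.3f}  truth={b1}")
```

    The quadratic extrapolant is the conventional default; the naive estimate here is attenuated to roughly b1/(1 + su²), and the SIMEX estimate recovers most of that bias.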

  10. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  11. The evolution of phenotypic integration: How directional selection reshapes covariation in mice

    PubMed Central

    Penna, Anna; Melo, Diogo; Bernardi, Sandra; Oyarzabal, Maria Inés; Marroig, Gabriel

    2017-01-01

    Variation is the basis for evolution, and understanding how variation can evolve is a central question in biology. In complex phenotypes, covariation plays an even more important role, as genetic associations between traits can bias and alter evolutionary change. Covariation can be shaped by complex interactions between loci, and this genetic architecture can also change during evolution. In this article, we analyzed mouse lines experimentally selected for changes in size to address the question of how multivariate covariation changes under directional selection, as well as to identify the consequences of these changes to evolution. Selected lines showed a clear restructuring of covariation in their cranium and, instead of depleting their size variation, these lines increased their magnitude of integration and the proportion of variation associated with the direction of selection. This result is compatible with recent theoretical works on the evolution of covariation that take the complexities of genetic architecture into account. This result also contradicts the traditional view of the effects of selection on available covariation and suggests a much more complex view of how populations respond to selection. PMID:28685813

  12. Directional selection effects on patterns of phenotypic (co)variation in wild populations

    PubMed Central

    Patton, J. L.; Hubbe, A.; Marroig, G.

    2016-01-01

    Phenotypic (co)variation is a prerequisite for evolutionary change, and understanding how (co)variation evolves is of crucial importance to the biological sciences. Theoretical models predict that under directional selection, phenotypic (co)variation should evolve in step with the underlying adaptive landscape, increasing the degree of correlation among co-selected traits as well as the amount of genetic variance in the direction of selection. Whether either of these outcomes occurs in natural populations is an open question and thus an important gap in evolutionary theory. Here, we documented changes in the phenotypic (co)variation structure in two separate natural populations in each of two chipmunk species (Tamias alpinus and T. speciosus) undergoing directional selection. In populations where selection was strongest (those of T. alpinus), we observed changes, at least for one population, in phenotypic (co)variation that matched theoretical expectations, namely an increase of both phenotypic integration and (co)variance in the direction of selection and a re-alignment of the major axis of variation with the selection gradient. PMID:27881744

  13. Directional selection effects on patterns of phenotypic (co)variation in wild populations.

    PubMed

    Assis, A P A; Patton, J L; Hubbe, A; Marroig, G

    2016-11-30

    Phenotypic (co)variation is a prerequisite for evolutionary change, and understanding how (co)variation evolves is of crucial importance to the biological sciences. Theoretical models predict that under directional selection, phenotypic (co)variation should evolve in step with the underlying adaptive landscape, increasing the degree of correlation among co-selected traits as well as the amount of genetic variance in the direction of selection. Whether either of these outcomes occurs in natural populations is an open question and thus an important gap in evolutionary theory. Here, we documented changes in the phenotypic (co)variation structure in two separate natural populations in each of two chipmunk species (Tamias alpinus and T. speciosus) undergoing directional selection. In populations where selection was strongest (those of T. alpinus), we observed changes, at least for one population, in phenotypic (co)variation that matched theoretical expectations, namely an increase of both phenotypic integration and (co)variance in the direction of selection and a re-alignment of the major axis of variation with the selection gradient. © 2016 The Author(s).

  14. Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies

    ERIC Educational Resources Information Center

    Chen, Jianshen; Kaplan, David

    2015-01-01

    Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…

  15. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation between growth rate reliability (GRR) and effect size to the sample size required to detect power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, error variance with GCR, and parameter values which are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544

  16. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance

    PubMed Central

    Zheng, Binqi; Yuan, Xiaobing

    2018-01-01

    The unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimations using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimations of the current noise covariance of the process and measurement, respectively. By utilizing a weighting factor, the filter combines the last noise covariance matrices with the estimations to form the new noise covariance matrices. Finally, the state estimations are corrected according to the new noise covariance matrices and previous state estimations. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results. PMID:29518960

  17. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    PubMed

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

    The unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimations using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimations of the current noise covariance of the process and measurement, respectively. By utilizing a weighting factor, the filter combines the last noise covariance matrices with the estimations to form the new noise covariance matrices. Finally, the state estimations are corrected according to the new noise covariance matrices and previous state estimations. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results.
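
    The core idea of innovation-based noise adaptation can be sketched on a scalar linear Kalman filter (not the paper's UKF): form an instantaneous estimate of the measurement-noise variance from the squared innovation, then blend it with the current value via a weighting factor, as the abstract describes. All model values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk state observed with unknown measurement noise R.
# Since E[innovation^2] = P_pred + R, the quantity nu^2 - P_pred is an
# instantaneous (noisy) estimate of R; an exponential blend with weighting
# factor alpha adapts R online, echoing the RAUKF adaptation step.
true_Q, true_R = 0.01, 4.0
T = 2000
x_true = np.cumsum(np.sqrt(true_Q) * rng.normal(size=T))
z = x_true + np.sqrt(true_R) * rng.normal(size=T)

x, P = 0.0, 1.0
Q, R = true_Q, 0.5            # start with a badly wrong R
alpha = 0.02                  # weighting factor for the covariance blend
for zk in z:
    P = P + Q                 # predict
    nu = zk - x               # innovation
    R_hat = nu**2 - P         # instantaneous estimate of R
    R = max(1e-6, (1 - alpha) * R + alpha * R_hat)
    K = P / (P + R)           # update with the adapted R
    x = x + K * nu
    P = (1 - K) * P

print(f"adapted R = {R:.2f} (true R = {true_R}, started at 0.5)")
```

    In practice the blended estimate fluctuates around the true variance; a fault-detection gate (as in the paper) would trigger the update only when the innovation statistics look inconsistent with the current covariance.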

  18. Making connections: exploring the centrality of posttraumatic stress symptoms and covariates after a terrorist attack

    PubMed Central

    Birkeland, Marianne Skogbrott; Heir, Trond

    2017-01-01

    ABSTRACT Background: Posttraumatic stress symptoms are interconnected. Knowledge about which symptoms of posttraumatic stress are more strongly interconnected or central than others may have implications for the targeting of clinical interventions. Exploring whether symptoms of posttraumatic stress may be differentially related to covariates can contribute to our knowledge on how posttraumatic stress symptoms arise and are maintained. Objective: This study aimed to identify the most central symptoms of posttraumatic stress and their interconnections, and to explore how covariates such as exposure, sex, neuroticism, and social support are related to the network of symptoms of posttraumatic stress. Method: This study used survey data from ministerial employees collected approximately 10 months after the 2011 Oslo bombing that targeted the governmental quarters (n = 190). We conducted network analyses using Gaussian graphical models and the lasso regularization. Results: The network analysis revealed reliably strong connections between intrusive thoughts and nightmares, feeling easily startled and overly alert, and between feeling detached and emotionally numb. The most central symptom in the symptom network was feeling emotionally numb. The covariates were generally not found to have high centrality in the symptom network. An exception was that being female was connected to a high physiological reactivity to reminders of the trauma. Conclusions: Ten months after a workplace terror attack emotional numbness appears to be of high centrality in the symptom network of posttraumatic stress. Fear circuitry and dysphoric symptoms may constitute two functional entities in chronic posttraumatic stress. Clinical interventions targeting numbness may be beneficial in the treatment of posttraumatic stress, at least after workplace terrorism. PMID:29038689
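
    In a Gaussian graphical model, edge weights between symptoms are partial correlations, which can be read off the inverse covariance (precision) matrix. The study additionally applied lasso regularization, which shrinks weak edges to exactly zero; the sketch below shows only the unregularized step, on simulated "symptom" data rather than the survey data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Partial correlations from the precision matrix P = inv(cov):
#   pcor_ij = -P_ij / sqrt(P_ii * P_jj)
# A chain 0 -> 1 -> 2 should show strong direct edges (0-1, 1-2) while
# the indirect 0-2 association is explained away.
n, p = 500, 5
data = rng.normal(size=(n, p))
data[:, 1] += 0.8 * data[:, 0]          # symptom 1 driven by symptom 0
data[:, 2] += 0.8 * data[:, 1]          # symptom 2 driven by symptom 1

P = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(P))
pcor = -P / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)
print(np.round(pcor, 2))
```

    Centrality indices (e.g. strength, the sum of absolute edge weights per node) are then computed on this weighted network.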

  19. Adjusting head circumference for covariates in autism: clinical correlates of a highly heritable continuous trait

    PubMed Central

    Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J.; Murtha, Michael T.; Hus, Vanessa; Lowe, Jennifer K.; Willsey, A. Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W.; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E.; Ledbetter, David H.; Lord, Catherine; Mane, Shrikant M.; Martin, Christa Lese; Martin, Donna M.; Morrow, Eric M.; Walsh, Christopher A.; Sutcliffe, James S.; State, Matthew W.; Devlin, Bernie; Cook, Edwin H.; Kim, Soo-Jeong

    2013-01-01

    BACKGROUND Brain development follows a different trajectory in children with Autism Spectrum Disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. METHODS We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. RESULTS Gender, age, height, weight, genetic ancestry and ASD status were significant predictors of HC (estimate of the ASD effect=0.2cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. CONCLUSIONS Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait and population norms for HC would be far more accurate if covariates including genetic ancestry, height and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. PMID:23746936

  20. Phenotypic Covariation and Morphological Diversification in the Ruminant Skull.

    PubMed

    Haber, Annat

    2016-05-01

    Differences among clades in their diversification patterns result from a combination of extrinsic and intrinsic factors. In this study, I examined the role of intrinsic factors in the morphological diversification of ruminants, in general, and in the differences between bovids and cervids, in particular. Using skull morphology, which embodies many of the adaptations that distinguish bovids and cervids, I examined 132 of the 200 extant ruminant species. As a proxy for intrinsic constraints, I quantified different aspects of the phenotypic covariation structure within species and compared them with the among-species divergence patterns, using phylogenetic comparative methods. My results show that for most species, divergence is well aligned with their phenotypic covariance matrix and that those that are better aligned have diverged further away from their ancestor. Bovids have dispersed into a wider range of directions in morphospace than cervids, and their overall disparity is higher. This difference is best explained by the lower eccentricity of bovids' within-species covariance matrices. These results are consistent with the role of intrinsic constraints in determining amount, range, and direction of dispersion and demonstrate that intrinsic constraints can influence macroevolutionary patterns even as the covariance structure evolves.

  1. Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data

    PubMed Central

    Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified; unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438

  2. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
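
    The Monte Carlo component of such a collision-probability computation can be sketched as follows: sample the relative position from the sum of the two objects' position covariances and count the fraction of samples falling inside the combined hard-body radius. The numbers below are invented for illustration and are not from any real ESC conjunction:

```python
import numpy as np

rng = np.random.default_rng(7)

# Relative state at closest approach: the miss vector is Gaussian with the
# two objects' covariances summed (assuming independent errors).
mean_rel = np.array([0.2, 0.1, 0.0])        # km, relative position
cov1 = np.diag([0.01, 0.04, 0.01])          # km^2, object 1 (illustrative)
cov2 = np.diag([0.02, 0.02, 0.01])          # km^2, object 2 (illustrative)
hard_body_radius = 0.05                     # km, combined object radii

samples = rng.multivariate_normal(mean_rel, cov1 + cov2, size=200_000)
p_collision = np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)
print(f"P(collision) ~= {p_collision:.5f}")
```

    As the abstract notes, this estimate is only as good as the input covariances: an overly optimistic (small) covariance can make a real threat look improbable, which is why realistic covariance prediction matters.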

  3. Handling Correlations between Covariates and Random Slopes in Multilevel Models

    ERIC Educational Resources Information Center

    Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders

    2014-01-01

    This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…

  4. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    NASA Astrophysics Data System (ADS)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a method of background noise estimation based on tensor product expansion with a median and a Monte Carlo simulation. We have shown that tensor product expansion with an absolute error method is effective for estimating background noise; however, background noise might not be estimated properly by the conventional method. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  5. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
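
    A generic sketch of what "robust estimation of the linear component" means in practice: a Huber-loss linear fit via iteratively reweighted least squares, which downweights gross outliers that would distort ordinary least squares. This is not the authors' penalized high-dimensional procedure, just the robustness ingredient on simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)

def huber_irls(X, y, delta=1.345, iters=50):
    """Huber-loss linear regression via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
        w = np.clip(delta * s / (np.abs(r) + 1e-12), None, 1.0)
        WX = X * w[:, None]                             # weighted design
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)      # weighted normal eqs
    return beta

n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=n)
y[:50] += 20.0                     # gross outliers in 5% of the responses

ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print(f"OLS intercept={ols[0]:.2f}  Huber intercept={rob[0]:.2f}  (truth 1.0)")
```

    The OLS intercept absorbs the contamination; the Huber fit largely ignores it. Adding a nonconcave penalty (e.g. SCAD) to this loss would give variable selection in the spirit of the paper.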

  6. Comparing nocturnal eddy covariance measurements to estimates of ecosystem respiration made by scaling chamber measurements at six coniferous boreal sites

    USGS Publications Warehouse

    Lavigne, M.B.; Ryan, M.G.; Anderson, D.E.; Baldocchi, D.D.; Crill, P.M.; Fitzjarrald, D.R.; Goulden, M.L.; Gower, S.T.; Massheder, J.M.; McCaughey, J.H.; Rayment, M.; Striegl, Robert G.

    1997-01-01

    During the growing season, nighttime ecosystem respiration emits 30–100% of the daytime net photosynthetic uptake of carbon, and therefore measurements of rates and understanding of its control by the environment are important for understanding net ecosystem exchange. Ecosystem respiration can be measured at night by eddy covariance methods, but the data may not be reliable because of low turbulence or other methodological problems. We used relationships between woody tissue, foliage, and soil respiration rates and temperature, with temperature records collected on site to estimate ecosystem respiration rates at six coniferous BOREAS sites at half-hour or 1-hour intervals, and then compared these estimates to nocturnal measurements of CO2 exchange by eddy covariance. Soil surface respiration was the largest source of CO2 at all sites (48–71%), and foliar respiration made a large contribution to ecosystem respiration at all sites (25–43%). Woody tissue respiration contributed only 5–15% to ecosystem respiration. We estimated error for the scaled chamber predictions of ecosystem respiration by using the uncertainty associated with each respiration parameter and respiring biomass value. There was substantial uncertainty in estimates of foliar and soil respiration because of the spatial variability of specific respiration rates. In addition, more attention needs to be paid to estimating foliar respiration during the early part of the growing season, when new foliage is growing, and to determining seasonal trends of soil surface respiration. Nocturnal eddy covariance measurements were poorly correlated to scaled chamber estimates of ecosystem respiration (r2=0.06–0.27) and were consistently lower than scaled chamber predictions (by 27% on average for the six sites). The bias in eddy covariance estimates of ecosystem respiration will alter estimates of gross assimilation in the light and of net ecosystem exchange rates over extended periods.
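
    Scaling chamber measurements to ecosystem respiration typically means applying a temperature-response function (such as a Q10 model) per component, scaled by respiring biomass, and summing. The sketch below uses invented parameter values, not the BOREAS site data:

```python
# Q10 temperature response: R(T) = R_ref * Q10**((T - T_ref)/10).
# Component reference rates and Q10 values below are illustrative only.
def q10_resp(r_ref, q10, t, t_ref=10.0):
    return r_ref * q10 ** ((t - t_ref) / 10.0)

components = {                  # (R_ref in umol m-2 s-1 at 10 C, Q10)
    "soil":    (2.0, 2.5),
    "foliage": (1.2, 2.0),
    "wood":    (0.3, 1.8),
}

t_air = 15.0                    # deg C for one half-hour interval
fluxes = {k: q10_resp(r, q, t_air) for k, (r, q) in components.items()}
total = sum(fluxes.values())
for k, f in fluxes.items():
    print(f"{k:8s} {f:5.2f} umol m-2 s-1 ({100 * f / total:.0f}% of total)")
```

    With these illustrative parameters the soil share dominates, in line with the 48-71% range the study reports; comparing such half-hourly sums against nocturnal eddy covariance fluxes is exactly the cross-check the paper performs.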

  7. Motor-Based Treatment with and without Ultrasound Feedback for Residual Speech-Sound Errors

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2017-01-01

    Background: There is a need to develop effective interventions and to compare the efficacy of different interventions for children with residual speech-sound errors (RSSEs). Rhotics (the r-family of sounds) are frequently in error in American English-speaking children with RSSEs and are commonly targeted in treatment. One treatment approach involves…

  8. Structural Covariance Networks in Children with Autism or ADHD.

    PubMed

    Bethlehem, R A I; Romero-Garcia, R; Mak, E; Bullmore, E T; Baron-Cohen, S

    2017-08-01

    While autism and attention-deficit/hyperactivity disorder (ADHD) are considered distinct conditions from a diagnostic perspective, clinically they share some phenotypic features and have high comorbidity. Regardless, most studies have focused on only one condition, with considerable heterogeneity in their results. Taking a dual-condition approach might help elucidate shared and distinct neural characteristics. Graph theory was used to analyse topological properties of structural covariance networks across both conditions and relative to a neurotypical (NT; n = 87) group using data from the ABIDE (autism; n = 62) and ADHD-200 datasets (ADHD; n = 69). Regional cortical thickness was used to construct the structural covariance networks. This was analysed in a theoretical framework examining potential differences in long and short-range connectivity, with a specific focus on relation between central graph measures and cortical thickness. We found convergence between autism and ADHD, where both conditions show an overall decrease in CT covariance with increased Euclidean distance between centroids compared with a NT population. The 2 conditions also show divergence. Namely, there is less modular overlap between the 2 conditions than there is between each condition and the NT group. The ADHD group also showed reduced cortical thickness and lower degree in hub regions than the autism group. Lastly, the ADHD group also showed reduced wiring costs compared with the autism groups. Our results indicate a need for taking an integrated approach when considering highly comorbid conditions such as autism and ADHD. Furthermore, autism and ADHD both showed alterations in the relation between inter-regional covariance and centroid distance, where both groups show a steeper decline in covariance as a function of distance. The 2 groups also diverge on modular organization, cortical thickness of hub regions and wiring cost of the covariance network. 
Thus, on some network features the

  9. Genome-Wide Networks of Amino Acid Covariances Are Common among Viruses

    PubMed Central

    Donlin, Maureen J.; Szeto, Brandon; Gohara, David W.; Aurora, Rajeev

    2012-01-01

    Coordinated variation among positions in amino acid sequence alignments can reveal genetic dependencies at noncontiguous positions, but methods to assess these interactions are incompletely developed. Previously, we found genome-wide networks of covarying residue positions in the hepatitis C virus genome (R. Aurora, M. J. Donlin, N. A. Cannon, and J. E. Tavis, J. Clin. Invest. 119:225–236, 2009). Here, we asked whether such networks are present in a diverse set of viruses and, if so, what they may imply about viral biology. Viral sequences were obtained for 16 viruses in 13 species from 9 families. The entire viral coding potential for each virus was aligned, all possible amino acid covariances were identified using the observed-minus-expected-squared algorithm at a false-discovery rate of ≤1%, and networks of covariances were assessed using standard methods. Covariances that spanned the viral coding potential were common in all viruses. In all cases, the covariances formed a single network that contained essentially all of the covariances. The hepatitis C virus networks had hub-and-spoke topologies, but all other networks had random topologies with an unusually large number of highly connected nodes. These results indicate that genome-wide networks of genetic associations and the coordinated evolution they imply are very common in viral genomes, that the networks rarely have the hub-and-spoke topology that dominates other biological networks, and that network topologies can vary substantially even within a given viral group. Five examples with hepatitis B virus and poliovirus are presented to illustrate how covariance network analysis can lead to inferences about viral biology. PMID:22238298
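
    The observed-minus-expected-squared (OMES) score named in the abstract compares observed counts of residue pairs at two alignment columns against the counts expected under independence. Normalization conventions vary between implementations; the toy alignment below is invented for illustration:

```python
from collections import Counter
from itertools import combinations

def omes(col_i, col_j):
    """OMES covariance score: sum((N_obs - N_exp)^2) over observed residue
    pairs, divided by the number of sequences (one common normalization)."""
    n = len(col_i)
    pair_counts = Counter(zip(col_i, col_j))
    freq_i, freq_j = Counter(col_i), Counter(col_j)
    score = sum((n_obs - freq_i[a] * freq_j[b] / n) ** 2
                for (a, b), n_obs in pair_counts.items())
    return score / n

# Toy alignment: columns 0 and 1 covary perfectly; column 2 is independent.
alignment = ["AKV", "AKL", "GRV", "GRL", "AKV", "GRL"]
cols = list(zip(*alignment))
for i, j in combinations(range(3), 2):
    print(f"columns {i}-{j}: OMES = {omes(cols[i], cols[j]):.3f}")
```

    Scoring every column pair across a genome-wide alignment, keeping pairs that pass a false-discovery threshold, yields the covariance networks the study analyzes.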

  10. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  11. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.G.

    2011-07-01

A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed and requires no regularization. No modification is needed to the adjustment formulae that have been used in the past: they remain valid, and yield a unique solution, in the case of a singular covariance matrix for the prior parameters.
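The point that the adjustment never needs the inverse of the prior covariance can be illustrated with the gain form of the update, which inverts only the innovation covariance. This is a hedged sketch (function and variable names are assumptions, and it is not the paper's derivation):

```python
import numpy as np

def ls_adjust(x0, C, A, y, V):
    """Linear least-squares adjustment of prior parameters x0 with prior
    covariance C, given responses y = A x + noise with covariance V.
    Only S = A C A^T + V is inverted, so the update remains valid when
    C is rank-deficient (singular)."""
    S = A @ C @ A.T + V              # innovation covariance (nonsingular)
    K = np.linalg.solve(S, A @ C).T  # gain K = C A^T S^{-1}
    x = x0 + K @ (y - A @ x0)        # adjusted parameters
    Cx = C - K @ A @ C               # adjusted covariance
    return x, Cx

# Singular prior covariance: the second parameter is known exactly.
x0 = np.zeros(2)
C = np.diag([1.0, 0.0])
A = np.array([[1.0, 1.0]])
x, Cx = ls_adjust(x0, C, A, np.array([2.0]), np.array([[1.0]]))
```

In the example the second parameter has zero prior variance, so the adjustment leaves it untouched and assigns all the update to the first parameter, with no regularization of C required.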

  12. The Impact of Ocean Data Assimilation on Seasonal-to-Interannual Forecasts: A Case Study of the 2006 El Niño Event

    NASA Technical Reports Server (NTRS)

    Yang, Shu-Chih; Rienecker, Michele; Keppenne, Christian

    2010-01-01

This study investigates the impact of four different ocean analyses on coupled forecasts of the 2006 El Niño event. Forecasts initialized in June 2006 using ocean analyses from an assimilation that uses flow-dependent background error covariances are compared with those using static error covariances that are not flow dependent. The flow-dependent error covariances reflect the error structures related to the background ENSO instability and are generated by the coupled breeding method. The ocean analyses used in this study result from the assimilation of temperature and salinity, with the salinity data available from Argo floats. Of the analyses, the one using information from the coupled bred vectors (BV) replicates the observed equatorial long wave propagation best and exhibits more warming features leading to the 2006 El Niño event. The forecasts initialized from the BV-based analysis agree best with the observations in terms of the growth of the warm anomaly through two warming phases. This better performance is related to the impact of the salinity analysis on the state evolution in the equatorial thermocline. The early warming is traced back to salinity differences in the upper ocean of the equatorial central Pacific, while the second warming, corresponding to the mature phase, is associated with the effect of the salinity assimilation on the depth of the thermocline in the western equatorial Pacific. The series of forecast experiments conducted here show that the structure of the salinity in the initial conditions is important to the forecasts of the extension of the warm pool and the evolution of the 2006 El Niño event.
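The coupled breeding method referenced above is too large for a short example, but its core cycle (run a control and a perturbed forecast, rescale their difference to a fixed amplitude, reinsert it) can be sketched on a toy model. Everything below is an illustrative assumption, not the study's configuration: Lorenz-63 stands in for the coupled model, forward Euler for the integrator, and the amplitude and cycle counts are arbitrary:

```python
import numpy as np

def lorenz63(x, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy 'model')."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def breed(x0, n_cycles=100, steps=10, amp=1e-3, rng=None):
    """Minimal breeding cycle: integrate control and perturbed states,
    rescale their difference back to amplitude `amp`, and reinsert it.
    The final rescaled difference is the bred vector."""
    if rng is None:
        rng = np.random.default_rng(0)
    ctrl = x0.copy()
    pert = x0 + amp * rng.standard_normal(3)
    diff = pert - ctrl
    for _ in range(n_cycles):
        for _ in range(steps):
            ctrl = lorenz63(ctrl)
            pert = lorenz63(pert)
        diff = pert - ctrl
        diff *= amp / np.linalg.norm(diff)   # rescale to fixed amplitude
        pert = ctrl + diff                   # reinsert perturbation
    return diff
```

The bred vector converges toward the fastest-growing error direction of the evolving flow, which is what makes breeding-based covariances flow dependent, in contrast to the static covariances of the comparison analyses.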

  13. Analysis of capture-recapture models with individual covariates using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew

    2009-01-01

    I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.

  14. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake

    PubMed Central

    Agogo, George O.

    2017-01-01

Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for this bias, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as a 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing serious challenges for regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method. PMID:27704599
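Standard regression calibration, the comparator method in the abstract, can be sketched on simulated data. The simulation below is a hedged illustration under classical-error assumptions (all distributions, error variances, and the true slope of 2.0 are invented here); the zero-augmented generalized gamma model itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
true_intake = rng.gamma(2.0, 1.5, n)           # latent long-term intake
ffq = true_intake + rng.normal(0, 1.5, n)      # error-prone FFQ report
hr24 = true_intake + rng.normal(0, 0.8, n)     # unbiased 24HR measurement
outcome = 2.0 * true_intake + rng.normal(0, 1, n)

# Naive analysis: regress outcome directly on FFQ (slope is attenuated).
naive = np.polyfit(ffq, outcome, 1)[0]

# Regression calibration: first predict intake from FFQ using the 24HR
# as the calibration response, then regress outcome on that prediction.
b1, b0 = np.polyfit(ffq, hr24, 1)
calibrated = b0 + b1 * ffq
rc = np.polyfit(calibrated, outcome, 1)[0]
```

The naive slope shrinks toward zero by the attenuation factor var(T)/(var(T)+var(e)), while the calibrated slope recovers the true association, which mirrors the 0.4/1.2 versus 2.0 pattern reported in the abstract.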

  15. A Semiparametric Approach to Simultaneous Covariance Estimation for Bivariate Sparse Longitudinal Data

    PubMed Central

    Das, Kiranmoy; Daniels, Michael J.

    2014-01-01

Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium and low) at baseline. PMID:24400941
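The matrix stick-breaking process is specialized, but the ordinary Dirichlet-process stick-breaking construction on which it builds is easy to sketch. The truncation level and concentration parameter below are illustrative assumptions:

```python
import numpy as np

def stick_breaking(alpha, k, rng):
    """Truncated stick-breaking weights for a Dirichlet process with
    concentration alpha: w_j = beta_j * prod_{i<j} (1 - beta_i), with
    beta_j ~ Beta(1, alpha). Returns k weights summing to less than 1."""
    betas = rng.beta(1.0, alpha, size=k)
    # Fraction of the stick remaining before each break.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

w = stick_breaking(alpha=2.0, k=50, rng=np.random.default_rng(1))
```

Each weight is a random fraction of whatever stick length remains, so the weights decay stochastically; mixing component parameters over these weights gives the nonparametric priors used for the random effects in the article.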

  16. Covariant Structure of Models of Geophysical Fluid Motion

    NASA Astrophysics Data System (ADS)

    Dubos, Thomas

    2018-01-01

    Geophysical models approximate classical fluid motion in rotating frames. Even accurate approximations can have profound consequences, such as the loss of inertial frames. If geophysical fluid dynamics are not strictly equivalent to Newtonian hydrodynamics observed in a rotating frame, what kind of dynamics are they? We aim to clarify fundamental similarities and differences between relativistic, Newtonian, and geophysical hydrodynamics, using variational and covariant formulations as tools to shed the necessary light. A space-time variational principle for the motion of a perfect fluid is introduced. The geophysical action is interpreted as a synchronous limit of the relativistic action. The relativistic Levi-Civita connection also has a finite synchronous limit, which provides a connection with which to endow geophysical space-time, generalizing Cartan (1923). A covariant mass-momentum budget is obtained using covariance of the action and metric-preserving properties of the connection. Ultimately, geophysical models are found to differ from the standard compressible Euler model only by a specific choice of a metric-Coriolis-geopotential tensor akin to the relativistic space-time metric. Once this choice is made, the same covariant mass-momentum budget applies to Newtonian and all geophysical hydrodynamics, including those models lacking an inertial frame. Hence, it is argued that this mass-momentum budget provides an appropriate, common fundamental principle of dynamics. The postulate that Euclidean, inertial frames exist can then be regarded as part of the Newtonian theory of gravitation, which some models of geophysical hydrodynamics slightly violate.

  17. Error-Related Brain Activity in Young Children: Associations with Parental Anxiety and Child Temperamental Negative Emotionality

    ERIC Educational Resources Information Center

    Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.

    2013-01-01

    Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 [milliseconds] after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…

  18. Relativistic covariance of Ohm's law

    NASA Astrophysics Data System (ADS)

    Starke, R.; Schober, G. A. H.

    2016-04-01

The derivation of Lorentz-covariant generalizations of Ohm's law has been a long-term issue in theoretical physics with deep implications for the study of relativistic effects in optical and atomic physics. In this article, we propose an alternative route to this problem, which is motivated by the tremendous progress in first-principles materials physics in general and ab initio electronic structure theory in particular. We start from the most general, Lorentz-covariant first-order response law, which is written in terms of the fundamental response tensor χ^μ_ν relating induced four-currents to external four-potentials. By showing the equivalence of this description to Ohm's law, we prove the validity of Ohm's law in every inertial frame. We further use the universal relation between χ^μ_ν and the microscopic conductivity tensor σ_kℓ to derive a fully relativistic transformation law for the latter, which includes all effects of anisotropy and relativistic retardation. In the special case of a constant, scalar conductivity, this transformation law can be used to rederive a standard textbook generalization of Ohm's law.
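The linear response law the abstract starts from can be written out explicitly. In the notation above, with χ^μ_ν the fundamental response tensor, j^μ the induced four-current, and A^ν the external four-potential, the standard first-order (linear) response relation reads

```latex
j^{\mu}_{\mathrm{ind}}(x) \;=\; \int \mathrm{d}^4 x' \,
  \chi^{\mu}{}_{\nu}(x, x') \, A^{\nu}_{\mathrm{ext}}(x')
```

Because both four-currents and four-potentials transform as four-vectors, χ^μ{}_ν transforms as a Lorentz tensor, which is what makes this form a natural covariant starting point for recovering Ohm's law in every inertial frame.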

  19. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
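The classical first-order perturbation result underlying such Jacobians, dλ/dA_ij = w_i v_j for a simple eigenvalue λ with right eigenvector v and left eigenvector w normalized so that w·v = 1, can be sketched and checked numerically. This is an illustrative sketch, not the thesis's formulation; the function name and the eigenvalue-matching heuristic are assumptions:

```python
import numpy as np

def eigenvalue_jacobian(A, k):
    """Jacobian of the k-th eigenvalue of A with respect to the entries
    of A: J[i, j] = dλ_k / dA_ij = w_i v_j, where v and w are the right
    and left eigenvectors normalized so that w·v = 1. Assumes λ_k is a
    simple eigenvalue."""
    lam, V = np.linalg.eig(A)
    lamL, W = np.linalg.eig(A.T)              # columns: left eigenvectors
    m = np.argmin(np.abs(lamL - lam[k]))      # match to the same eigenvalue
    v, w = V[:, k], W[:, m]
    w = w / (w @ v)                           # enforce w·v = 1
    return np.outer(w, v)
```

For a diagonal matrix the Jacobian of the largest eigenvalue is simply an indicator on the corresponding diagonal entry, which matches both the analytic formula and a forward finite-difference check, the same validation strategy the thesis describes.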

  20. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, and law-and-order buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.