Sample records for asymptotic standard errors

  1. Automated Hypothesis Tests and Standard Errors for Nonstandard Problems with Description of Computer Package: A Draft.

    ERIC Educational Resources Information Center

    Lord, Frederic M.; Stocking, Martha

A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…

  2. Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items

    ERIC Educational Resources Information Center

Wong, Cheow Cher

    2015-01-01

    Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
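
    The equating ASEs in records like this one are typically obtained with the delta method: if θ̂ is asymptotically normal with covariance Σ and the equated score is a smooth function g(θ̂), then SE[g(θ̂)] ≈ √(∇g(θ̂)ᵀ Σ ∇g(θ̂)). Below is a minimal numerical sketch of that general recipe, not the article's specific equating functions; the example function and covariance matrix are illustrative.

    ```python
    import numpy as np

    def delta_method_se(g, theta_hat, cov_theta, eps=1e-6):
        """Asymptotic SE of g(theta_hat): sqrt(grad' Cov grad), with a
        central-difference approximation to the gradient of g."""
        theta_hat = np.asarray(theta_hat, dtype=float)
        grad = np.empty_like(theta_hat)
        for k in range(theta_hat.size):
            step = np.zeros_like(theta_hat)
            step[k] = eps
            grad[k] = (g(theta_hat + step) - g(theta_hat - step)) / (2 * eps)
        return float(np.sqrt(grad @ cov_theta @ grad))

    # Toy usage: SE of a nonlinear function of two parameter estimates.
    cov = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
    se = delta_method_se(lambda t: t[0] * np.exp(t[1]), [1.2, 0.3], cov)
    ```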

  3. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  4. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  5. Quantifying Adventitious Error in a Covariance Structure as a Random Effect

    PubMed Central

    Wu, Hao; Browne, Michael W.

    2017-01-01

We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the estimated dispersion parameter of this distribution gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463

  6. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂_N(θ̂_OLS))⁻¹ (13). The asymptotic standard errors are given by SE_k(θ₀) = √((Σ_N₀)_kk), k = 1, …, p (14). These standard errors are estimated in practice (when θ₀ and σ₀ are not known) by SE_k(θ̂_OLS) = √((Σ̂_N(θ̂_OLS))_kk), k = 1… SE_k(θ̂_boot) = √(Cov(θ̂_boot)_kk). We will compare the optimal design methods using the standard errors resulting from the optimal time points each…
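
    The formulas in this excerpt reduce to taking square roots of the diagonal of the inverse (estimated) Fisher information matrix. A minimal sketch of just that step, assuming an estimated FIM is already in hand; the report's OLS-based construction of Σ̂_N is not reproduced, and the example matrix is illustrative.

    ```python
    import numpy as np

    def asymptotic_se(fim):
        """SE_k = sqrt((F^-1)_kk): square roots of the diagonal of the
        inverse Fisher information matrix, i.e. the asymptotic covariance."""
        cov = np.linalg.inv(fim)
        return np.sqrt(np.diag(cov))

    fim_hat = np.array([[50.0, 5.0],
                        [5.0, 20.0]])   # illustrative estimated FIM
    print(asymptotic_se(fim_hat))       # one SE per parameter
    ```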

  7. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  8. Estimating the Imputed Social Cost of Errors of Measurement.

    DTIC Science & Technology

    1983-10-01

social cost of an error of measurement in the score on a unidimensional test, an asymptotic method, based on item response theory, is developed for… RR-83-33-ONR, ESTIMATING THE IMPUTED SOCIAL COST OF ERRORS OF MEASUREMENT, Frederic M. Lord. This research was sponsored in part by the Personnel and Training Research Programs, Psychological…

  9. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  10. Accommodating Sensor Bias in MRAC for State Tracking

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    The problem of accommodating unknown sensor bias is considered in a direct model reference adaptive control (MRAC) setting for state tracking using state feedback. Sensor faults can occur during operation, and if the biased state measurements are directly used with a standard MRAC control law, neither closed-loop signal boundedness, nor asymptotic tracking can be guaranteed and the resulting tracking errors may be unbounded or unacceptably large. A modified MRAC law is proposed, which combines a bias estimator with control gain adaptation, and it is shown that signal boundedness can be accomplished, although the tracking error may not go to zero. Further, for the case wherein an asymptotically stable sensor bias estimator is available, an MRAC control law is proposed to accomplish asymptotic tracking and signal boundedness. Such a sensor bias estimator can be designed if additional sensor measurements are available, as illustrated for the case wherein bias is present in the rate gyro and airspeed measurements. Numerical example results are presented to illustrate each of the schemes.

  11. Kappa statistic for the clustered dichotomous responses from physicians and patients

    PubMed Central

    Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L.; Cai, Jianwen

    2013-01-01

The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that, with at least a moderately large number of clusters, the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians. An example of an application to a coronary heart disease prevention study is presented. PMID:23533082

  12. Kappa statistic for clustered dichotomous responses from physicians and patients.

    PubMed

    Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L; Cai, Jianwen

    2013-09-20

The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that, with at least a moderately large number of clusters, the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians. We present an example of an application to a coronary heart disease prevention study. Copyright © 2013 John Wiley & Sons, Ltd.
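
    A hedged sketch of the cluster bootstrap evaluated in these two records: physicians (clusters) are resampled with replacement and kappa is recomputed on each resample. The data layout and helper names are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np

    def kappa(x, y):
        """Cohen's kappa for two binary rating vectors."""
        x, y = np.asarray(x), np.asarray(y)
        po = np.mean(x == y)                           # observed agreement
        pe = (np.mean(x) * np.mean(y)
              + np.mean(1 - x) * np.mean(1 - y))       # chance agreement
        return (po - pe) / (1 - pe)

    def cluster_bootstrap_se(clusters, n_boot=2000, seed=0):
        """clusters: list of (physician_ratings, patient_ratings) pairs,
        one pair of equal-length binary arrays per physician."""
        rng = np.random.default_rng(seed)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(clusters), size=len(clusters))
            x = np.concatenate([clusters[i][0] for i in idx])
            y = np.concatenate([clusters[i][1] for i in idx])
            stats.append(kappa(x, y))
        return float(np.std(stats, ddof=1))            # bootstrap SE of kappa
    ```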

  13. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  14. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
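
    For orientation, the ML baseline that the robust estimators above are compared against: in a Rasch model the ML ability estimate solves the score equation, and its classical ASE is 1/√I(θ̂), where I is the test information. This sketch shows only that baseline, not the paper's robust weighted class; the item difficulties and response pattern are illustrative.

    ```python
    import numpy as np

    def ml_ability(responses, b, n_iter=50):
        """Newton-Raphson for theta in a Rasch model; responses in {0, 1},
        b holds item difficulties. Returns (theta_hat, asymptotic SE)."""
        theta = 0.0
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(theta - b)))     # P(correct | theta)
            info = np.sum(p * (1.0 - p))               # test information
            theta += np.sum(responses - p) / info      # score / information
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        info = np.sum(p * (1.0 - p))
        return theta, 1.0 / np.sqrt(info)              # ASE = 1 / sqrt(I)

    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])          # illustrative items
    theta_hat, ase = ml_ability(np.array([1, 1, 0, 1, 0]), b)
    ```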

  15. Composite Linear Models | Division of Cancer Prevention

    Cancer.gov

By Stuart G. Baker. The composite linear models software is a matrix approach to computing maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty…

  16. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    ERIC Educational Resources Information Center

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  17. On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12

    ERIC Educational Resources Information Center

    Antal, Tamás

    2007-01-01

    Full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…

  18. Asymptotic Standard Errors of Observed-Score Equating with Polytomous IRT Models

    ERIC Educational Resources Information Center

    Andersson, Björn

    2016-01-01

    In observed-score equipercentile equating, the goal is to make scores on two scales or tests measuring the same construct comparable by matching the percentiles of the respective score distributions. If the tests consist of different items with multiple categories for each item, a suitable model for the responses is a polytomous item response…

  19. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the estimates' transient response time. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  20. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.

  1. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  2. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions for the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure, the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust to low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573

  3. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  4. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  5. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  6. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  7. Perceptions and Efficacy of Flight Operational Quality Assurance (FOQA) Programs Among Small-scale Operators

    DTIC Science & Technology

    2012-01-01

…regressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone… [table of test statistics omitted; values based on the asymptotic chi-square approximation] …In general, ARIMA models address three… performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven…

  8. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    NASA Astrophysics Data System (ADS)

    Dahlqvist, Per

    1999-10-01

We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  9. Robust ridge regression estimators for nonlinear models with applications to high throughput screening assay data.

    PubMed

    Lim, Changwon

    2015-03-30

    Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σᵣ. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σᵣ), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min…

  11. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  12. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
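
    A small plug-in sketch of the Hill-number diversity profile described above (richness, Shannon, and Simpson diversity as q = 0, 1, 2). The paper's corrected singleton count and its rarefaction/extrapolation machinery are not reproduced here; this is only the empirical estimator applied to raw counts, which are illustrative.

    ```python
    import numpy as np

    def hill_number(counts, q):
        """Effective number of taxa of order q from abundance counts."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        if np.isclose(q, 1.0):
            return float(np.exp(-np.sum(p * np.log(p))))   # exp(Shannon entropy)
        return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

    counts = [120, 60, 30, 10, 5, 1, 1, 1]       # illustrative taxa counts
    profile = {q: hill_number(counts, q) for q in (0, 1, 2)}
    # q=0: taxa richness; q=1: Shannon diversity; q=2: Simpson diversity
    ```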

  13. Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity

    NASA Technical Reports Server (NTRS)

    Lin, J. Y.; Mingori, D. L.

    1992-01-01

    We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.

  14. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    PubMed

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Gulisashvili, Archil; Stein, Elias M.

    2010-06-15

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.

  16. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  17. Learning to classify in large committee machines

    NASA Astrophysics Data System (ADS)

    O'kane, Dominic; Winther, Ole

    1994-10-01

The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T → ∞, where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.

  18. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  19. ASYMPTOTIC DISTRIBUTION OF ΔAUC, NRIs, AND IDI BASED ON THEORY OF U-STATISTICS

    PubMed Central

    Demler, Olga V.; Pencina, Michael J.; Cook, Nancy R.; D’Agostino, Ralph B.

    2017-01-01

The change in AUC (ΔAUC), the IDI, and NRI are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ΔAUC. PMID:28627112

  20. Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.

    PubMed

    Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B

    2017-09-20

The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.
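
    As background to these two records, the single-model AUC is itself a two-sample U-statistic, and a DeLong-style SE follows from its structural components. A hedged sketch of that familiar special case; the papers' adjusted SEs for ∆AUC, NRI, and IDI under nested-model comparisons are not reproduced, and the example scores are illustrative.

    ```python
    import numpy as np

    def auc_and_se(scores_pos, scores_neg):
        """AUC as a two-sample U-statistic with kernel h(x, y) = 1{x > y}
        (+ 0.5 on ties), and its SE from DeLong-style structural components."""
        x = np.asarray(scores_pos, dtype=float)
        y = np.asarray(scores_neg, dtype=float)
        h = (x[:, None] > y[None, :]) + 0.5 * (x[:, None] == y[None, :])
        auc = h.mean()
        v10 = h.mean(axis=1)                     # components over positives
        v01 = h.mean(axis=0)                     # components over negatives
        var = v10.var(ddof=1) / len(x) + v01.var(ddof=1) / len(y)
        return float(auc), float(np.sqrt(var))

    auc, se = auc_and_se([0.9, 0.8, 0.7, 0.6], [0.5, 0.65, 0.4, 0.2])
    ```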

  1. Linear discriminant analysis with misallocation in training samples

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator); Mckeon, J.

    1982-01-01

Linear discriminant analysis for a two-class case is studied in the presence of misallocation in training samples. A general approach to modeling misallocation is formulated, and the mean vectors and covariance matrices of the mixture distributions are derived. The asymptotic distribution of the discriminant boundary is obtained, and the asymptotic first two moments of the two types of error rate are given. Certain numerical results for the error rates are presented by considering the random and two non-random misallocation models. It is shown that when the allocation procedure for training samples is objectively formulated, the effect of misallocation on the error rates of the Bayes linear discriminant rule can almost be eliminated. If, however, this is not possible, the Fisher rule may be preferred over the Bayes rule.

  2. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution and then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, specifically the n^(1/2−d)-consistency, of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile-based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions. We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
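
    An illustrative simulation in the spirit of the second chapter: generate long-memory moving-average errors and check the Lasso's sign recovery. The truncated MA(∞) weights a_j ∝ j^(d−1) used here are a standard way to induce long memory with parameter d; they are an assumption of this sketch, not the dissertation's exact construction, and all tuning values are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, d = 500, 50, 0.3
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.5, 1.0]                  # sparse truth

    # Long-memory MA errors: weights a_j ~ j^(d-1), truncated at 1000 lags.
    lags = np.arange(1, 1000)
    a = np.concatenate(([1.0], lags ** (d - 1.0)))
    eps = np.convolve(rng.normal(size=n + len(a)), a, mode="valid")[:n]

    X = rng.normal(size=(n, p))
    y = X @ beta + 0.5 * eps
    fit = Lasso(alpha=0.1).fit(X, y)
    signs_ok = np.all(np.sign(fit.coef_[:3]) == np.sign(beta[:3]))
    ```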

  3. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    NASA Astrophysics Data System (ADS)

    Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σᵣ. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σᵣ), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min…

  4. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855

  5. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π₁, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π₁ is replaced by a Harris ergodic Markov chain with invariant density π₁, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π₁, …, πₖ, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
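
    For context on the SE computation these two records build on, here is the elementary iid case: a self-normalized importance sampling estimate with its CLT/delta-method standard error. The regenerative, multiple-chain MCMC construction that is the papers' actual contribution is not attempted here; the densities in the usage example are illustrative.

    ```python
    import numpy as np

    def importance_estimate(f, log_pi, log_pi1, draws):
        """Self-normalized IS estimate of E_pi[f] from iid draws ~ pi1
        (both densities may be unnormalized), with asymptotic SE."""
        draws = np.asarray(draws, dtype=float)
        logw = log_pi(draws) - log_pi1(draws)
        w = np.exp(logw - logw.max())            # stabilized weights
        w /= w.sum()
        est = np.sum(w * f(draws))
        # Delta-method variance of the self-normalized (ratio) estimator.
        se = np.sqrt(np.sum(w ** 2 * (f(draws) - est) ** 2))
        return float(est), float(se)

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 2.0, size=10_000)        # draws from pi1 = N(0, 4)
    est, se = importance_estimate(
        lambda t: t ** 2,
        log_pi=lambda t: -0.5 * t ** 2,          # unnormalized N(0, 1)
        log_pi1=lambda t: -0.5 * (t / 2.0) ** 2, # unnormalized N(0, 4)
        draws=x,
    )                                            # est should be near 1.0
    ```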

  6. Nonlinear observers with linearizable error dynamics

    NASA Technical Reports Server (NTRS)

    Krener, A. J.; Respondek, W.

    1985-01-01

A new method for designing asymptotic observers for a class of nonlinear systems is presented. The error between the state of the system and the state of the observer, in appropriate coordinates, evolves linearly and can be made to decay exponentially at an arbitrarily fast rate.

  7. Better band gaps for wide-gap semiconductors from a locally corrected exchange-correlation potential that nearly eliminates self-interaction errors

    DOE PAGES

    Singh, Prashant; Harbola, Manoj K.; Johnson, Duane D.

    2017-09-08

This work constitutes a comprehensive and improved account of the electronic-structure and mechanical properties of silicon-nitride (Si₃N₄) polymorphs via the van Leeuwen and Baerends (LB) exchange-corrected local density approximation (LDA), which enforces the exact asymptotic behavior of the exchange potential. The calculated lattice constants, bulk moduli, and electronic band structures of Si₃N₄ polymorphs are in good agreement with experimental results. We also show that, for a single electron in a hydrogen atom, spherical well, or harmonic oscillator, the LB-corrected LDA reduces the (self-interaction) error relative to the exact total energy to ~10%, a factor of three to four lower than standard LDA, due to a dramatically improved representation of the exchange potential.

  8. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
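
    The asymptotic-theory route to the standard errors mentioned above can be sketched compactly: given parameter estimates, the sensitivity matrix of the model output with respect to the parameters yields the (scaled) Fisher information matrix, whose inverse gives the covariance and hence the standard errors. The sketch below does this for the Verhulst-Pearl logistic model; the sampling times, parameter estimates, and residual variance are placeholder values, not results from the paper.

      import numpy as np

      # Logistic solution x(t) = K x0 / (x0 + (K - x0) exp(-r t)).
      def model(t, theta):
          K, r, x0 = theta
          return K * x0 / (x0 + (K - x0) * np.exp(-r * t))

      def sensitivities(t, theta, h=1e-6):
          # forward-difference d model / d theta_k, an (n x p) matrix
          base = model(t, theta)
          cols = []
          for k in range(len(theta)):
              tp = np.array(theta, float)
              tp[k] += h * max(1.0, abs(tp[k]))
              cols.append((model(t, tp) - base) / (tp[k] - theta[k]))
          return np.column_stack(cols)

      t = np.linspace(0.0, 10.0, 25)           # assumed sampling times
      theta_hat = np.array([17.5, 0.7, 0.1])   # assumed OLS estimates (K, r, x0)
      resid_var = 0.05**2                      # assumed estimate of sigma^2

      chi = sensitivities(t, theta_hat)
      cov = resid_var * np.linalg.inv(chi.T @ chi)   # inverse (scaled) FIM
      print("asymptotic standard errors:", np.sqrt(np.diag(cov)))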

  9. Asymptotically safe standard model extensions?

    NASA Astrophysics Data System (ADS)

    Pelaggi, Giulio Maria; Plascencia, Alexis D.; Salvio, Alberto; Sannino, Francesco; Smirnov, Juri; Strumia, Alessandro

    2018-05-01

    We consider theories with a large number N_F of charged fermions and compute the renormalization group equations for the gauge, Yukawa and quartic couplings resummed at leading order in 1/N_F. We construct extensions of the standard model where SU(2) and/or SU(3) are asymptotically safe. When the same procedure is applied to the Abelian U(1) factor, we find that the Higgs quartic cannot be made asymptotically safe and stay perturbative at the same time.

  10. Asymptotically Safe Standard Model via Vectorlike Fermions.

    PubMed

    Mann, R B; Meffe, J R; Sannino, F; Steele, T G; Wang, Z W; Zhang, C

    2017-12-29

    We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.

  11. Asymptotically Safe Standard Model via Vectorlike Fermions

    NASA Astrophysics Data System (ADS)

    Mann, R. B.; Meffe, J. R.; Sannino, F.; Steele, T. G.; Wang, Z. W.; Zhang, C.

    2017-12-01

    We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.

  12. Use of asymptotic analysis of the large activation-energy limit to compare graphical methods of treating thermogravimetry data

    Treesearch

    A. Broido; F.A. Williams

    1973-01-01

    An earlier numerical analysis showed that the second approximate method of Horowitz and Metzger can be rendered exceedingly accurate for reduction of thermogravimetry data. It is demonstrated here that this result can be justified on the basis of an asymptotic expansion with a nondimensional activation energy as the large parameter. The order of magnitude of the error...

  13. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends a solution to the 2-step estimation problem, based on stacking the score functions from both steps, to ratio-of-mediator-probability weighting analysis. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
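
    The stacked-score idea generalizes the familiar M-estimation sandwich. The toy sketch below is not the authors' ratio-of-mediator-probability weighting estimator; it applies the same two-step recipe to a simpler estimand (an inverse-probability-weighted mean with an estimated logistic propensity score), stacking the propensity equations with the weighting equation and reading the corrected standard error off the sandwich covariance.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      x = rng.normal(size=n)
      p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))
      r = rng.binomial(1, p_true)           # observation indicator (step-1 model)
      y = 1.0 + x + rng.normal(size=n)      # outcome, used only where r == 1

      def scores(theta):
          """Stacked estimating functions: logistic score (step 1) and the
          IPW-mean estimating function (step 2), one row per subject."""
          a0, a1, beta = theta
          p = 1.0 / (1.0 + np.exp(-(a0 + a1 * x)))
          return np.column_stack([(r - p), (r - p) * x, r * y / p - beta])

      def mean_score_jac(theta, eps=1e-6):
          g0 = scores(theta).mean(axis=0)
          J = np.zeros((3, 3))
          for k in range(3):
              tp = np.array(theta, float)
              tp[k] += eps
              J[:, k] = (scores(tp).mean(axis=0) - g0) / eps
          return g0, J

      theta = np.zeros(3)
      for _ in range(25):                   # Newton's method on the stacked system
          g, J = mean_score_jac(theta)
          theta = theta - np.linalg.solve(J, g)

      S = scores(theta)
      _, A = mean_score_jac(theta)          # "bread"
      B = S.T @ S / n                       # "meat"
      Ainv = np.linalg.inv(A)
      V = Ainv @ B @ Ainv.T / n             # sandwich covariance of (a0, a1, beta)
      print("IPW mean:", theta[2], " SE accounting for step 1:", np.sqrt(V[2, 2]))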

  14. Simultaneous Inference For The Mean Function Based on Dense Functional Data

    PubMed Central

    Cao, Guanqun; Yang, Lijian; Todem, David

    2012-01-01

    A polynomial spline estimator is proposed for the mean function of dense functional data, together with a simultaneous confidence band that is asymptotically correct. In addition, the spline estimator and its accompanying confidence band enjoy oracle efficiency in the sense that they are asymptotically the same as if all random trajectories were observed entirely and without errors. The confidence band is also extended to the difference of mean functions of two populations of functional data. Simulation experiments provide strong evidence that corroborates the asymptotic theory, and the computation is efficient. The confidence band procedure is illustrated by analyzing near infrared spectroscopy data. PMID:22665964

  15. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    PubMed

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth, even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Application of matched asymptotic expansions to lunar and interplanetary trajectories. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.

    1973-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical solution to the problem of N bodies. The earlier first-order solutions, derived by the method of matched asymptotic expansions, have been extended to second order for the purpose of obtaining increased accuracy. The derivation of the second-order solution is summarized by showing the essential steps, some in functional form. The general asymptotic solution has been used as a basis for formulating a number of analytical two-point boundary value solutions. These include earth-to-moon, one- and two-impulse moon-to-earth, and interplanetary solutions. The results show that the accuracies of the asymptotic solutions range from an order of magnitude better than conic approximations to that of numerical integration itself. Also, since no iterations are required, the asymptotic boundary value solutions are obtained in a fraction of the time required for comparable numerically integrated solutions. The subject of minimizing the second-order error is discussed, and recommendations are made for further work directed toward achieving a uniform accuracy in all applications.

  17. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842

  18. Asymptotic sideslip angle and yaw rate decoupling control in four-wheel steering vehicles

    NASA Astrophysics Data System (ADS)

    Marino, Riccardo; Scalzi, Stefano

    2010-09-01

    This paper shows that, for a four-wheel steering vehicle, a proportional-integral (PI) active front steering control and a PI active rear steering control from the yaw rate error, together with an additive feedforward reference signal for the vehicle sideslip angle, can asymptotically decouple the lateral velocity and the yaw rate dynamics; that is, the control can set arbitrary steady state values for lateral speed and yaw rate at any longitudinal speed. Moreover, the PI controls can suppress oscillatory behaviours by assigning real stable eigenvalues to a widely used linearised model of the vehicle steering dynamics for any value of longitudinal speed in understeering vehicles. In particular, the four PI control parameters are explicitly expressed in terms of the three real eigenvalues to be assigned. No lateral acceleration and no lateral speed measurements are required. The controlled system maintains the well-known advantages of both front and rear active steering controls: higher controllability, enlarged bandwidth for the yaw rate dynamics, suppressed resonances, new stable cornering manoeuvres and improved manoeuvrability. In particular, zero lateral speed may be asymptotically achieved while controlling the yaw rate: in this case comfort is improved since the phase lag between lateral acceleration and yaw rate is reduced. Also zero yaw rate can be asymptotically achieved: in this case additional stable manoeuvres are obtained in obstacle avoidance. Several simulations, including step references and moose tests, are carried out on a standard small SUV CarSim model to explore the robustness with respect to unmodelled effects such as combined lateral and longitudinal tyre forces, pitch, roll and driver dynamics. The simulations confirm the decoupling between the lateral velocity and the yaw rate and show the advantages obtained by the proposed control: reduced lateral speed or reduced yaw rate, suppressed oscillations and new stable manoeuvres.

  19. A Reduced Dimension Static, Linearized Kalman Filter and Smoother

    NASA Technical Reports Server (NTRS)

    Fukumori, I.

    1995-01-01

    An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.
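
    A minimal sketch of the static-limit idea on a toy linear model (not the TOPEX/POSEIDON system; matrices are invented): iterate the discrete Riccati recursion until the prediction error covariance reaches its asymptotic limit, then filter with the resulting constant gain so that no covariance needs to be propagated online.

      import numpy as np

      A = np.array([[1.0, 1.0],
                    [0.0, 0.95]])   # assumed linear(ized) dynamics
      H = np.array([[1.0, 0.0]])    # observe first state only
      Q = 0.01 * np.eye(2)          # process noise covariance
      R = np.array([[0.5]])         # measurement noise covariance

      P = np.eye(2)
      for _ in range(500):          # iterate to the asymptotic error limit
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          P = A @ (P - K @ H @ P) @ A.T + Q

      print("static gain K:\n", K)
      print("asymptotic (prediction) error covariance P:\n", P)

      # Filtering step with the fixed gain; no covariance update is needed.
      def step(xhat, y):
          xpred = A @ xhat
          return xpred + (K @ (y - H @ xpred)).ravel()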

  20. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

    The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. Asymptotic performance on the critical tracking task was significantly better when using combined displays rather than the visual-only mode. At asymptote, the combined display was slightly, but significantly, better than the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased the rate of improvement of tracking performance as well as the asymptotic performance level, and this enhancement increases with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.

  1. Asymptotic boundary conditions for dissipative waves: General theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    An outstanding issue in the computational analysis of time dependent problems is the imposition of appropriate radiation boundary conditions at artificial boundaries. Accurate conditions are developed which are based on the asymptotic analysis of wave propagation over long ranges. Employing the method of steepest descents, dominant wave groups are identified and simple approximations to the dispersion relation are considered in order to derive local boundary operators. The existence of a small number of dominant wave groups may be expected for systems with dissipation. Estimates of the error as a function of domain size are derived under general hypotheses, leading to convergence results. Some practical aspects of the numerical construction of the asymptotic boundary operators are also discussed.

  2. Comment on ''The velocity field due to an oscillating plate in an Oldroyd-B fluid'' by C.C. Hopkins and J.R. de Bruyn [Can. J. Phys. 92, 533 (2014)]

    DOE PAGES

    Christov, Ivan C.

    2015-09-11

    We correct certain errors and ambiguities in the recent pedagogical article by Hopkins and de Bruyn. The early-time asymptotics of the solution to the transient version of Stokes’ second problem for an Oldroyd-B fluid in a half-space is presented, as Appendix A, to complement the late-time asymptotics given by Hopkins and de Bruyn.

  3. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  4. Modeling and Control of a Tailsitter with a Ducted Fan

    NASA Astrophysics Data System (ADS)

    Argyle, Matthew Elliott

    There are two traditional aircraft categories: fixed-wing aircraft, which have long endurance and a high cruise airspeed, and rotorcraft, which can take off and land vertically. The tailsitter is a type of aircraft that has the strengths of both platforms, with no additional mechanical complexity, because it takes off and lands vertically on its tail and can transition the entire aircraft horizontally into high-speed flight. In this dissertation, we develop the entire control system for a tailsitter with a ducted fan. The standard method of computing the quaternion-based attitude error does not generate ideal trajectories for a hovering tailsitter in some situations. In addition, the only approach in the literature to mitigate this breaks down for large attitude errors. We develop an alternative quaternion-based error method which generates better trajectories than the standard approach and can handle large errors. We also derive a hybrid backstepping controller with almost global asymptotic stability based on this error method. Many common altitude and airspeed control schemes for a fixed-wing airplane assume that the altitude and airspeed dynamics are decoupled, which leads to errors. The Total Energy Control System (TECS) is an approach that controls the altitude and airspeed by manipulating the total energy rate and energy distribution rate of the aircraft in a manner which accounts for the dynamic coupling. In this dissertation, a nonlinear controller based on the TECS principles, able to handle inaccurate thrust and drag models, is derived. Simulation results show that the nonlinear controller has better performance than the standard PI TECS control schemes. Most constant altitude transitions are accomplished by generating an optimal trajectory, and potentially actuator inputs, based on a high-fidelity model of the aircraft. While there are several approaches to mitigate the effects of modeling errors, these do not fully remove the accurate-model requirement. In this dissertation, we develop two different approaches that can achieve near constant altitude transitions for some types of aircraft. The first method, based on multiple LQR controllers, requires a high-fidelity model of the aircraft. However, the second method, based on the energy along the body axes, requires almost no aerodynamic information.

  5. Distributed adaptive asymptotically consensus tracking control of uncertain Euler-Lagrange systems under directed graph condition.

    PubMed

    Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin

    2017-11-01

    In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that, with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. A New Test of Linear Hypotheses in OLS Regression under Heteroscedasticity of Unknown Form

    ERIC Educational Resources Information Center

    Cai, Li; Hayes, Andrew F.

    2008-01-01

    When the errors in an ordinary least squares (OLS) regression model are heteroscedastic, hypothesis tests involving the regression coefficients can have Type I error rates that are far from the nominal significance level. Asymptotically, this problem can be rectified with the use of a heteroscedasticity-consistent covariance matrix (HCCM)…
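
    A minimal sketch of the HCCM idea on simulated heteroscedastic data, using the HC3 variant (the article's proposed test differs in detail): the "meat" of the sandwich is built from leverage-adjusted squared residuals, and the resulting covariance feeds a Wald-type statistic.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      x = rng.uniform(size=n)
      # true slope is 0; error variance grows with x (heteroscedasticity)
      y = 1.0 + 0.0 * x + rng.normal(scale=0.5 + 2.0 * x, size=n)

      X = np.column_stack([np.ones(n), x])
      XtX_inv = np.linalg.inv(X.T @ X)
      beta = XtX_inv @ X.T @ y
      e = y - X @ beta
      h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)    # leverage values
      u = e / (1.0 - h)                              # HC3 adjustment
      meat = (X * u[:, None]).T @ (X * u[:, None])
      V_hc3 = XtX_inv @ meat @ XtX_inv               # sandwich covariance

      z = beta[1] / np.sqrt(V_hc3[1, 1])             # Wald test of H0: slope = 0
      print("HC3 z-statistic:", z)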

  7. Top mass from asymptotic safety

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Held, Aaron

    2018-02-01

    We discover that asymptotically safe quantum gravity could predict the top-quark mass. For a broad range of microscopic gravitational couplings, quantum gravity could provide an ultraviolet completion for the Standard Model by triggering asymptotic freedom in the gauge couplings and bottom Yukawa and asymptotic safety in the top-Yukawa and Higgs-quartic coupling. We find that in a part of this range, a top-bottom mass difference of approximately 170 GeV is generated and the Higgs mass is determined in terms of the top mass. Assuming no new physics below the Planck scale, we construct explicit Renormalization Group trajectories for Standard Model and gravitational couplings which link the transplanckian regime to the electroweak scale and yield a top pole mass of M_t,pole ≈ 171 GeV.

  8. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local-in-time, constant-coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable-coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant-coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  9. Recursive Construction of Noiseless Subsystem for Qudits

    NASA Astrophysics Data System (ADS)

    Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing

    2014-03-01

    When the environmental noise acting on the system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may introduce further errors to the system. However, utilizing such a subsystem becomes impractical for large systems as the number of qudits increases. A recursive scheme offers a solution to this problem. Here, we review a previously introduced recursive construction, which can asymptotically protect 1/d of the qudits in the system against collective errors.

  10. A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.

    PubMed

    Consiglio, J D; Shan, G; Wilding, G E

    2014-01-01

    Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.

  11. Asymptotic state discrimination and a strict hierarchy in distinguishability norms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitambar, Eric; Hsieh, Min-Hsiu

    2014-11-15

    In this paper, we consider the problem of discriminating quantum states by local operations and classical communication (LOCC) when an arbitrarily small amount of error is permitted. This paradigm is known as asymptotic state discrimination, and we derive necessary conditions for when two multipartite states of any size can be discriminated perfectly by asymptotic LOCC. We use this new criterion to prove a gap in the LOCC and separable distinguishability norms. We then turn to the operational advantage of using two-way classical communication over one-way communication in LOCC processing. With a simple two-qubit product state ensemble, we demonstrate a strict majorization of the two-way LOCC norm over the one-way norm.

  12. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
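
    For orientation, the sketch below implements the classical (non-spatial) SIMEX recipe on simulated data with an assumed-known measurement error variance: extra pseudo-error is added at a grid of levels lambda, the naive slope is re-estimated at each level, and a quadratic fit in lambda is extrapolated back to lambda = -1. The spatial SIMEX proposed in the paper replaces the iid pseudo-errors with spatially structured ones.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 2000
      x = rng.normal(size=n)                    # true exposure
      w = x + rng.normal(scale=0.7, size=n)     # error-prone measurement
      y = 2.0 * x + rng.normal(size=n)

      var_u = 0.49                              # assumed-known error variance
      lams = np.array([0.5, 1.0, 1.5, 2.0])
      slope = lambda a, b: np.cov(a, b)[0, 1] / np.var(a, ddof=1)

      naive = slope(w, y)                       # attenuated by measurement error
      sims = []
      for lam in lams:
          reps = [slope(w + rng.normal(scale=np.sqrt(lam * var_u), size=n), y)
                  for _ in range(50)]           # average over added pseudo-errors
          sims.append(np.mean(reps))

      # quadratic extrapolation of slope(lambda) back to lambda = -1 (no error)
      coef = np.polyfit(np.r_[0.0, lams], np.r_[naive, sims], deg=2)
      print("naive slope:", naive, " SIMEX slope:", np.polyval(coef, -1.0))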

  13. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.

  14. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    NASA Astrophysics Data System (ADS)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

    We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q^3 for short potentials to q^4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.

  15. Better band gaps for wide-gap semiconductors from a locally corrected exchange-correlation potential that nearly eliminates self-interaction errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Prashant; Harbola, Manoj K.; Johnson, Duane D.

    Here, this work constitutes a comprehensive and improved account of the electronic-structure and mechanical properties of silicon-nitride (Si3N4) polymorphs via the van Leeuwen and Baerends (LB) exchange-corrected local density approximation (LDA), which enforces the exact asymptotic behavior of the exchange potential. The calculated lattice constant, bulk modulus, and electronic band structure of Si3N4 polymorphs are in good agreement with experimental results. We also show that, for a single electron in a hydrogen atom, a spherical well, or a harmonic oscillator, the LB-corrected LDA reduces the (self-interaction) error relative to the exact total energy to ~10%, a factor of three to four lower than standard LDA, due to a dramatically improved representation of the exchange potential.

  16. Testing in semiparametric models with interaction, with applications to gene-environment interactions.

    PubMed

    Maity, Arnab; Carroll, Raymond J; Mammen, Enno; Chatterjee, Nilanjan

    2009-01-01

    Motivated by the problem of testing for genetic effects on complex traits in the presence of gene-environment interaction, we develop score tests in general semiparametric regression problems that involve a Tukey-style 1-degree-of-freedom form of interaction between parametrically and non-parametrically modelled covariates. We find that the score test in this type of model, as recently developed by Chatterjee and co-workers in the fully parametric setting, is biased and requires undersmoothing to be valid in the presence of non-parametric components. Moreover, in the presence of repeated outcomes, the asymptotic distribution of the score test depends on the estimation of functions which are defined as solutions of integral equations, making implementation difficult and computationally taxing. We develop profiled score statistics which are unbiased and asymptotically efficient and can be performed by using standard bandwidth selection methods. In addition, to overcome the difficulty of solving functional equations, we give easy interpretations of the target functions, which in turn allow us to develop estimation procedures that can be easily implemented by using standard computational methods. We present simulation studies to evaluate the type I error and power of the proposed method compared with a naive test that does not consider interaction. Finally, we illustrate our methodology by analysing data from a case-control study of colorectal adenoma that was designed to investigate the association between colorectal adenoma and the candidate gene NAT2 in relation to smoking history.

  17. Fisher information and asymptotic normality in system identification for quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin

    2011-06-15

    This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ=0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.

  18. Adaptive control: Myths and realities

    NASA Technical Reports Server (NTRS)

    Athans, M.; Valavani, L.

    1984-01-01

    It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law, and the need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeros) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as the absence of nonminimum-phase plant zeros, on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption, which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.

  19. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.

  20. Active controllers and the time duration to learn a task

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Goodyear, C.

    1986-01-01

    An active controller was used to help train naive subjects involved in a compensatory tracking task. The controller is called active in this context because it moves the subject's hand in a direction that improves tracking. It is of interest here to ask whether the active controller helps the subject learn a task more rapidly than the passive controller. Six subjects, inexperienced at compensatory tracking, were run to asymptotic root-mean-square error tracking levels with an active controller or a passive controller. The time required to learn the task was defined several different ways. The results of the different measures of learning were examined across pools of subjects and across controllers using statistical tests. The comparison between the active controller and the passive controller as to their ability to accelerate the learning process, as well as to reduce levels of asymptotic tracking error, is reported here.

  1. Hazard Function Estimation with Cause-of-Death Data Missing at Random

    PubMed Central

    Wang, Qihua; Dinse, Gregg E.; Liu, Chunling

    2010-01-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874

  2. Asymptotic Normalization Coefficients in a Potential Model Involving Forbidden States

    NASA Astrophysics Data System (ADS)

    Blokhintsev, L. D.; Savin, D. A.

    2018-03-01

    It is shown that values obtained for asymptotic normalization coefficients by means of a potential fitted to experimental data on elastic scattering depend substantially on the presence and the number n of possible forbidden states in the fitted potential. The present analysis was performed within exactly solvable potential models for various nuclear systems and various potentials, without and with allowance for the Coulomb interaction. Various methods for changing the number n, based on various versions of changes in the parameters of the potential model, were studied. A compact analytic expression for the asymptotic normalization coefficients was derived for the case of the Hulthén potential. Specifically, the d + α and α + ¹²C systems, which are of importance for astrophysics, were examined. It was concluded that an incorrect choice of n could lead to substantial errors in determining the asymptotic normalization coefficients. From the results of our calculations, it also follows that, for systems with a low binding energy and, as a consequence, a large value of the Coulomb parameter, the inclusion of the Coulomb interaction may radically change the asymptotic normalization coefficients, increasing them sharply.

  3. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

    This paper considers defect-correction solvers for a second-order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in the defect-correction iterations have different approximation orders, then the initial convergence rates may be very slow: the number of iterations required to get into the asymptotic convergence regime may grow on fine grids as a negative power of h. In the case of a second-order target operator and a first-order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after at most three iterations; the same three iterations are required to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new, very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
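
    A 1D analogue of the iteration can be sketched in a few lines (the paper's setting is 2D; the grid, right-hand side, and stencils below are invented for illustration): a first-order upwind operator A1 drives the solution of the second-order upwind-biased system A2 u = f through the update u <- u + A1^(-1) (f - A2 u), and the printed error against the analytic solution stalls at the truncation-error level after a few sweeps.

      import numpy as np

      n = 200
      h = 1.0 / n
      xg = np.linspace(h, 1.0, n)               # grid on (0, 1] for u' = f, u(0) = 0
      f = np.cos(2.0 * np.pi * xg)
      exact = np.sin(2.0 * np.pi * xg) / (2.0 * np.pi)

      A1 = (np.eye(n) - np.eye(n, k=-1)) / h    # first-order upwind (driver)
      A2 = (3.0 * np.eye(n) - 4.0 * np.eye(n, k=-1)
            + np.eye(n, k=-2)) / (2.0 * h)      # second-order upwind-biased (target)
      A2[0, :] = A1[0, :]                       # shortened stencil at the inflow point

      u = np.zeros(n)
      for it in range(8):                       # defect-correction iterations
          u = u + np.linalg.solve(A1, f - A2 @ u)
          print(it, np.max(np.abs(u - exact)))  # stalls at the truncation error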

  4. Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Tan, Vincent Y. F.

    2015-08-01

    We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.

  5. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers

    PubMed Central

    Han, Buhm; Kang, Hyun Min; Eskin, Eleazar

    2009-01-01

    With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
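
    A hedged sketch of the generic MVN idea behind this family of corrections (not the SLIDE algorithm itself, and with an invented AR(1) correlation structure standing in for real marker correlations): approximate the null distribution of the maximum absolute association statistic by sampling from a multivariate normal with the markers' correlation matrix, then read off a corrected p-value.

      import numpy as np

      rng = np.random.default_rng(4)
      m = 50                                    # number of (correlated) markers
      rho = 0.6
      idx = np.arange(m)
      Sigma = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) correlation

      z_obs = 3.1                               # observed max |Z| across markers
      draws = rng.multivariate_normal(np.zeros(m), Sigma, size=100_000)
      p_corrected = np.mean(np.abs(draws).max(axis=1) >= z_obs)
      print("MVN-corrected p-value:", p_corrected)
      # A Bonferroni correction (m * two-sided tail) would be far more
      # conservative here because it ignores the correlation between markers.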

  6. Dynamical Approach Study of Spurious Steady-State Numerical Solutions of Nonlinear Differential Equations. 2; Global Asymptotic Behavior of Time Discretizations; 2. Global Asymptotic Behavior of time Discretizations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1995-01-01

    The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between scalars and systems of nonlinear autonomous ODEs and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how, in the presence of spurious asymptotes, the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon, which is not commonly known, is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution, the associated basin of attraction of the DE might be very different from its discretized counterparts, and the numerical basin of attraction can be very different from numerical method to numerical method. The results can be used as an explanation for possible causes of error, and of slow convergence and nonconvergence of steady-state numerical solutions, when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.

  7. Adaptive tracking control for a class of stochastic switched systems

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Xia, Yuanqing

    2018-02-01

    The problem of adaptive tracking for a class of stochastic switched systems is considered in this paper. As preliminaries, the criterion of global asymptotic practical stability in probability is first presented with the aid of the common Lyapunov function method. Based on the Lyapunov stability criterion, adaptive backstepping controllers are designed to guarantee that the closed-loop system has a unique global solution, which is globally asymptotically practically stable in probability, and that the tracking error in the fourth moment converges to an arbitrarily small neighbourhood of zero. Simulation examples are given to demonstrate the efficiency of the proposed schemes.

  8. Asymptotic Poincare lemma and its applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziolkowski, R.W.; Deschamps, G.A.

    1984-05-01

    An asymptotic version of Poincare's lemma is defined and solutions are obtained with the calculus of exterior differential forms. They are used to construct the asymptotic approximations of multidimensional oscillatory integrals whose forms are commonly encountered, for example, in electromagnetic problems. In particular, the boundary and stationary point evaluations of these integrals are considered. The former is applied to the Kirchhoff representation of a scalar field diffracted through an aperture and simply recovers the Maggi-Rubinowicz-Miyamoto-Wolf results. Asymptotic approximations in the presence of other (standard) critical points are also discussed. Techniques developed for the asymptotic Poincare lemma are used to generate a general representation of the Leray form. All of the (differential form) expressions presented are generalizations of known (vector calculus) results. 14 references, 4 figures.

  9. Directions for model building from asymptotic safety

    NASA Astrophysics Data System (ADS)

    Bond, Andrew D.; Hiller, Gudrun; Kowalska, Kamila; Litim, Daniel F.

    2017-08-01

    Building on recent advances in the understanding of gauge-Yukawa theories, we explore possibilities to UV-complete the Standard Model in an asymptotically safe manner. Minimal extensions are based on a large flavor sector of additional fermions coupled to a scalar singlet matrix field. We find that asymptotic safety requires fermions in higher representations of SU(3)_C × SU(2)_L. Possible signatures at colliders are worked out and include R-hadron searches, diboson signatures and the evolution of the strong and weak coupling constants.

  10. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, that is, it does not asymptotically perform as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and the error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
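
    The weighting idea itself is easy to see in an ordinary linear-regression toy problem. The sketch below (not the paper's mixed-effects implementation; the data and penalty level are invented) computes OLS estimates and their standard errors, uses the SE-to-coefficient ratio as the adjusted penalty weight, and absorbs the weights into the design so that an off-the-shelf lasso solver does the rest.

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      rng = np.random.default_rng(5)
      n, p = 200, 6
      X = rng.normal(size=(n, p))
      beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0])
      y = X @ beta_true + rng.normal(size=n)

      # Step 1: initial ML/OLS fit and its standard errors.
      ols = LinearRegression(fit_intercept=False).fit(X, y)
      resid = y - ols.predict(X)
      sigma2 = resid @ resid / (n - p)
      se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))

      # Step 2: adjusted adaptive weights (SE-to-coefficient ratio rather than
      # the usual 1/|coef|), absorbed into the design columns so a plain lasso
      # solves the weighted problem; coefficients are mapped back afterwards.
      w = se / np.abs(ols.coef_)
      fit = Lasso(alpha=0.05, fit_intercept=False).fit(X / w, y)
      beta_aalasso = fit.coef_ / w
      print("AALasso coefficients:", np.round(beta_aalasso, 3))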

  11. Use of multivariable asymptotic expansions in a satellite theory

    NASA Technical Reports Server (NTRS)

    Dallas, S. S.

    1973-01-01

    The initial conditions and the perturbative force on the satellite are restricted so as to yield the motion of an equatorial satellite about an oblate body. In this manner, an exact analytic solution exists and can be used as a standard of comparison in numerical accuracy studies. Detailed numerical accuracy studies of uniformly valid asymptotic expansions were made.

  12. Asymptotic Analysis of Time-Dependent Neutron Transport Coupled with Isotopic Depletion and Radioactive Decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, P S

    2006-09-27

    We describe an asymptotic analysis of the coupled nonlinear system of equations describing time-dependent three-dimensional monoenergetic neutron transport and isotopic depletion and radioactive decay. The classic asymptotic diffusion scaling of Larsen and Keller [1], along with a consistent small scaling of the terms describing the radioactive decay of isotopes, is applied to this coupled nonlinear system of equations in a medium of specified initial isotopic composition. The analysis demonstrates that to leading order the neutron transport equation limits to the standard time-dependent neutron diffusion equation with macroscopic cross sections whose number densities are determined by the standard system of ordinary differential equations, the so-called Bateman equations, describing the temporal evolution of the nuclide number densities.

  13. SLDAssay: A software package and web tool for analyzing limiting dilution assays.

    PubMed

    Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G

    2017-11-01

    Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness of fit p-values, and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
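
    The core computation is compact enough to sketch. The snippet below illustrates the MLE under the single-hit Poisson model, not the SLDAssay package API, and the dilution series and well counts are made up for the example: a well receiving u input units turns positive with probability 1 - exp(-tau u), and tau is chosen to maximize the product of binomial likelihoods across dilutions.

      import numpy as np
      from scipy.optimize import minimize_scalar

      u = np.array([1e6, 2e5, 4e4, 8e3])       # cells per well at each dilution
      n_wells = np.array([2, 4, 6, 6])         # wells assayed per dilution
      n_pos = np.array([2, 4, 3, 1])           # positive wells observed

      def neg_loglik(log_tau):
          # binomial log-likelihood under the single-hit Poisson model
          p = 1.0 - np.exp(-np.exp(log_tau) * u)
          p = np.clip(p, 1e-12, 1.0 - 1e-12)
          return -np.sum(n_pos * np.log(p) + (n_wells - n_pos) * np.log(1.0 - p))

      res = minimize_scalar(neg_loglik, bounds=(-20.0, 0.0), method='bounded')
      print("MLE of target-cell concentration (per input cell):", np.exp(res.x))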

  14. A global perspective of the limits of prediction skill based on the ECMWF ensemble

    NASA Astrophysics Data System (ADS)

    Zagar, Nedjeljka

    2016-04-01

    This talk presents a new model of global forecast error growth, applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts, obtained by the decomposition of the wind and geopotential fields in normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes at every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes, and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes in synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with the lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and the slower exponential error growth later on. The asymptotic values of the forecast errors are independent of the exponential growth rate. It is found that the errors due to unbalanced dynamics saturate in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.

  15. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  16. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  17. Asteroid approach covariance analysis for the Clementine mission

    NASA Technical Reports Server (NTRS)

    Ionasescu, Rodica; Sonnabend, David

    1993-01-01

    The Clementine mission is designed to test Strategic Defense Initiative Organization (SDIO) technology, the Brilliant Pebbles and Brilliant Eyes sensors, by mapping the moon surface and flying by the asteroid Geographos. The capability of two of the instruments available on board the spacecraft, the lidar (laser radar) and the UV/Visible camera is used in the covariance analysis to obtain the spacecraft delivery uncertainties at the asteroid. These uncertainties are due primarily to asteroid ephemeris uncertainties. On board optical navigation reduces the uncertainty in the knowledge of the spacecraft position in the direction perpendicular to the incoming asymptote to a one-sigma value of under 1 km, at the closest approach distance of 100 km. The uncertainty in the knowledge of the encounter time is about 0.1 seconds for a flyby velocity of 10.85 km/s. The magnitude of these uncertainties is due largely to Center Finding Errors (CFE). These systematic errors represent the accuracy expected in locating the center of the asteroid in the optical navigation images, in the absence of a topographic model for the asteroid. The direction of the incoming asymptote cannot be estimated accurately until minutes before the asteroid flyby, and correcting for it would require autonomous navigation. Orbit determination errors dominate over maneuver execution errors, and the final delivery accuracy attained is basically the orbit determination uncertainty before the final maneuver.

  18. Asymptotic radiance and polarization in optically thick media: ocean and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1976-12-01

    Deep in a homogeneous medium that both scatters and absorbs photons, such as a cloud, the ocean, or a thick planetary atmosphere, the radiance decreases exponentially with depth, while the angular dependence of the radiance and polarization is independent of depth. In this diffusion region, the asymptotic radiance and polarization are also independent of the incident distribution of radiation at the upper surface of the medium. An exact expression is derived for the asymptotic radiance and polarization for Rayleigh scattering. The approximate expression for the asymptotic radiance derived from the scalar theory is shown to be in error by as much as 16.4%. An exact expression is also derived for the relation between the diffusion exponent k and the single scattering albedo. A method is developed for the numerical calculation of the asymptotic radiance and polarization for any scattering matrix. Results are given for scattering from the haze L and cloud C3 distributions for a wide range of single scattering albedos. When the absorption is large, the polarization in the diffusion region approaches the values obtained for single scattered photons, while the radiance approaches the value calculated from the expression: phase function divided by (1 + kμ), where μ is the cosine of the zenith angle. The asymptotic distribution of the radiation is of interest since it depends only on the inherent optical properties of the medium. It is, however, difficult to observe when the absorption is large because of the very low radiance values in the diffusion region.

  19. On a method for generating inequalities for the zeros of certain functions

    NASA Astrophysics Data System (ADS)

    Gatteschi, Luigi; Giordano, Carla

    2007-10-01

    In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with a bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3) (1952), Indag. Math. 14 (1952) 224-229] and more recently [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform Special Functions, 10 (2000) 41-56] to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to obtain inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.

  20. Multiple-copy state discrimination: Thinking globally, acting locally

    NASA Astrophysics Data System (ADS)

    Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.

    2011-05-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  1. Multiple-copy state discrimination: Thinking globally, acting locally

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.

    2011-05-15

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
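
    The collective-measurement benchmark in both copies of this record is the Helstrom bound, which is straightforward to evaluate numerically for small N; a minimal sketch with illustrative qubit states and priors (not the authors' specific states):

```python
import numpy as np

def tensor_power(rho, n):
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, rho)
    return out

def helstrom_error(rho0, rho1, p0, N):
    # P_err = (1 - || p0 * rho0^(xN) - (1 - p0) * rho1^(xN) ||_1) / 2
    delta = p0 * tensor_power(rho0, N) - (1 - p0) * tensor_power(rho1, N)
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
    return 0.5 * (1.0 - trace_norm)

def mixed(theta, mix):
    # nonorthogonal qubit state with a controllable amount of mixture
    psi = np.array([np.cos(theta), np.sin(theta)])
    return (1 - mix) * np.outer(psi, psi) + mix * np.eye(2) / 2

rho0, rho1 = mixed(0.0, 0.05), mixed(np.pi / 8, 0.05)
for N in (1, 2, 5, 8):
    print(N, helstrom_error(rho0, rho1, 0.5, N))
```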

  2. Tunnel ionization of atoms and molecules: How accurate are the weak-field asymptotic formulas?

    NASA Astrophysics Data System (ADS)

    Labeye, Marie; Risoud, François; Maquet, Alfred; Caillat, Jérémie; Taïeb, Richard

    2018-05-01

    Weak-field asymptotic formulas for the tunnel ionization rate of atoms and molecules in strong laser fields are often used for the analysis of strong field recollision experiments. We investigate their accuracy and domain of validity for different model systems by confronting them to exact numerical results, obtained by solving the time dependent Schrödinger equation. We find that corrections that take the dc-Stark shift into account are a simple and efficient way to improve the formula. Furthermore, analyzing the different approximations used, we show that error compensation plays a crucial role in the fair agreement between exact and analytical results.

  3. Dynamical Approach Study of Spurious Steady-State Numerical Solutions of Nonlinear Differential Equations. Part 2; Global Asymptotic Behavior of Time Discretizations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1995-01-01

    The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between the scalars and systems of nonlinear autonomous ODEs and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how in the presence of spurious asymptotes the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon which is not commonly known is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution the associated basin of attraction of the DE might be very different from the discretized counterparts and the numerical basin of attraction can be very different from numerical method to numerical method. The results can be used as an explanation for possible causes of error, slow convergence, and nonconvergence of steady-state numerical solutions when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.

  4. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  5. High-Threshold Low-Overhead Fault-Tolerant Classical Computation and the Replacement of Measurements with Unitary Quantum Gates.

    PubMed

    Cruikshank, Benjamin; Jacobs, Kurt

    2017-07-21

    von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.

  6. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
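
    In practice the correction the authors derive amounts to referring the observed statistic to a scaled non-central null; a sketch with placeholder values for the scaling factor and non-centrality (their formulae, not reproduced here, supply the real ones):

```python
from scipy.stats import chi2, ncx2

T = 8.2        # observed association test statistic
df = 1
c = 1.08       # scaling factor from the error-model formulae (placeholder value)
nc = 0.35      # non-centrality parameter (placeholder value)

p_naive = chi2.sf(T, df)               # ignores genotyping errors
p_corrected = ncx2.sf(T / c, df, nc)   # scaled non-central chi-square null
print(p_naive, p_corrected)
```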

  7. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  8. Regression calibration for models with two predictor variables measured with error and their interaction, using instrumental variables and longitudinal data.

    PubMed

    Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan

    2014-02-10

    Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set $\{n_1, n_2, \ldots, n_L\}$. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  10. Revisiting the Estimation of Dinosaur Growth Rates

    PubMed Central

    Myhrvold, Nathan P.

    2013-01-01

    Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133

  11. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    PubMed

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method mitigates estimation uncertainty, such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These results indicate that the proposed method enhances performance by 9.18% and 10.88% compared with the DBN-DNN single estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty for BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
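
    Generically, the bootstrap/Monte-Carlo CI construction described here resamples the five per-subject measurements, re-estimates BP on each resample, and reads percentile intervals off the simulated distribution; a bare-bones sketch in which a simple mean stands in for the DBN-DNN ensemble estimator:

```python
import numpy as np

def estimate_bp(measurements):
    # stand-in for the DBN-DNN ensemble regression estimator
    return measurements.mean()

rng = np.random.default_rng(1)
meas = np.array([118.0, 122.0, 121.0, 117.0, 124.0])  # five oscillometric readings

boot = np.array([estimate_bp(rng.choice(meas, size=meas.size, replace=True))
                 for _ in range(5000)])
ci = np.percentile(boot, [2.5, 97.5])
print("SBP estimate:", estimate_bp(meas), "95% CI:", ci)
```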

  12. Tests of Independence for Ordinal Data Using Bootstrap.

    ERIC Educational Resources Information Center

    Chan, Wai; Yung, Yiu-Fai; Bentler, Peter M.; Tang, Man-Lai

    1998-01-01

    Two bootstrap tests are proposed to test the independence hypothesis in a two-way cross table. Monte Carlo studies are used to compare the traditional asymptotic test with these bootstrap methods, and the bootstrap methods are found superior in two ways: control of Type I error and statistical power. (SLD)
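
    A minimal version of such a bootstrap independence test resamples tables from the product of the observed marginals and compares the chi-square statistic; a generic sketch, not necessarily the authors' exact procedure:

```python
import numpy as np

def chisq_stat(table):
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    mask = expected > 0
    return (((table - expected) ** 2)[mask] / expected[mask]).sum()

def bootstrap_independence_pvalue(table, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = table.sum()
    p_indep = np.outer(table.sum(axis=1), table.sum(axis=0)) / n**2  # null model
    t_obs = chisq_stat(table)
    hits = sum(chisq_stat(rng.multinomial(n, p_indep.ravel()).reshape(table.shape)) >= t_obs
               for _ in range(B))
    return (hits + 1) / (B + 1)

table = np.array([[12, 7, 3], [5, 9, 11]])
print(bootstrap_independence_pvalue(table))
```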

  13. Propensity Score Weighting with Error-Prone Covariates

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Lockwood, J. R.; Setodji, Claude M.

    2011-01-01

    Inverse probability weighting (IPW) estimates are widely used in applications where data are missing due to nonresponse or censoring or in observational studies of causal effects where the counterfactuals cannot be observed. This extensive literature has shown the estimators to be consistent and asymptotically normal under very general conditions,…

  14. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
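
    The prediction-correction idea can be illustrated on a scalar time-varying quadratic, where the prediction integrates the optimality-condition dynamics and the correction takes a gradient step; a toy sketch, not the paper's GTT/NTT algorithms:

```python
import numpy as np

# f(x; t) = 0.5 * (x - sin t)^2, so the optimizer trajectory is x*(t) = sin t
grad    = lambda x, t: x - np.sin(t)    # df/dx
hess    = lambda x, t: 1.0              # d2f/dx2
mixed_d = lambda x, t: -np.cos(t)       # d2f/(dx dt)

h, alpha = 0.1, 0.5
x, errs = 0.0, []
for k in range(200):
    t = k * h
    x = x - h * mixed_d(x, t) / hess(x, t)   # prediction: dx*/dt = -hess^{-1} * d2f/(dx dt)
    x = x - alpha * grad(x, t + h)           # correction: gradient step at the new time
    errs.append(abs(x - np.sin(t + h)))
print("asymptotic tracking error:", max(errs[-50:]))
```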

  15. Efficient Estimation of the Standardized Value

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    2009-01-01

    We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)

  16. Improved actions and asymptotic scaling in lattice Yang-Mills theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langfeld, Kurt

    2007-11-01

    Improved actions in SU(2) and SU(3) lattice gauge theories are investigated with an emphasis on asymptotic scaling. A new scheme for tadpole improvement is proposed. The standard but heuristic tadpole improvement emerges from the new approach via a mean field approximation. Scaling is investigated by means of the large distance static quark potential. Both the generic and the new tadpole scheme yield significant improvements on asymptotic scaling when compared with loop improved actions. A study of the rotational symmetry breaking terms, however, reveals that only the new improvement scheme efficiently eliminates the leading irrelevant term from the action.

  17. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss the problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for mean square errors (MSEs) when estimating relevant parameters with separable measurements based on known precision bounds; the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound which can be attained asymptotically by a collective measurement.

  18. Multigrid solutions to quasi-elliptic schemes

    NASA Technical Reports Server (NTRS)

    Brandt, A.; Taasan, S.

    1985-01-01

    Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.

  19. A novel body frame based approach to aerospacecraft attitude tracking.

    PubMed

    Ma, Carlos; Chen, Michael Z Q; Lam, James; Cheung, Kie Chung

    2017-09-01

    In the common practice of designing an attitude tracker for an aerospacecraft, one transforms the Newton-Euler rotation equations to obtain the dynamic equations of some chosen inertial frame based attitude metrics, such as Euler angles and unit quaternions. A Lyapunov approach is then used to design a controller which ensures asymptotic convergence of the attitude to the desired orientation. Although this design methodology is pretty standard, it usually involves singularity-prone coordinate transformations which complicate the analysis process and controller design. A new, singularity-free error feedback method is proposed in the paper to provide simple and intuitive stability analysis and controller synthesis. This new body frame based method utilizes the concept of Euler axis and angle to generate the smallest error angles from a body frame perspective, without coordinate transformations. Global tracking convergence is illustrated with the use of a feedback linearizing PD tracker, a sliding mode controller, and a model reference adaptive controller. Experimental results are also obtained on a quadrotor platform with unknown system parameters and disturbances, using a boundary layer approximated sliding mode controller, a PIDD controller, and a unit sliding mode controller. Significant tracking quality is attained. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
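
    The singularity-free body-frame error the authors describe is built from the Euler axis/angle of the relative rotation; the extraction itself is standard rotation-matrix algebra, sketched below (generic, not the authors' full controller):

```python
import numpy as np

def euler_axis_angle_error(R_body, R_des):
    # relative rotation from the desired to the actual attitude, in the body frame
    R_err = R_des.T @ R_body
    c = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(c)                        # smallest error angle
    w = np.array([R_err[2, 1] - R_err[1, 2],
                  R_err[0, 2] - R_err[2, 0],
                  R_err[1, 0] - R_err[0, 1]])
    if np.sin(angle) < 1e-9:                    # axis undefined at 0 (needs care near pi)
        return np.array([1.0, 0.0, 0.0]), angle
    return w / (2.0 * np.sin(angle)), angle     # unit Euler axis and error angle
```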

  1. Robust ADP Design for Continuous-Time Nonlinear Systems With Output Constraints.

    PubMed

    Fan, Bo; Yang, Qinmin; Tang, Xiaoyu; Sun, Youxian

    2018-06-01

    In this paper, a novel robust adaptive dynamic programming (RADP)-based control strategy is presented for the optimal control of a class of output-constrained continuous-time unknown nonlinear systems. Our contribution includes a step forward beyond the usual optimal control result to show that the output of the plant is always within user-defined bounds. To achieve the new results, an error transformation technique is first established to generate an equivalent nonlinear system, whose asymptotic stability guarantees both the asymptotic stability and the satisfaction of the output restriction of the original system. Furthermore, RADP algorithms are developed to solve the transformed nonlinear optimal control problem with completely unknown dynamics as well as a robust design to guarantee the stability of the closed-loop systems in the presence of unavailable internal dynamic state. Via small-gain theorem, asymptotic stability of the original and transformed nonlinear system is theoretically guaranteed. Finally, comparison results demonstrate the merits of the proposed control policy.

  2. Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm

    NASA Astrophysics Data System (ADS)

    Ottersten, B. E.; Viberg, M.; Kailath, T.

    1989-11-01

    This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.

  3. Total recall in distributive associative memories

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1991-01-01

    Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.

  4. Utility of the Mantel-Haenszel Procedure for Detecting Differential Item Functioning in Small Samples

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose

    2004-01-01

    Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…

  5. N%-Superconvergence of Finite Element Approximations in the Interior of General Meshes of Triangles

    DTIC Science & Technology

    1993-12-01

    RODRÍGUEZ, On the asymptotic exactness of error estimators for linear triangular finite elements, Numer. Math., 59 (1991), pp. 107-127. 27. R. DURÁN ... WAHLBIN, Interior maximum norm estimates for finite element methods, Part II, unpublished manuscript. 38. I. BABUŠKA, T. STROUBOULIS, A. MATHU. AND C.S

  6. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
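
    The sandwich-type estimator has the usual 'bread and meat' structure; given per-respondent score vectors and the observed information at the MLE, a generic sketch (placeholder inputs, not the CDM-specific formulae) is:

```python
import numpy as np

def sandwich_covariance(scores, observed_info):
    # scores: (n, p) per-observation score vectors evaluated at the MLE
    # observed_info: (p, p) observed information (negative Hessian of the log-likelihood)
    A_inv = np.linalg.inv(observed_info)   # "bread"
    B = scores.T @ scores                  # "meat": cross-product of the scores
    return A_inv @ B @ A_inv               # robust covariance of the parameter estimates

# standard errors are the square roots of the diagonal:
# se = np.sqrt(np.diag(sandwich_covariance(scores, observed_info)))
```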

  7. Molecular Excitation Energies from Time-Dependent Density Functional Theory Employing Random-Phase Approximation Hessians with Exact Exchange.

    PubMed

    Heßelmann, Andreas

    2015-04-14

    Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.

  8. Cooperative global optimal preview tracking control of linear multi-agent systems: an internal model approach

    NASA Astrophysics Data System (ADS)

    Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang

    2017-09-01

    This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted to a global optimal regulation problem of an augmented system. Second, an optimal controller, which can guarantee the asymptotic stability of the augmented system, is obtained by means of the standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, and a cooperative global optimal controller with error integral and preview compensation is derived. Finally, the validity of the theoretical results is demonstrated by a numerical simulation.

  9. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.

  10. The nearest neighbor and the Bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal.

  11. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with the linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.

  12. Determination of structure parameters in strong-field tunneling ionization theory of molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Songfeng; Jin Cheng; College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, Gansu 730070

    2010-03-15

    In the strong field molecular tunneling ionization theory of Tong et al. [Phys. Rev. A 66, 033402 (2002)], the ionization rate depends on the asymptotic wave function of the molecular orbital from which the electron is removed. The orbital wave functions obtained from standard quantum chemistry packages in general are not good enough in the asymptotic region. Here we construct a one-electron model potential for several linear molecules using density functional theory. We show that the asymptotic wave function can be improved with an iteration method and after one iteration accurate asymptotic wave functions and structure parameters are determined. With the new parameters we examine the alignment-dependent tunneling ionization probabilities for several molecules and compare with other calculations and with recent measurements, including ionization from inner molecular orbitals.

  13. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  14. Design implementation in model-reference adaptive systems. [application and implementation on space shuttle

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

    The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to relate to the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements and the conditions under which this can be done and still guarantee asymptotic stability of the system error are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of insuring stability.

  15. Asymptotic formulae for flow in superhydrophobic channels with longitudinal ridges and protruding menisci

    NASA Astrophysics Data System (ADS)

    Kirk, Toby L.

    2018-03-01

    This paper presents new analytical formulae for flow in a channel with one or both walls patterned with a longitudinal array of ridges and arbitrarily protruding menisci. Derived from a matched asymptotic expansion, they extend results by Crowdy (J. Fluid Mech., vol. 791, 2016, R7) for shear flow, and thus make no restriction on the protrusion into or out of the liquid. The slip length formula is compared against full numerical solutions and, despite the assumption of small ridge period in its derivation, is found to have a very large range of validity; relative errors are small even for periods large enough for the protruding menisci to degrade the flow and touch the opposing wall.

  16. A Parameterized Pattern-Error Objective for Large-Scale Phase-Only Array Pattern Design

    DTIC Science & Technology

    2016-03-21

    [Section fragment: 4.4 Example 3: Sector Beam w/ Nonuniform Amplitude.] ... fixed uniform amplitude illumination, phase-only optimization can also find application to arrays with fixed but nonuniform tapers. Such fixed tapers ... arbitrary element locations nonuniform FFT algorithms exist [43-45] that have the same asymptotic complexity as the conventional FFT, although the ...

  17. An asymptotic safety scenario for gauged chiral Higgs-Yukawa models

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Rechenberger, Stefan; Scherer, Michael M.; Zambelli, Luca

    2013-12-01

    We investigate chiral Higgs-Yukawa models with a non-abelian gauged left-handed sector reminiscent to a sub-sector of the standard model. We discover a new weak-coupling fixed-point behavior that allows for ultraviolet complete RG trajectories which can be connected with a conventional long-range infrared behavior in the Higgs phase. This non-trivial ultraviolet behavior is characterized by asymptotic freedom in all interaction couplings, but a quasi conformal behavior in all mass-like parameters. The stable microscopic scalar potential asymptotically approaches flatness in the ultraviolet, however, with a non-vanishing minimum increasing inversely proportional to the asymptotically free gauge coupling. This gives rise to non-perturbative—though weak-coupling—threshold effects which induce ultraviolet stability along a line of fixed points. Despite the weak-coupling properties, the system exhibits non-Gaußian features which are distinctly different from its standard perturbative counterpart: e.g., on a branch of the line of fixed points, we find linear instead of quadratically running renormalization constants. Whereas the Fermi constant and the top mass are naturally of the same order of magnitude, our model generically allows for light Higgs boson masses. Realistic mass ratios are related to particular RG trajectories with a "walking" mid-momentum regime.

  18. Asymptotic, multigroup flux reconstruction and consistent discontinuity factors

    DOE PAGES

    Trahan, Travis J.; Larsen, Edward W.

    2015-05-12

    Recent theoretical work has led to an asymptotically derived expression for reconstructing the neutron flux from lattice functions and multigroup diffusion solutions. The leading-order asymptotic term is the standard expression for flux reconstruction, i.e., it is the product of a shape function, obtained through a lattice calculation, and the multigroup diffusion solution. The first-order asymptotic correction term is significant only where the gradient of the diffusion solution is not small. Inclusion of this first-order correction term can significantly improve the accuracy of the reconstructed flux. One may define discontinuity factors (DFs) to make certain angular moments of the reconstructed flux continuous across interfaces between assemblies in 1-D. Indeed, the standard assembly discontinuity factors make the zeroth moment (scalar flux) of the reconstructed flux continuous. The inclusion of the correction term in the flux reconstruction provides an additional degree of freedom that can be used to make two angular moments of the reconstructed flux continuous across interfaces by using current DFs in addition to flux DFs. Thus, numerical results demonstrate that using flux and current DFs together can be more accurate than using only flux DFs, and that making the second angular moment continuous can be more accurate than making the zeroth moment continuous.
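
    Schematically, in 1-D and in our notation rather than the authors', the reconstruction has the form

```latex
\phi(x) \approx f_0(x)\,\Phi(x) + f_1(x)\,\frac{d\Phi}{dx}(x)
```

    where \(\Phi\) is the multigroup diffusion solution, \(f_0\) is the lattice shape function of the standard leading-order reconstruction, and the \(f_1\,d\Phi/dx\) term is the first-order correction that matters only where the diffusion solution has an appreciable gradient.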

  19. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. FAST TRACK COMMUNICATION: The unusual asymptotics of three-sided prudent polygons

    NASA Astrophysics Data System (ADS)

    Beaton, Nicholas R.; Flajolet, Philippe; Guttmann, Anthony J.

    2010-08-01

    We have studied the area-generating function of prudent polygons on the square lattice. Exact solutions are obtained for the generating function of two-sided and three-sided prudent polygons, and a functional equation is found for four-sided prudent polygons. This is used to generate series coefficients in polynomial time, and these are analysed to determine the asymptotics numerically. A careful asymptotic analysis of the three-sided polygons produces a most surprising result. A transcendental critical exponent is found, and the leading amplitude is not quite a constant, but is a constant plus a small oscillatory component with an amplitude approximately 10⁻⁸ times that of the leading amplitude. This effect cannot be seen by any standard numerical analysis, but it may be present in other models. If so, it changes our whole view of the asymptotic behaviour of lattice models.

  1. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
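
    A quick numerical check of this ensemble is easy to set up. The sketch below draws two independent Haar-random bipartite pure states, partial-traces each over the second subsystem, and computes the spectrum, trace distance, and operator norm of the difference; the dimensions and random seed are arbitrary choices.

```python
# Quick numerical check of the ensemble above: two independent reduced states
# from Haar-random bipartite pure states, and the spectrum of their
# difference. Dimensions and the seed are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 8, 8                                # subsystem dimensions

def random_reduced_state(dA, dB, rng):
    # Haar-random pure state on A x B from a normalized complex Gaussian
    psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T                # partial trace over subsystem B

rho = random_reduced_state(dA, dB, rng)
sigma = random_reduced_state(dA, dB, rng)
eigs = np.linalg.eigvalsh(rho - sigma)       # spectrum of the difference

print("trace distance :", 0.5 * np.sum(np.abs(eigs)))
print("operator norm  :", np.max(np.abs(eigs)))
```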

  2. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; (3) without applying any strategy to eliminate learning errors, there is a threshold of learning error rate beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
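
    The dynamics described above can be prototyped in a few lines. The following toy simulation implements a minimal naming game with a uniform learning-error rate on a fully mixed population; the topology, error model, and parameter values are illustrative simplifications rather than the authors' exact NGLE specification.

```python
# Toy naming game with a uniform learning-error rate, loosely following the
# NGLE setup described above. A fully mixed population stands in for the
# network topologies studied in the paper; all parameters are illustrative.
import random

random.seed(1)
N, STEPS, ERR = 50, 20000, 0.05          # agents, interactions, error rate
memory = [set() for _ in range(N)]       # each agent's lexicon
next_word = 0                            # counter used to mint new words

for _ in range(STEPS):
    s, h = random.sample(range(N), 2)    # pick a speaker and a hearer
    if not memory[s]:                    # speaker with empty lexicon invents
        memory[s].add(next_word)
        next_word += 1
    word = random.choice(tuple(memory[s]))
    if random.random() < ERR:            # learning error: the hearer receives
        word = next_word                 # a corrupted (brand-new) word
        next_word += 1
    if word in memory[h]:                # success: both collapse to this word
        memory[s] = {word}
        memory[h] = {word}
    else:                                # failure: hearer memorizes the word
        memory[h].add(word)

words = set().union(*memory)
print("distinct words:", len(words),
      "mean lexicon size:", sum(map(len, memory)) / N)
```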

  3. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; (3) without applying any strategy to eliminate learning errors, there is a threshold of learning error rate beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.

  4. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to adaptively compensate for the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, determined using Lyapunov theory, is constructed using a sigma-modification-type updating law. The control error is evaluated in terms of the neural network learning error: the control error converges asymptotically to a neighborhood of zero whose size depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
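
    For concreteness, a minimal sketch of a sigma-modification-type update on a scalar first-order plant is given below. The plant nonlinearity, RBF basis, gains, and reference model are all hypothetical choices; the point is only the structure of the update law, in which the leakage term keeps the weights bounded.

```python
# Minimal sketch of a sigma-modification-type adaptive law on a scalar
# first-order plant. The plant nonlinearity, RBF basis, gains, and reference
# model are hypothetical choices; only the structure of the update matters.
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)

def rbf(x):
    # radial basis function features used by the adaptive compensator
    return np.exp(-((x - centers) ** 2) / 0.5)

W = np.zeros(9)                           # adaptive weights
x, xm = 0.5, 0.0                          # plant and reference-model states
dt, gamma, sigma, am, k, r = 1e-3, 5.0, 0.1, 2.0, 5.0, 1.0

for _ in range(20000):
    e = x - xm                            # tracking error
    u = -k * e + W @ rbf(x)               # feedback plus neural compensation
    x += dt * (-x + x ** 3 + u)           # plant with unknown nonlinearity x^3
    xm += dt * (-am * xm + r)             # reference model
    # sigma-modification: the -gamma*sigma*W leakage keeps W bounded even
    # when a residual approximation error persists
    W += dt * (-gamma * e * rbf(x) - gamma * sigma * W)

print("final tracking error:", x - xm)
```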

  5. Model reference tracking control of an aircraft: a robust adaptive approach

    NASA Astrophysics Data System (ADS)

    Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan

    2017-05-01

    This work presents the design and corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional-integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.

  6. Asymptotic (h tending to infinity) absolute stability for BDFs applied to stiff differential equations. [Backward Differentiation Formulas

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Stewart, K.

    1984-01-01

    Methods based on backward differentiation formulas (BDFs) for solving stiff differential equations require iterating to approximate the solution of the corrector equation on each step. One hope for reducing the cost of this is to make do with iteration matrices that are known to have errors and to do no more iterations than are necessary to maintain the stability of the method. This paper, following work by Klopfenstein, examines the effect of errors in the iteration matrix on the stability of the method. Application of the results to an algorithm is discussed briefly.

  7. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory to construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and that they perform as well as or better than optimal designs based on any other asymptotically linear unbiased estimator, such as the least squares estimator (LSE). In addition, we show that current algorithms for finding optimal designs can be used directly to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness to misspecification in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and are quite robust to misspecification in the probability distribution of the responses. PMID:28163359

  8. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It was shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper derives the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified.

  9. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general-purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.

  10. Asymptotic structure of the Einstein-Maxwell theory on AdS3

    NASA Astrophysics Data System (ADS)

    Pérez, Alfredo; Riquelme, Miguel; Tempo, David; Troncoso, Ricardo

    2016-02-01

    The asymptotic structure of AdS spacetimes in the context of General Relativity coupled to the Maxwell field in three spacetime dimensions is analyzed. Although the fall-off of the fields is relaxed with respect to that of Brown and Henneaux, the variation of the canonical generators associated to the asymptotic Killing vectors can be shown to be finite once required to span the Lie derivative of the fields. The corresponding surface integrals then acquire explicit contributions from the electromagnetic field, and become well defined provided they fulfill suitable integrability conditions, implying that the leading terms of the asymptotic form of the electromagnetic field are functionally related. Consequently, for a generic choice of boundary conditions, the asymptotic symmetries are broken down to ℝ ⊗ U(1) ⊗ U(1). Nonetheless, requiring compatibility of the boundary conditions with one of the asymptotic Virasoro symmetries singles out the set to be characterized by an arbitrary function of a single variable, whose precise form depends on the choice of the chiral copy. Remarkably, requiring the asymptotic symmetries to contain the full conformal group selects a very special set of boundary conditions that is labeled by a unique constant parameter, so that the algebra of the canonical generators is given by the direct sum of two copies of the Virasoro algebra with the standard central extension and U(1). This special set of boundary conditions makes the energy spectrum of electrically charged rotating black holes well behaved.

  11. Shared Dosimetry Error in Epidemiological Dose-Response Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. We derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model, which allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias using the multiple realizations of dose. Use of these methods for several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
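
    As a schematic of the data structure involved, the sketch below builds a subjects-by-realizations dose matrix with a shared (per-realization) and an unshared (per-subject) multiplicative error component, then forms the per-subject mean dose and the between-subject dose covariance that an information-matrix adjustment would draw on. All distributions and sizes are invented for illustration.

```python
# Schematic of a multiple-realization dose input: a subjects-by-realizations
# matrix with shared (per-realization) and unshared (per-subject)
# multiplicative error components. Shapes and distributions are invented.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_real = 200, 100

true_dose = rng.gamma(2.0, 0.05, size=n_subjects)       # unknown in practice
shared = rng.lognormal(0.0, 0.2, size=n_real)           # one factor per realization
unshared = rng.lognormal(0.0, 0.3, size=(n_subjects, n_real))
realizations = true_dose[:, None] * unshared * shared[None, :]

# mean dose per subject: usable as if it were true dose for unbiased scores
mean_dose = realizations.mean(axis=1)
# between-subject dose covariance: shared errors correlate subjects, and this
# is the extra information an adjusted information matrix would draw on
dose_cov = np.cov(realizations)
print("first mean doses:", mean_dose[:3])
print("dose covariance shape:", dose_cov.shape)
```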

  12. Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2016-02-22

    A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception is presented, under different turbulence models that are statistically independent but not necessarily identically distributed (i.n.i.d.). Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures are considered in order to reduce the effect of the nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is used. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures, and different aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct-path link when the inherent boresight displacement takes small values, i.e., when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which are due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are included to confirm the analytical results.

  13. Consistency and convergence for numerical radiation conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.

  14. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  15. A Note on the Estimator of the Alpha Coefficient for Standardized Variables Under Normality

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Kamata, Akihito

    2005-01-01

    The asymptotic standard deviation (SD) of the alpha coefficient with standardized variables is derived under normality. The research shows that the SD of the standardized alpha coefficient becomes smaller as the number of examinees and/or items increase. Furthermore, this research shows that the degree of the dependence of the SD on the number of…

  16. Phase structure of completely asymptotically free SU(Nc) models with quarks and scalar quarks

    NASA Astrophysics Data System (ADS)

    Hansen, F. F.; Janowski, T.; Langæble, K.; Mann, R. B.; Sannino, F.; Steele, T. G.; Wang, Z. W.

    2018-03-01

    We determine the phase diagram of completely asymptotically free SU(Nc) gauge theories featuring Ns complex scalars and Nf Dirac quarks transforming according to the fundamental representation of the gauge group. The analysis is performed at the maximum known order in perturbation theory. We unveil very rich dynamics and an associated phase structure. Intriguingly, we discover that the complete asymptotic freedom conditions guarantee that the infrared dynamics displays long-distance conformality, in a regime where perturbation theory is applicable. We conclude our analysis by determining the quantum-corrected potential of the model and summarizing the possible patterns of radiative symmetry breaking. These models are of potential phenomenological interest as either elementary or composite ultraviolet-finite extensions of the standard model.

  17. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE PAGES

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...

    2015-10-30

    We studied self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems, including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean field that couples all the degrees of freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant as in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system, which appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduce a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step toward understanding transport, we study a special type of orbit referred to as sequential periodic orbits. Using symmetry properties, we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the predictions of the asymptotic methods.
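
    The flavor of such a self-consistent map can be captured in a few lines: each particle is advanced by a standard-map-like kick whose amplitude and phase are read off from the instantaneous mean field of the whole ensemble. The sketch below uses the first Fourier mode of the angle distribution as the mean field; the parameters and coupling form are illustrative, not the paper's exact model.

```python
# Bare-bones mean-field ensemble of standard-like maps: each particle gets a
# standard-map kick whose amplitude and phase come from the instantaneous
# first Fourier mode of the whole ensemble. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N, STEPS, kappa = 1000, 500, 0.8

theta = rng.uniform(0.0, 2.0 * np.pi, N)   # angles
p = rng.uniform(-0.5, 0.5, N)              # momenta

for _ in range(STEPS):
    z = np.mean(np.exp(1j * theta))        # self-consistent mean field
    amp, phase = np.abs(z), np.angle(z)
    p = p + kappa * amp * np.sin(theta - phase)   # dynamical kick strength
    theta = (theta + p) % (2.0 * np.pi)

print("final mean-field amplitude:", np.abs(np.mean(np.exp(1j * theta))))
```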

  18. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.

    We studied self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems, including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean field that couples all the degrees of freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant as in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system, which appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduce a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step toward understanding transport, we study a special type of orbit referred to as sequential periodic orbits. Using symmetry properties, we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the predictions of the asymptotic methods.

  19. Strong Converse Exponents for a Quantum Channel Discrimination Problem and Quantum-Feedback-Assisted Communication

    NASA Astrophysics Data System (ADS)

    Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.

    2016-06-01

    This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel 𝒩 as the optimal Type II error exponent when discriminating between a large number of independent instances of 𝒩 and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.

  20. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations comprehensively sample black-hole spins up to spin magnitude 0.9 and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10⁻⁴. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of the finite-length numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10⁻⁴. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
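
    The noise-weighted comparison quoted above boils down to an inner product ⟨a|b⟩ = 4 Re ∫ a(f) b*(f)/S_n(f) df and the mismatch 1 − ⟨a|b⟩/√(⟨a|a⟩⟨b|b⟩). The sketch below evaluates this for two synthetic frequency-domain signals and a toy noise curve; none of these stand in for the paper's waveforms or a real detector PSD.

```python
# Sketch of a noise-weighted inner product and mismatch of the kind used in
# the error analysis above. The "waveforms" and the noise curve below are
# synthetic stand-ins, not simulation data or a real detector PSD.
import numpy as np

f = np.linspace(20.0, 1024.0, 4000)                 # frequency grid (Hz)
df = f[1] - f[0]
Sn = (f / 200.0) ** -4 + 1.0 + (f / 300.0) ** 2     # toy noise power spectrum

def inner(a, b):
    # <a|b> = 4 Re sum a(f) b*(f) / Sn(f) df  (uniform-grid quadrature)
    return 4.0 * np.real(np.sum(a * np.conj(b) / Sn)) * df

# two slightly different synthetic frequency-domain signals
a = f ** (-7.0 / 6.0) * np.exp(1j * f ** 0.9)
b = f ** (-7.0 / 6.0) * np.exp(1j * (f ** 0.9 + 1e-3 * f ** 0.5))

mismatch = 1.0 - inner(a, b) / np.sqrt(inner(a, a) * inner(b, b))
print("noise-weighted mismatch:", mismatch)
```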

  1. Robustness in linear quadratic feedback design with application to an aircraft control problem

    NASA Technical Reports Server (NTRS)

    Patel, R. V.; Sridhar, B.; Toda, M.

    1977-01-01

    Some new results concerning robustness and asymptotic properties of error bounds of a linear quadratic feedback design are applied to an aircraft control problem. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed based on Linear Quadratic (LQ) theory and the results developed in this paper. The variation of the error bounds to changes in the weighting matrices in the LQ design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable error bound for variations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.

  2. A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.

    2001-08-01

    A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(−Q(x)) dx (Q a polynomial or Q(x) = x^β, β > 0), or (2) varying weights dα_n(x) = e^(−nV(x)) dx (V analytic, lim_{x→∞} V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest descent type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.

  3. Anomalous diffusion with linear reaction dynamics: from continuous time random walks to fractional reaction-diffusion equations.

    PubMed

    Henry, B I; Langlands, T A M; Wearne, S L

    2006-09-01

    We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
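
    A direct Monte Carlo rendering of the first variant above (removal of a constant proportion of walkers at the start of each step, with heavy-tailed waiting times) is sketched below. The Pareto waiting-time exponent, removal probability, and jump rule are illustrative choices.

```python
# Monte Carlo sketch of the first variant above: at the start of each step a
# constant proportion of walkers is removed, and heavy-tailed (Pareto)
# waiting times produce subdiffusion. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_walkers, alpha, removal, t_max = 20000, 0.7, 0.01, 100.0

t = np.zeros(n_walkers)                    # each walker's clock
x = np.zeros(n_walkers)                    # each walker's position
alive = np.ones(n_walkers, dtype=bool)

while np.any(alive & (t < t_max)):
    active = alive & (t < t_max)
    # linear reaction: remove a fixed proportion at the start of the step
    removed = active & (rng.random(n_walkers) < removal)
    alive &= ~removed
    active &= alive
    # heavy-tailed waiting time, then a unit jump left or right
    t[active] += rng.pareto(alpha, active.sum()) + 1.0
    x[active] += rng.choice([-1.0, 1.0], active.sum())

print("survivors:", alive.sum(),
      "mean-square displacement:", np.mean(x[alive] ** 2))
```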

  4. Generalized Ordinary Differential Equation Models

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. A simulation study and an application of our methods to an influenza viral dynamics study suggest that the proposed methods offer superior accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787

  5. Generalized Ordinary Differential Equation Models.

    PubMed

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. A simulation study and an application of our methods to an influenza viral dynamics study suggest that the proposed methods offer superior accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  6. Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1972-01-01

    Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because it involves the difference between exponentially large quantities. This article demonstrates a method in which a much-reduced-precision program can be used to obtain the mean first-cycle-slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.
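
    The kind of double integral involved can be evaluated with nested quadrature, as sketched below for diffusion in the tilted periodic potential U(φ) = −ρ cos φ − γφ, a common first-order-loop analogue in which ρ plays the role of loop SNR and γ sets the steady-state phase error. The potential form and normalization are assumptions for illustration, not the paper's exact formula; note how the integrand pairs exponentially large against exponentially small factors, which is exactly the precision issue the abstract describes.

```python
# Nested quadrature for the kind of double integral described above, using
# the textbook mean-first-passage-time expression for diffusion in the tilted
# periodic potential U(phi) = -rho*cos(phi) - gamma*phi. Here rho plays the
# role of loop SNR and gamma sets the steady-state phase error; the potential
# form and normalization are assumptions, not the paper's exact formula.
import numpy as np
from scipy.integrate import quad

rho, gamma = 4.0, 1.0

def U(phi):
    return -rho * np.cos(phi) - gamma * phi

def mean_slip_time():
    # note the exponentially large inner factor against the exponentially
    # small outer one: precisely the precision hazard the abstract mentions
    inner = lambda phi: quad(lambda psi: np.exp(-U(psi)),
                             phi - 2.0 * np.pi, phi)[0]
    outer, _ = quad(lambda phi: np.exp(U(phi)) * inner(phi), -np.pi, np.pi)
    return outer                           # in normalized loop-time units

print("normalized mean time to first cycle slip:", mean_slip_time())
```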

  7. Eigenvalue asymptotics for the damped wave equation on metric graphs

    NASA Astrophysics Data System (ADS)

    Freitas, Pedro; Lipovský, Jiří

    2017-09-01

    We consider the linear damped wave equation on finite metric graphs and analyse its spectral properties with an emphasis on the asymptotic behaviour of eigenvalues. In the case of equilateral graphs and standard coupling conditions we show that there is only a finite number of high-frequency abscissas, whose location is solely determined by the averages of the damping terms on each edge. We further describe some of the possible behaviour when the edge lengths are no longer necessarily equal but remain commensurate.

  8. Output Containment Control of Linear Heterogeneous Multi-Agent Systems Using Internal Model Principle.

    PubMed

    Zuo, Shan; Song, Yongduan; Lewis, Frank L; Davoudi, Ali

    2017-01-04

    This paper studies the output containment control of linear heterogeneous multi-agent systems, where the system dynamics and even the state dimensions can differ. Since the states can have different dimensions, standard results from state containment control do not apply. The control objective is therefore to guarantee the convergence of the output of each follower to the dynamic convex hull spanned by the outputs of the leaders, which can be achieved by making certain output containment errors go to zero asymptotically. Based on this formulation, two control protocols, namely full-state feedback and static output-feedback, are designed using internal model principles. Sufficient local conditions for the existence of the proposed control protocols are developed in terms of stabilizing the local followers' dynamics and satisfying a certain H∞ criterion. Unified design procedures for the two control protocols are presented via the formulation and solution of certain local state-feedback and static output-feedback problems, respectively. Numerical simulations are given to validate the proposed control protocols.

  9. Multivariate longitudinal data analysis with censored and intermittent missing responses.

    PubMed

    Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun

    2018-05-08

    The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data can be complicated by censored measurements, due to a detection limit of the assay, in combination with missing values that arise when subjects intermittently miss scheduled visits. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of multivariate longitudinal data with censored and intermittently missing responses. A computationally feasible expectation-maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of the fixed effects are obtained explicitly via the information-based method. We illustrate our methodology using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method provides more satisfactory performance than the traditional MLMM approach.

  10. Multicopy programmable discrimination of general qubit states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sentis, G.; Bagan, E.; Calsamiglia, J.

    2010-10-15

    Quantum state discrimination is a fundamental primitive in quantum statistics where one has to correctly identify the state of a system that is in one of two possible known states. A programmable discrimination machine performs this task when the pair of possible states is not a priori known but instead the two possible states are provided through two respective program ports. We study optimal programmable discrimination machines for general qubit states when several copies of states are available in the data or program ports. Two scenarios are considered: one in which the purity of the possible states is a priori known, and the fully universal one where the machine operates over generic mixed states of unknown purity. We find analytical results for both the unambiguous and minimum error discrimination strategies. This allows us to calculate the asymptotic performance of programmable discrimination machines when a large number of copies are provided and to recover the standard state discrimination and state comparison values as different limiting cases.

  11. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  12. Robust adaptive cruise control of high speed trains.

    PubMed

    Faieghi, Mohammadreza; Jalali, Aliakbar; Mashhadi, Seyed Kamal-e-ddin Mousavi

    2014-03-01

    The cruise control problem of high speed trains in the presence of unknown parameters and external disturbances is considered. In particular, a Lyapunov-based robust adaptive controller is presented to achieve asymptotic tracking and disturbance rejection. The system under consideration is nonlinear, MIMO and non-minimum phase. To deal with the limitations arising from the unstable zero-dynamics, we perform an output redefinition such that the zero-dynamics with respect to the new outputs become stable. Rigorous stability analyses are presented which establish the boundedness of all the internal states together with asymptotic stability of the tracking error dynamics. The results are presented for two common configurations of high speed trains, i.e. the DD and PPD designs, based on the multi-body model, and are verified by several numerical simulations.

  13. Rigorous derivation of the effective model describing a non-isothermal fluid flow in a vertical pipe filled with porous medium

    NASA Astrophysics Data System (ADS)

    Beneš, Michal; Pažanin, Igor

    2018-03-01

    This paper reports an analytical investigation of non-isothermal fluid flow in a thin (or long) vertical pipe filled with porous medium via asymptotic analysis. We assume that the fluid inside the pipe is cooled (or heated) by the surrounding medium and that the flow is governed by the prescribed pressure drop between the pipe's ends. Starting from the dimensionless Darcy-Brinkman-Boussinesq system, we formally derive a macroscopic model describing the effective flow at small Brinkman-Darcy number. The asymptotic approximation is given by explicit formulae for the velocity, pressure and temperature, clearly acknowledging the effects of the cooling (heating) and the porous structure. A theoretical error analysis is carried out to indicate the order of accuracy and to provide a rigorous justification of the effective model.

  14. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges that optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to octupole order. PMID:23861790
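
    A generic least-squares version of this idea is easy to prototype: choose charge magnitudes and positions to minimize the RMS potential error on a mid-field shell around a reference distribution. The sketch below is not the authors' OPCA/PPCA construction (which enforces multipole constraints); it is a plain numerical fit with invented source charges.

```python
# Plain least-squares stand-in for the idea above: choose two point charges
# (magnitudes and positions) minimizing the RMS potential error on a
# mid-field shell around an invented reference distribution. This is not the
# authors' OPCA/PPCA construction, just a generic numerical fit.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
src_q = np.array([0.4, -0.7, 0.3])                  # reference charges
src_r = rng.normal(0.0, 0.3, size=(3, 3))           # reference positions

def potential(q, r, pts):
    d = np.linalg.norm(pts[:, None, :] - r[None, :, :], axis=2)
    return (q[None, :] / d).sum(axis=1)

u = rng.normal(size=(200, 3))                       # mid-field shell points
pts = 3.0 * u / np.linalg.norm(u, axis=1, keepdims=True)
V_ref = potential(src_q, src_r, pts)

def rms_error(params):                              # 2 charges + 2 positions
    q, r = params[:2], params[2:].reshape(2, 3)
    return np.sqrt(np.mean((potential(q, r, pts) - V_ref) ** 2))

x0 = np.concatenate([[0.1, -0.1], rng.normal(0.0, 0.2, 6)])
res = minimize(rms_error, x0, method="Nelder-Mead",
               options={"maxiter": 5000})
print("mid-field RMS potential error:", res.fun)
```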

  15. Boundary asymptotics for a non-neutral electrochemistry model with small Debye length

    NASA Astrophysics Data System (ADS)

    Lee, Chiun-Chang; Ryham, Rolf J.

    2018-04-01

    This article addresses the boundary asymptotics of the electrostatic potential in non-neutral electrochemistry models with small Debye length in bounded domains. Under standard physical assumptions motivated by non-electroneutral phenomena in oxidation-reduction reactions, we show that the electrostatic potential asymptotically blows up at boundary points with respect to the bulk reference potential as the scaled Debye length tends to zero. The analysis gives a lower bound for the blow-up rate with respect to the model parameters. Moreover, the maximum potential difference over any compact subset of the physical domain vanishes exponentially in the zero-Debye-length limit. The results mathematically confirm the physical description that electrolyte solutions are electrically neutral in the bulk and are strongly electrically non-neutral near charged surfaces.

  16. Asymptotic solutions for flow in microchannels with ridged walls and arbitrary meniscus protrusion

    NASA Astrophysics Data System (ADS)

    Kirk, Toby

    2017-11-01

    Flows over structured surfaces exhibiting apparent slip, such as parallel ridges, have received much attention experimentally and numerically, but analytical and asymptotic solutions that account for the microstructure have so far been limited to unbounded geometries such as shear-driven flows. Analysis for channel flows has been limited to (close to) flat interfaces spanning the grooves between ridges, but in applications the interfaces (menisci) can protrude strongly and have a significant impact on the apparent slip. In this presentation, we consider pressure-driven flow through a microchannel with longitudinal ridges patterning one or both walls. With no restriction on the meniscus protrusion, we develop explicit formulae for the slip length using a formal matched asymptotic expansion. Assuming the ratio of channel height to ridge period is large, the periodicity is confined to an inner layer close to the ridges, and the expansion is found to all algebraic orders. As a result, the error is exponentially small and, under a further "diluteness" assumption, the explicit formulae are compared to finite element solutions. They are found to have a very wide range of validity in channel height (even when the menisci can touch the opposing wall) and so are useful for practitioners.

  17. Universality and tails of long-range interactions in one dimension

    NASA Astrophysics Data System (ADS)

    Valiente, Manuel; Öhberg, Patrik

    2017-07-01

    Long-range interactions and, in particular, two-body potentials with power-law long-distance tails are ubiquitous in nature. For two bosons or fermions in one spatial dimension, the latter case being formally equivalent to three-dimensional s -wave scattering, we show how generic asymptotic interaction tails can be accounted for in the long-distance limit of scattering wave functions. This is made possible by introducing a generalization of the collisional phase shifts to include space dependence. We show that this distance dependence is universal, in that it does not depend on short-distance details of the interaction. The energy dependence is also universal, and is fully determined by the asymptotic tails of the two-body potential. As an important application of our findings, we describe how to eliminate finite-size effects with long-range potentials in the calculation of scattering phase shifts from exact diagonalization. We show that even with moderately small system sizes it is possible to accurately extract phase shifts that would otherwise be plagued with finite-size errors. We also consider multichannel scattering, focusing on the estimation of open channel asymptotic interaction strengths via finite-size analysis.

  18. Two-parameter asymptotics in magnetic Weyl calculus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lein, Max

    2010-12-15

    This paper is concerned with small-parameter asymptotics of magnetic quantum systems. In addition to a semiclassical parameter ε, the case of small coupling λ to the magnetic vector potential naturally occurs in this context. Magnetic Weyl calculus is adapted to incorporate both parameters, at least one of which needs to be small. Of particular interest is the expansion of the Weyl product, which can be used to expand the product of operators in a small parameter, a technique which is prominent in obtaining perturbation expansions. Three asymptotic expansions for the magnetic Weyl product of two Hörmander class symbols are proven as (i) ε ≪ 1 and λ ≪ 1, (ii) ε ≪ 1 and λ = 1, as well as (iii) ε = 1 and λ ≪ 1. Expansions (i) and (iii) are impossible to obtain with ordinary Weyl calculus. Furthermore, I relate the results derived by ordinary Weyl calculus with those obtained with magnetic Weyl calculus by one- and two-parameter expansions. To show the power and versatility of magnetic Weyl calculus, I derive the semirelativistic Pauli equation as a scaling limit from the Dirac equation up to errors of fourth order in 1/c.

  19. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, therefore providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials, with parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
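
    To make the objects concrete, the sketch below fits an identity-link marginal mean model by least squares on simulated clustered data and forms the usual GEE sandwich variance estimator with an independence working correlation. Small-sample corrections of the kind studied above would rescale or otherwise adjust the per-cluster contributions to the middle "meat" term; the data, model, and cluster count are invented.

```python
# Identity-link marginal mean model on simulated clustered data with the
# usual GEE sandwich variance (independence working correlation). The data,
# model, and cluster count are invented; small-sample corrections of the kind
# studied above rescale the per-cluster contributions to the "meat" term.
import numpy as np

rng = np.random.default_rng(7)
n_clusters, m = 10, 20                         # few clusters, as in the setting
beta_true = np.array([1.0, 0.5])

X, Y = [], []
for _ in range(n_clusters):
    x = np.column_stack([np.ones(m), rng.random(m)])
    b = rng.normal(0.0, 0.3)                   # cluster-level random effect
    X.append(x)
    Y.append(x @ beta_true + b + rng.normal(0.0, 1.0, m))

Xall, Yall = np.vstack(X), np.concatenate(Y)
beta = np.linalg.lstsq(Xall, Yall, rcond=None)[0]   # GEE point estimate

bread = np.linalg.inv(Xall.T @ Xall)
meat = sum(x.T @ np.outer(y - x @ beta, y - x @ beta) @ x
           for x, y in zip(X, Y))
sandwich = bread @ meat @ bread                # robust covariance of beta
print("beta:", beta, "robust SEs:", np.sqrt(np.diag(sandwich)))
```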

  20. The Problem of Modeling the Elastomechanics in Engineering

    DTIC Science & Technology

    1990-02-01

    element method by the code PROBE (McNeil Schwendler-Noetic) and STRIPE (Aeronautical Institute of Sweden). These codes have various error checks so that ... Mindlin solutions converge to the Kirchhoff solution as d → 0, see e.g. [12], [19]. For a detailed study of the asymptotic behavior of Reissner ... of study and research for foreign students in numerical mathematics who are supported by foreign governments or exchange agencies (Fulbright, etc.)

  1. Isotonic Regression Based-Method in Quantitative High-Throughput Screenings for Genotoxicity

    PubMed Central

    Fujii, Yosuke; Narita, Takeo; Tice, Raymond Richard; Takeda, Shunich

    2015-01-01

    Quantitative high-throughput screenings (qHTSs) for genotoxicity are conducted as part of comprehensive toxicology screening projects. The most widely used method is to compare the dose-response data of a wild-type strain and DNA-repair-gene knockout mutants, using model fitting to the Hill equation (HE). However, this method performs poorly when the observed viability does not fit the equation well, as frequently happens in qHTS. More capable methods must be developed for qHTS, where large data variations are unavoidable. In this study, we applied an isotonic regression (IR) method and compared its performance with HE under multiple data conditions. When dose-response data were suitable for drawing HE curves with upper and lower asymptotes and experimental random errors were small, HE was better than IR, but when random errors were large, there was no difference between HE and IR. However, when the drawn curves did not have two asymptotes, IR showed better performance (p < 0.05, exact paired Wilcoxon test) with higher specificity (65% in HE vs. 96% in IR). In summary, IR performed similarly to HE when dose-response data were optimal, whereas IR clearly performed better in suboptimal conditions. These findings indicate that IR would be useful in qHTS for comparing dose-response data. PMID:26673567
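
    The comparison above can be reproduced in miniature with standard tools: a nonlinear least-squares fit of the Hill equation versus a monotone-decreasing isotonic regression on the same noisy dose-response data. The simulated viability curve, noise level, and doses below are illustrative; scipy and scikit-learn supply the fitting routines.

```python
# Miniature version of the comparison above: a Hill-equation fit by nonlinear
# least squares versus a decreasing isotonic regression on the same noisy
# dose-response data. The viability curve, noise level, and doses are simulated.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(6)
dose = np.logspace(-2, 2, 15)
true = 100.0 / (1.0 + (dose / 1.5) ** 2.0)      # viability (%) versus dose
y = true + rng.normal(0.0, 8.0, dose.size)      # large experimental noise

def hill(d, top, bottom, ec50, n):
    return bottom + (top - bottom) / (1.0 + (d / ec50) ** n)

popt, _ = curve_fit(hill, dose, y, p0=[100.0, 0.0, 1.0, 1.0],
                    bounds=([0.0, -50.0, 1e-3, 0.1],
                            [200.0, 50.0, 100.0, 10.0]))
y_hill = hill(dose, *popt)

iso = IsotonicRegression(increasing=False)       # viability must decrease
y_iso = iso.fit_transform(np.log10(dose), y)

print("Hill RMS vs truth:", np.sqrt(np.mean((y_hill - true) ** 2)))
print("IR   RMS vs truth:", np.sqrt(np.mean((y_iso - true) ** 2)))
```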

  2. Scalar solitons and the microscopic entropy of hairy black holes in three dimensions

    NASA Astrophysics Data System (ADS)

    Correa, Francisco; Martínez, Cristián; Troncoso, Ricardo

    2011-01-01

    General Relativity coupled to a self-interacting scalar field in three dimensions is shown to admit exact analytic soliton solutions, such that the metric and the scalar field are regular everywhere. Since the scalar field acquires slow fall-off at infinity, the soliton describes an asymptotically AdS spacetime in a relaxed sense as compared with that of Brown and Henneaux. Nevertheless, the asymptotic symmetry group remains the conformal group, and the algebra of the canonical generators possesses the standard central extension. For this class of asymptotic behavior, the theory also admits hairy black holes, which raises some puzzles concerning a holographic derivation of their entropy à la Strominger. Since the soliton is devoid of integration constants, it has a fixed (negative) mass, and it can be naturally regarded as the ground state of the "hairy sector", for which the scalar field is switched on. This assumption allows one to exactly reproduce the semiclassical hairy black hole entropy from the asymptotic growth of the number of states by means of the Cardy formula. Particularly useful is expressing the asymptotic growth of the number of states only in terms of the spectrum of the Virasoro operators, without making any explicit reference to the central charges.

  3. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
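
    A toy illustration (not the paper's 2 x 2 model problems) of how the iterative solver inside an implicit scheme can change the computed asymptote: implicit Euler on the logistic equation, with the implicit update solved by simple (fixed-point) iteration versus Newton iteration. At the larger step size the simple iteration settles into a spurious numerical oscillation instead of the true steady state y = 1:

        import numpy as np

        def f(y):                                  # logistic right-hand side: dy/dt = y(1 - y)
            return y * (1.0 - y)

        def implicit_euler_step(y, h, solver, iters=200):
            x = y                                  # solve x = y + h*f(x) for the new state
            for _ in range(iters):
                if solver == "fixed_point":
                    x = y + h * f(x)               # simple iteration
                    x = np.clip(x, -2.0, 3.0)      # keep a non-convergent iteration bounded
                else:                              # Newton on g(x) = x - y - h*f(x)
                    g = x - y - h * f(x)
                    dg = 1.0 - h * (1.0 - 2.0 * x)
                    x = x - g / dg
            return x

        for h in (0.4, 1.2):
            y_fp = y_nw = 0.9
            for _ in range(100):                   # march toward the numerical asymptote
                y_fp = implicit_euler_step(y_fp, h, "fixed_point")
                y_nw = implicit_euler_step(y_nw, h, "newton")
            print(f"h={h}: simple iteration -> {y_fp:.4f}, Newton -> {y_nw:.4f}")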

  4. A new asymptotic method for jump phenomena

    NASA Technical Reports Server (NTRS)

    Reiss, E. L.

    1980-01-01

    Physical phenomena involving rapid and sudden transitions, such as snap buckling of elastic shells, explosions, and earthquakes, are characterized mathematically as a small disturbance causing a large-amplitude response. Because of this, standard asymptotic and perturbation methods are ill-suited to these problems. In the present paper, a new method of analyzing jump phenomena is proposed. The principal feature of the method is the representation of the response in terms of rational functions. For illustration, the method is applied to the snap buckling of an elastic arch and to a simple combustion problem.

  5. Asymptotic safety of gravity with matter

    NASA Astrophysics Data System (ADS)

    Christiansen, Nicolai; Litim, Daniel F.; Pawlowski, Jan M.; Reichert, Manuel

    2018-05-01

    We study the asymptotic safety conjecture for quantum gravity in the presence of matter fields. A general line of reasoning is put forward explaining why gravitons dominate the high-energy behavior, largely independently of the matter fields as long as these remain sufficiently weakly coupled. Our considerations are put to work for gravity coupled to Yang-Mills theories with the help of the functional renormalization group. In an expansion about flat backgrounds, explicit results for beta functions, fixed points, universal exponents, and scaling solutions are given in systematic approximations exploiting running propagators, vertices, and background couplings. Invariably, we find that the gauge coupling becomes asymptotically free while the gravitational sector becomes asymptotically safe. The dependence on matter field multiplicities is weak. We also explain how the scheme dependence, which is more pronounced, can be handled without changing the physics. Our findings offer a new interpretation of many earlier results, which is explained in detail. The results generalize to theories with minimally coupled scalar and fermionic matter. Some implications for the ultraviolet closure of the Standard Model or its extensions are given.

  6. Upper bound on the Abelian gauge coupling from asymptotic safety

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Versteegen, Fleur

    2018-01-01

    We explore the impact of asymptotically safe quantum gravity on the Abelian gauge coupling in a model including a charged scalar, confirming indications that asymptotically safe quantum fluctuations of gravity could trigger a power-law running towards a free fixed point for the gauge coupling above the Planck scale. Simultaneously, quantum gravity fluctuations balance against matter fluctuations to generate an interacting fixed point, which acts as a boundary of the basin of attraction of the free fixed point. This enforces an upper bound on the infrared value of the Abelian gauge coupling. In the regime of gravity couplings which in our approximation also allows for a prediction of the top quark and Higgs mass close to the experimental value [1], we obtain an upper bound approximately 35% above the infrared value of the hypercharge coupling in the Standard Model.
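
    The mechanism can be summarized by a schematic beta function for the Abelian coupling g in the trans-Planckian regime (a sketch of the structure described above, with f_g the gravity-induced linear term and b_0 an illustrative matter coefficient, not the paper's computed values):

        \beta_{g} \;=\; -\,f_{g}\,g \;+\; \frac{b_{0}}{16\pi^{2}}\,g^{3}
        \quad\Longrightarrow\quad
        g_{*} = 0
        \quad\text{or}\quad
        g_{*}^{2} \;=\; \frac{16\pi^{2} f_{g}}{b_{0}}.

    Trajectories that emanate from the free fixed point cannot flow past the interacting one, which is the origin of the quoted upper bound on the infrared coupling.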

  7. Revisiting tests for neglected nonlinearity using artificial neural networks.

    PubMed

    Cho, Jin Seo; Ishida, Isao; White, Halbert

    2011-05-01

    Tests for regression neglected nonlinearity based on artificial neural networks (ANNs) have so far been studied by separately analyzing the two ways in which the null of regression linearity can hold. This implies that the asymptotic behavior of general ANN-based tests for neglected nonlinearity is still an open question. Here we analyze a convenient ANN-based quasi-likelihood ratio statistic for testing neglected nonlinearity, paying careful attention to both components of the null. We derive the asymptotic null distribution under each component separately and analyze their interaction. Somewhat remarkably, it turns out that the previously known asymptotic null distribution for the type 1 case still applies, but under somewhat stronger conditions than previously recognized. We present Monte Carlo experiments corroborating our theoretical results and showing that standard methods can yield misleading inference when our new, stronger regularity conditions are violated.

  8. Asymptotic properties of Pearson's rank-variate correlation coefficient under contaminated Gaussian model.

    PubMed

    Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu

    2014-01-01

    This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive noise-free. As shown in our previous work, these scenarios, which are frequently encountered in radar and/or sonar, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish the asymptotic closed forms of the expectation and variance of PRVCC by means of the well-known Delta method. To gain a deeper understanding, we also compare PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of PRVCC by an example of estimating the time delay in the particular impulsive noise environment.
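
    For reference, the Delta method invoked here is the standard device for transferring an asymptotic normal law through a smooth map g (this is the generic statement, not the paper's specific application to PRVCC):

        \sqrt{n}\,\bigl(\hat{\theta}-\theta\bigr) \;\xrightarrow{\;d\;}\; \mathcal{N}(0,\Sigma)
        \quad\Longrightarrow\quad
        \sqrt{n}\,\bigl(g(\hat{\theta})-g(\theta)\bigr) \;\xrightarrow{\;d\;}\;
        \mathcal{N}\bigl(0,\;\nabla g(\theta)^{\mathsf{T}}\,\Sigma\,\nabla g(\theta)\bigr).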

  9. A new class of energy based control laws for revolute robot arms - Tracking control, robustness enhancement and adaptive control

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz, Kenneth; Bayard, David S.

    1988-01-01

    A class of joint-level control laws for all-revolute robot arms is introduced. The analysis is similar to the recently proposed energy Liapunov function approach except that the closed-loop potential function is shaped in accordance with the underlying joint space topology. By using energy Liapunov functions with the modified potential energy, a much simpler analysis can be used to show closed-loop global asymptotic stability and local exponential stability. When Coulomb and viscous friction and model parameter errors are present, a sliding-mode-like modification of the control law is proposed to add a robustness-enhancing outer loop. Adaptive control is also addressed within the same framework. A linear-in-the-parameters formulation is adopted, and globally asymptotically stable adaptive control laws are derived by replacing the model parameters in the nonadaptive control laws by their estimates.

  10. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
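
    A rough sketch of an inverse probability-of-non-missingness weighted cumulative hazard estimator of the kind described, with a Nadaraya-Watson (Gaussian-kernel) estimate of the non-missingness probability; the variable names, kernel, and bandwidth are illustrative assumptions, not the paper's exact construction:

        import numpy as np

        def ipw_cumhaz(time, delta, xi, bandwidth=0.5):
            # time: follow-up times; delta: censoring indicator (set to 0 when xi == 0);
            # xi: 1 if the censoring indicator was actually recorded, else 0
            def pi_hat(t):               # kernel estimate of P(xi = 1 | time = t)
                w = np.exp(-0.5 * ((time - t) / bandwidth) ** 2)
                return np.sum(w * xi) / np.sum(w)

            order = np.argsort(time)
            n, H, out = len(time), 0.0, []
            for rank, i in enumerate(order):
                at_risk = n - rank       # subjects still at risk at time[i]
                # weighted Nelson-Aalen increment; terms with xi = 0 drop out
                H += xi[i] * delta[i] / (pi_hat(time[i]) * at_risk)
                out.append(H)
            return time[order], np.array(out)

        # the corresponding survival estimate would be S(t) = exp(-H(t))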

  11. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  12. Security of a semi-quantum protocol where reflections contribute to the secret key

    NASA Astrophysics Data System (ADS)

    Krawec, Walter O.

    2016-05-01

    In this paper, we provide a proof of unconditional security for a semi-quantum key distribution protocol introduced in a previous work. This particular protocol demonstrated the possibility of using X basis states to contribute to the raw key of the two users (as opposed to using only direct measurement results) even though a semi-quantum participant cannot directly manipulate such states. In this work, we provide a complete proof of security by deriving a lower bound of the protocol's key rate in the asymptotic scenario. Using this bound, we are able to find an error threshold value such that for all error rates less than this threshold, it is guaranteed that A and B may distill a secure secret key; for error rates larger than this threshold, A and B should abort. We demonstrate that this error threshold compares favorably to several fully quantum protocols. We also comment on some interesting observations about the behavior of this protocol under certain noise scenarios.

  13. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
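
    The flavor of the method-of-moments correction in the linear case can be sketched on synthetic data (the per-subject error variances sigma_u are assumed available, as they would be from the baseline trajectory model; all names are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        x = rng.normal(0, 1, n)                       # true summary (e.g., use frequency)
        sigma_u = 0.3 + 0.4 * rng.uniform(size=n)     # heteroscedastic error s.d.
        w = x + rng.normal(0, sigma_u)                # error-prone observed summary
        y = 1.0 + 2.0 * x + rng.normal(0, 1, n)       # outcome (e.g., craving score)

        # naive OLS slope is attenuated toward zero
        beta_naive = np.cov(w, y, bias=True)[0, 1] / np.var(w)

        # method of moments: subtract the average estimation-error variance
        beta_mom = np.cov(w, y, bias=True)[0, 1] / (np.var(w) - np.mean(sigma_u**2))

        print(beta_naive, beta_mom)                   # attenuated vs. approximately 2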

  14. An Experimental Study of a Six Key Handprint Chord Keyboard.

    DTIC Science & Technology

    1986-05-01

    analysis: sequence time, list time, and errors, is better divided by group of tests, beginning or ending. This division forms a logical outline from which...accomplished pianists . Due to the limited amount of time at the keyboard that volunteers were willing to endure, asymptotic behavior was not reached...considerable attention , and it includes an idea of time 1152 quite different from that enunciated by Newton. According to this theory, 1226 there is no

  15. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
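
    For reference, a generic stochastic EnKF analysis step of the kind the deterministic mean-field approximation is compared against (a standard textbook sketch, not the DMFEnKF itself):

        import numpy as np

        def enkf_analysis(E, H, R, y, rng):
            # E: state ensemble (n_state, n_ens); H: observation operator;
            # R: observation error covariance; y: observation vector
            n_ens = E.shape[1]
            A = E - E.mean(axis=1, keepdims=True)            # state anomalies
            HE = H @ E
            HA = HE - HE.mean(axis=1, keepdims=True)         # observed anomalies
            P_yy = HA @ HA.T / (n_ens - 1) + R               # innovation covariance
            P_xy = A @ HA.T / (n_ens - 1)                    # cross covariance
            # perturbed observations keep the analysis ensemble spread consistent
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
            return E + P_xy @ np.linalg.solve(P_yy, Y - HE)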

  16. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  17. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
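
    A minimal sketch of the linear TSRI estimator with a jointly bootstrapped standard error (variable names are illustrative; the Newey and Terza analytic corrections discussed above are not reproduced here):

        import numpy as np

        def tsri_linear(g, x, y):
            # stage 1: regress the exposure x on the genotype instrument g
            G = np.column_stack([np.ones_like(g), g])
            r = x - G @ np.linalg.lstsq(G, x, rcond=None)[0]   # first-stage residual
            # stage 2: regress the outcome y on x plus the residual
            Z = np.column_stack([np.ones_like(x), x, r])
            return np.linalg.lstsq(Z, y, rcond=None)[0][1]     # causal-effect estimate

        def tsri_bootstrap_se(g, x, y, n_boot=500, seed=0):
            # resample individuals and redo BOTH stages, as the abstract recommends
            rng = np.random.default_rng(seed)
            n, est = len(y), []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)
                est.append(tsri_linear(g[idx], x[idx], y[idx]))
            return np.std(est, ddof=1)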

  18. The genomic structure: proof of the role of non-coding DNA.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2006-01-01

    We prove that the introns play the role of a decoy in absorbing mutations, in the same way hollow uninhabited structures are used by the military to protect important installations. Our approach is based on a probability of error analysis, where errors are mutations which occur in the exon sequences. We derive the optimal exon length distribution, which minimizes the probability of error in the genome. Furthermore, to understand how Nature can generate the optimal distribution, we propose a diffusive random walk model for exon generation throughout evolution. This model results in an alpha stable exon length distribution, which is asymptotically equivalent to the optimal distribution. Experimental results show that both distributions accurately fit the real data. Given that introns also drive biological evolution by increasing the rate of unequal crossover between genes, we conclude that the role of introns is to maintain a genius balance between stability and adaptability in eukaryotic genomes.

  19. On the conditions of exponential stability in active disturbance rejection control based on singular perturbation analysis

    NASA Astrophysics Data System (ADS)

    Shao, S.; Gao, Z.

    2017-10-01

    Stability of active disturbance rejection control (ADRC) is analysed in the presence of unknown, nonlinear, and time-varying dynamics. In the framework of singular perturbations, the closed-loop error dynamics are semi-decoupled into a relatively slow subsystem (the feedback loop) and a relatively fast subsystem (the extended state observer), respectively. It is shown, analytically and geometrically, that there exists a unique exponentially stable solution if the size of the initial observer error is sufficiently small, i.e., of the same order as the inverse of the observer bandwidth. The process of developing the uniformly valid asymptotic solution of the system reveals the condition on the stability of the ADRC and the relationship between the rate of change in the total disturbance and the size of the estimation error. The differentiability of the total disturbance is the only assumption made.

  20. A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.

    PubMed

    Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng

    2016-05-01

    In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes is proposed for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances. By combining tracking errors with derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which requires neither a priori system knowledge nor tuning of input weights. Only output weights of the SLFN need to be updated by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.

  1. Vortical and acoustical mode coupling inside a porous tube with uniform wall suction.

    PubMed

    Jankowski, T A; Majdalani, J

    2005-06-01

    This paper considers the oscillatory motion of gases inside a long porous tube of the closed-open type. In particular, the focus is placed on describing an analytical solution for the internal acoustico-vortical coupling that arises in the presence of appreciable wall suction. This unsteady field is driven by longitudinal oscillatory waves that are triggered by small unavoidable fluctuations in the wall suction speed. Under the assumption of small amplitude oscillations, the time-dependent governing equations are linearized through a regular perturbation of the dependent variables. Further application of the Helmholtz vector decomposition theorem enables us to discriminate between acoustical and vortical equations. After solving the wave equation for the acoustical contribution, the boundary-driven vortical field is considered. The method of matched-asymptotic expansions is then used to obtain a closed-form solution for the unsteady momentum equation developing from flow decomposition. An exact series expansion is also derived and shown to coincide with the numerical solution for the problem. The numerically verified end results suggest that the asymptotic scheme is capable of providing a sufficiently accurate solution. This is due to the error associated with the matched-asymptotic expansion being smaller than the error introduced in the Navier-Stokes linearization. A basis for comparison is established by examining the evolution of the oscillatory field in both space and time. The corresponding boundary-layer behavior is also characterized over a range of oscillation frequencies and wall suction velocities. In general, the current solution is found to exhibit features that are consistent with the laminar theory of periodic flows. By comparison to the Sexl profile in nonporous tubes, the critically damped solution obtained here exhibits a slightly smaller overshoot and depth of penetration. These features may be attributed to the suction effect that tends to attract the shear layers closer to the wall.

  2. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.

  3. Can we reconstruct mean and eddy fluxes from Argo floats?

    NASA Astrophysics Data System (ADS)

    Chapman, Christopher; Sallée, Jean-Baptiste

    2017-12-01

    The capacity of deep velocity estimates provided by the Argo float array to reconstruct both mean and eddying quantities, such as the heat flux, is addressed using an idealized eddy resolving numerical model, designed to be representative of the Southern Ocean. The model is seeded with 450 "virtual" Argo floats, which are then advected by the model fields for 10 years. The role of temporal sampling, array density and length of the float experiment are then systematically investigated by comparing the reconstructed velocity, eddy kinetic energy and heat-flux from the virtual Argo floats with the "true" values from the model output. We find that although errors in all three quantities decrease with increasing temporal sampling rate, number of floats and experiment duration, the error approaches an asymptotic limit. Thus, as these parameters exceed this limit, only marginal reductions in the error are observed. The parameters of the real Argo array, when scaled to match those of the virtual Argo array, generally fall near to, or within, the asymptotic region. Using the numerical model, a method for the calculation of cross-stream heat-fluxes is demonstrated. This methodology is then applied to 5 years of Argo derived velocities using the ANDRO dataset of Ollitrault & Rannou (2013) in order to estimate the eddy heat flux at 1000m depth across the Polar Front in the Southern Ocean. The heat-flux is concentrated in regions downstream of large bathymetric features, consistent with the results of previous studies. 2 ± 0.5 TW of heat transport across the Polar Front at this depth is found, with more than 90% of that total concentrated in less than 20% of the total longitudes spanned by the front. Finally, the implications of this work for monitoring the ocean climate are discussed.

  4. Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: A comparison of new and existing tests.

    PubMed

    Debray, Thomas P A; Moons, Karel G M; Riley, Richard D

    2018-03-01

    Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect size and their standard error, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
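
    A sketch of an Egger-type asymmetry regression in which study precision is proxied by the number of observed events, in the spirit of the proposal above (an illustration, not the authors' exact test; statsmodels' freely estimated WLS scale plays the role of a multiplicative variance component):

        import numpy as np
        import statsmodels.api as sm

        def funnel_asymmetry_test(log_hr, events):
            # log_hr: per-study log hazard ratios; events: observed event counts
            precision = np.sqrt(events)              # assumed precision proxy
            X = sm.add_constant(1.0 / precision)     # intercept + inverse precision
            fit = sm.WLS(log_hr, X, weights=precision**2).fit()
            return fit.params[1], fit.pvalues[1]     # nonzero slope suggests asymmetry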

  5. Information Theoretic Studies and Assessment of Space Object Identification

    DTIC Science & Technology

    2014-03-24

    localization are contained in Ref. [5]. 1.7.1 A Bayesian MPE Based Analysis of 2D Point-Source-Pair Superresolution. In a second recently submitted paper [6], a...related problem of the optical superresolution (OSR) of a pair of equal-brightness point sources separated spatially by a distance (or angle) smaller...1403.4897 [physics.optics] (19 March 2014). 6. S. Prasad, “Asymptotics of Bayesian error probability and 2D pair superresolution,” submitted to Opt. Express

  6. Three-axis stabilization of spacecraft using parameter-independent nonlinear quaternion feedback

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Kelkar, Atul G.

    1994-01-01

    This paper considers the three-axis attitude stabilization problem for rigid spacecraft. A nonlinear control law which uses the feedback of the unit quaternion and the measured angular velocities is proposed and is shown to provide global asymptotic stability. The control law does not require knowledge of the system parameters, and is therefore robust to modeling errors. The significance of the control law is that it can be used for large-angle maneuvers with guaranteed stability.
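
    Parameter-independent quaternion feedback laws of this kind generically take the form below (a schematic sketch; gains and sign conventions vary, and this is not necessarily the paper's exact law):

        u \;=\; -\,K_{p}\,q_{v} \;-\; K_{d}\,\omega,
        \qquad
        K_{p} = K_{p}^{\mathsf{T}} \succ 0,\quad K_{d} = K_{d}^{\mathsf{T}} \succ 0,

    where q_v is the vector part of the attitude-error quaternion and ω the measured angular velocity; since only measured quantities and positive-definite gains appear, no inertia parameters enter the law.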

  7. Uniform analytic approximation of Wigner rotation matrices

    NASA Astrophysics Data System (ADS)

    Hoffmann, Scott E.

    2018-02-01

    We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
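
    As a rough numerical illustration of this kind of result, one commonly quoted small-angle, large-j form is d^j_{m1,m2}(θ) ≈ J_{m1−m2}((j + 1/2)θ), which can be checked against sympy's exact Wigner d-matrix (the paper's precise uniform approximation is not reproduced here):

        import numpy as np
        from scipy.special import jv
        from sympy.physics.quantum.spin import Rotation

        j, m1, m2 = 50, 3, 1
        for theta in (0.02, 0.05, 0.10):
            exact = float(Rotation.d(j, m1, m2, theta).doit())
            bessel = jv(m1 - m2, (j + 0.5) * theta)
            print(f"theta={theta:.2f}: d-matrix={exact:+.6f}, Bessel={bessel:+.6f}")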

  8. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  9. Testing the non-unity of rate ratio under inverse sampling.

    PubMed

    Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing

    2007-08-01

    Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuations of the size around the pre-chosen nominal level are allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

  10. Quasi-Likelihood Techniques in a Logistic Regression Equation for Identifying Simulium damnosum s.l. Larval Habitats Intra-cluster Covariates in Togo.

    PubMed

    Jacob, Benjamin G; Novak, Robert J; Toe, Laurent; Sanfo, Moussa S; Afriyie, Abena N; Ibrahim, Mohammed A; Griffith, Daniel A; Unnasch, Thomas R

    2012-01-01

    The standard methods for regression analyses of clustered riverine larval habitat data of Simulium damnosum s.l., a major black-fly vector of onchocerciasis, postulate models relating observational ecological-sampled parameter estimators to prolific habitats without accounting for residual intra-cluster error correlation effects. Generally, this correlation comes from two sources: (1) the design of the random effects and their assumed covariance from the multiple levels within the regression model; and (2) the correlation structure of the residuals. Unfortunately, inconspicuous errors in residual intra-cluster correlation estimates can overstate precision in forecasted S. damnosum s.l. riverine larval habitat explanatory attributes regardless of how they are treated (e.g., independent, autoregressive, Toeplitz, etc.). In this research, the geographical locations of multiple riverine-based S. damnosum s.l. larval ecosystem habitats sampled from 2 pre-established epidemiological sites in Togo were identified and recorded from July 2009 to June 2010. Initially the data were aggregated in PROC GENMOD. An agglomerative hierarchical residual cluster-based analysis was then performed. The sampled clustered study site data were then analyzed for statistical correlations using Monthly Biting Rates (MBR). Euclidean distance measurements and terrain-related geomorphological statistics were then generated in ArcGIS. A digital overlay was then performed, also in ArcGIS, using the georeferenced ground coordinates of high- and low-density clusters stratified by Annual Biting Rates (ABR). This data was overlain onto multitemporal sub-meter pixel resolution satellite data (i.e., QuickBird 0.61 m wavebands). Orthogonal spatial filter eigenvectors were then generated in SAS/GIS. Univariate and non-linear regression-based models (i.e., Logistic, Poisson and Negative Binomial) were also employed to determine probability distributions and to identify statistically significant parameter estimators from the sampled data. Thereafter, Durbin-Watson test statistics were used to test the null hypothesis that the regression residuals were not autocorrelated against the alternative that the residuals followed an autoregressive process, in PROC AUTOREG. Bayesian uncertainty matrices were also constructed employing normal priors for each of the sampled estimators in PROC MCMC. The residuals revealed both spatially structured and unstructured error effects in the high and low ABR-stratified clusters. The analyses also revealed that the estimators levels of turbidity and presence of rocks were statistically significant for the high-ABR-stratified clusters, while the estimators distance between habitats and floating vegetation were important for the low-ABR-stratified cluster. Varying- and constant-coefficient regression models, ABR-stratified GIS-generated clusters, sub-meter resolution satellite imagery, a robust residual intra-cluster diagnostic test, MBR-based histograms, eigendecomposition spatial filter algorithms and Bayesian matrices can enable accurate autoregressive estimation of latent uncertainty effects and other residual error probabilities (i.e., heteroskedasticity) for testing correlations between georeferenced S. damnosum s.l. riverine larval habitat estimators. The asymptotic distribution of the resulting residual-adjusted intra-cluster predictor error autocovariate coefficients can thereafter be established, while estimates of the asymptotic variance can lead to the construction of approximate confidence intervals for accurately targeting productive S. damnosum s.l. habitats based on spatiotemporal field-sampled count data.

  11. Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method

    NASA Astrophysics Data System (ADS)

    Bekhoucha, F.; Rechak, S.; Cadou, J. M.

    2016-12-01

    In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived from the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem and captures the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem to a standard form with real symmetric matrices. By using some techniques to resolve these singular problems by the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to the exact values. Results obtained by continuation are also compared to those computed with the discrete eigenvalue problem.

  12. Shared dosimetry error in epidemiological dose-response analyses

    DOE PAGES

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; ...

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
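
    The linear excess relative risk model referred to above has the generic form (a standard schematic, with z collecting background covariates and effect modifiers):

        \lambda(z, d) \;=\; \lambda_{0}(z)\,\bigl[\,1 \;+\; \beta\, d\,\varepsilon(z)\,\bigr],

    where λ0(z) models the background rates, β is the ERR slope per unit dose d, and ε(z) allows modifiers of the excess risk.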

  13. Asymptotics with a positive cosmological constant: I. Basic framework

    NASA Astrophysics Data System (ADS)

    Ashtekar, Abhay; Bonga, Béatrice; Kesavan, Aruna

    2015-01-01

    The asymptotic structure of the gravitational field of isolated systems has been analyzed in great detail in the case when the cosmological constant Λ is zero. The resulting framework lies at the foundation of research in diverse areas in gravitational science. Examples include: (i) positive energy theorems in geometric analysis; (ii) the coordinate invariant characterization of gravitational waves in full, nonlinear general relativity; (iii) computations of the energy-momentum emission in gravitational collapse and binary mergers in numerical relativity and relativistic astrophysics; and (iv) constructions of asymptotic Hilbert spaces to calculate S-matrices and analyze the issue of information loss in the quantum evaporation of black holes. However, by now observations have led to a strong consensus that Λ is positive in our universe. In this paper we show that, unfortunately, the standard framework does not extend from the Λ =0 case to the Λ \\gt 0 case in a physically useful manner. In particular, we do not have positive energy theorems, nor an invariant notion of gravitational waves in the nonlinear regime, nor asymptotic Hilbert spaces in dynamical situations of semi-classical gravity. A suitable framework to address these conceptual issues of direct physical importance is developed in subsequent papers.

  14. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focussed on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  15. Simple, explicitly time-dependent, and regular solutions of the linearized vacuum Einstein equations in Bondi-Sachs coordinates

    NASA Astrophysics Data System (ADS)

    Mädler, Thomas

    2013-05-01

    Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.

  16. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the size of the step to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
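
    The tolerance-proportionality idea, global error behaving like a rational power of the tolerance, can be checked empirically with any adaptive code (here SciPy's RK45 on y' = -y, purely as an illustration; the Gauss2 code itself is not reproduced):

        import numpy as np
        from scipy.integrate import solve_ivp

        def global_error(tol):
            sol = solve_ivp(lambda t, y: -y, (0.0, 5.0), [1.0],
                            rtol=tol, atol=tol, method="RK45")
            return abs(sol.y[0, -1] - np.exp(-5.0))   # exact solution is e^{-t}

        # fit E(tol) ~ C * tol**alpha on a log-log scale
        tols = np.array([1e-4, 1e-6, 1e-8])
        errs = np.array([global_error(t) for t in tols])
        alpha = np.polyfit(np.log(tols), np.log(errs), 1)[0]
        print(f"observed tolerance-proportionality exponent alpha ~ {alpha:.2f}")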

  17. Neuromotor Noise Is Malleable by Amplifying Perceived Errors

    PubMed Central

    Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar

    2016-01-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197

  18. Asymptotic Solutions for Optical Properties of Large Particles with Strong Absorption

    NASA Technical Reports Server (NTRS)

    Yang, Ping; Gao, Bo-Cai; Baum, Bryan A.; Hu, Yong X.; Wiscombe, Warren J.; Mishchenko, Michael I.; Winker, Dave M.; Nasiri, Shaima L.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    For scattering calculations involving nonspherical particles such as ice crystals, we show that the transverse wave condition is not applicable to the refracted electromagnetic wave in the context of geometric optics when absorption is involved. Either the TM wave condition (i.e., where the magnetic field of the refracted wave is transverse with respect to the wave direction) or the TE wave condition (i.e., where the electric field is transverse with respect to the propagating direction of the wave) may be assumed for the refracted wave in an absorbing medium to locally satisfy the electromagnetic boundary condition in the ray tracing calculation. The wave mode assumed for the refracted wave affects both the reflection and refraction coefficients. As a result, a nonunique solution for these coefficients is derived from the electromagnetic boundary condition. In this study we have identified the appropriate solution for the Fresnel reflection/refraction coefficients in light scattering calculation based on the ray tracing technique. We present the 3 x 2 refraction or transmission matrix that completely accounts for the inhomogeneity of the refracted wave in an absorbing medium. Using the Fresnel coefficients for an absorbing medium, we derive an asymptotic solution in an analytical format for the scattering properties of a general polyhedral particle. Numerical results are presented for hexagonal plates and columns with both preferred and random orientations. The asymptotic theory can produce reasonable accuracy in the phase function calculations in the infrared window region (wavelengths near 10 micron) if the particle size (in diameter) is on the order of 40 micron or larger. However, since strong absorption is assumed in the computation of the single-scattering albedo in the asymptotic theory, the single scattering albedo does not change with variation of the particle size. As a result, the asymptotic theory can lead to substantial errors in the computation of single-scattering albedo for small and moderate particle sizes. However, from comparison of the asymptotic results with the FDTD solution, it is expected that a convergence between the FDTD results and the asymptotic theory results can be reached when the particle size approaches 200 micron. We show that the phase function at side-scattering and backscattering angles is insensitive to particle shape if the random orientation condition is assumed. However, if preferred orientations are assumed for particles, the phase function has a strong dependence on scattering azimuthal angle. The single-scattering albedo also shows very strong dependence on the inclination angle of incident radiation with respect to the rotating axis for the preferred particle orientations.

  19. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore the inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in a closed-form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
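
    For the binary-outcome case, the style of correction can be sketched as follows (sensitivity and specificity assumed known, e.g. from validation data; this illustrates the idea rather than the paper's estimators or variance formulas):

        import numpy as np

        def ipw_ate_corrected(a, y_star, ps, sens, spec):
            # a: treatment indicator; y_star: error-prone binary outcome;
            # ps: estimated propensity scores P(A = 1 | X)
            mu1 = np.mean(a * y_star / ps)               # naive IPW mean, treated
            mu0 = np.mean((1 - a) * y_star / (1 - ps))   # naive IPW mean, control
            # invert E[Y*] = (1 - spec) + (sens + spec - 1) E[Y];
            # the additive (1 - spec) terms cancel in the difference
            return (mu1 - mu0) / (sens + spec - 1)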

  20. Estimation in a discrete tail rate family of recapture sampling models

    NASA Technical Reports Server (NTRS)

    Gupta, Rajan; Lee, Larry D.

    1990-01-01

    In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.

  1. Asymptotic stability estimates near an equilibrium point

    NASA Astrophysics Data System (ADS)

    Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2017-07-01

    We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.

  2. Control and synchronisation of a novel seven-dimensional hyperchaotic system with active control

    NASA Astrophysics Data System (ADS)

    Varan, Metin; Akgul, Akif

    2018-04-01

    In this work, an active control method is proposed for controlling and synchronising seven-dimensional (7D) hyperchaotic systems, and a particular seven-dimensional hyperchaotic system is considered for the implementation. The system is also investigated via time series, phase portraits and bifurcation diagrams. To assess the impact of the active controllers on the global asymptotic stability of the synchronisation and control errors, a Lyapunov function is used. Numerical analysis is carried out to reveal the effectiveness of the applied active control method, and the results are discussed.

  3. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.

  4. Adaptive selective relaying in cooperative free-space optical systems over atmospheric turbulence and misalignment fading channels.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2014-06-30

    In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol here analyzed is based on the selection of the optical path, source-destination or different source-relay links, with a greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with Nr relays, when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. A greater robustness for different link distances and pointing errors is corroborated by the obtained results if compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results.

  5. Attitude guidance and tracking for spacecraft with two reaction wheels

    NASA Astrophysics Data System (ADS)

    Biggs, James D.; Bai, Yuliang; Henninger, Helen

    2018-04-01

    This paper addresses the guidance and tracking problem for a rigid spacecraft using two reaction wheels (RWs). The guidance problem is formulated as an optimal control problem on the special orthogonal group SO(3). The optimal motion is solved analytically as a function of time and is used to reduce the original guidance problem to one of computing the minimum of a nonlinear function. A tracking control using two RWs is developed that extends previous singular quaternion stabilisation controls to tracking controls on the rotation group. The controller is proved to locally asymptotically track the generated reference motions using Lyapunov's direct method. Simulations of a 3U CubeSat demonstrate that this tracking control is robust to initial rotation errors and angular velocity errors in the controlled axis. For initial angular velocity errors in the uncontrolled axis and under significant disturbances, the control fails to track. However, combining the singular tracking control with a nano-magnetic torquer that simply damps the angular velocity in the uncontrolled axis is shown to provide a practical control method for tracking in the presence of disturbances and initial condition errors.

  6. Detecting Multiple Model Components with the Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Protassov, R. S.; van Dyk, D. A.

    2000-05-01

    The likelihood ratio test (LRT) and F-test, popularized in astrophysics by Bevington (Data Reduction and Error Analysis in the Physical Sciences) and Cash (1979, ApJ, 228, 939), do not (even asymptotically) adhere to their nominal χ2 and F distributions in many statistical tests commonly used in astrophysics. The many legitimate uses of the LRT (see, e.g., the examples given in Cash (1979)) notwithstanding, it can be impossible to compute the false positive rate of the LRT or related tests such as the F-test. For example, although Cash (1979) did not suggest the LRT for detecting a line profile in a spectral model, it has become common practice despite the lack of certain required mathematical regularity conditions. Contrary to common practice, the nominal distribution of the LRT statistic should not be used in these situations. In this paper, we characterize an important class of problems where the LRT fails, show the non-standard behavior of the test in this setting, and provide a Bayesian alternative to the LRT, i.e., posterior predictive p-values. We emphasize that there are many legitimate uses of the LRT in astrophysics, and even when the LRT is inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). We illustrate this point in our analysis of GRB 970508, which was studied by Piro et al. (1999, ApJ, 514, L73-L77).

  7. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, telesurgery requires high-speed, high-precision control to safeguard the patient's health. To obtain such performance, error constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several desirable properties, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence (i.e., the synchronization errors converge to zero as time goes to infinity) can be achieved with error constrained control, and finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for a teleoperation system with constrained position error. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the non-violation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.

  8. Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Yamazaki, Hiroshi

    2018-01-01

    We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.

  9. On formation of the asymptotic spectrum of delayed neutron emitters in measuring the VVER-1000 scram system effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shishkov, L. K., E-mail: slk@vver.kiae.ru; Zizin, M. N., E-mail: zizin_m@mail.ru

    The process of formation of an asymptotic distribution of the neutron flux density in the reactor systems after introducing different negative reactivities is considered. The impact of two factors after the reactivity introduction is evaluated: (1) nonuniformity of perturbation of core properties, on one hand, and (2) a sharp reduction in the density of prompt neutrons, which prevents the appearance of new delayed neutron emitters distributed in accordance with the “new” prompt neutron distribution, on the other hand. The results of calculations show that the errors of measuring the scram system effectiveness using the method of inverse solution of the kinetics equation are caused by the fact that, after the negative reactivity insertion, the sources of prompt and delayed neutrons have different spatial distributions. In the case of high negative reactivities, this difference remains while the system still has neutrons, which can be measured.

  10. Divergent expansion, Borel summability and three-dimensional Navier-Stokes equation.

    PubMed

    Costin, Ovidiu; Luo, Guo; Tanveer, Saleh

    2008-08-13

    We describe how the Borel summability of a divergent asymptotic expansion can be extended and applied to nonlinear partial differential equations (PDEs). While Borel summation does not apply for non-analytic initial data, the present approach generates an integral equation (IE) applicable to much more general data. We apply these concepts to the three-dimensional Navier-Stokes (NS) system and show how the IE approach can give rise to local existence proofs. In this approach, the global existence problem in three-dimensional NS systems, for specific initial condition and viscosity, becomes a problem of asymptotics in the variable p (dual to 1/t or some positive power of 1/t). Furthermore, the errors in numerical computations in the associated IE can be controlled rigorously, which is very important for nonlinear PDEs such as NS when solutions are not known to exist globally. Moreover, computation of the solution of the IE over an interval [0,p0] provides sharper control of its p → ∞ behaviour. Preliminary numerical computations give encouraging results.

  11. Arbitrarily small amounts of correlation for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de

    2013-11-15

    As our main result, we show that in order to achieve the randomness-assisted message and entanglement transmission capacities of a finite arbitrarily varying quantum channel it is not necessary that sender and receiver share (asymptotically perfect) common randomness. Rather, it is sufficient that they each have access to an unlimited number of uses of one part of a correlated bipartite source. This access might be restricted to an arbitrarily small (nonzero) fraction per channel use, without changing the main result. We investigate the notion of common randomness. It turns out that this is a very costly resource – generically, it cannot be obtained just by local processing of a bipartite source. This result underlines the importance of our main result. Also, the asymptotic equivalence of the maximal- and average-error criterion for classical message transmission over finite arbitrarily varying quantum channels is proven. At last, we prove a simplified symmetrizability condition for finite arbitrarily varying quantum channels.

  12. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular method for quantifying functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent; however, brain time series can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting for temporal auto-correlation. We considered two types of time series models, (1) the auto-regressive-moving-average model and (2) the nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are among those most commonly used in neuroscience studies, and we show that the respective asymptotic distributions share a unified expression. We have verified the validity of our method and shown in numerical experiments that it has sufficient statistical power for detecting true correlations. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
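
    As a rough illustration of this kind of correction, the sketch below applies the classical Bartlett-type effective-sample-size adjustment to the cross-correlation of two auto-correlated series. It is a generic stand-in, not the authors' estimator, and the lag cutoff max_lag is an arbitrary choice.

        import numpy as np
        from scipy import stats

        def corrected_corr_test(x, y, max_lag=20):
            """Test rho = 0 between two auto-correlated series using a
            Bartlett-type effective-sample-size correction."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            n = len(x)
            r = np.corrcoef(x, y)[0, 1]
            def acf(z, k):                       # lag-k sample autocorrelation
                z = z - z.mean()
                return np.dot(z[:-k], z[k:]) / np.dot(z, z)
            # under H0: Var(r) ~ (1/n) * (1 + 2 * sum_k rho_x(k) * rho_y(k))
            s = 1.0 + 2.0 * sum(acf(x, k) * acf(y, k) for k in range(1, max_lag + 1))
            n_eff = n / max(s, 1e-12)            # effective number of samples
            z = np.arctanh(r) * np.sqrt(max(n_eff - 3.0, 1.0))   # Fisher z-test
            return r, n_eff, 2.0 * stats.norm.sf(abs(z))

        rng = np.random.default_rng(0)
        def ar1(phi, n):                         # AR(1) series with parameter phi
            out = np.zeros(n)
            for t in range(1, n):
                out[t] = phi * out[t - 1] + rng.standard_normal()
            return out
        x, y = ar1(0.9, 500), ar1(0.9, 500)      # independent but auto-correlated
        print(corrected_corr_test(x, y))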

  13. Long-time asymptotic solution structure of Camassa-Holm equation subject to an initial condition with non-zero reflection coefficient of the scattering data

    NASA Astrophysics Data System (ADS)

    Chang, Chueh-Hsin; Yu, Ching-Hao; Sheu, Tony Wen-Hann

    2016-10-01

    In this article, we numerically revisit the long-time solution behavior of the Camassa-Holm equation u_t - u_{xxt} + 2u_x + 3uu_x = 2u_x u_{xx} + uu_{xxx}. The finite difference solution of this integrable equation is sought subject to the newly derived initial condition with a Delta-function potential. Our strategy for deriving a phase-accurate finite difference scheme in the time domain is to reduce the numerical dispersion error by minimizing the derived discrepancy between the numerical and exact modified wavenumbers. Additionally, to conserve the Hamiltonians of this completely integrable equation, a symplecticity-preserving time-stepping scheme is developed. Based on the solutions computed from the temporally symplecticity-preserving and spatially wavenumber-preserving schemes, the long-time asymptotic character of the CH solution can be accurately depicted in distinct regions of the space-time domain, each featuring quantitatively very different solution behaviors. We also aim to numerically confirm that in the two transition zones the long-time asymptotics can indeed be described in terms of the theoretically derived Painlevé transcendents. Another attempt of this study is to numerically exhibit a close connection between the presently predicted finite-difference solution and the solution of the Painlevé ordinary differential equation of type II in the two transition zones.

  14. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  15. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  16. Angular-momentum--mass inequality for axisymmetric black holes.

    PubMed

    Dain, Sergio

    2006-03-17

    The inequality √J ≤ m, between the angular momentum J and the mass m, is proved for axisymmetric black holes.

  17. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.

  18. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.

  19. Robust preview control for a class of uncertain discrete-time systems with time-varying delay.

    PubMed

    Li, Li; Liao, Fucheng

    2018-02-01

    This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete-time system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This transforms the tracking problem into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and the linear matrix inequality (LMI) technique. The method proposed in this paper not only resolves the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    PubMed

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
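
    A rate-based caricature of this error-feedback learning scheme is sketched below for a linear target system: the network state is nudged onto the target trajectory by fixed negative error feedback, and each weight is updated from its presynaptic activity times the projected error. The spiking machinery, architecture and gains of the published FOLLOW implementation are deliberately replaced by illustrative stand-ins.

        import numpy as np

        A_true = np.array([[0.0, 1.0],           # target dynamics: damped oscillator
                           [-1.0, -0.2]])
        W = np.zeros((2, 2))                     # learned recurrent weights
        k, eta, dt = 10.0, 2.0, 1e-3             # feedback gain, learning rate, step
        x, xh = np.array([1.0, 0.0]), np.zeros(2)
        for t in range(200000):
            u = 0.5 * np.sin(0.7 * t * dt)       # persistently exciting input
            e = x - xh                           # output error, fed back to network
            x = x + dt * (A_true @ x + np.array([0.0, u]))
            xh = xh + dt * (W @ xh + np.array([0.0, u]) + k * e)
            W += dt * eta * np.outer(e, xh)      # local rule: error x presynaptic rate
        print("weight error:", np.linalg.norm(W - A_true))   # shrinks over training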

  1. Early-Time Solution of the Horizontal Unconfined Aquifer in the Buildup Phase

    NASA Astrophysics Data System (ADS)

    Gravanis, Elias; Akylas, Evangelos

    2017-10-01

    We derive the early-time solution of the Boussinesq equation for the horizontal unconfined aquifer in the buildup phase under constant recharge and zero inflow. The solution is expressed as a power series in a suitable similarity variable, which is constructed so as to satisfy the boundary conditions at both ends of the aquifer; that is, it is a polynomial approximation of the exact solution. The series turns out to be asymptotic, and it is regularized by resummation techniques of the kind used to assign values to divergent series. The outflow rate in this regime is linear in time, and the (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive; hence, so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.
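
    As a generic illustration of regularizing a divergent asymptotic series by resummation (the aquifer series itself is not reproduced here), the sketch below builds a diagonal Padé approximant for the classical Euler series sum_n (-1)^n n! x^n and compares it with the known Borel-sum value.

        import math
        import numpy as np

        def pade(c, m, n):
            """[m/n] Pade approximant of a truncated power series c[0..m+n]."""
            c = np.asarray(c[:m + n + 1], float)
            # denominator coefficients b_1..b_n from the standard linear system
            A = np.array([[c[m + i - j] for j in range(1, n + 1)]
                          for i in range(1, n + 1)])
            b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
            # numerator coefficients a_0..a_m from the Cauchy product of c and b
            a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                          for k in range(m + 1)])
            return lambda x: np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

        coeffs = [(-1) ** k * math.factorial(k) for k in range(11)]  # Euler series
        f = pade(coeffs, 5, 5)
        print(f(0.1))   # ~0.9156, close to the Borel sum at x = 0.1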

  2. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically. PMID:29173280

  3. Caffeine enhances real-world language processing: evidence from a proofreading task.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A

    2012-03-01

    Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  4. Design of asymptotic estimators: an approach based on neural networks and nonlinear programming.

    PubMed

    Alessandri, Angelo; Cervellera, Cristiano; Sanguineti, Marcello

    2007-01-01

    A methodology to design state estimators for a class of nonlinear continuous-time dynamic systems that is based on neural networks and nonlinear programming is proposed. The estimator has the structure of a Luenberger observer with a linear gain and a parameterized (in general, nonlinear) function, whose argument is an innovation term representing the difference between the current measurement and its prediction. The problem of the estimator design consists in finding the values of the gain and of the parameters that guarantee the asymptotic stability of the estimation error. Toward this end, if a neural network is used to take on this function, the parameters (i.e., the neural weights) are chosen, together with the gain, by constraining the derivative of a quadratic Lyapunov function for the estimation error to be negative definite on a given compact set. It is proved that it is sufficient to impose the negative definiteness of such a derivative only on a suitably dense grid of sampling points. The gain is determined by solving a Lyapunov equation. The neural weights are searched for via nonlinear programming by minimizing a cost penalizing grid-point constraints that are not satisfied. Techniques based on low-discrepancy sequences are applied to deal with a small number of sampling points, and, hence, to reduce the computational burden required to optimize the parameters. Numerical results are reported and comparisons with those obtained by the extended Kalman filter are made.

  5. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
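
    For the two-state case described here, the Chernoff exponent can be evaluated numerically. The sketch below (a straightforward illustration, not code from the paper) computes xi = -log min_{0<=s<=1} Tr(rho^s sigma^(1-s)) for a pair of qubit density matrices.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def mat_pow(rho, s):
            """Fractional power of a positive semi-definite matrix."""
            w, v = np.linalg.eigh(rho)
            w = np.clip(w, 0.0, None)
            return (v * w ** s) @ v.conj().T

        def chernoff_distance(rho, sigma):
            """xi = -log min_s Tr(rho^s sigma^(1-s)); the optimal discrimination
            error for n copies decays asymptotically like exp(-n * xi)."""
            f = lambda s: np.trace(mat_pow(rho, s) @ mat_pow(sigma, 1.0 - s)).real
            res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
            return -np.log(res.fun)

        rho = np.array([[0.9, 0.0], [0.0, 0.1]])      # two full-rank qubit states
        sigma = np.array([[0.8, 0.1], [0.1, 0.2]])
        print(chernoff_distance(rho, sigma))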

  6. Essays in financial economics and econometrics

    NASA Astrophysics Data System (ADS)

    La Spada, Gabriele

    Chapter 1 (my job market paper) asks the following question: Do asset managers reach for yield because of competitive pressures in a low rate environment? I propose a tournament model of money market funds (MMFs) to study this issue. I show that funds with different costs of default respond differently to changes in interest rates, and that it is important to distinguish the role of risk-free rates from that of risk premia. An increase in the risk premium leads funds with lower default costs to increase risk-taking, while funds with higher default costs reduce risk-taking. Without changes in the premium, low risk-free rates reduce risk-taking. My empirical analysis shows that these predictions are consistent with the risk-taking of MMFs during the 2006-2008 period. Chapter 2, co-authored with Fabrizio Lillo and published in Studies in Nonlinear Dynamics and Econometrics (2014), studies the effect of round-off error (or discretization) on stationary Gaussian long-memory processes. For large lags, the autocovariance is rescaled by a factor smaller than one, and we compute this factor exactly; hence, the discretized process has the same Hurst exponent as the underlying one. We show that in the presence of round-off error, two common estimators of the Hurst exponent, the local Whittle (LW) estimator and detrended fluctuation analysis (DFA), are severely negatively biased in finite samples. We derive conditions for consistency and asymptotic normality of the LW estimator applied to discretized processes and compute the asymptotic properties of the DFA for generic long-memory processes that encompass discretized processes. Chapter 3, co-authored with Fabrizio Lillo, studies the effect of round-off error on integrated Gaussian processes with possibly correlated increments. We derive the variance and kurtosis of the realized increment process in the limit of both "small" and "large" round-off errors, and its autocovariance for large lags. We propose novel estimators for the variance and lag-one autocorrelation of the underlying, unobserved increment process. We also show that for fractionally integrated processes, the realized increments have the same Hurst exponent as the underlying ones, but the LW estimator applied to the realized series is severely negatively biased in medium-sized samples.

  7. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T² statistic. Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
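
    The inflation mechanism is visible in the textbook two-group ANCOVA formula, where the covariate mean difference enters the standard error of the adjusted mean difference through a (xbar1 - xbar2)^2 / Exx term. The sketch below computes this standard error directly on made-up data; it follows standard ANCOVA algebra rather than the paper's Hotelling's T² development.

        import numpy as np

        def adjusted_diff_se(y1, x1, y2, x2):
            """SE of the covariate-adjusted mean difference (common-slope ANCOVA)."""
            n1, n2 = len(y1), len(y2)
            xb1, xb2 = np.mean(x1), np.mean(x2)
            Exx = np.sum((x1 - xb1) ** 2) + np.sum((x2 - xb2) ** 2)  # within-group SS
            Exy = (np.sum((x1 - xb1) * (y1 - np.mean(y1)))
                   + np.sum((x2 - xb2) * (y2 - np.mean(y2))))
            bslope = Exy / Exx                                       # pooled slope
            rss = (np.sum((y1 - np.mean(y1) - bslope * (x1 - xb1)) ** 2)
                   + np.sum((y2 - np.mean(y2) - bslope * (x2 - xb2)) ** 2))
            s2 = rss / (n1 + n2 - 3)                                 # residual MS
            # the (xb1 - xb2)^2 / Exx term is the covariate-imbalance inflation
            return np.sqrt(s2 * (1 / n1 + 1 / n2 + (xb1 - xb2) ** 2 / Exx))

        rng = np.random.default_rng(1)
        x1, x2 = rng.normal(0, 1, 20), rng.normal(1, 1, 20)  # imbalanced covariate
        y1 = 2 + 0.8 * x1 + rng.normal(0, 1, 20)
        y2 = 3 + 0.8 * x2 + rng.normal(0, 1, 20)
        print(adjusted_diff_se(y1, x1, y2, x2))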

  8. Early-time solution of the horizontal unconfined aquifer in the build-up phase

    NASA Astrophysics Data System (ADS)

    Gravanis, Elias; Akylas, Evangelos

    2017-04-01

    The Boussinesq equation is a dynamical equation for the free surface of saturated subsurface flows over an impervious bed. The Boussinesq equation is non-linear. The non-linearity comes from the reduction of the dimensionality of the problem: the flow is assumed to be vertically homogeneous, so the flow rate through a cross section of the flow is proportional to the free surface height times the hydraulic gradient, which is assumed to be equal to the slope of the free surface (Dupuit approximation). In general, 'vertically' means normally to the bed; combining the Dupuit approximation with the continuity equation leads to the Boussinesq equation. There are very few transient exact solutions. Self-similar solutions have been constructed in the past by various authors. A power series type of solution was derived for a self-similar Boussinesq equation by Barenblatt in 1990, and that type of solution has generated a certain amount of literature. For the unconfined flow case with zero recharge rate, Boussinesq derived for the horizontal aquifer an exact solution assuming separation of variables; this is actually an exact asymptotic solution of the horizontal aquifer recession phase for late times. The kinematic wave is an interesting solution obtained by dropping the non-linear term in the Boussinesq equation. Although it is an approximate solution, and holds well only for small values of the Henderson and Wooding λ parameter (that is, for steep slopes, high conductivity or small recharge rate), it becomes less and less approximate for smaller values of the parameter; that is, it is asymptotically exact with respect to that parameter. In the present work we consider the case of unconfined subsurface flow over a horizontal bed in the build-up phase under constant recharge rate. This is a case with an infinite Henderson and Wooding parameter, that is, the limiting case where the non-linear term is present in the Boussinesq equation while the linear spatial derivative term goes away. Nonetheless, no analogue of the kinematic wave or the Boussinesq separable solution exists in this case. The late-time state of the build-up phase under constant recharge rate is simply the steady state solution. Our aim is to construct the early-time asymptotic solution of this problem. The solution is expressed as a power series in a suitable similarity variable, which is constructed so as to satisfy the boundary conditions at both ends of the aquifer; that is, it is a polynomial approximation of the exact solution. The series turns out to be asymptotic, and it is regularized by resummation techniques of the kind used to assign values to divergent series. The outflow rate in this regime is linear in time, and the (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive; hence, so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.

  9. Softened gravity and the extension of the standard model up to infinite energy

    NASA Astrophysics Data System (ADS)

    Giudice, Gian F.; Isidori, Gino; Salvio, Alberto; Strumia, Alessandro

    2015-02-01

    Attempts to solve naturalness by having the weak scale as the only breaking of classical scale invariance have to deal with two severe difficulties: gravity and the absence of Landau poles. We show that solutions to the first problem require premature modifications of gravity at scales no larger than 10^11 GeV, while the second problem calls for many new particles at the weak scale. To build models that fulfill these properties, we classify 4-dimensional Quantum Field Theories that satisfy Total Asymptotic Freedom (TAF): the theory holds up to infinite energy, where all coupling constants flow to zero. We develop a technique to identify such theories and determine their low-energy predictions. Since the Standard Model turns out to be asymptotically free only under the unphysical conditions g_1 = 0, M_t = 186 GeV, M_τ = 0, M_h = 163 GeV, we explore some of its weak-scale extensions that satisfy the requirements for TAF.

  10. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
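
    The paper derives analytic standard errors; as a simulation stand-in under the same multinomial assumption, one can resample the score frequencies parametrically and recompute the statistic, as in the hypothetical sketch below for a percentile rank. (The R function check.norms implements the analytic version.)

        import numpy as np

        def norm_stat_se(counts, stat, n_boot=2000, seed=0):
            """Parametric-bootstrap SE of a norm statistic, resampling the score
            frequencies from Multinomial(N, observed proportions)."""
            counts = np.asarray(counts)
            N = counts.sum()
            rng = np.random.default_rng(seed)
            reps = [stat(rng.multinomial(N, counts / N)) for _ in range(n_boot)]
            return np.std(reps, ddof=1)

        counts = np.array([5, 12, 30, 41, 25, 7])   # frequencies of scores 0..5
        def percentile_rank(c, score=3):            # mid-probability convention
            return 100.0 * (c[:score].sum() + 0.5 * c[score]) / c.sum()
        print(percentile_rank(counts), "+/-", norm_stat_se(counts, percentile_rank))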

  11. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  12. Control of nonlinear systems with applications to constrained robots and spacecraft attitude stabilization

    NASA Technical Reports Server (NTRS)

    Krishnan, Hariharan

    1993-01-01

    This thesis is organized in two parts. In Part 1, control systems described by a class of nonlinear differential and algebraic equations are introduced. A procedure for local stabilization based on a local state realization is developed. An alternative approach to local stabilization is developed based on a classical linearization of the nonlinear differential-algebraic equations. A theoretical framework is established for solving a tracking problem associated with the differential-algebraic system. First, a simple procedure is developed for the design of a feedback control law which ensures, at least locally, that the tracking error in the closed loop system lies within any given bound if the reference inputs are sufficiently slowly varying. Next, by imposing additional assumptions, a procedure is developed for the design of a feedback control law which ensures that the tracking error in the closed loop system approaches zero exponentially for reference inputs which are not necessarily slowly varying. The control design methodologies are used for simultaneous force and position control in constrained robot systems. The differential-algebraic equations are shown to characterize the slow dynamics of a certain nonlinear control system in nonstandard singularly perturbed form. In Part 2, the attitude stabilization (reorientation) of a rigid spacecraft using only two control torques is considered. First, the case of momentum wheel actuators is considered. The complete spacecraft dynamics are not controllable. However, the spacecraft dynamics are small time locally controllable in a reduced sense. The reduced spacecraft dynamics cannot be asymptotically stabilized using continuous feedback, but a discontinuous feedback control strategy is constructed. Next, the case of gas jet actuators is considered. If the uncontrolled principal axis is not an axis of symmetry, the complete spacecraft dynamics are small time locally controllable. However, the spacecraft attitude cannot be asymptotically stabilized using continuous feedback, but a discontinuous stabilizing feedback control strategy is constructed. If the uncontrolled principal axis is an axis of symmetry, the complete spacecraft dynamics cannot be stabilized. However, the spacecraft dynamics are small time locally controllable in a reduced sense. The reduced spacecraft dynamics cannot be asymptotically stabilized using continuous feedback, but again a discontinuous feedback control strategy is constructed.

  13. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  14. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, such as the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.

  15. Multisynchronization of chaotic oscillators via nonlinear observer approach.

    PubMed

    Aguilar-López, Ricardo; Martínez-Guerra, Rafael; Mata-Machuca, Juan L

    2014-01-01

    The goal of this work is to synchronize a class of chaotic oscillators in a master-slave scheme, under different initial conditions, considering several slaves systems. The Chen oscillator is employed as a benchmark model and a nonlinear observer is proposed to reach synchronicity between the master and the slaves' oscillators. The proposed observer contains a proportional and integral form of a bounded function of the synchronization error in order to provide asymptotic synchronization with a satisfactory performance. Numerical experiments were carried out to show the operation of the considered methodology.
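
    A minimal numerical sketch of such a scheme is given below: a slave copy of the Chen oscillator is driven by a proportional plus integral feedback of a bounded (tanh) function of the synchronization error. The gains and step size are illustrative guesses rather than the paper's design, so convergence for this particular tuning should be checked numerically.

        import numpy as np

        a, b, c = 35.0, 3.0, 28.0                   # standard Chen parameters
        def chen(s):
            x, y, z = s
            return np.array([a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z])

        dt, steps, kp, ki = 1e-4, 200000, 60.0, 5.0
        m = np.array([1.0, 1.0, 1.0])               # master state
        s = np.array([-5.0, 2.0, 10.0])             # slave state, different IC
        w = np.zeros(3)                             # integral of tanh(error)
        for _ in range(steps):
            e = m - s
            w += dt * np.tanh(e)
            m = m + dt * chen(m)
            # proportional + integral feedback of a bounded error function
            s = s + dt * (chen(s) + kp * np.tanh(e) + ki * w)
        print("residual synchronization error:", np.linalg.norm(m - s))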

  16. Multisynchronization of Chaotic Oscillators via Nonlinear Observer Approach

    PubMed Central

    Aguilar-López, Ricardo; Martínez-Guerra, Rafael; Mata-Machuca, Juan L.

    2014-01-01

    The goal of this work is to synchronize a class of chaotic oscillators in a master-slave scheme, under different initial conditions, considering several slaves systems. The Chen oscillator is employed as a benchmark model and a nonlinear observer is proposed to reach synchronicity between the master and the slaves' oscillators. The proposed observer contains a proportional and integral form of a bounded function of the synchronization error in order to provide asymptotic synchronization with a satisfactory performance. Numerical experiments were carried out to show the operation of the considered methodology. PMID:24578671

  17. A non-asymptotic homogenization theory for periodic electromagnetic structures.

    PubMed

    Tsukerman, Igor; Markel, Vadim A

    2014-08-08

    Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions.

  18. Coherent detection of frequency-hopped quadrature modulations in the presence of jamming. I - QPSK and QASK modulations

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.

    1981-01-01

    This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.
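
    The linear dependence can be reproduced in the simplest special case (coherent BPSK/QPSK bit error rate, background noise neglected): a noise jammer spreading its power over a fraction rho of the band gives Pb(rho) = rho * Q(sqrt(2*rho*Eb/NJ)), and maximizing over rho yields the classical inverse-linear worst-case law. The sketch below is this textbook calculation, not the paper's QPSK/QASK analysis.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        def worst_case_pb(ebnj_db):
            """Worst-case partial-band noise jamming fraction and bit error rate."""
            ebnj = 10.0 ** (ebnj_db / 10.0)
            neg_pb = lambda rho: -rho * norm.sf(np.sqrt(2.0 * rho * ebnj))
            res = minimize_scalar(neg_pb, bounds=(1e-6, 1.0), method="bounded")
            return res.x, -res.fun

        for db in (5, 10, 15, 20):
            rho, pb = worst_case_pb(db)
            print(f"Eb/NJ = {db:2d} dB: rho* = {rho:.3f}, worst-case Pb = {pb:.2e}")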

  19. One Step Quantum Key Distribution Based on EPR Entanglement.

    PubMed

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-06-30

    A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves its maneuverability. Moreover, a security analysis is given: a simple type of eavesdropper's attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol, which involves two steps, the proposed protocol does not need to store the qubit and involves only one step.

  20. Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, Univ. Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex

    2011-04-15

    In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.

  1. Security of coherent-state quantum cryptography in the presence of Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2007-08-15

    We investigate the security against collective attacks of a continuous variable quantum key distribution scheme in the asymptotic key limit for a realistic setting. The quantum channel connecting the two honest parties is assumed to be lossy and imposes Gaussian noise on the observed quadrature distributions. Secret key rates are given for direct and reverse reconciliation schemes including post-selection in the collective attack scenario. The effect of a nonideal error correction and two-way communication in the classical post-processing step is also taken into account.

  2. Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards

    DTIC Science & Technology

    2017-07-01

    conform to one of the two waveform figures specified in the annex. [Figure: 50 percent duty cycle PAM with amplitude synchronization.] A 20-25 percent deviation reserved for pulse synchronization is recommended.

  3. Testing jumps via false discovery rate control.

    PubMed

    Yen, Yu-Min

    2013-01-01

    Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling the type I error of each individual test can still produce a large proportion of erroneous rejections overall, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. From simulations, we examine the relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices with high-frequency data.
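
    The BNS statistic itself is not reproduced here, but the FDR-controlling step is simple. Below is a minimal sketch of the Benjamini-Hochberg procedure applied to a hypothetical vector of jump-test p-values.

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """BH step-up: reject the k smallest p-values, where k is the largest
            i such that p_(i) <= (i/m) * q."""
            p = np.asarray(pvals, float)
            m = len(p)
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            reject = np.zeros(m, bool)
            reject[order[:k]] = True
            return reject

        pvals = [0.001, 0.008, 0.04, 0.12, 0.5, 0.9]   # e.g., one test per day
        print(benjamini_hochberg(pvals, q=0.05))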

  4. Verification of floating-point software

    NASA Technical Reports Server (NTRS)

    Hoover, Doug N.

    1990-01-01

    Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations are perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
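
    A bisection zero-finder of the kind used as the illustration is sketched below (a generic sketch, not the verified Ariel C program). The stopping rule makes the floating point issue concrete: the iteration is exact in real arithmetic, and only round-off limits the final accuracy.

        def bisect(f, a, b, max_iter=200):
            """Find a zero of f in [a, b], assuming f(a) and f(b) differ in sign."""
            fa = f(a)
            if fa * f(b) > 0:
                raise ValueError("f(a) and f(b) must have opposite signs")
            for _ in range(max_iter):
                m = 0.5 * (a + b)
                if m <= a or m >= b:   # midpoint no longer representable: round-off
                    break              # now bounds the achievable accuracy
                if fa * f(m) <= 0:
                    b = m              # zero lies in [a, m]
                else:
                    a, fa = m, f(m)    # zero lies in [m, b]
            return 0.5 * (a + b)

        print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))   # ~1.4142135623730951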

  5. Observer-based output consensus of a class of heterogeneous multi-agent systems with unmatched disturbances

    NASA Astrophysics Data System (ADS)

    Zhang, Jiancheng; Zhu, Fanglai

    2018-03-01

    In this paper, the output consensus of a class of linear heterogeneous multi-agent systems with unmatched disturbances is considered. Firstly, based on the relative output information among neighboring agents, we propose an asymptotic sliding-mode-based consensus control scheme, under which the output consensus error can converge to zero by removing the disturbances from the output channels. Secondly, in order to reach the consensus goal, we design a novel high-order unknown input observer for each agent. It can estimate not only each agent's states and disturbances, but also the disturbances' high-order derivatives, which are required in the aforementioned control scheme. The observer-based consensus control laws and the convergence analysis of the consensus error dynamics are given. Finally, a simulation example is provided to verify the validity of our methods.

  6. MRAC Revisited: Guaranteed Performance with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmaje

    2010-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that the approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in transient, by a proper selection of the error feedback gain. The method prevents the generation of high-frequency oscillations that are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
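
    A scalar simulation of the idea is sketched below: the tracking error is fed back into the reference model, which damps the error dynamics and bounds the transient even for a large adaptation rate. The plant and all gains are illustrative; the paper's design guideline for selecting the feedback gain is not reproduced.

        import numpy as np

        dt, T = 1e-3, 10.0
        a_true = 2.0                     # unknown unstable plant: xdot = a*x + u
        am, gamma, ke = 4.0, 50.0, 20.0  # reference pole, adaptation rate, feedback
        x = xm = 0.0
        th_x = th_r = 0.0                # adaptive feedback / feedforward gains
        for k in range(int(T / dt)):
            r = 1.0 if (k * dt) % 4 < 2 else -1.0    # square-wave command
            e = x - xm
            u = th_x * x + th_r * r
            # modified reference model: the ke*e term pulls xm toward the plant,
            # adding -(am + ke) damping to the tracking-error dynamics
            xm += dt * (-am * xm + am * r + ke * e)
            x += dt * (a_true * x + u)
            th_x += dt * (-gamma * e * x)            # Lyapunov-based updates
            th_r += dt * (-gamma * e * r)
        print("final error:", x - xm, "gains:", th_x, th_r)   # ideal: -6.0, 4.0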

  7. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-01-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
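
    The steady-state ingredient of the approximation can be computed directly for a time-invariant model: the asymptotic prediction error covariance solves a discrete algebraic Riccati equation, as in the small sketch below. The model matrices are arbitrary stand-ins, and the state-reduction step of the paper is omitted.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # model: x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k
        A = np.array([[1.0, 1.0], [0.0, 1.0]])
        C = np.array([[1.0, 0.0]])
        Q = 0.01 * np.eye(2)            # process noise covariance
        R = np.array([[1.0]])           # measurement noise covariance

        # asymptotic error covariance via the filtering DARE (by duality),
        # and the corresponding time-invariant Kalman gain
        P = solve_discrete_are(A.T, C.T, Q, R)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        print("steady-state covariance:\n", P, "\ngain:\n", K)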

  8. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady-state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied, with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.

  9. Crack Turning and Arrest Mechanisms for Integral Structure

    NASA Technical Reports Server (NTRS)

    Pettit, Richard; Ingraffea, Anthony

    1999-01-01

    In the course of several years of research efforts to predict crack turning and flapping in aircraft fuselage structures and other problems related to crack turning, the second-order maximum tangential stress theory has been identified as the theory most capable of predicting the observed test results. This theory requires knowledge of a material-specific characteristic length, and also a computation of the stress intensity factors and the T-stress, the second-order term in the asymptotic stress field in the vicinity of the crack tip. A characteristic length, r(sub c), is proposed for ductile materials pertaining to the onset of plastic instability, as opposed to the void-spacing theories espoused by previous investigators. For the plane stress case, an approximate estimate of r(sub c) is obtained from the asymptotic field for strain-hardening materials given by Hutchinson, Rice and Rosengren (HRR). A previous study using high-order finite element methods to calculate T-stresses by contour integrals achieved extremely high accuracy for selected test specimen geometries, and a theoretical error estimation parameter was defined. In the present study, it is shown that a large portion of the error in finite element computations of both K and T is systematic, and can be corrected after the initial solution if the finite element implementation utilizes a similar crack-tip discretization scheme for all problems. This scheme is applied, for two-dimensional problems, to a p-version finite element code, showing that sufficiently accurate values of both K(sub I) and T can be obtained with fairly low-order elements if correction is used. T-stress correction coefficients are also developed for the singular crack-tip rosette utilized in the adaptive-mesh finite element code FRANC2D, and shown to reduce the error in the computed T-stress significantly. Stress intensity factor correction was not attempted for FRANC2D because it employs a highly accurate quarter-point scheme to obtain stress intensity factors.

  10. Curve numbers for no-till: field data versus standard tables

    USDA-ARS?s Scientific Manuscript database

    The Curve Number procedure developed by the Soil Conservation Service (now the Natural Resources Conservation Service) in the mid-1950s for estimating direct runoff from rainstorms has not been extensively tested in cropping systems under no-till. Analysis of CNs using the frequency matching and asymptotic ...

  11. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
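
    For illustration, a minimal numpy sketch of the two kinds of standard error estimates the record compares, applied to a ratio-of-means estimator on hypothetical plot data (the actual mapped-plot inventory design is more involved):

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical plot-level data: y = attribute of interest, x = auxiliary size.
        x = rng.gamma(shape=2.0, scale=5.0, size=200)
        y = 3.0 * x + rng.normal(0.0, 4.0, size=200)

        rom = y.mean() / x.mean()          # ratio-of-means estimate

        # Bootstrap approximation of its standard error: resample plots with
        # replacement and recompute the estimator each time.
        B = 2000
        idx = rng.integers(0, len(x), size=(B, len(x)))
        boot = y[idx].mean(axis=1) / x[idx].mean(axis=1)
        se_boot = boot.std(ddof=1)

        # Traditional linearized (Taylor series) variance estimator for comparison.
        n = len(x)
        se_taylor = np.sqrt(np.sum((y - rom * x) ** 2) / (n * (n - 1))) / x.mean()
        print(rom, se_boot, se_taylor)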

  12. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.

  13. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    PubMed

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, because WSD is the standard error of measurement of only a single assessment, it is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without limit to the extent that differential practice effects occur, which can even render RCI(WSD) unable to detect any reliable change.
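
    A small sketch of the contrast (hypothetical numbers; the conventional index is assumed to use the Jacobson-Truax standard error of the difference, sqrt(2) times the standard error of measurement):

        import numpy as np

        # Conventional reliable change index (Jacobson-Truax style):
        # SEM = SD * sqrt(1 - r_xx); SE of the difference = sqrt(2) * SEM.
        sd_baseline = 10.0         # hypothetical test SD
        r_xx = 0.85                # hypothetical test-retest reliability
        sem = sd_baseline * np.sqrt(1.0 - r_xx)
        se_diff = np.sqrt(2.0) * sem

        x1, x2 = 48.0, 57.0        # hypothetical pre- and post-scores
        rci = (x2 - x1) / se_diff  # |RCI| > 1.96 -> reliable change at alpha = .05

        # WSD from a replicated design: root mean within-subject variance.
        scores = np.array([[50, 52], [47, 49], [61, 58], [55, 60]], dtype=float)
        wsd = np.sqrt(np.mean(np.var(scores, axis=1, ddof=1)))

        # Using WSD directly as the standard error, as critiqued in the record:
        rci_wsd = (x2 - x1) / wsd
        print(rci, rci_wsd)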

  14. A theory of stationarity and asymptotic approach in dissipative systems

    NASA Astrophysics Data System (ADS)

    Rubel, Michael Thomas

    2007-05-01

    The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?). To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the two points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.

  15. Localized overlap algorithm for unexpanded dispersion energies

    NASA Astrophysics Data System (ADS)

    Rob, Fazle; Misquitta, Alston J.; Podeszwa, Rafał; Szalewicz, Krzysztof

    2014-03-01

    A first-principles-based, linearly scaling algorithm has been developed for calculations of dispersion energies from frequency-dependent density susceptibility (FDDS) functions with account of charge-overlap effects. The transition densities in FDDSs are fitted by a set of auxiliary atom-centered functions. The terms in the dispersion energy expression involving products of such functions are computed using either the unexpanded (exact) formula or inexpensive asymptotic expansions, depending on the location of these functions relative to the dimer configuration. This approach leads to significant savings of computational resources. In particular, for a dimer consisting of two elongated monomers with 81 atoms each in a head-to-head configuration, the most favorable case for our algorithm, a 43-fold speedup has been achieved while the approximate dispersion energy differs by less than 1% from that computed using the standard unexpanded approach. In contrast, the dispersion energy computed from the distributed asymptotic expansion differs by tens of percent in the van der Waals minimum region. A further increase of the size of each monomer would add only a small cost, since all the additional terms would be computed from the asymptotic expansion.

  16. Universal dual amplitudes and asymptotic expansions for gg→ H and H→ γ γ in four dimensions

    NASA Astrophysics Data System (ADS)

    Driencourt-Mangin, Félix; Rodrigo, Germán; Sborlini, Germán F. R.

    2018-03-01

    Though the one-loop amplitudes of the Higgs boson to massless gauge bosons are finite because there is no direct interaction at tree level in the Standard Model, a well-defined regularization scheme is still required for their correct evaluation. We reanalyze these amplitudes in the framework of the four-dimensional unsubtraction and the loop-tree duality (FDU/LTD), and show how a local renormalization solves potential regularization ambiguities. The Higgs boson interactions are also used to illustrate new additional advantages of this formalism. We show that LTD naturally leads to very compact integrand expressions in four space-time dimensions of the one-loop amplitude with virtual electroweak gauge bosons. They exhibit the same functional form as the amplitudes with top quarks and charged scalars, thus opening further possibilities for simplifications in higher-order computations. Another outstanding application is the straightforward implementation of asymptotic expansions by using dual amplitudes. One of the main benefits of the LTD representation is that it is supported in a Euclidean space. This characteristic feature naturally leads to simpler asymptotic expansions.

  17. Adaptive Control Based Harvesting Strategy for a Predator-Prey Dynamical System.

    PubMed

    Sen, Moitri; Simha, Ashutosh; Raha, Soumyendu

    2018-04-23

    This paper deals with designing a harvesting control strategy for a predator-prey dynamical system, with parametric uncertainties and exogenous disturbances. A feedback control law for the harvesting rate of the predator is formulated such that the population dynamics is asymptotically stabilized at a positive operating point, while maintaining a positive, steady-state harvesting rate. The hierarchical block strict feedback structure of the dynamics is exploited in designing a backstepping control law, based on Lyapunov theory. In order to account for unknown parameters, an adaptive control strategy has been proposed in which the control law depends on an adaptive variable which tracks the unknown parameter. Further, a switching component has been incorporated to robustify the control performance against bounded disturbances. Proofs have been provided to show that the proposed adaptive control strategy ensures asymptotic stability of the dynamics at a desired operating point, as well as exact parameter learning in the disturbance-free case and learning with bounded error in the disturbance-prone case. The dynamics, with uncertainty in the death rate of the predator and subjected to a bounded disturbance, have been simulated with the proposed control strategy.

  18. Transient Dissipation and Structural Costs of Physical Information Transduction

    NASA Astrophysics Data System (ADS)

    Boyd, Alexander B.; Mandal, Dibyendu; Riechers, Paul M.; Crutchfield, James P.

    2017-06-01

    A central result that arose in applying information theory to the stochastic thermodynamics of nonlinear dynamical systems is the information-processing second law (IPSL): the physical entropy of the Universe can decrease if compensated by the Shannon-Kolmogorov-Sinai entropy change of appropriate information-carrying degrees of freedom. In particular, the asymptotic-rate IPSL precisely delineates the thermodynamic functioning of autonomous Maxwellian demons and information engines. How do these systems begin to function as engines, Landauer erasers, and error correctors? We identify a minimal, and thus inescapable, transient dissipation of physical information processing, which is not captured by asymptotic rates, but is critical to adaptive thermodynamic processes such as those found in biological systems. As a component of the transient dissipation, we also identify an implementation-dependent cost that varies from one physical substrate to another for the same information-processing task. Applying these results to producing structured patterns from a structureless information reservoir, we show that "retrodictive" generators achieve the minimal costs. The results establish the thermodynamic toll imposed by a physical system's structure as it comes to optimally transduce information.

  19. The effect of normalization of Partial Directed Coherence on the statistical assessment of connectivity patterns: a simulation study.

    PubMed

    Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L

    2013-01-01

    Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity relying on the concept of Granger causality. Even though its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator by rows; ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved by applying the statistical validation process to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amounts of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors incurred when using the shuffling procedure for the assessment of connectivity patterns. No effect of the PDC formulation was found on the performance achieved during the validation process executed by means of the asymptotic statistic approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistic approach are always lower than those achieved by the shuffling procedure for each type of normalization.

  20. Code Properties from Holographic Geometries

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Preskill, John

    2017-04-01

    Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163, 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS/CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.

  1. Asymptotic scaling properties and estimation of the generalized Hurst exponents in financial data

    NASA Astrophysics Data System (ADS)

    Buonocore, R. J.; Aste, T.; Di Matteo, T.

    2017-04-01

    We propose a method to measure the Hurst exponents of financial time series. The scaling of the absolute moments against the aggregation horizon of real financial processes and of both uniscaling and multiscaling synthetic processes converges asymptotically towards linearity in log-log scale. In light of this we found it appropriate to modify the usual scaling equation via the introduction of a filter function. We devised a measurement procedure which takes into account the presence of the filter function without the need of directly estimating it. We verified that the method is unbiased within the errors by applying it to synthetic time series with known scaling properties. Finally we show an application to empirical financial time series where we fit the measured scaling exponents via a second- or a fourth-degree polynomial, which, because of theoretical constraints, have respectively only one and two degrees of freedom. We found that on our data set there is no clear preference between the second- and the fourth-degree polynomial. Moreover, the study of the filter functions of each time series shows common patterns of convergence depending on the moment degree.
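
    A minimal sketch of the underlying scaling-of-absolute-moments estimator, without the authors' filter-function modification, applied to synthetic Brownian motion where H(q) ≈ 0.5 is expected:

        import numpy as np

        def generalized_hurst(x, q=2, taus=range(1, 20)):
            """Estimate H(q) from the scaling E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))."""
            taus = np.asarray(list(taus))
            moments = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
            # Slope of the log-log scaling plot, divided by the moment order q.
            slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
            return slope / q

        rng = np.random.default_rng(1)
        bm = np.cumsum(rng.normal(size=100_000))   # Brownian motion: H(q) ~ 0.5
        print(generalized_hurst(bm, q=1), generalized_hurst(bm, q=2))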

  2. Asymptotic-preserving Lagrangian approach for modeling anisotropic transport in magnetized plasmas for arbitrary magnetic fields

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Del-Castillo-Negrete, Diego; Hauck, Cory

    2012-10-01

    Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between the parallel (to the magnetic field) and perpendicular directions (χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a Lagrangian Green's function approach, developed for the purely parallel transport case [D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); Phys. Plasmas 19, 056112 (2012)], has been extended to the anisotropic transport case in the tokamak-ordering limit with constant density [L. Chacón, D. del-Castillo-Negrete, C. Hauck, JCP, submitted (2012)]. An operator-split algorithm is proposed that allows one to treat the Eulerian and Lagrangian components separately. The approach is shown to feature bounded numerical errors for arbitrary χ∥/χ⊥ ratios, which renders it asymptotic-preserving. In this poster, we will present the generalization of the Lagrangian approach to arbitrary magnetic fields. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.

  3. New perspectives on constant-roll inflation

    NASA Astrophysics Data System (ADS)

    Cicciarella, Francesco; Mabillard, Joel; Pieroni, Mauro

    2018-01-01

    We study constant-roll inflation using the β-function formalism. We show that the constant rate of the inflaton roll is translated into a first order differential equation for the β-function which can be solved easily. The solutions to this equation correspond to the usual constant-roll models. We then construct, by perturbing these exact solutions, more general classes of models that satisfy the constant-roll equation asymptotically. In the case of an asymptotic power law solution, these corrections naturally provide an end to the inflationary phase. Interestingly, while from a theoretical point of view (in particular in terms of the holographic interpretation) these models are intrinsically different from standard slow-roll inflation, they may have phenomenological predictions in good agreement with present cosmological data.

  4. The random fractional matching problem

    NASA Astrophysics Data System (ADS)

    Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele

    2018-05-01

    We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation, we allow each node to be linked to itself in the optimal matching configuration. In the other, on the contrary, such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91–150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and we discuss the main differences between random-link fractional matching problems and the random-link matching problem.

  5. Asymptotic safety of higher derivative quantum gravity non-minimally coupled with a matter system

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Yamada, Masatoshi

    2017-08-01

    We study the asymptotic safety of models of higher-derivative quantum gravity with and without matter. The beta functions are derived by utilizing the functional renormalization group, and non-trivial fixed points are found. It turns out that all couplings in the gravity sector, namely the cosmological constant, the Newton constant, and the R^2 and R_μν^2 coupling constants, are relevant in the case of higher-derivative pure gravity. For the Higgs-Yukawa model non-minimally coupled to higher-derivative gravity, we find a stable fixed point at which the scalar-quartic and the Yukawa coupling constants become relevant. The relevant Yukawa coupling is crucial for realizing the finite value of the Yukawa coupling constants in the standard model.

  6. Geological Carbon Sequestration: A New Approach for Near-Surface Assurance Monitoring

    PubMed Central

    Wielopolski, Lucian

    2011-01-01

    There are two distinct objectives in monitoring geological carbon sequestration (GCS): deep monitoring of the reservoir's integrity and plume movement, and near-surface monitoring (NSM) to ensure public health and the safety of the environment. However, the minimum detection limits of the current instrumentation for NSM are too high for detecting weak signals that are embedded in the background levels of the natural variations, and the data obtained represent point measurements in space and time. A new approach for NSM, based on gamma-ray spectroscopy induced by inelastic neutron scattering (INS), offers novel and unique characteristics, providing: (1) high sensitivity, with a reducible error of measurement and detection limits; and (2) temporal and spatial integration of carbon in soil that results from underground CO2 seepage. Preliminary field results validated this approach, showing carbon suppression of 14% in the first year and 7% in the second year. In addition, the temporal behavior of the error propagation is presented, and it is shown that for a signal at the minimum detection level the error asymptotically approaches 47%. PMID:21556180

  7. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and approximately attain the Cramér-Rao lower bound when measurement error variances are small. The asymptotically optimum performance is also confirmed by simulation results.
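
    A simplified sketch of the TOA case (hypothetical geometry): differencing squared ranges against a reference anchor linearizes the problem, after which ordinary least squares applies. The paper's CWLS estimator additionally weights the equations and enforces the quadratic constraint tying the position to its squared norm, which this sketch omits.

        import numpy as np

        rng = np.random.default_rng(7)

        anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        p_true = np.array([37.0, 61.0])

        # Noisy TOA-derived range measurements to each anchor.
        d = np.linalg.norm(anchors - p_true, axis=1) + rng.normal(0.0, 0.5, 4)

        # Differencing squared ranges against the first anchor gives linear equations:
        # 2 (a_i - a_1)^T p = ||a_i||^2 - ||a_1||^2 - (d_i^2 - d_1^2)
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
             - (d[1:] ** 2 - d[0] ** 2))
        p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(p_hat)   # close to p_true when measurement noise is small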

  8. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such an operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise; specifically, the representation error decays as 1/t for Gaussian white noise.
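
    HDA itself is distributed and spike-based, but the objective it optimizes is the standard sparse coding problem; as a point of reference, here is a minimal centralized baseline (iterative soft-thresholding, ISTA) for the same objective, with hypothetical problem sizes.

        import numpy as np

        rng = np.random.default_rng(2)

        # Sparse representation problem: find a sparse x with A x ~ b.
        m, n, k = 64, 256, 8
        A = rng.normal(size=(m, n)) / np.sqrt(m)       # redundant dictionary
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        b = A @ x_true

        # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (centralized baseline).
        lam = 0.01
        eta = 1.0 / np.linalg.norm(A, 2) ** 2          # step size <= 1/L
        x = np.zeros(n)
        for _ in range(500):
            g = A.T @ (A @ x - b)                      # gradient of the smooth part
            z = x - eta * g
            x = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # soft threshold

        print(np.linalg.norm(A @ x - b))               # representation error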

  9. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors, and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  10. Exploration and extension of an improved Riemann track fitting algorithm

    NASA Astrophysics Data System (ADS)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

    Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
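
    For intuition, a sketch of a closely related algebraic circle fit, which, like the Riemann fit, reduces circle fitting to linear least squares in transformed coordinates (this is the simple Kasa-style variant, not the published translated-and-scaled Riemann fit):

        import numpy as np

        rng = np.random.default_rng(3)

        # Noisy points on an arc of a circle (center (2, -1), radius 5).
        phi = np.linspace(0.2, 2.0, 30)
        pts = np.column_stack([2 + 5 * np.cos(phi), -1 + 5 * np.sin(phi)])
        pts += rng.normal(0.0, 0.02, pts.shape)

        # x^2 + y^2 = c0 + c1*x + c2*y is linear in the unknowns (c0, c1, c2).
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([np.ones_like(x), x, y])
        w = x ** 2 + y ** 2
        c0, c1, c2 = np.linalg.lstsq(A, w, rcond=None)[0]

        # Recover the circle parameters from the linear solution.
        center = np.array([c1 / 2.0, c2 / 2.0])
        radius = np.sqrt(c0 + center @ center)
        print(center, radius)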

  11. Development of a decentralized multi-axis synchronous control approach for real-time networks.

    PubMed

    Xu, Xiong; Gu, Guo-Ying; Xiong, Zhenhua; Sheng, Xinjun; Zhu, Xiangyang

    2017-05-01

    The message scheduling and the network-induced delays of real-time networks, together with the different inertias and disturbances in different axes, make the synchronous control of real-time network-based systems quite challenging. To address this challenge, a decentralized multi-axis synchronous control approach is developed in this paper. Because of the limitations of message scheduling and network bandwidth, the position synchronization error is first defined in the proposed control approach over a subset of preceding-axis pairs. Then, a motion message estimator is designed to reduce the effect of network delays. It is proven that the position and synchronization errors asymptotically converge to zero under the proposed controller with delay compensation. Finally, simulation and experimental results show that the developed control approach can achieve good position synchronization performance for multi-axis motion over a real-time network.

  12. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.

  13. Capacity estimation and verification of quantum channels with arbitrarily correlated errors.

    PubMed

    Pfister, Corsin; Rol, M Adriaan; Mantri, Atul; Tomamichel, Marco; Wehner, Stephanie

    2018-01-02

    The central figure of merit for quantum memories and quantum communication devices is their capacity to store and transmit quantum information. Here, we present a protocol that estimates a lower bound on a channel's quantum capacity, even when there are arbitrarily correlated errors. One application of these protocols is to test the performance of quantum repeaters for transmitting quantum information. Our protocol is easy to implement and comes in two versions. The first estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data. We discuss the performance using simple examples, such as the dephasing channel for which our method is asymptotically optimal. Finally, we apply our method to a superconducting qubit in experiment.

  14. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data

    PubMed Central

    Nevo, Daniel; Zucker, David M.; Tamimi, Rulla M.; Wang, Molin

    2017-01-01

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps–clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses’ Health Study to demonstrate the utility of our method. PMID:27558651

  15. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
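
    Two of the three methods are easy to sketch for a toy function of estimated parameters, g(beta) = exp(beta0 + beta1), using hypothetical estimates and covariance (the record's own code targets Stata and LIMDEP; this is a generic numpy version):

        import numpy as np

        # Hypothetical estimates from some fitted model:
        beta = np.array([0.40, -1.20])
        V = np.array([[0.0400, -0.0050],
                      [-0.0050, 0.0900]])   # estimated covariance of beta

        # Function of interest: g(beta) = exp(beta0 + beta1).
        g = np.exp(beta[0] + beta[1])
        grad = np.array([g, g])             # dg/dbeta0 and dg/dbeta1

        # Delta method: SE = sqrt(grad' V grad).
        se_delta = np.sqrt(grad @ V @ grad)

        # Krinsky-Robb: simulate parameter draws, take the SD of g over the draws.
        rng = np.random.default_rng(0)
        draws = rng.multivariate_normal(beta, V, size=100_000)
        se_kr = np.exp(draws.sum(axis=1)).std(ddof=1)

        print(se_delta, se_kr)   # close for a smooth g near the point estimate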

  16. RAMP: A fault tolerant distributed microcomputer structure for aircraft navigation and control

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.

    1980-01-01

    RAMP consists of distributed sets of parallel computers partitioned on the basis of software and packaging constraints. To minimize hardware and software complexity, the processors operate asynchronously. It was shown that, through the design of asymptotically stable control laws, data errors due to the asynchronism were minimized. It was further shown that by designing control laws with this property and making minor hardware modifications to the RAMP modules, the system became inherently tolerant of intermittent faults. A laboratory version of RAMP was constructed and is described in the paper along with the experimental results.

  17. Breakup effects on alpha spectroscopic factors of 16O

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Basu, C.; Sugathan, P.; Jhinghan, A.; Behera, B. R.; Saneesh, N.; Kaur, G.; Thakur, M.; Mahajan, R.; Dubey, R.; Mitra, A. K.

    2017-01-01

    The triton angular distribution for the 12C(7Li,t)16O* reaction is measured at 20 MeV, populating discrete states of 16O. Continuum discretized coupled reaction channel calculations are used to extract the alpha spectroscopic properties of 16O states, instead of the distorted-wave Born approximation theory, in order to include the effects of breakup on the transfer process. The alpha reduced widths, spectroscopic factors, and the asymptotic normalization constants (ANC) of 16O states are extracted. The error in the spectroscopic factor is about 35%, and that in the ANC about 27%.

  18. A non-asymptotic homogenization theory for periodic electromagnetic structures

    PubMed Central

    Tsukerman, Igor; Markel, Vadim A.

    2014-01-01

    Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions. PMID:25104912

  19. Improving security of the ping-pong protocol

    NASA Astrophysics Data System (ADS)

    Zawadzki, Piotr

    2013-01-01

    A security layer for the asymptotically secure ping-pong protocol is proposed and analyzed in the paper. The operation of the improvement exploits inevitable errors introduced by the eavesdropping in the control and message modes. Its role is similar to the privacy amplification algorithms known from the quantum key distribution schemes. Messages are processed in blocks which guarantees that an eavesdropper is faced with a computationally infeasible problem as long as the system parameters are within reasonable limits. The introduced additional information preprocessing does not require quantum memory registers and confidential communication is possible without prior key agreement or some shared secret.

  20. One Step Quantum Key Distribution Based on EPR Entanglement

    PubMed Central

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-01-01

    A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves maneuverability. Moreover, a security analysis is given: a simple type of eavesdropper's attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol involving two steps, the proposed protocol does not need to store the qubit and involves only one step. PMID:27357865

  1. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  2. Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic

    ERIC Educational Resources Information Center

    Satorra, Albert; Bentler, Peter M.

    2010-01-01

    A scaled difference test statistic T[tilde][subscript d] that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T[tilde][subscript d] is asymptotically equivalent to the scaled difference test statistic T[bar][subscript…

  3. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.

  4. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  5. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  6. Structure factors for tunneling ionization rates of molecules: General Hartree-Fock-based integral representation

    NASA Astrophysics Data System (ADS)

    Madsen, Lars Bojer; Jensen, Frank; Dnestryan, Andrey I.; Tolstikhin, Oleg I.

    2017-07-01

    In the leading-order approximation of the weak-field asymptotic theory (WFAT), the dependence of the tunneling ionization rate of a molecule in an electric field on its orientation with respect to the field is determined by the structure factor of the ionizing molecular orbital. The WFAT yields an expression for the structure factor in terms of a local property of the orbital in the asymptotic region. However, in general quantum chemistry approaches molecular orbitals are expanded in a Gaussian basis which does not reproduce their asymptotic behavior correctly. This hinders the application of the WFAT to polyatomic molecules, which are attracting increasing interest in strong-field physics. Recently, an integral-equation approach to the WFAT for tunneling ionization of one electron from an arbitrary potential has been developed. The structure factor is expressed in an integral form as a matrix element involving the ionizing orbital. The integral is not sensitive to the asymptotic behavior of the orbital, which resolves the difficulty mentioned above. Here, we extend the integral representation for the structure factor to many-electron systems treated within the Hartree-Fock method and show how it can be implemented on the basis of standard quantum chemistry software packages. We validate the methodology by considering noble-gas atoms and the CO molecule, for which accurate structure factors exist in the literature. We also present benchmark results for CO2 and for NH3 in the pyramidal and planar geometries.

  7. Otto engine beyond its standard quantum limit.

    PubMed

    Leggio, Bruno; Antezza, Mauro

    2016-02-01

    We propose a quantum Otto cycle based on the properties of a two-level system in a realistic out-of-thermal-equilibrium electromagnetic field acting as its sole reservoir. This steady configuration is produced without the need for active control over the state of the environment, which is noncoherent thermal radiation sustained only by external heat supplied to macroscopic objects. Remarkably, even for nonideal finite-time transformations, it largely outperforms the standard ideal Otto cycle and asymptotically achieves unit efficiency at finite power.

  8. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model-error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.

  9. On the assessment of the added value of new predictive biomarkers.

    PubMed

    Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas

    2013-07-29

    The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.
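
    A self-contained sketch of the kind of small-sample simulation behind these claims, under hypothetical settings (n = 30, a truly uninformative new marker, nominal 5% level); the LR and Wald rejection rates tend to drift above the nominal level at this sample size, while the paper's exact F test is not reproduced here.

        import numpy as np
        from scipy.stats import chi2

        def logit_fit(X, y, iters=25):
            """Newton-Raphson MLE for logistic regression: (beta, loglik, cov)."""
            beta = np.zeros(X.shape[1])
            for _ in range(iters):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                H = (X * (p * (1.0 - p))[:, None]).T @ X   # Fisher information
                beta = beta + np.linalg.solve(H, X.T @ (y - p))
            p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1.0 - 1e-12)
            H = (X * (p * (1.0 - p))[:, None]).T @ X
            ll = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
            return beta, ll, np.linalg.inv(H)

        rng = np.random.default_rng(0)
        n, reps, crit = 30, 2000, chi2.ppf(0.95, df=1)
        rej_lr = rej_wald = 0
        for _ in range(reps):
            x_new = rng.normal(size=n)                 # candidate marker, pure noise
            y = (rng.random(n) < 0.5).astype(float)    # outcome independent of it
            _, ll0, _ = logit_fit(np.ones((n, 1)), y)  # null model: intercept only
            X1 = np.column_stack([np.ones(n), x_new])  # augmented model
            b1, ll1, cov1 = logit_fit(X1, y)
            rej_lr += 2.0 * (ll1 - ll0) > crit         # likelihood ratio test
            rej_wald += b1[1] ** 2 / cov1[1, 1] > crit # Wald test

        print(rej_lr / reps, rej_wald / reps)          # empirical type I error rates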

  10. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  11. Hopping in the Crowd to Unveil Network Topology.

    PubMed

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-13

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  12. Hopping in the Crowd to Unveil Network Topology

    NASA Astrophysics Data System (ADS)

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-01

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  13. Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2010-01-01

    In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…

  14. Nonlinear adaptive formation control for a class of autonomous holonomic planetary exploration rovers

    NASA Astrophysics Data System (ADS)

    Ganji, Farid

    This dissertation presents novel nonlinear adaptive formation controllers for a heterogeneous group of holonomic planetary exploration rovers navigating over flat terrains with unknown soil types and surface conditions. A leader-follower formation control architecture is employed. In the first part, using a point-mass model for the robots and a Coulomb-viscous friction model for terrain resistance, direct adaptive control laws and a formation speed-adaptation strategy are developed for formation navigation over unknown and changing terrain in the presence of actuator saturation. On-line estimates of terrain frictional parameters compensate for unknown terrain resistance and its variations. In saturation events over difficult terrain, the formation speed is reduced based on the speed of the slowest saturated robot, using internal fleet communication and a speed-adaptation strategy, so that the formation error stays bounded and small. A formal proof of asymptotic stability of the formation system in non-saturated conditions is given. The performance of the robot controllers is verified using a modular 3-robot formation simulator. Simulations show that the formation errors reduce to zero asymptotically under non-saturated conditions, as is guaranteed by the theoretical proof. In the second part, the proposed adaptive control methodology is extended to formation control of a class of omnidirectional rovers with three independently driven universal holonomic rigid wheels, where the rovers' rigid-body dynamics, drive-system electromechanical characteristics, and wheel-ground interaction mechanics are incorporated. Holonomic rovers have the ability to move simultaneously and independently in translation and rotation, rendering great maneuverability and agility, which makes them suitable for formation navigation. Novel nonlinear adaptive control laws are designed for the input voltages of the three wheel-drive motors. The motion resistance, which is due to the sinkage of the rover wheels in soft planetary terrain, is modeled using classical terramechanics theory. The unknown system parameters for adaptive estimation pertain to the rolling resistance forces and scrubbing resistance torques at the wheel-terrain interfaces. Novel terramechanical formulas for terrain resistance forces and torques are derived by considering the universal holonomic wheels as rigid toroidal wheels moving forward and/or sideways as well as turning on soft ground. The asymptotic stability of the formation control system is rigorously proved using Lyapunov's direct method.

  15. Disturbance accommodating control design for wind turbines using solvability conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Wright, Alan D.; Balas, Mark J.

    In this study, solvability conditions for disturbance accommodating control (DAC) are discussed and applied to wind turbine controller design at above-rated wind speeds to regulate rotor speed and mitigate turbine structural loads. DAC incorporates a predetermined waveform model and uses it as part of the state-space formulation, an instance of the internal model principle, to reduce or minimize the effects of wind disturbances on the outputs of the wind turbine. An asymptotically stabilizing DAC controller that completely cancels the disturbance impact on the wind turbine can be found if certain conditions are fulfilled. Designing a rotor speed regulation controller without steady-state error is important for applying linear control methodology such as DAC to wind turbines. Therefore, solvability conditions of DAC without steady-state error are attractive and can serve as examples when designing a multitask turbine controller. DAC controllers solved via the Moore-Penrose pseudoinverse and the Kronecker product are discussed, and solvability conditions for using them are given. Additionally, a new solvability condition based on inverting the feed-through D term is proposed to reduce the computational burden of the Kronecker product. Applications to the design of collective-pitch and independent-pitch controllers based on DAC are presented, and recommendations for designing a DAC-based wind turbine controller are given. A DAC controller motivated by the proposed solvability condition, which utilizes the inverse of the feed-through D term, is developed to mitigate the blade flapwise once-per-revolution bending moment, together with a standard proportional-integral controller in the control loop to assist rotor speed regulation. Simulation studies verify the discussed solvability conditions of DAC and show the effectiveness of the proposed control design methodology.

  16. Comparison of Two Methods for Calculating the Frictional Properties of Articular Cartilage Using a Simple Pendulum and Intact Mouse Knee Joints

    PubMed Central

    Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.

    2009-01-01

    In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better-fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ, with applications in investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
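
    As a rough illustration of the comparison above, the sketch below fits linear (Stanton-type) and exponential (viscous) decay envelopes to synthetic peak-amplitude data and reports RMSE and asymptotic standard errors; the data and every parameter value are made-up assumptions, not the study's measurements.

    ```python
    # Hypothetical sketch: fitting linear (Coulomb-type) vs. exponential
    # (viscous) decay envelopes to pendulum peak amplitudes; all numbers
    # below are placeholders for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    cycles = np.arange(20)                               # oscillation index
    peaks = 8.0 * np.exp(-0.08 * cycles) + rng.normal(0, 0.05, 20)  # degrees

    def linear_decay(n, theta0, k):                      # Coulomb-friction envelope
        return theta0 - k * n

    def exp_decay(n, theta0, c):                         # viscous-damping envelope
        return theta0 * np.exp(-c * n)

    for model in (linear_decay, exp_decay):
        popt, pcov = curve_fit(model, cycles, peaks)
        rmse = np.sqrt(np.mean((peaks - model(cycles, *popt)) ** 2))
        ase = np.sqrt(np.diag(pcov))                     # asymptotic standard errors
        print(f"{model.__name__}: params={popt}, RMSE={rmse:.4f}, ASE={ase}")
    ```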

  17. B97-3c: A revised low-cost variant of the B97-D density functional method

    NASA Astrophysics Data System (ADS)

    Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas; Grimme, Stefan

    2018-02-01

    A revised version of the well-established B97-D density functional approximation with general applicability for chemical properties of large systems is proposed. Like B97-D, it is based on Becke's power-series ansatz from 1997 and is explicitly parametrized by including the standard D3 semi-classical dispersion correction. The orbitals are expanded in a modified valence triple-zeta Gaussian basis set, which is available for all elements up to Rn. Remaining basis set errors are mostly absorbed in the modified B97 parametrization, while an established atom-pairwise short-range potential is applied to correct for the systematically too long bonds of main group elements which are typical for most semi-local density functionals. The new composite scheme (termed B97-3c) completes the hierarchy of "low-cost" electronic structure methods, which are all mainly free of basis set superposition error and account for most interactions in a physically sound and asymptotically correct manner. B97-3c yields excellent molecular and condensed phase geometries, similar to most hybrid functionals evaluated in a larger basis set expansion. Results on the comprehensive GMTKN55 energy database demonstrate its good performance for main group thermochemistry, kinetics, and non-covalent interactions, when compared to functionals of the same class. This also transfers to metal-organic reactions, which is a major area of applicability for semi-local functionals. B97-3c can be routinely applied to hundreds of atoms on a single processor and we suggest it as a robust computational tool, in particular, for more strongly correlated systems where our previously published "3c" schemes might be problematic.

  18. Disturbance accommodating control design for wind turbines using solvability conditions

    DOE PAGES

    Wang, Na; Wright, Alan D.; Balas, Mark J.

    2017-02-07

    In this study, solvability conditions for disturbance accommodating control (DAC) are discussed and applied to wind turbine controller design at above-rated wind speeds to regulate rotor speed and mitigate turbine structural loads. DAC incorporates a predetermined waveform model and uses it as part of the state-space formulation, an instance of the internal model principle, to reduce or minimize the effects of wind disturbances on the outputs of the wind turbine. An asymptotically stabilizing DAC controller that completely cancels the disturbance impact on the wind turbine can be found if certain conditions are fulfilled. Designing a rotor speed regulation controller without steady-state error is important for applying linear control methodology such as DAC to wind turbines. Therefore, solvability conditions of DAC without steady-state error are attractive and can serve as examples when designing a multitask turbine controller. DAC controllers solved via the Moore-Penrose pseudoinverse and the Kronecker product are discussed, and solvability conditions for using them are given. Additionally, a new solvability condition based on inverting the feed-through D term is proposed to reduce the computational burden of the Kronecker product. Applications to the design of collective-pitch and independent-pitch controllers based on DAC are presented, and recommendations for designing a DAC-based wind turbine controller are given. A DAC controller motivated by the proposed solvability condition, which utilizes the inverse of the feed-through D term, is developed to mitigate the blade flapwise once-per-revolution bending moment, together with a standard proportional-integral controller in the control loop to assist rotor speed regulation. Simulation studies verify the discussed solvability conditions of DAC and show the effectiveness of the proposed control design methodology.

  19. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today, such as genomics and proteomics, selecting a small subset of useful features is crucial for the success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i))/√(i/p(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
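
    A minimal sketch of HCT as described, under assumptions flagged in the comments: simulated Z-scores in a rare/weak configuration, and the common convention of maximizing the HC objective only over i ≤ p/2.

    ```python
    # Minimal sketch of higher criticism thresholding (HCT). The Z-scores
    # are simulated under a rare/weak configuration; restricting the
    # maximization to i <= p/2 is a common convention assumed here.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    p = 10_000
    z = rng.standard_normal(p)
    z[:50] += 2.5                                   # a few weak useful features

    pvals = 2 * norm.sf(np.abs(z))                  # two-sided P-values
    order = np.sort(pvals)                          # order statistics pi_(i)
    m = p // 2
    i = np.arange(1, m + 1)
    hc = (i / p - order[:m]) / np.sqrt(i / p * (1 - i / p))

    imax = int(np.argmax(hc))                       # maximizing index (0-based)
    threshold = np.sort(np.abs(z))[::-1][imax]      # |Z| paired with pi_(imax+1)
    selected = np.abs(z) >= threshold
    print(f"HC threshold = {threshold:.3f}, features kept = {selected.sum()}")
    ```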

  20. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today, such as genomics and proteomics, selecting a small subset of useful features is crucial for the success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i))/√(i/p(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365

  1. The Calibration of Gloss Reference Standards

    NASA Astrophysics Data System (ADS)

    Budde, W.

    1980-04-01

    In current international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the specified glossmeter uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared with the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amending the existing documentary standards are discussed.

  2. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  3. Asymptotic density and effective negligibility

    NASA Astrophysics Data System (ADS)

    Astor, Eric P.

    In this thesis, we join the study of asymptotic computability, a project attempting to capture the idea that an algorithm might work correctly in all but a vanishing fraction of cases. In collaboration with Hirschfeldt and Jockusch, broadening the original investigation of Jockusch and Schupp, we introduce dense computation, the weakest notion of asymptotic computability (requiring only that the correct answer is produced on a set of density 1), and effective dense computation, where every computation halts with either the correct answer or (on a set of density 0) a symbol denoting uncertainty. A few results make more precise the relationship between these notions and work already done with Jockusch and Schupp's original definitions of coarse and generic computability. For all four types of asymptotic computation, including generic computation, we demonstrate that non-trivial upper cones have measure 0, building on recent work of Hirschfeldt, Jockusch, Kuyper, and Schupp in which they establish this for coarse computation. Their result transfers to yield a minimal pair for relative coarse computation; we generalize their method and extract a similar result for relative dense computation (and thus for its corresponding reducibility). However, all of these notions of near-computation treat a set as negligible iff it has asymptotic density 0. Since this definition is not computably invariant, it produces some failures of intuition and a break with standard expectations in computability theory. For instance, as shown by Hamkins and Miasnikov, the halting problem is (in some formulations) effectively densely computable, even in polynomial time, yet this result appears fragile, as indicated by Rybalov. In independent work, we respond to this by strengthening the approach of Jockusch and Schupp to avoid such phenomena; specifically, we introduce a new notion of intrinsic asymptotic density, invariant under computable permutation, with rich relations to both randomness and classical computability theory. For instance, we prove that the stochasticities corresponding to permutation randomness and injection randomness coincide, and identify said stochasticity as intrinsic density 1/2. We then define sets of intrinsic density 0 to be effectively negligible, and classify this as a new immunity property, determining its position in the standard hierarchy from immune to cohesive for both general and Δ^0_2 sets. We further characterize the Turing degrees of effectively negligible sets as those which are either high (a′ ≥_T 0″) or compute a DNC (diagonally non-computable) function. In fact, this result holds over RCA0, demonstrating the reverse-mathematical equivalence of the principles ID0 and DOM ∨ DNR. Replacing Jockusch and Schupp's negligibility (density 0) by effective negligibility (intrinsic density 0), we then obtain new notions of intrinsically dense computation. Finally, we generalize Rice's Theorem to all forms of intrinsic dense computation, showing that no set that is 1-equivalent to a non-trivial index set is intrinsically densely computable; in particular, in contrast to ordinary dense computation, we see that the halting problem cannot be intrinsically densely computable.

  4. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further reduced by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression capability and better transient-error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N⁻²) for the Hanning window method to O(N⁻⁴), while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is employed as an example for verifying the new dual-cosine window-based spectral estimation method. Model simulation results show that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, less time consumption, and short data requirements; the balance FRF calculated from actual data is consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
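
    The sketch below constructs a cosine-sum window whose only nonzero DFT bins fall at k = 0, ±1, ±3, mirroring the structure described; the coefficients a0, a1, a3 are illustrative assumptions chosen to give a zero front-end value, not the paper's actual coefficients.

    ```python
    # Illustrative cosine-sum window with nonzero DFT bins only at
    # k = 0, +/-1, +/-3. The coefficients are assumptions, not the
    # published dual-cosine window's values.
    import numpy as np

    def dual_cosine_window(N, a0=0.375, a1=-0.5, a3=0.125):
        n = np.arange(N)
        return a0 + a1 * np.cos(2 * np.pi * n / N) + a3 * np.cos(6 * np.pi * n / N)

    w = dual_cosine_window(1024)
    W = np.fft.fft(w)
    big = np.nonzero(np.abs(W) > 1e-9 * np.abs(W).max())[0]
    print("front-end value w[0]:", w[0])            # a0 + a1 + a3 = 0 here
    print("non-negligible DFT bins:", big)          # expect {0, 1, 3, N-3, N-1}
    ```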

  5. A Luenberger observer for reaction-diffusion models with front position data

    NASA Astrophysics Data System (ADS)

    Collin, Annabelle; Chapelle, Dominique; Moireau, Philippe

    2015-11-01

    We propose a Luenberger observer for reaction-diffusion models with propagating front features, and for data associated with the location of the front over time. Such models are considered in various application fields, such as electrophysiology, wild-land fire propagation and tumor growth modeling. Drawing our inspiration from image processing methods, we start by proposing an observer for the eikonal-curvature equation that can be derived from the reaction-diffusion model by an asymptotic expansion. We then carry over this observer to the underlying reaction-diffusion equation by an "inverse asymptotic analysis", and we show that the associated correction in the dynamics has a stabilizing effect for the linearized estimation error. We also discuss the extension to joint state-parameter estimation by using the earlier-proposed ROUKF strategy. We then illustrate and assess our proposed observer method with test problems pertaining to electrophysiology modeling, including with a realistic model of cardiac atria. Our numerical trials show that state estimation is directly very effective with the proposed Luenberger observer, while specific strategies are needed to accurately perform parameter estimation, as is usual with Kalman filtering in a nonlinear setting, and we demonstrate two such successful strategies.

  6. On-line Gibbs learning. II. Application to perceptron and multilayer networks

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Sompolinsky, H.

    1998-08-01

    In the preceding paper ("On-line Gibbs Learning. I. General Theory") we presented the on-line Gibbs algorithm (OLGA) and studied analytically its asymptotic convergence. In this paper we apply OLGA to on-line supervised learning in several network architectures: a single-layer perceptron, a two-layer committee machine, and a winner-takes-all (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine. The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in paper I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with a power law that is the same as that of batch learning.

  7. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  8. On the structure and applications of the Bondi-Metzner-Sachs group

    NASA Astrophysics Data System (ADS)

    Alessio, Francesco; Esposito, Giampiero

    This work is a pedagogical review dedicated to a modern description of the Bondi-Metzner-Sachs (BMS) group. Minkowski space-time has an interesting and useful group of isometries, but for a generic space-time the isometry group is simply the identity and hence provides no significant information. Yet symmetry groups have an important role to play in physics; in particular, the Poincaré group describing the isometries of Minkowski space-time plays a role in the standard definitions of energy-momentum and angular momentum. For this reason alone it would seem important to look for a generalization of the concept of isometry group that can apply in a useful way to suitable curved space-times. The curved space-times considered here are those that suitably approach Minkowski space-time at infinity; in particular, we focus on asymptotically flat space-times. In this work, the concept of the asymptotic symmetry group of those space-times is studied. In the first two sections we derive the asymptotic group following the classical approach, which was basically developed by Bondi, van der Burg, Metzner and Sachs. This is essentially the group of transformations between coordinate systems of a certain type in asymptotically flat space-times. In the third section the conformal method and the notion of "asymptotic simplicity" are introduced, following mainly the works of Penrose. This section prepares us for another derivation of the BMS group which involves the conformal structure, and is thus more geometrical and fundamental. In the subsequent sections we discuss the properties of the BMS group, e.g. its algebra and the possibility of obtaining the Poincaré group as a subgroup, as one might expect. The paper ends with a review of the BMS invariance properties of classical gravitational scattering discovered by Strominger, which are finding application to black hole physics and quantum gravity in the literature.

  9. Feedback attitude sliding mode regulation control of spacecraft using arm motion

    NASA Astrophysics Data System (ADS)

    Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu

    2013-09-01

    The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions of the manipulator's motion tracking problem achieve only asymptotic stabilization, so these controllers cannot realize precise attitude regulation because of the existence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with a zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it is restricted to this plane permanently even in the presence of external disturbances. Thus, precise attitude regulation can be achieved. Furthermore, taking the non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted. The results show that the spacecraft's attitude can be regulated to the desired orientation using the proposed algorithm, with a steady-state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion, and improves the precision of spacecraft attitude regulation.

  10. Modification of the Sandwich Estimator in Generalized Estimating Equations with Correlated Binary Outcomes in Rare Event and Small Sample Settings

    PubMed Central

    Rogers, Paul; Stoner, Julie

    2016-01-01

    Regression models for correlated binary outcomes are commonly fit using a Generalized Estimating Equations (GEE) methodology. GEE uses the Liang and Zeger sandwich estimator to produce unbiased standard error estimators for regression coefficients in large-sample settings even when the covariance structure is misspecified. The sandwich estimator performs optimally in balanced designs when the number of participants is large and there are few repeated measurements. The sandwich estimator is not without drawbacks; its asymptotic properties do not hold in small-sample settings. In these situations, the sandwich estimator is biased downwards, underestimating the variances. In this project, a modified form of the sandwich estimator is proposed to correct this deficiency. The performance of this new sandwich estimator is compared to the traditional Liang and Zeger estimator as well as to alternative forms proposed by Morel, by Pan, and by Mancl and DeRouen. The performance of each estimator was assessed using 95% coverage probabilities for the regression coefficient estimators on simulated data, under various combinations of sample sizes and outcome prevalence values with independence (IND), autoregressive (AR), and compound symmetry (CS) correlation structures. This research is motivated by investigations involving rare-event outcomes in aviation data. PMID:26998504
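
    For reference, the generic textbook form of the Liang and Zeger sandwich estimator that the proposed modification targets (standard GEE notation, not taken from this paper):

    ```latex
    % Liang-Zeger sandwich covariance estimator, generic GEE notation:
    % A^{-1} is the "bread"; the middle sum is the "meat" that the
    % small-sample corrections (Morel; Pan; Mancl-DeRouen) modify.
    \[
    \widehat{\mathrm{Cov}}(\hat\beta)
      = A^{-1}\Bigl(\sum_{i=1}^{K} D_i^\top V_i^{-1}
          (Y_i-\hat\mu_i)(Y_i-\hat\mu_i)^\top V_i^{-1} D_i\Bigr)A^{-1},
    \qquad
    A=\sum_{i=1}^{K} D_i^\top V_i^{-1} D_i,
    \]
    where, for cluster $i$, $D_i=\partial\mu_i/\partial\beta$ and $V_i$ is
    the working covariance matrix.
    ```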

  11. SEMIPARAMETRIC EFFICIENT ESTIMATION FOR SHARED-FRAILTY MODELS WITH DOUBLY-CENSORED CLUSTERED DATA

    PubMed Central

    Wang, Jane-Ling

    2018-01-01

    In this paper, we investigate frailty models for clustered survival data that are subject to both left- and right-censoring, termed "doubly-censored data". This model extends the current survival literature by broadening the application of frailty models from right-censoring to a more complicated situation with additional left-censoring. Our approach is motivated by a recent Hepatitis B study where the sample consists of families. We adopt a likelihood approach that aims at the nonparametric maximum likelihood estimator (NPMLE). A new algorithm is proposed, which not only works well for clustered data but also improves on the existing algorithm for independent, doubly-censored data, a special case in which the frailty variable is a constant equal to one. This special case is well known to be a computational challenge due to the left-censoring feature of the data. The new algorithm not only resolves this challenge but also accommodates the additional frailty variable effectively. Asymptotic properties of the NPMLE are established, along with semiparametric efficiency of the NPMLE for the finite-dimensional parameters. The consistency of bootstrap estimators for the standard errors of the NPMLE is also discussed. We conducted simulations to illustrate the numerical performance and robustness of the proposed algorithm, which is also applied to the Hepatitis B data. PMID:29527068

  12. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  13. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact 'one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  14. On the asymptotically Poincaré-Einstein 4-manifolds with harmonic curvature

    NASA Astrophysics Data System (ADS)

    Hu, Xue

    2018-06-01

    In this paper, we discuss the mass aspect tensor and the rigidity of an asymptotically Poincaré-Einstein (APE) 4-manifold with harmonic curvature. We prove that the trace-free part of the mass aspect tensor of an APE 4-manifold with harmonic curvature and normalized Einstein conformal infinity is zero. As for rigidity, we first show that a complete noncompact Riemannian 4-manifold with harmonic curvature, positive Yamabe constant, and an L2-pinching condition is Einstein. As an application, we then obtain that an APE 4-manifold with harmonic curvature and positive Yamabe constant is isometric to the hyperbolic space provided that the L2-norm of the traceless Ricci tensor or the Weyl tensor is small enough and the conformal infinity is a standard round 3-sphere.

  15. An asymptotic analysis of the logrank test.

    PubMed

    Strawderman, R L

    1997-01-01

    Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
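
    For orientation, the standard (unweighted) logrank statistic whose null distribution is being expanded can be written as follows; this is the textbook form, with notation not taken from the paper:

    ```latex
    % Unweighted logrank statistic: O_j, E_j, V_j are the observed count,
    % expected count, and hypergeometric variance of events in group 1 at
    % the j-th distinct event time; Z is asymptotically standard normal
    % under the null hypothesis of no survival difference.
    \[
    Z=\frac{\sum_j \left(O_j-E_j\right)}{\sqrt{\sum_j V_j}}
    \;\xrightarrow{d}\;\mathcal{N}(0,1)\quad\text{under }H_0.
    \]
    ```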

  16. Microscopic entropy of the three-dimensional rotating black hole of Bergshoeff-Hohm-Townsend massive gravity

    NASA Astrophysics Data System (ADS)

    Giribet, Gaston; Oliva, Julio; Tempo, David; Troncoso, Ricardo

    2009-12-01

    Asymptotically anti-de Sitter rotating black holes for the Bergshoeff-Hohm-Townsend massive gravity theory in three dimensions are considered. In the special case when the theory admits a unique maximally symmetric solution, apart from the mass and the angular momentum, the black hole is described by an independent “gravitational hair” parameter, which provides a negative lower bound for the mass. This bound is saturated at the extremal case, and since the temperature and the semiclassical entropy vanish, it is naturally regarded as the ground state. The absence of a global charge associated with the gravitational hair parameter reflects itself through the first law of thermodynamics in the fact that the variation of this parameter can be consistently reabsorbed by a shift of the global charges, giving further support to consider the extremal case as the ground state. The rotating black hole fits within relaxed asymptotic conditions as compared with the ones of Brown and Henneaux, such that they are invariant under the standard asymptotic symmetries spanned by two copies of the Virasoro generators, and the algebra of the conserved charges acquires a central extension. Then it is shown that Strominger’s holographic computation for general relativity can also be extended to the Bergshoeff-Hohm-Townsend theory; i.e., assuming that the quantum theory could be consistently described by a dual conformal field theory at the boundary, the black hole entropy can be microscopically computed from the asymptotic growth of the number of states according to Cardy’s formula, in exact agreement with the semiclassical result.
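
    The microscopic count mentioned at the end rests on Cardy's formula; its standard two-chirality form, in generic CFT notation rather than the paper's, reads:

    ```latex
    % Cardy's formula for the asymptotic growth of states in a 2D CFT with
    % central charges c_L, c_R and conformal weights Delta_L, Delta_R:
    \[
    S = 2\pi\sqrt{\frac{c_L\,\Delta_L}{6}} + 2\pi\sqrt{\frac{c_R\,\Delta_R}{6}} .
    \]
    ```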

  17. A Note on Standard Deviation and Standard Error

    ERIC Educational Resources Information Center

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  18. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    PubMed

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes, and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Exact exchange plane-wave-pseudopotential calculations for slabs: Extending the width of the vacuum

    NASA Astrophysics Data System (ADS)

    Engel, Eberhard

    2018-04-01

    Standard plane-wave pseudopotential (PWPP) calculations for slabs such as graphene become extremely demanding, as soon as the exact exchange (EXX) of density functional theory is applied. Even if the Krieger-Li-Iafrate (KLI) approximation for the EXX potential is utilized, such EXX-PWPP calculations suffer from the fact that an accurate representation of the occupied states throughout the complete vacuum between the replicas of the slab is required. In this contribution, a robust and efficient extension scheme for the PWPP states is introduced, which ensures the correct exponential decay of the slab states in the vacuum for standard cutoff energies and therefore facilitates EXX-PWPP calculations for very wide vacua and rather thick slabs. Using this scheme, it is explicitly verified that the Slater component of the EXX/KLI potential decays as -1 /z over an extended region sufficiently far from the surface (assumed to be perpendicular to the z direction) and from the middle of the vacuum, thus reproducing the asymptotic behavior of the exact EXX potential of a single slab. The calculations also reveal that the orbital-shift component of the EXX/KLI potential is quite sizable in the asymptotic region. In spite of the long-range exchange potential, the replicas of the slab decouple rather quickly with increasing width of the vacuum. Relying on the identity of the work function with the Fermi energy obtained with a suitably normalized total potential, the present EXX/KLI calculations predict work functions for both graphene and the Si(111) surface which are substantially larger than the corresponding experimental data. Together with the size of the orbital-shift potential in the asymptotic region, the very large EXX/KLI work functions indicate a failure of the KLI approximation for nonmetallic slabs.

  20. The Use of Asymptotic Functions for Determining Empirical Values of CN Parameter in Selected Catchments of Variable Land Cover

    NASA Astrophysics Data System (ADS)

    Wałęga, Andrzej; Młyński, Dariusz; Wachulec, Katarzyna

    2017-12-01

    The aim of the study was to assess the applicability of asymptotic functions for determining the value of the CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, a left tributary of the Vistula, and the Kamienica, a right tributary of the Dunajec. The input material included data on precipitation and flows for the multi-year period 1980-2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of the CNobs parameter as a function of precipitation depth: the standard Hawkins model and the 2-CN model, which allows for the heterogeneous nature of a catchment area. The study analyses confirmed that asymptotic functions properly described the P-CNobs relationship for the entire range of precipitation variability. In the case of high rainfalls, CNobs remained above or below the commonly accepted average antecedent moisture condition AMCII. The calculations indicated that the runoff amount computed according to the original SCS-CN method might be underestimated, which could adversely affect the design flows required for hydraulic engineering projects. In catchments with heterogeneous land cover, the results for CNobs were more accurate when the 2-CN model was used instead of the standard Hawkins model, as the 2-CN model accounts more precisely for differences in runoff formation depending on the retention capacity of the substrate. It was also demonstrated that the commonly accepted initial abstraction coefficient λ = 0.20 yielded too large an initial loss of precipitation in the analyzed catchments, and therefore the computed direct runoff was underestimated. The best results were obtained for λ = 0.05.
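
    A minimal sketch of fitting the standard Hawkins asymptotic model CN(P) = CN∞ + (100 − CN∞)e^(−kP), the commonly used functional form; the rainfall depths and CN values below are synthetic placeholders, not the study's data.

    ```python
    # Fitting the standard Hawkins asymptotic CN model to event-derived
    # CN values; data are synthetic placeholders for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    P = np.array([10, 20, 30, 45, 60, 80, 100.0])       # rainfall depth, mm
    CN_obs = np.array([92, 85, 79, 74, 71, 69, 68.0])   # event CN values

    def hawkins(P, CN_inf, k):
        return CN_inf + (100.0 - CN_inf) * np.exp(-k * P)

    (CN_inf, k), pcov = curve_fit(hawkins, P, CN_obs, p0=(70.0, 0.05))
    se = np.sqrt(np.diag(pcov))                         # asymptotic standard errors
    print(f"CN_inf = {CN_inf:.1f} ± {se[0]:.1f}, k = {k:.3f} ± {se[1]:.3f}")
    ```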

  1. Conditional Standard Errors, Reliability and Decision Consistency of Performance Levels Using Polytomous IRT.

    ERIC Educational Resources Information Center

    Wang, Tianyou; And Others

    M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…

  2. Reconstruction of the 1997/1998 El Nino from TOPEX/POSEIDON and TOGA/TAO Data Using a Massively Parallel Pacific-Ocean Model and Ensemble Kalman Filter

    NASA Technical Reports Server (NTRS)

    Keppenne, C. L.; Rienecker, M.; Borovikov, A. Y.

    1999-01-01

    Two massively parallel data assimilation systems, in which the model forecast-error covariances are estimated from the distribution of an ensemble of model integrations, are applied to the assimilation of 1997-98 TOPEX/POSEIDON altimetry and TOGA/TAO temperature data into a Pacific basin version of the NASA Seasonal-to-Interannual Prediction Project (NSIPP) quasi-isopycnal ocean general circulation model. In the first system, an ensemble of model runs forced by an ensemble of atmospheric model simulations is used to calculate asymptotic error statistics. The data assimilation then occurs in the reduced phase space spanned by the corresponding leading empirical orthogonal functions. The second system is an ensemble Kalman filter in which new error statistics are computed during each assimilation cycle from the time-dependent ensemble distribution. The data assimilation experiments are conducted on NSIPP's 512-processor CRAY T3E. The two data assimilation systems are validated by withholding part of the data and quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The pros and cons of each system are discussed.
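
    A schematic, deliberately toy-sized ensemble Kalman filter analysis step, estimating the forecast-error covariance terms from ensemble spread as in the second system; the dimensions, data, and perturbed-observation variant are illustrative assumptions, not NSIPP specifics.

    ```python
    # Schematic EnKF analysis step: covariance terms come from the
    # ensemble spread; each member is updated with perturbed observations.
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_obs, n_ens = 50, 5, 30

    X = rng.standard_normal((n_state, n_ens))          # forecast ensemble
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(n_obs) * 10] = 1.0   # observe every 10th variable
    R = 0.1 * np.eye(n_obs)                            # observation-error covariance
    y = rng.standard_normal(n_obs)                     # observations

    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    PHt = (A @ (H @ A).T) / (n_ens - 1)                # P H^T from the ensemble
    S = (H @ A) @ (H @ A).T / (n_ens - 1) + R          # innovation covariance
    K = PHt @ np.linalg.inv(S)                         # Kalman gain

    for j in range(n_ens):                             # perturbed-observation update
        yj = y + rng.multivariate_normal(np.zeros(n_obs), R)
        X[:, j] += K @ (yj - H @ X[:, j])
    print("analysis ensemble mean (first 5):", X.mean(axis=1)[:5])
    ```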

  3. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than those of the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
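
    As a concrete instance of a "fast" algorithm, here is one level of Strassen's recursion (the canonical formulas); practical codes recurse only above a size cutoff and fall back to the classical product on small blocks.

    ```python
    # One level of Strassen's algorithm for even-sized square matrices:
    # seven block multiplications instead of the classical eight.
    import numpy as np

    def strassen_once(A, B):
        n = A.shape[0] // 2                      # assumes even-sized square inputs
        A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
        B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
        M1 = (A11 + A22) @ (B11 + B22)
        M2 = (A21 + A22) @ B11
        M3 = A11 @ (B12 - B22)
        M4 = A22 @ (B21 - B11)
        M5 = (A11 + A12) @ B22
        M6 = (A21 - A11) @ (B11 + B12)
        M7 = (A12 - A22) @ (B21 + B22)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4, M1 - M2 + M3 + M6]])

    A, B = np.random.rand(128, 128), np.random.rand(128, 128)
    print("max |Strassen - classical|:", np.abs(strassen_once(A, B) - A @ B).max())
    ```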

  4. PlanetPack: A radial-velocity time-series analysis tool facilitating exoplanets detection, characterization, and dynamical simulations

    NASA Astrophysics Data System (ADS)

    Baluev, Roman V.

    2013-08-01

    We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the goals of exoplanet detection, characterization, and basic dynamical N-body simulations. PlanetPack is a command-line interpreter that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N-body RV fits; (iii) use of more efficient maximum-likelihood periodograms that involve full multi-planet fitting (sometimes called "residual" or "recursive" periodograms); (iv) easily calculable parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) user-friendly fitting under useful functional constraints; (vi) basic tasks of short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss-Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for determining statistical significance. It is planned that further functionality will be added to PlanetPack in the future. During the development of this software, considerable effort was made to improve the computational speed, especially for CPU-demanding tasks. PlanetPack was written in pure C++ (standard of 1998/2003), and is expected to be compilable and usable on a wide range of platforms.

  5. Dealing with the time-varying parameter problem of robot manipulators performing path tracking tasks

    NASA Technical Reports Server (NTRS)

    Song, Y. D.; Middleton, R. H.

    1992-01-01

    Many robotic applications involve time-varying payloads during the operation of the robot. It is therefore of interest to consider control schemes that deal with time-varying parameters. Using the properties of the element-by-element (or Hadamard) product of matrices, we obtain the robot dynamics in parameter-isolated form, from which a new control scheme is developed. The proposed controller yields zero asymptotic tracking errors when applied to robotic systems with time-varying parameters by using a switching-type control law. The results obtained are global in the initial state of the robot, and can be applied to rapidly varying systems.

  6. Analysis of a Stabilized CNLF Method with Fast Slow Wave Splittings for Flow Problems

    DOE PAGES

    Jiang, Nan; Tran, Hoang A.

    2015-04-01

    In this work, we study Crank-Nicolson leap-frog (CNLF) methods with fast-slow wave splittings for the Navier-Stokes equations (NSE) with a rotation/Coriolis force term, which is a simplification of geophysical flows. We propose a new stabilized CNLF method where the added stabilization completely removes the method's CFL time step condition. A comprehensive stability and error analysis is given. We also prove that, for the Oseen equations with the rotation term, the unstable mode of CNLF (for which u^(n+1) + u^(n-1) ≡ 0) is asymptotically stable. Numerical results are provided to verify the stability and convergence of the methods.
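
    For orientation, a generic CNLF step for a system u′ + Au + Bu = f treats the dissipative part A by Crank-Nicolson and the wave part B by leap-frog; this schematic form is standard and not quoted from the paper.

    ```latex
    % Generic Crank-Nicolson leap-frog (CNLF) step for u' + A u + B u = f:
    % A is the stiff/dissipative part (Crank-Nicolson, implicit);
    % B is the wave part (leap-frog, explicit).
    \[
    \frac{u^{n+1}-u^{n-1}}{2\,\Delta t}
     + A\,\frac{u^{n+1}+u^{n-1}}{2} + B\,u^{n} = f^{n}.
    \]
    ```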

  7. Adaptive neural network backstepping control for a class of uncertain fractional-order chaotic systems with unknown backlash-like hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yimin; Lv, Hui, E-mail: lvhui207@gmail.com

    In this paper, we consider the control problem of a class of uncertain fractional-order chaotic systems preceded by unknown backlash-like hysteresis nonlinearities, based on a backstepping control algorithm. We model the hysteresis by using a differential equation. Based on the fractional Lyapunov stability criterion and the backstepping algorithm procedures, an adaptive neural network controller is derived. No knowledge of the upper bound of the disturbance and system uncertainty is required in our controller, and the asymptotic convergence of the tracking error can be guaranteed. Finally, we give two simulation examples to confirm our theoretical results.

  8. Adaptive quaternion tracking with nonlinear extended state observer

    NASA Astrophysics Data System (ADS)

    Bai, Yu-liang; Wang, Xiao-gang; Xu, Jiang-tao; Cui, Nai-gang

    2017-10-01

    This paper addresses the problem of attitude tracking for spacecraft in the presence of uncertainties in moments of inertia and environmental disturbances. An adaptive quaternion tracking control is combined with a nonlinear extended state observer and the disturbances compensated for in each sampling period. The tracking controller is proved to asymptotically track a prescribed motion in the presence of these uncertainties. Simulations of a nano-spacecraft demonstrate a significant improvement in pointing accuracy and tracking error when compared to a conventional attitude controller. The proposed tracking control is completely deterministic, simple to implement, does not require knowledge of the uncertainties and does not suffer from chattering.

  9. Ultra fast quantum key distribution over a 97 km installed telecom fiber with wavelength division multiplexing clock synchronization.

    PubMed

    Tanaka, Akihiro; Fujiwara, Mikio; Nam, Sae W; Nambu, Yoshihiro; Takahashi, Seigo; Maeda, Wakako; Yoshino, Ken-ichiro; Miki, Shigehito; Baek, Burm; Wang, Zhen; Tajima, Akio; Sasaki, Masahide; Tomita, Akihisa

    2008-07-21

    We demonstrated ultra-fast BB84 quantum key distribution (QKD) transmission at a 625 MHz clock rate through a 97 km field-installed fiber using practical clock synchronization based on wavelength-division multiplexing (WDM). We succeeded in over-one-hour stable key generation at a high sifted key rate of 2.4 kbps and a low quantum bit error rate (QBER) of 2.9%. The asymptotic secure key rate was estimated to be 0.78-0.82 kbps from the transmission data with the decoy method of average photon numbers 0, 0.15, and 0.4 photons/pulse.

  10. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed-dependent shift and width functions and asymmetric shapes. The program contains an adjustable error-control parameter and includes the Voigt profile as a special case, although the general nature of the program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
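
    The standard Voigt special case can be computed through the Faddeeva function; a minimal sketch follows (this is the classical recipe, not the OIL generalization described above).

    ```python
    # Standard Voigt profile via the Faddeeva function w(z);
    # scipy.special.wofz computes w(z) = exp(-z^2) * erfc(-i z).
    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        """Gaussian (std. dev. sigma) convolved with Lorentzian (HWHM gamma)."""
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

    x = np.linspace(-5.0, 5.0, 5)
    print(voigt(x, sigma=1.0, gamma=0.5))
    ```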

  11. Adaptive relative pose control of spacecraft with model couplings and uncertainties

    NASA Astrophysics Data System (ADS)

    Sun, Liang; Zheng, Zewei

    2018-02-01

    The spacecraft pose tracking control problem for an uncertain pursuer approaching a space target is studied in this paper. After modeling the nonlinearly coupled dynamics of the relative translational and rotational motions between the two spacecraft, position tracking and attitude synchronization controllers are developed independently using a robust adaptive control approach. The unknown kinematic couplings, parametric uncertainties, and bounded external disturbances are handled with adaptive update laws. It is proved via the Lyapunov method that the pose tracking errors converge to zero asymptotically. Spacecraft close-range rendezvous and proximity operations are used as an example to validate the effectiveness of the proposed control approach.

  12. Polynomial Asymptotes of the Second Kind

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2011-01-01

    This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and…
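
    In symbols, one standard way to state the analytic notion the note builds on (our paraphrase, since the abstract is truncated):

    ```latex
    % f asymptotic to a polynomial p as x -> infinity:
    \[
    \lim_{x\to\infty}\bigl(f(x)-p(x)\bigr)=0,
    \]
    % e.g. f(x) = (x^3 + 1)/x = x^2 + 1/x is asymptotic to p(x) = x^2.
    ```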

  13. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
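
    A small sketch of that computation with made-up numbers: the pooled within-subject standard deviation from repeated measurements, then repeatability = 2.77 × s_w.

    ```python
    # Within-subject SD and repeatability (2.77 * s_w); placeholder data.
    import numpy as np

    # rows: subjects; columns: repeated measurements on the same subject
    data = np.array([[12.1, 12.4],
                     [15.0, 14.6],
                     [ 9.8, 10.1],
                     [11.3, 11.0]])

    s_w = np.sqrt(np.mean(np.var(data, axis=1, ddof=1)))  # within-subject SD
    print(f"within-subject SD = {s_w:.3f}, repeatability = {2.77 * s_w:.3f}")
    ```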

  14. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimators of the scale parameter are studied. It is noted that the median absolute deviation and the modified Gini mean difference estimator have asymptotically normal distributions and bounded influence functions, and are B-robust estimators; hence, unlike the standard deviation, they are protected against the presence of outliers in the sample. Results of a comparison of scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified Gini mean difference estimator is considered.
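
    Sketches of the classical forms of the two estimators mentioned, with the usual Gaussian consistency constants; the paper's modified and adaptive variants are not reproduced here.

    ```python
    # Classical robust scale estimators with Gaussian consistency constants.
    import numpy as np

    def mad(x):
        """Median absolute deviation, scaled to estimate sigma under normality."""
        return 1.4826 * np.median(np.abs(x - np.median(x)))

    def gini_scale(x):
        """Gini mean difference E|X - Y|, times sqrt(pi)/2 to estimate sigma."""
        n = len(x)
        g = np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))
        return np.sqrt(np.pi) / 2.0 * g

    rng = np.random.default_rng(2)
    x = rng.normal(0.0, 2.0, 500)
    x[:10] += 25.0                                   # contaminate with outliers
    print("sample SD:", x.std(ddof=1))               # inflated by the outliers
    print("MAD scale:", mad(x), " Gini scale:", gini_scale(x))
    ```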

  15. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO₃ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
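
    A minimal sketch of the fitting idea, not of the VA02A-based program: the standard masses enter as fit parameters alongside the calibration-curve parameters (a linear curve is assumed here for brevity), with weights from both the system error and the 0.2% mass error. All numbers and names are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      m_obs = np.array([0.1, 0.25, 0.5, 0.75, 1.0])      # nominal masses, mg
      y_obs = np.array([102., 260., 515., 770., 1030.])  # detector response
      sig_m = 0.002 * m_obs                              # 0.2% gravimetric error
      sig_y = 5.0                                        # system error (counts)

      def chi2(p):
          a, b = p[:2]      # calibration-curve parameters
          m = p[2:]         # "true" masses, fitted along with a and b
          r_y = (y_obs - (a + b * m)) / sig_y            # system-error residuals
          r_m = (m - m_obs) / sig_m                      # mass-error residuals
          return np.sum(r_y**2) + np.sum(r_m**2)

      p0 = np.concatenate(([0.0, 1000.0], m_obs))
      fit = minimize(chi2, p0, method="BFGS")
      print(fit.x[:2])      # best-fit calibration parameters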

  16. An Analytical Framework for Fast Estimation of Capacity and Performance in Communication Networks

    DTIC Science & Technology

    2012-01-25

    standard random graph (due to Erdős–Rényi) in the regime where the average degrees remain fixed (and above 1) and the number of nodes gets large, is not...abs/1010.3305 (Oct 2010). [6] O. Narayan, I. Saniee, G. H. Tucci, “Lack of Spectral Gap and Hyperbolicity in Asymptotic Erdős–Rényi Random Graphs”

  17. Adaptive integral robust control and application to electromechanical servo systems.

    PubMed

    Deng, Wenxiang; Yao, Jianyong

    2017-03-01

    This paper proposes a continuous adaptive integral robust control with robust integral of the sign of the error (RISE) feedback for a class of uncertain nonlinear systems, in which the RISE feedback gain is adapted online to ensure robustness against disturbances without prior knowledge of the bounds of the additive disturbances. In addition, an adaptive compensation integrated with the proposed adaptive RISE feedback term is also constructed to further reduce design conservatism when parametric uncertainties are also present in the system. Lyapunov analysis reveals that the proposed controllers guarantee that the tracking errors converge asymptotically to zero with continuous control efforts. To illustrate the high performance nature of the developed controllers, numerical simulations are provided. Finally, an application case of an actual electromechanical servo system driven by a motor is also studied, with some specific design considerations, and comparative experimental results are obtained to verify the effectiveness of the proposed controllers.

  18. A new smooth robust control design for uncertain nonlinear systems with non-vanishing disturbances

    NASA Astrophysics Data System (ADS)

    Xian, Bin; Zhang, Yao

    2016-06-01

    In this paper, we consider the control problem for a general class of nonlinear systems subject to uncertain dynamics and non-vanishing disturbances. A smooth nonlinear control algorithm is presented to tackle these uncertainties and disturbances. The proposed control design employs the integral of a nonlinear sigmoid function to compensate for the uncertain dynamics, and achieves uniformly semi-global practical asymptotic stable tracking control of the system outputs. A novel Lyapunov-based stability analysis is employed to prove the convergence of the tracking errors and the stability of the closed-loop system. Numerical simulation results on a two-link robot manipulator are presented to illustrate the performance of the proposed control algorithm compared with a boundary-layer sliding mode controller and the robust integral of the sign of the error (RISE) control design. Furthermore, real-time experimental results for the attitude control of a quadrotor helicopter are also included to confirm the effectiveness of the proposed algorithm.

  19. Astrometry of OH/IR Stars Using 1612 MHz Hydroxyl Masers. I. Annual Parallaxes of WX Psc and OH138.0+7.2

    NASA Astrophysics Data System (ADS)

    Orosz, G.; Imai, H.; Dodson, R.; Rioja, M. J.; Frey, S.; Burns, R. A.; Etoka, S.; Nakagawa, A.; Nakanishi, H.; Asaki, Y.; Goldman, S. R.; Tafoya, D.

    2017-03-01

    We report on the measurement of the trigonometric parallaxes of 1612 MHz hydroxyl masers around two asymptotic giant branch stars, WX Psc and OH 138.0+7.2, using the NRAO Very Long Baseline Array with in-beam phase referencing calibration. We obtain a 3σ upper limit of ≤5.3 mas on the parallax of WX Psc, corresponding to a lower limit distance estimate of ≳190 pc. The obtained parallax of OH 138.0+7.2 is 0.52 ± 0.09 mas (±18%), corresponding to a distance of 1.9 (+0.4, −0.3) kpc, making this the first hydroxyl maser parallax below one milliarcsecond. We also introduce a new method of error analysis for detecting systematic errors in the astrometry. Finally, we compare our trigonometric distances to published phase-lag distances toward these stars and find a good agreement between the two methods.

  20. Local synchronization of chaotic neural networks with sampled-data and saturating actuators.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2014-12-01

    This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller with respect to the actuator saturation is designed to ensure that the master neural networks and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.

  1. Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong

    2018-06-01

    For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since the pressure information of the brake clutch is essential for the mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties or disturbances are considered as additional inputs, and the observer is designed so that the error dynamics are input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved by two optimization procedures under the constraints of linear matrix inequalities. The proposed observer is validated by offline simulation and online testing; the results show that the observer performs well during the mode transition: the estimation error remains within a reasonable range and, more importantly, the error dynamics are asymptotically stable.

  2. Neural network-based position synchronised internal force control scheme for cooperative manipulator system

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Xu, Fan; Lu, GuoDong

    2017-09-01

    Simultaneous position and internal force control poses more complex problems for cooperative manipulator systems than for a single manipulator. In the presence of unwanted parametric and modelling uncertainties as well as external disturbances, a decentralised position synchronised force control scheme is proposed. With a feedforward neural network estimating engine, a precise model of the system dynamics is not required. Unlike conventional cooperative or synchronised controllers, virtual position and virtual synchronisation errors are introduced for internal force tracking control and task space position synchronisation. Meanwhile, joint space synchronisation and force measurement are unnecessary. Through simulation studies and analysis, the position and the internal force errors are shown to asymptotically converge to zero. Moreover, the controller exhibits different characteristics with selected synchronisation factors. Under certain settings, it can deal with temporary cooperation by an intelligent retreat mechanism, where less internal force occurs and rigid collision can be avoided. Using a Lyapunov stability approach, the controller is proven to be robust in the face of the aforementioned uncertainties.

  3. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
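
    A sketch of the constrained-regression view on synthetic data: the feature weights are estimated by nonnegative least squares, and empirical standard errors come from a bootstrap, in the spirit of the Monte Carlo evaluation described above. The design matrix and feature weights are invented for illustration.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      X = rng.random((60, 4))                  # design built from known features
      beta = np.array([1.0, 0.0, 2.0, 0.5])    # true nonnegative feature weights
      y = X @ beta + rng.normal(0, 0.1, 60)    # observed proximities

      b_hat, _ = nnls(X, y)                    # positivity-restricted estimate

      boot = np.empty((500, 4))
      for i in range(500):
          idx = rng.integers(0, 60, 60)        # resample rows with replacement
          boot[i], _ = nnls(X[idx], y[idx])
      se = boot.std(axis=0, ddof=1)            # empirical standard errors
      print(b_hat, se)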

  4. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s² + P_i² + P_φ² − 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method.
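
    The quoted first-order formula is direct to implement; the sketch below uses illustrative percentage errors and sets C = 1, and shows how a positive correlation r between the track densities reduces the propagated error.

      import numpy as np

      def age_percent_error(P_s, P_i, P_phi, r, C=1.0):
          # P_A = C * [P_s^2 + P_i^2 + P_phi^2 - 2*r*P_s*P_i]^(1/2)
          return C * np.sqrt(P_s**2 + P_i**2 + P_phi**2 - 2.0 * r * P_s * P_i)

      print(age_percent_error(5.0, 5.0, 2.0, r=0.0))  # ~7.3%, uncorrelated
      print(age_percent_error(5.0, 5.0, 2.0, r=0.8))  # ~3.7%, with correlation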

  5. Reduction of medication errors related to sliding scale insulin by the introduction of a standardized order sheet.

    PubMed

    Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori

    2017-06-01

    Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI.

  6. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be produced while following the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.

  7. Properties of knotted ring polymers. I. Equilibrium dimensions.

    PubMed

    Mansfield, Marc L; Douglas, Jack F

    2010-07-28

    We report calculations on three classes of knotted ring polymers: (1) simple-cubic lattice self-avoiding rings (SARs), (2) "true" theta-state rings, i.e., SARs generated on the simple-cubic lattice with an attractive nearest-neighbor contact potential (theta-SARs), and (3) ideal, Gaussian rings. Extrapolations to large polymerization index N imply knot localization in all three classes of chains. Extrapolations of our data are also consistent with conjectures found in the literature which state that (1) R_g → A·N^ν asymptotically for ensembles of random knots restricted to any particular knot state, including the unknot; (2) A is universal across knot types for any given class of flexible chains; and (3) ν is equal to the standard self-avoiding walk (SAW) exponent (≈0.588) for all three classes of chains (SARs, theta-SARs, and ideal rings). However, current computer technology is inadequate to directly sample the asymptotic domain, so that we remain in a crossover scaling regime for all accessible values of N. We also observe that R_g ∼ p^(−0.27), where p is the "rope length" of the maximally inflated knot. This scaling relation holds in the crossover regime, but we argue that it is unlikely to extend into the asymptotic scaling regime where knots become localized.

  8. Hawking radiation power equations for black holes

    NASA Astrophysics Data System (ADS)

    Mistry, Ravi; Upadhyay, Sudhaker; Ali, Ahmed Farag; Faizal, Mir

    2017-10-01

    We derive the Hawking radiation power equations for asymptotically flat, asymptotically anti-de Sitter (AdS), and asymptotically de Sitter (dS) black holes. This is done by using the greybody factor for these black holes. We observe that the radiation power equation for asymptotically flat black holes, corresponding to the greybody factor at low frequency, depends on both the Hawking temperature and the horizon radius. However, for the greybody factor at asymptotic frequency, it depends only on the Hawking temperature. We also obtain the power equation for asymptotically AdS black holes both below and above the critical frequency. The radiation power equation at asymptotic frequency is the same for both Schwarzschild AdS and Reissner-Nordström AdS solutions and depends only on the Hawking temperature. We also discuss the power equation for asymptotically dS black holes at low frequency, in both even and odd dimensions.

  9. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  10. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  11. Maxima of two random walks: Universal statistics of lead changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Naim, E.; Krapivsky, P. L.; Randon-Furling, J.

    2016-04-18

    In this study, we investigate statistics of lead changes of the maxima of two discrete-time random walks in one dimension. We show that the average number of lead changes grows as π⁻¹ ln t in the long-time limit. We present theoretical and numerical evidence that this asymptotic behavior is universal. Specifically, this behavior is independent of the jump distribution: the same asymptotic underlies standard Brownian motion and symmetric Lévy flights. We also show that the probability to have at most n lead changes behaves as t^(−1/4)(ln t)^n for Brownian motion and as t^(−β(μ))(ln t)^n for symmetric Lévy flights with index μ. The decay exponent β ≡ β(μ) varies continuously with the Lévy index when 0 < μ < 2, and remains constant at β = 1/4 for μ > 2.
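
    A Monte Carlo sketch (not the authors' code) of the π⁻¹ ln t law for two independent ±1 random walks; ties in the running maxima are skipped, and since the growth is logarithmic the agreement at modest t is only approximate.

      import numpy as np

      rng = np.random.default_rng(2)
      T, trials = 10000, 2000
      changes = 0.0
      for _ in range(trials):
          a = np.cumsum(rng.choice([-1, 1], T))      # walk 1
          b = np.cumsum(rng.choice([-1, 1], T))      # walk 2
          lead = np.sign(np.maximum.accumulate(a) - np.maximum.accumulate(b))
          lead = lead[lead != 0]                     # drop ties in the maxima
          changes += np.sum(lead[1:] != lead[:-1])   # sign flips = lead changes
      print(changes / trials, np.log(T) / np.pi)     # simulated vs asymptotic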

  12. Contaminant transport from point source on water surface in open channel flow with bed absorption

    NASA Astrophysics Data System (ADS)

    Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian

    2018-06-01

    Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for a most typical case of a point source release on the free water surface in a channel flow with bed absorption is presented by means of Chatwin's long-time asymptotic technique. Five basic characteristics of Taylor dispersion and the vertical mean concentration distribution with skewness and kurtosis modifications are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution shows an asymptotic Gaussian distribution at large time which is unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are essential for practical applications subject to strict environmental standards.

  13. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  14. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
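
    A small illustration of the suggested square-root transformation, assuming Poisson-like counts: the raw variance grows with the mean, while the variance of sqrt(counts) stays near 0.25 across count levels.

      import numpy as np
      rng = np.random.default_rng(3)

      for lam in (2.0, 8.0, 32.0):
          counts = rng.poisson(lam, 100000)          # simulated point counts
          print(lam, counts.var(), np.sqrt(counts).var())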

  15. Augmented reality fluoroscopy simulation of the guide-wire insertion in DHS surgery: A proof of concept study.

    PubMed

    van Duren, B H; Sugand, K; Wescott, R; Carrington, R; Hart, A

    2018-05-01

    Hip fractures contribute to a significant clinical burden globally, with over 1.6 million cases per annum and up to 30% mortality rate within the first year. Insertion of a dynamic hip screw (DHS) is a frequently performed procedure to treat extracapsular neck of femur fractures. Poorly performed DHS fixation of extracapsular neck of femur fractures can result in poor mobilisation, chronic pain, and an increased cut-out rate requiring revision surgery. A realistic, affordable, and portable fluoroscopic simulation system can improve performance metrics in trainees, including the tip-apex distance (the only clinically validated outcome), and improve outcomes. We developed a digital fluoroscopic imaging simulator using orthogonal cameras to track coloured markers attached to the guide-wire, creating a virtual overlay on fluoroscopic images of the hip. To test the accuracy with which the augmented reality system could track a guide-wire, a standard workshop femur was used to calibrate the system with a positional marker fixed to indicate the apex; this allowed the guide-wire tip-apex distance (TAD) calculated by the system to be compared with that physically measured. Tests were undertaken to determine: (1) how well the apex could be targeted; (2) the accuracy of the calculated TAD; (3) the number of iterations through the algorithm giving the optimal accuracy-time relationship. The calculated TAD was found to have an average root mean square error of 4.2 mm. The accuracy of the algorithm was shown to increase with the number of iterations up to 20, beyond which the error asymptotically converged to an error of 2 mm. This work demonstrates a novel augmented reality simulation of guide-wire insertion in DHS surgery. To our knowledge this has not been previously achieved. In contrast to virtual reality, augmented reality is able to simulate fluoroscopy while allowing the trainee to interact with real instrumentation and perform the procedure on workshop bone models.

  16. Supersymmetric asymptotic safety is not guaranteed

    DOE PAGES

    Intriligator, Kenneth; Sannino, Francesco

    2015-11-05

    It was recently shown that certain perturbatively accessible, non-supersymmetric gauge-Yukawa theories have UV asymptotic safety, without asymptotic freedom: the UV theory is an interacting RG fixed point, and the IR theory is free. We here investigate the possibility of asymptotic safety in supersymmetric theories, and use unitarity bounds, and the a-theorem, to rule it out in broad classes of theories. The arguments apply without assuming perturbation theory. Therefore, the UV completion of a non-asymptotically free susy theory must have additional, non-obvious degrees of freedom, such as those of an asymptotically free (perhaps magnetic dual) extension.

  17. Asymptotic symmetries on Killing horizons

    NASA Astrophysics Data System (ADS)

    Koga, Jun-Ichirou

    2001-12-01

    We investigate asymptotic symmetries regularly defined on spherically symmetric Killing horizons in Einstein theory with or without the cosmological constant. These asymptotic symmetries are described by asymptotic Killing vectors, along which the Lie derivatives of perturbed metrics vanish on a Killing horizon. We derive the general form of the asymptotic Killing vectors and find that the group of asymptotic symmetries consists of rigid O(3) rotations of a horizon two-sphere and supertranslations along the null direction on the horizon, which depend arbitrarily on the null coordinate as well as the angular coordinates. By introducing the notion of asymptotic Killing horizons, we also show that local properties of Killing horizons are preserved not only under diffeomorphisms but also under nontrivial transformations generated by the asymptotic symmetry group. Although the asymptotic symmetry group contains the Diff(S1) subgroup, which results from supertranslations dependent only on the null coordinate, it is shown that the Poisson brackets algebra of the conserved charges conjugate to asymptotic Killing vectors does not acquire nontrivial central charges. Finally, by considering extended symmetries, we discuss the fact that unnatural reduction of the symmetry group is necessary in order to obtain the Virasoro algebra with nontrivial central charges, which is not justified when we respect the spherical symmetry of Killing horizons.

  18. A simple all-time model for the birth, big bang, and death of the universe

    NASA Astrophysics Data System (ADS)

    Fischer, Arthur E.

    We model the standard ΛCDM model of the universe by the spatially flat FLRW line element ds²_ΛCDM = −c²dt² + [8πGρ_{m,0}/(Λc²)]^{2/3} sinh^{4/3}(√(3Λ)ct/2) dσ²_Euclid, which we extend for all time t ∈ (−∞,∞). Although there is a cosmological singularity at the big bang t = 0, since the spatial part of the metric collapses to zero, nevertheless this line element is defined for all time t ∈ (−∞,∞), is C^∞ for all t ≠ 0, is C¹ differentiable at t = 0, and is non-degenerate and solves Friedmann's equation for all t ≠ 0. Thus, we can use this extended line element to model the universe from its past-asymptotic initial state dS4− at t = −∞, through the big bang at t = 0, and onward to its future-asymptotic final state dS4+ at t = ∞. Since in this model the universe existed before the big bang, we conclude that (1) the universe was not created de novo at the big bang and (2) cosmological singularities such as black holes or the big bang itself need not be an end to spacetime. Our model shows that the universe was asymptotically created de novo out of nothing at t = −∞ from an unstable vacuum negative half de Sitter ds²_dS4− initial state and then dies asymptotically at t = ∞ as the stable positive half de Sitter ds²_dS4+ final state. Since the de Sitter states are vacuum matter states, our model shows that the universe was created from nothing at t = −∞ and dies at t = ∞ to nothing.

  19. Fisher information for two gamma frailty bivariate Weibull models.

    PubMed

    Bjarnason, H; Hougaard, P

    2000-03-01

    The asymptotic properties of frailty models for multivariate survival data are not well understood. To study this aspect, the Fisher information is derived in the standard bivariate gamma frailty model, where the survival distribution is of Weibull form conditional on the frailty. For comparison, the Fisher information is also derived in the bivariate gamma frailty model, where the marginal distribution is of Weibull form.

  20. A Didactic Presentation of Snijders's "l[subscript z]*" Index of Person Fit with Emphasis on Response Model Selection and Ability Estimation

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles; Beland, Sebastien

    2012-01-01

    This paper focuses on two likelihood-based indices of person fit, the index "l[subscript z]" and Snijders's modified index "l[subscript z]*". The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample…

  1. Biases and Standard Errors of Standardized Regression Coefficients

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2011-01-01

    The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
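
    A sketch of one practical check on the result summarized above: compare the sample standardized coefficients with bootstrap standard errors that treat the predictors as random. Data are synthetic, and the snippet does not reproduce the O(1/n) bias corrections derived in the paper.

      import numpy as np
      rng = np.random.default_rng(4)

      n = 200
      X = rng.normal(size=(n, 2))
      y = 0.4 * X[:, 0] + rng.normal(size=n)

      def std_beta(X, y):
          # Standardized coefficients: regress z-scored y on z-scored X.
          Xs = (X - X.mean(0)) / X.std(0, ddof=1)
          ys = (y - y.mean()) / y.std(ddof=1)
          return np.linalg.lstsq(Xs, ys, rcond=None)[0]

      idx = rng.integers(0, n, size=(1000, n))       # bootstrap resamples
      boot = np.array([std_beta(X[i], y[i]) for i in idx])
      print(std_beta(X, y), boot.std(0, ddof=1))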

  2. Stokes phenomena in discrete Painlevé II.

    PubMed

    Joshi, N; Lustri, C J; Luu, S

    2017-02-01

    We consider the asymptotic behaviour of the second discrete Painlevé equation in the limit as the independent variable becomes large. Using asymptotic power series, we find solutions that are asymptotically pole-free within some region of the complex plane. These asymptotic solutions exhibit Stokes phenomena, which are typically invisible to classical power series methods. We subsequently apply exponential asymptotic techniques to investigate such phenomena, and obtain mathematical descriptions of the rapid switching behaviour associated with Stokes curves. Through this analysis, we determine the regions of the complex plane in which the asymptotic behaviour is described by a power series expression, and find that the behaviour of these asymptotic solutions shares a number of features with the tronquée and tri-tronquée solutions of the second continuous Painlevé equation.

  4. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  5. Asymptotic structure of space-time with a positive cosmological constant

    NASA Astrophysics Data System (ADS)

    Kesavan, Aruna

    In general relativity a satisfactory framework for describing isolated systems exists when the cosmological constant Lambda is zero. The detailed analysis of the asymptotic structure of the gravitational field, which constitutes the framework of asymptotic flatness, lays the foundation for research in diverse areas in gravitational science. However, the framework is incomplete in two respects. First, asymptotic flatness provides well-defined expressions for physical observables such as energy and momentum as 'charges' of asymptotic symmetries at null infinity, ℐ⁺. But the asymptotic symmetry group, called the Bondi-Metzner-Sachs (BMS) group, is infinite-dimensional, and a tensorial expression for the 'charge' integral of an arbitrary BMS element is missing. We address this issue by providing a charge formula which is a 2-sphere integral over fields local to the 2-sphere and refers to no extraneous structure. The second, and more significant, shortcoming is that observations have established that Lambda is not zero but positive in our universe. Can the framework describing isolated systems and their gravitational radiation be extended to incorporate this fact? In this dissertation we show that, unfortunately, the standard framework does not extend from the Lambda = 0 case to the Lambda > 0 case in a physically useful manner. In particular, we do not have an invariant notion of gravitational waves in the non-linear regime, nor an analog of the Bondi 'news tensor', nor positive energy theorems. In addition, we argue that the stronger boundary condition of conformal flatness of the intrinsic metric on ℐ⁺, which reduces the asymptotic symmetry group from Diff(ℐ⁺) to the de Sitter group, is insufficient to characterize gravitational fluxes and is physically unreasonable. To obtain guidance for the full non-linear theory with Lambda > 0, linearized gravitational waves in de Sitter space-time are analyzed in detail. i) We show explicitly that conformal flatness of the boundary removes half the degrees of freedom of the gravitational field by hand and is not justified by physical considerations; ii) We obtain gauge invariant expressions of energy-momentum and angular momentum fluxes carried by gravitational waves in terms of fields defined at ℐ⁺; iii) We demonstrate that the flux formulas reduce to the familiar ones in Minkowski spacetime in spite of the fact that the limit Lambda → 0 is discontinuous (since, in particular, ℐ⁺ changes its space-like character to null in the limit); iv) We obtain a generalization of Einstein's 1918 quadrupole formula for power emission by a linearized source to include a positive Lambda; and, finally, v) We show that, although the energy of linearized gravitational waves can be arbitrarily negative in general, gravitational waves emitted by physically reasonable sources carry positive energy.

  6. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error, and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.

  7. Extreme between-study homogeneity in meta-analyses could offer useful insights.

    PubMed

    Ioannidis, John P A; Trikalinos, Thomas A; Zintzaras, Elias

    2006-10-01

    Meta-analyses are routinely evaluated for the presence of large between-study heterogeneity. We examined whether it is also important to probe whether there is extreme between-study homogeneity. We used heterogeneity tests with left-sided statistical significance for inference and developed a Monte Carlo simulation test for testing extreme homogeneity in risk ratios across studies, using the empiric distribution of the summary risk ratio and heterogeneity statistic. A left-sided P=0.01 threshold was set for claiming extreme homogeneity to minimize type I error. Among 11,803 meta-analyses with binary contrasts from the Cochrane Library, 143 (1.21%) had left-sided P-value <0.01 for the asymptotic Q statistic and 1,004 (8.50%) had left-sided P-value <0.10. The frequency of extreme between-study homogeneity did not depend on the number of studies in the meta-analyses. We identified examples where extreme between-study homogeneity (left-sided P-value <0.01) could result from various possibilities beyond chance. These included inappropriate statistical inference (asymptotic vs. Monte Carlo), use of a specific effect metric, correlated data or stratification using strong predictors of outcome, and biases and potential fraud. Extreme between-study homogeneity may provide useful insights about a meta-analysis and its constituent studies.
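
    A simplified sketch of a left-sided homogeneity test on study-level log risk ratios: Cochran's Q is referred both to its asymptotic chi-square law and to a Monte Carlo null built around the pooled estimate. The paper's simulation resamples the underlying event tables, which this sketch does not attempt; all numbers are illustrative.

      import numpy as np
      from scipy.stats import chi2

      theta = np.array([0.10, 0.12, 0.11, 0.09, 0.10])  # log risk ratios
      v = np.array([0.02, 0.03, 0.025, 0.02, 0.03])     # their variances
      w = 1.0 / v
      pooled = np.sum(w * theta) / np.sum(w)
      Q = np.sum(w * (theta - pooled) ** 2)             # Cochran's Q

      p_asym = chi2.cdf(Q, df=len(theta) - 1)           # left-sided asymptotic p

      rng = np.random.default_rng(5)
      sim = rng.normal(pooled, np.sqrt(v), size=(20000, len(v)))
      pooled_s = (sim * w).sum(1) / w.sum()
      Q_s = (w * (sim - pooled_s[:, None]) ** 2).sum(1)
      p_mc = np.mean(Q_s <= Q)                          # left-sided Monte Carlo p
      print(Q, p_asym, p_mc)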

  8. Scalar and tensor perturbations in loop quantum cosmology: high-order corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Tao; Wang, Anzhong; Wu, Qiang

    2015-10-01

    Loop quantum cosmology (LQC) provides promising resolutions to the trans-Planckian issue and initial singularity arising in the inflationary models of general relativity. In general, due to different quantization approaches, LQC involves two types of quantum corrections, the holonomy and inverse-volume, to both the cosmological background evolution and the perturbations. In this paper, using the third-order uniform asymptotic approximations, we derive explicitly the observational quantities of the slow-roll inflation in the framework of LQC with these quantum corrections. We calculate the power spectra, spectral indices, and running of the spectral indices for both scalar and tensor perturbations, whereby the tensor-to-scalar ratio is obtained. We expand all the observables at the time when the inflationary mode crosses the Hubble horizon. As the upper error bounds for the uniform asymptotic approximation at the third order are ≲0.15%, these results represent the most accurate results obtained so far in the literature. It is also shown that with the inverse-volume corrections, both scalar and tensor spectra exhibit a deviation from the usual shape at large scales. Then, using the Planck, BAO and SN data we obtain new constraints on quantum gravitational effects from LQC corrections, and find that such effects could be within the detection of the forthcoming experiments.

  9. On the renormalisation of the diffusion asymptotics in the problem of reflection of a narrow optical beam from a biological medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appanov, A Yu; Barabanenkov, Yu N

    2005-12-31

    An analytic hybrid method is considered for solving the stationary radiation transfer equation in the problem of reflection of a narrow laser beam from biological media such as the 2% aqueous solution of intralipid and an erythrocyte suspension with the volume concentration (hematocrit) H = 0.41. The method is based on the reciprocity of the Green function in the radiation transfer theory and on the iteration solution of the integral equation for this function. As a result, the ray intensity is represented as a sum of two terms. The first of them describes the contribution of finite-order scattering to the intensity of a beam diffusely reflected from the medium. The second term contains the explicit analytic expression for a spatially distributed effective source of diffuse radiation emerging from the deep layers of the medium to the surface. This approach substantially improves the diffusion approximation for the problem under study and allows one to obtain the uniform asymptotics of the reflection coefficient at the specified interval of distances between the radiation source and detector on the medium surface with the relative error within ±6% for the 2% intralipid emulsion and erythrocyte suspension (H = 0.41). (radiation scattering)

  10. A direct method for the solution of unsteady two-dimensional incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ghia, K. N.; Osswald, G. A.; Ghia, U.

    1983-01-01

    The unsteady incompressible Navier-Stokes equations are formulated in terms of vorticity and stream function in generalized curvilinear orthogonal coordinates to facilitate analysis of flow configurations with general geometries. The numerical method developed solves the conservative form of the transport equation using the alternating-direction implicit method, whereas the stream-function equation is solved by direct block Gaussian elimination. The method is applied to a model problem of flow over a back-step in a doubly infinite channel, using clustered conformal coordinates. One-dimensional stretching functions, dependent on the Reynolds number and the asymptotic behavior of the flow, are used to provide suitable grid distribution in the separation and reattachment regions, as well as in the inflow and outflow regions. The optimum grid distribution selected attempts to honor the multiple length scales of the separated-flow model problem. The asymptotic behavior of the finite-differenced transport equation near infinity is examined and the numerical method is carefully developed so as to lead to spatially second-order accurate wiggle-free solutions, i.e., with minimum dispersive error. Results have been obtained in the entire laminar range for the backstep channel and are in good agreement with the available experimental data for this flow problem.

  11. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
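
    A sketch of the classical indirect method of moments (one of the baselines compared above, not the new adaptive method): fit a Pearson type 3 to the log data by matching mean, variance, and skewness, then read a design quantile from the gamma distribution. The flow values are invented for illustration.

      import numpy as np
      from scipy.stats import gamma, skew

      flows = np.array([120., 310., 95., 480., 220., 150., 700., 260., 180., 390.])
      z = np.log(flows)
      m, s, g = z.mean(), z.std(ddof=1), skew(z, bias=False)

      alpha = (2.0 / g) ** 2           # gamma shape from skewness
      beta = s * g / 2.0               # (signed) scale
      tau = m - 2.0 * s / g            # location

      T = 100.0                        # return period in years
      p = 1.0 - 1.0 / T
      zp = tau + beta * gamma.ppf(p if beta > 0 else 1.0 - p, alpha)
      print(np.exp(zp))                # estimated 100-year flood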

  12. Algorithm for Stabilizing a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    This algorithm provides a new way to improve the accuracy and asymptotic behavior of a low-dimensional system based on the proper orthogonal decomposition (POD). Given a data set representing the evolution of a system of partial differential equations (PDEs), such as the Navier-Stokes equations for incompressible flow, one may obtain a low-dimensional model in the form of ordinary differential equations (ODEs) that should model the dynamics of the flow. Temporal sampling of the direct numerical simulation of the PDEs produces a spatial time series. The POD extracts the temporal and spatial eigenfunctions of this data set. Truncating the decomposition to retain only the most energetic modes and applying Galerkin projection of these modes onto the PDEs yields a dynamical system of ordinary differential equations for the time-dependent behavior of the flow. In practice, the steps leading to this system of ODEs entail numerically computing first-order derivatives of the mean data field and the eigenfunctions, and the computation of many inner products. This is far from a perfect process, and often results in a lack of long-term stability of the system and incorrect asymptotic behavior of the model. The algorithm described here is a new stabilization method that utilizes the temporal eigenfunctions to derive correction terms for the coefficients of the dynamical system, significantly reducing these errors.
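
    A minimal POD sketch on synthetic snapshot data: the SVD of the mean-subtracted snapshot matrix yields the spatial modes and temporal coefficients, and truncation keeps only the most energetic modes. The Galerkin projection and the correction terms of the stabilization algorithm itself are beyond this sketch.

      import numpy as np

      x = np.linspace(0.0, 1.0, 200)
      t = np.linspace(0.0, 10.0, 100)
      snapshots = (np.sin(np.pi * x)[:, None] * np.cos(t)[None, :]
                   + 0.1 * np.sin(3 * np.pi * x)[:, None] * np.sin(3 * t)[None, :])

      mean = snapshots.mean(axis=1, keepdims=True)
      U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

      energy = np.cumsum(S**2) / np.sum(S**2)
      r = int(np.searchsorted(energy, 0.999)) + 1   # modes with 99.9% of energy
      spatial_modes = U[:, :r]                      # spatial eigenfunctions
      temporal_coeffs = (S[:, None] * Vt)[:r]       # temporal eigenfunctions
      print(f"retained {r} of {len(S)} modes")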

  13. Asymptotically reliable transport of multimedia/graphics over wireless channels

    NASA Astrophysics Data System (ADS)

    Han, Richard Y.; Messerschmitt, David G.

    1996-03-01

    We propose a multiple-delivery transport service tailored for graphics and video transported over connections with wireless access. This service operates at the interface between the transport and application layers, balancing the subjective delay and image quality objectives of the application with the low reliability and limited bandwidth of the wireless link. While techniques like forward-error correction, interleaving and retransmission improve reliability over wireless links, they also increase latency substantially when bandwidth is limited. Certain forms of interactive multimedia datatypes can benefit from an initial delivery of a corrupt packet to lower the perceptual latency, as long as reliable delivery occurs eventually. Multiple delivery of successively refined versions of the received packet, terminating when a sufficiently reliable version arrives, exploits the redundancy inherently required to improve reliability without a traffic penalty. Modifications to automatic repeat request (ARQ) methods to implement this transport service are proposed, which we term 'leaky ARQ'. For the specific case of pixel-coded window-based text/graphics, we describe additional functions needed to more effectively support urgent delivery and asymptotic reliability. X server emulation suggests that users will accept a multi-second delay between a (possibly corrupt) packet and the ultimate reliably-delivered version. The relaxed delay for reliable delivery can be exploited to improve traffic capacity by scheduling retransmissions.
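
    A toy sketch of the "leaky ARQ" idea as described above: every received version of a packet, corrupt or not, is passed up to the application immediately, and retransmission stops once a version passes its integrity check. All names are illustrative; this is not the paper's implementation.

      def leaky_arq_receive(deliveries, checksum_ok, deliver_to_app):
          """deliveries: successive received versions of one packet."""
          for version in deliveries:
              deliver_to_app(version)       # multiple delivery: app sees each pass
              if checksum_ok(version):
                  return True               # reliable version arrived; stop ARQ
          return False                      # otherwise the sender keeps resending

      # Example: two corrupt copies precede the clean one.
      log = []
      leaky_arq_receive(["v1*", "v2*", "v3"], lambda v: "*" not in v, log.append)
      print(log)                            # ['v1*', 'v2*', 'v3']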

  14. Group Variable Selection Via Convex Log-Exp-Sum Penalty with Application to a Breast Cancer Survivor Study

    PubMed Central

    Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace

    2017-01-01

    In many scientific and engineering applications, covariates are naturally grouped. When group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some fail to conduct within-group selection. Others are able to conduct both group and within-group selection, but the corresponding objective functions are non-convex. Such non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within the group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life for breast cancer survivors. PMID:25257196
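
    A sketch of a log-exp-sum style group penalty. The exact parameterization in the article may differ, so the form below (a log of summed exponentials of absolute coefficients per group, which is convex) should be read as an assumption for illustration only.

      import numpy as np

      def les_penalty(beta, groups, lam):
          # Assumed form: lam * sum over groups g of log(sum_{j in g} exp(|beta_j|)).
          # Convex: log-sum-exp is convex and nondecreasing, |beta_j| is convex.
          return lam * sum(np.log(np.sum(np.exp(np.abs(beta[g])))) for g in groups)

      beta = np.array([0.0, 2.0, 0.5, 0.0, 0.0])
      groups = [np.array([0, 1, 2]), np.array([3, 4])]
      print(les_penalty(beta, groups, lam=0.1))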

  15. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.

  16. MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft

    PubMed Central

    Zhang, Jing

    2015-01-01

    This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique utilizing prior knowledge of plant models to recover control performance of an asymmetric structural damaged aircraft. A modification of linear model representation is given. With prior knowledge on structural damage, a polytope linear parameter varying (LPV) model is derived to cover all concerned damage conditions. An MRAC method is developed for the polytope model, of which the stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted and thus decreases computational cost and requires less input information. The method is validated by simulations on NASA generic transport model (GTM) with damage. PMID:26180839

  17. Experimental realization of equiangular three-state quantum key distribution

    PubMed Central

    Schiavon, Matteo; Vallone, Giuseppe; Villoresi, Paolo

    2016-01-01

    Quantum key distribution using three states in equiangular configuration combines a security threshold comparable to that of the Bennett-Brassard 1984 protocol and a quantum bit error rate (QBER) estimation that does not need to reveal part of the key. We implement an entanglement-based version of the Renes 2004 protocol, using only passive optical elements in a linear scheme for the positive-operator valued measure (POVM), generating an asymptotic secure key rate of more than 10 kbit/s, with a mean QBER of 1.6%. We then demonstrate its security in the case of finite key and evaluate the key rate for both collective and general attacks. PMID:27465643

  18. A Simple Approach to Achieve Modified Projective Synchronization between Two Different Chaotic Systems

    PubMed Central

    2013-01-01

    A new approach, the projective system approach, is proposed to realize modified projective synchronization between two different chaotic systems. By simple analysis of trajectories in the phase space, a projective system of the original chaotic systems is obtained to replace the error system in judging the occurrence of modified projective synchronization. Theoretical analysis and numerical simulations show that, although the projective system may not be unique, modified projective synchronization can be achieved provided that the origin of any of the projective systems is asymptotically stable. Furthermore, an example is presented to illustrate that even a necessary and sufficient condition for modified projective synchronization can be derived by using the projective system approach. PMID:24187522

  19. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    PubMed

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements.

  20. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for the case in which only intensity noise is present, and the other for the case in which only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, are also discussed.
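
    A minimal Monte Carlo sketch of this kind of random-error analysis, assuming a conventional N-step least-squares (synchronous) phase-shifting estimator with additive intensity noise; the wavelength, frame count, noise level, and trial count below are illustrative assumptions, not the paper's values.

```python
# Monte Carlo estimate of the surface-height standard deviation under
# intensity noise, for least-squares phase-shifting interferometry.
import numpy as np

rng = np.random.default_rng(0)
wavelength = 633e-9        # HeNe laser, metres (assumed)
N = 8                      # equally spaced phase-shifting frames
steps = 2 * np.pi * np.arange(N) / N
A, B = 1.0, 0.5            # bias and fringe amplitude (assumed)
sigma_I = 0.01             # intensity-noise standard deviation (assumed)
phi_true = 0.7             # true phase at one pixel, radians

trials = 20000
phi_hat = np.empty(trials)
for t in range(trials):
    I = A + B * np.cos(phi_true + steps) + sigma_I * rng.standard_normal(N)
    # For equally spaced steps the least-squares estimate reduces to
    # synchronous detection: tan(phi) = -sum(I sin) / sum(I cos).
    num = -np.sum(I * np.sin(steps))
    den = np.sum(I * np.cos(steps))
    phi_hat[t] = np.arctan2(num, den)

# Convert phase std to surface-height std (reflection doubles the path).
h_std = np.std(phi_hat) * wavelength / (4 * np.pi)
print(f"simulated height std: {h_std * 1e9:.3f} nm")
```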

  1. Transformations of asymptotically AdS hyperbolic initial data and associated geometric inequalities

    NASA Astrophysics Data System (ADS)

    Cha, Ye Sle; Khuri, Marcus

    2018-01-01

    We construct transformations which take asymptotically AdS hyperbolic initial data into asymptotically flat initial data, and which preserve relevant physical quantities. This is used to derive geometric inequalities in the asymptotically AdS hyperbolic setting from counterparts in the asymptotically flat realm, whenever a geometrically motivated system of elliptic equations admits a solution. The inequalities treated here relate mass, angular momentum, charge, and horizon area. Furthermore, new mass-angular momentum inequalities in this setting are conjectured and discussed.

  2. Asymptotic coefficients for one-interacting-level Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Lovett, R. J.

    1988-02-01

    The asymptotic behavior of general Voigt profiles with general width and shift functions has been determined by Cope and Lovett (1987). The resulting asymptotic coefficients are functions of the perturber/radiator mass ratio; also, the coefficients for the one-interacting-level (OIL) profiles proposed by Ward et al. (1974) were studied. In this paper, the behavior of the OIL asymptotic coefficients for large mass ratio values is determined, thereby providing a complete picture of OIL asymptotics for all mass ratios.

  3. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and to pointing error effects, following a misalignment fading model in which the effects of beam width, detector size and jitter variance are considered. The obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results further confirm the analytical results.

  4. A new convergence analysis and perturbation resilience of some accelerated proximal forward-backward algorithms with errors

    NASA Astrophysics Data System (ADS)

    Reem, Daniel; De Pierro, Alvaro

    2017-04-01

    Many problems in science and engineering involve, as part of their solution process, the consideration of a separable function which is the sum of two convex functions, one of them possibly non-smooth. Recently a few works have discussed inexact versions of several accelerated proximal methods aiming at solving this minimization problem. This paper shows that inexact versions of a method of Beck and Teboulle (the fast iterative shrinkage-thresholding algorithm, FISTA) preserve, in a Hilbert space setting, the same (non-asymptotic) rate of convergence under some assumptions on the decay rate of the error terms. The notion of inexactness discussed here seems to be rather simple but, interestingly, when compared to related works, closely related decay rates of the error terms yield closely related convergence rates. The derivation sheds some light on the somewhat mysterious origin of some parameters which appear in various accelerated methods. A consequence of the analysis is that the accelerated method is perturbation resilient, making it suitable, in principle, for the superiorization methodology. By taking this into account, we re-examine the superiorization methodology and significantly extend its scope. This work was supported by FAPESP 2013/19504-9. The second author was supported also by CNPq grant 306030/2014-4.
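
    For readers unfamiliar with the method being perturbed, here is a minimal exact (error-free) FISTA sketch for the composite problem min 0.5||Ax - b||^2 + lam*||x||_1; the problem sizes and regularization value are assumptions chosen only for illustration.

```python
# Exact FISTA for a LASSO instance: smooth least-squares term plus a
# non-smooth l1 term, handled by the soft-thresholding proximal map.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(fista(A, b, lam=0.1)[:8])
```

    In the inexact setting studied in the paper, the gradient and proximal steps above would be computed only to within error tolerances whose decay rate governs the preserved convergence rate.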

  5. Propagation of coherent light pulses with PHASE

    NASA Astrophysics Data System (ADS)

    Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.

    2014-09-01

    The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (stationary phase approximation), which is usually truncated at the 2nd order. The limits of this approximation as well as possible extensions to higher orders are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long-range slope errors of optical elements can be included by means of 8th-order polynomials in the optical element coordinates w and l. Only recently, a method for the description of short-range slope errors has been implemented. The accuracy of this method is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language; the latter method provides substantial flexibility. Optical elements including apertures can be combined, and complete wave packages can be propagated as well. Fourier propagators are included in the package, so the user may choose between a variety of propagators. Several means to speed up the computation were tested, among them parallelization in a multi-core environment and parallelization on a cluster.

  6. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
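
    A minimal sketch of FFT-based angular spectrum propagation from an input pressure plane to a parallel output plane, using the standard homogeneous-medium transfer function exp(i·k_z·z); the grid, frequency, sound speed, and square-piston source below are illustrative assumptions, not the simulation settings used in the study.

```python
# Angular spectrum propagation of a sampled 2D pressure plane.
import numpy as np

def angular_spectrum(p0, dx, z, k):
    """Propagate pressure plane p0 a distance z in a homogeneous medium."""
    n = p0.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz2 = k**2 - KX**2 - KY**2
    # Propagating components get exp(i*kz*z); evanescent ones decay,
    # since sqrt of a negative (complex-cast) kz2 is purely imaginary.
    kz = np.sqrt(kz2.astype(complex))
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(p0) * H)

c, f = 1500.0, 1e6                 # water, 1 MHz (assumed)
k = 2 * np.pi * f / c
dx = c / f / 4                     # quarter-wavelength sampling
p0 = np.zeros((256, 256), dtype=complex)
p0[96:160, 96:160] = 1.0           # uniform square piston as input plane
p_z = angular_spectrum(p0, dx, z=0.02, k=k)
print(np.abs(p_z).max())
```

    Per the abstract's findings, the input plane would be chosen slightly larger than the array aperture and located about one wavelength from the array.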

  7. Predicting the geographic distribution of a species from presence-only data subject to detection errors

    USGS Publications Warehouse

    Dorazio, Robert M.

    2012-01-01

    Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.

  8. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data are required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on a Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees that the system state asymptotically tracks the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  9. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means are presented, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes, determined from daily Dobson data. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.

  10. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
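
    A hedged sketch of the segmented-regression model underlying such interrupted time-series analyses, with a level-change term for the immediate drop and an interaction term for the post-implementation slope change; the synthetic data below merely mimic the reported baseline rate and slope change, and are not the study's data.

```python
# Segmented (interrupted time-series) regression: fit an intercept,
# baseline slope, immediate level change, and post-implementation
# slope change by ordinary least squares.
import numpy as np

rng = np.random.default_rng(2)
n_pre, n_post = 30, 28
t = np.arange(n_pre + n_post)                 # month index
post = (t >= n_pre).astype(float)             # 1 after implementation
t_post = np.where(post == 1, t - n_pre, 0.0)  # months since implementation

# Synthetic error rates: flat baseline, level drop, then downward slope.
y = 16.7 - 5.0 * post - 0.34 * t_post + rng.normal(0.0, 1.0, t.size)

X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"baseline slope {beta[1]:+.3f}, level change {beta[2]:+.2f}, "
      f"slope change {beta[3]:+.3f}")
```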

  11. Evaluation of lens distortion errors in video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo

    1993-01-01

    In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.

  12. On stability of fixed points and chaos in fractional systems.

    PubMed

    Edelman, Mark

    2018-02-01

    In this paper, we propose a method to calculate asymptotically period-two sinks and to define the range of stability of fixed points for a variety of discrete fractional systems of order 0<α<2. The method is tested on various forms of fractional generalizations of the standard and logistic maps. Based on our analysis, we conjecture that chaos is impossible in the corresponding continuous fractional systems.

  13. Phase noise analysis of voltage controlled oscillator used in cesium atomic clock

    NASA Astrophysics Data System (ADS)

    Zhi, Menghui; Tang, Liang; Qiao, Donghai

    2017-03-01

    The coherent population trapping (CPT) cesium frequency standard plays a significant role in precision missile guidance and in the global positioning system (GPS). A low-noise 4.596 GHz voltage controlled oscillator (VCO) is an indispensable part of the microwave signal source in a cesium frequency standard, and low phase noise is its most important and most difficult performance indicator. Starting from the phase noise analysis method proposed by Leeson, formulas are derived that relate the phase noise of the output signal of the oscillator feedback model to the phase fluctuation spectrum of the amplifier and the phase noise of the oscillator. Finally, an asymptote model of the microwave oscillator is proposed based on this derivation. The experiment shows that when the reverse bias voltage of the varactor diode is 1.8 V, the designed oscillation frequency of the VCO is 4.596 GHz, the output power is -1 dBm, and the DC power consumption is 19.6 mW. The trends of the simulated and measured phase noise curves conform to the asymptote model. The phase noise at 1 kHz and 10 kHz offsets is -60.86 and -86.58 dBc/Hz, respectively. The significance of this paper lies in determining the main factors influencing oscillator phase noise and in providing a guiding direction for the design of low-phase-noise VCOs.
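
    As a rough illustration of the kind of model involved, here is a sketch of one common textbook form of Leeson's phase-noise formula; the paper derives its own asymptote model, and the loaded Q, noise figure, signal power, and flicker corner used below are assumptions, not the paper's values.

```python
# One common form of Leeson's single-sideband phase-noise model:
# L(f) = 10*log10( (F*k*T / (2*P)) * (1 + (f0/(2*Q*f))^2) * (1 + fc/f) ).
import numpy as np

def leeson_dbc_per_hz(f_offset, f0, q_loaded, f_noise, p_sig_w, fc,
                      kT=4.0e-21):
    """Phase noise L(f) in dBc/Hz at offset f_offset from carrier f0."""
    term_resonator = 1.0 + (f0 / (2.0 * q_loaded * f_offset)) ** 2
    term_flicker = 1.0 + fc / f_offset
    L = (f_noise * kT / (2.0 * p_sig_w)) * term_resonator * term_flicker
    return 10.0 * np.log10(L)

f0 = 4.596e9                       # oscillation frequency from the paper
for f in (1e3, 1e4):               # the two offsets quoted in the abstract
    L = leeson_dbc_per_hz(f, f0, q_loaded=50, f_noise=6,
                          p_sig_w=0.8e-3, fc=10e3)
    print(f"{f:>8.0f} Hz offset: {L:.1f} dBc/Hz")
```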

  14. Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.

    PubMed

    Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L

    2018-05-01

    Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.

  15. Intimate Partner Violence, 1993-2010

    MedlinePlus

    ... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...

  16. Time-varying effect moderation using the structural nested mean model: estimation using inverse-weighted regression with residuals

    PubMed Central

    Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.

    2014-01-01

    This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
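
    A generic sketch of the case-resampling bootstrap standard error for a regression coefficient, the basic device behind the bootstrap-versus-asymptotic comparison mentioned above; the data and model here are synthetic stand-ins, not the SNMM estimator itself.

```python
# Nonparametric (case-resampling) bootstrap SE for an OLS slope.
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

def ols_slope(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot[b] = ols_slope(X[idx], y[idx])

print(f"bootstrap SE of slope: {boot.std(ddof=1):.4f}")
```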

  17. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  18. Sensory feedback in a bump attractor model of path integration.

    PubMed

    Poll, Daniel B; Nguyen, Khanh; Kilpatrick, Zachary P

    2016-04-01

    Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal's current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal's knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541-4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.

  19. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor K_E (the standard deviate). Various K_E values were explored; values of K_E larger than 8 were found physically unreasonable. It is concluded that the value of K_E should be in the range from 7 to 8. A unit error in estimating K_E translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one-degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dK_E = 1.0 (range 0.5-1.5) and an error in projected high air temperature dT_a = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dT_s = 0.8 °C.
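
    A minimal sketch of the standard deviate computation described above, applied to a synthetic partial maximum series; the temperatures below are invented for illustration.

```python
# Extreme stream temperature = mean of the partial maximum series
# plus K_E standard deviations.
import numpy as np

t_max_series = np.array([27.1, 26.4, 28.0, 27.5, 26.9, 27.8, 28.3])  # °C
mu = t_max_series.mean()
sigma = t_max_series.std(ddof=1)

for K_E in (7.0, 8.0):      # the range the authors found physically reasonable
    print(f"K_E = {K_E:.0f}: extreme estimate {mu + K_E * sigma:.1f} °C")

# Sensitivity quoted in the abstract: ~0.5 °C per unit error in K_E,
# so dK_E = 1.0 contributes about 0.5 °C to the projected error.
```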

  20. Decreasing patient identification band errors by standardizing processes.

    PubMed

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.

  1. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
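
    As a simplified illustration of weighting estimates by their reliability, the sketch below combines two hypothetical T-year flood estimates with inverse-variance weights in log space. The report's actual weights also account for the cross-correlation of residuals between methods, which this sketch ignores, and all numbers below are invented.

```python
# Inverse-variance weighting of two flood-frequency estimates, treating
# the average standard errors of prediction (in percent) as log-space
# standard deviations of independent errors -- a deliberate simplification.
import numpy as np

def weighted_estimate(estimates, se_percent):
    est = np.asarray(estimates, dtype=float)
    var = (np.asarray(se_percent, dtype=float) / 100.0) ** 2
    w = (1.0 / var) / np.sum(1.0 / var)          # weights sum to 1
    combined = np.exp(np.sum(w * np.log(est)))   # geometric (log-space) mean
    se_combined = 100.0 * np.sqrt(1.0 / np.sum(1.0 / var))
    return combined, se_combined

q100_basin, q100_width = 850.0, 700.0   # hypothetical 100-year estimates, cfs
q, se = weighted_estimate([q100_basin, q100_width], [60.0, 90.0])
print(f"weighted Q100 ~ {q:.0f} cfs, SE ~ {se:.0f} percent")
```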

  2. Polynomial asymptotes of the second kind

    NASA Astrophysics Data System (ADS)

    Dobbs, David E.

    2011-03-01

    This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and conics. Prerequisites include the division algorithm for polynomials with coefficients in the field of real numbers and elementary facts about limits from calculus. This note could be used as enrichment material in courses ranging from Calculus to Real Analysis to Abstract Algebra.
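
    A short worked example of the kind treated in the note, using only the division algorithm (the specific rational function is our choice):

```latex
% By the division algorithm,
%   x^3 - 2x^2 + 5 = (x - 1)(x^2 - x - 1) + 4,
% so
f(x) = \frac{x^3 - 2x^2 + 5}{x - 1} = x^2 - x - 1 + \frac{4}{x - 1},
\qquad f(x) - (x^2 - x - 1) = \frac{4}{x - 1} \to 0
\ \text{as}\ x \to \pm\infty,
% hence y = x^2 - x - 1 is a polynomial asymptote of f.
```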

  3. Asymptotics with positive cosmological constant

    NASA Astrophysics Data System (ADS)

    Bonga, Beatrice; Ashtekar, Abhay; Kesavan, Aruna

    2014-03-01

    Since observations to date imply that our universe has a positive cosmological constant, one needs an extension of the theory of isolated systems and gravitational radiation in full general relativity from asymptotically flat to asymptotically de Sitter space-times. Current definitions mimic the boundary conditions used in the asymptotically AdS context to conclude that the asymptotic symmetry group is the de Sitter group. However, these conditions severely restrict radiation and in fact rule out a non-zero flux of energy, momentum and angular momentum carried by gravitational waves. Therefore, these formulations of asymptotically de Sitter space-times are uninteresting beyond non-radiative spacetimes. The situation is compared and contrasted with conserved charges and fluxes at null infinity in asymptotically flat space-times.

  4. Asymptotic approximations to posterior distributions via conditional moment equations

    USGS Publications Warehouse

    Yee, J.L.; Johnson, W.O.; Samaniego, F.J.

    2002-01-01

    We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.

  5. Asymptotics of eigenvalues and eigenvectors of Toeplitz matrices

    NASA Astrophysics Data System (ADS)

    Böttcher, A.; Bogoya, J. M.; Grudsky, S. M.; Maximenko, E. A.

    2017-11-01

    Analysis of the asymptotic behaviour of the spectral characteristics of Toeplitz matrices as the dimension of the matrix tends to infinity has a history of over 100 years. For instance, quite a number of versions of Szegő's theorem on the asymptotic behaviour of eigenvalues and of the so-called strong Szegő theorem on the asymptotic behaviour of the determinants of Toeplitz matrices are known. Starting in the 1950s, the asymptotics of the maximum and minimum eigenvalues were actively investigated. However, investigation of the individual asymptotics of all the eigenvalues and eigenvectors of Toeplitz matrices started only quite recently: the first papers on this subject were published in 2009-2010. A survey of this new field is presented here. Bibliography: 55 titles.

  6. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found confusing and overlapping terms in use to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is offered as a means to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Pharmacogenetic excitation of dorsomedial prefrontal cortex restores fear prediction error.

    PubMed

    Yau, Joanna Oi-Yue; McNally, Gavan P

    2015-01-07

    Pavlovian conditioning involves encoding the predictive relationship between a conditioned stimulus (CS) and an unconditioned stimulus, so that synaptic plasticity and learning are instructed by prediction error. Here we used pharmacogenetic techniques to show a causal relation between activity of rat dorsomedial prefrontal cortex (dmPFC) neurons and fear prediction error. We expressed the excitatory hM3Dq designer receptor exclusively activated by a designer drug (DREADD) in dmPFC and isolated actions of prediction error by using an associative blocking design. Rats were trained to fear the visual CS (CSA) in stage I via pairings with footshock. Then in stage II, rats received compound presentations of visual CSA and auditory CS (CSB) with footshock. This prior fear conditioning of CSA reduced the prediction error during stage II to block fear learning to CSB. The group of rats that received the AAV-hSYN-eYFP vector and was treated with clozapine-N-oxide (CNO; 3 mg/kg, i.p.) before stage II showed blocking when tested in the absence of CNO the next day. In contrast, the groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq and were treated with CNO before stage II training did not show blocking; learning toward CSB was restored. This restoration of prediction error and fear learning was specific to the injection of CNO, because groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq but were injected with vehicle before stage II training did show blocking. These effects were not attributable to the DREADD manipulation enhancing learning or arousal, increasing fear memory strength or asymptotic levels of fear learning, or altering fear memory retrieval. Together, these results identify a causal role for dmPFC in a signature of adaptive behavior: using the past to predict future danger and learning from errors in these predictions. Copyright © 2015 the authors 0270-6474/15/350074-10$15.00/0.

  8. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  9. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide is proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift is proposed and used to compensate for the errors produced by beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm over a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over a range of ± 100" for pitch, yaw, and roll measurements, respectively.

  10. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
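
    A hedged one-dimensional analogue of such a kernel-based estimator, summing empirical autocovariances only within a lag window of size tau with Bartlett weights (as in Newey-West-type estimators); the paper's estimator is spatial, so this is an illustration of the idea rather than its exact construction.

```python
# Kernel ("tau") estimator of the standard deviation of a sample mean
# for locally dependent data, versus the classical independent-sample SE.
import numpy as np

def tau_se_of_mean(x, tau):
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    var = np.dot(d, d) / n                      # lag-0 (variance) term
    for lag in range(1, tau + 1):
        w = 1.0 - lag / (tau + 1.0)             # Bartlett taper
        var += 2.0 * w * np.dot(d[lag:], d[:-lag]) / n
    return np.sqrt(var / n)

rng = np.random.default_rng(4)
e = rng.standard_normal(501)
x = 0.6 * e[:-1] + e[1:]                        # locally dependent MA(1) sample
print(f"classical SE:    {x.std(ddof=1) / np.sqrt(x.size):.4f}")
print(f"tau SE (tau=3):  {tau_se_of_mean(x, 3):.4f}")
```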

  11. State space truncation with quantified errors for accurate solutions to discrete Chemical Master Equation

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
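
    A minimal numerical illustration of the boundary-probability idea on the simplest of the four example networks, a birth-death model with a reflecting upper boundary: the steady-state probability mass on the boundary state shrinks as the truncation size grows. The rates below are assumptions, not the paper's.

```python
# Steady state of a birth-death chain truncated at copy number N; the
# probability on the reflecting boundary serves as a truncation-error proxy.
import numpy as np

k_birth, k_death = 10.0, 1.0          # synthesis and per-copy decay rates

def boundary_mass(N):
    # Detailed balance for birth-death: p[n+1]/p[n] = k_birth/(k_death*(n+1)).
    p = np.ones(N + 1)
    for n in range(N):
        p[n + 1] = p[n] * k_birth / (k_death * (n + 1))
    p /= p.sum()
    return p[N]

for N in (15, 20, 25, 30):
    print(f"N = {N:2d}: boundary probability {boundary_mass(N):.2e}")
```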

  12. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  13. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-22

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  14. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  15. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  16. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    PubMed

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  17. Asymptotic Safety Guaranteed in Supersymmetry

    NASA Astrophysics Data System (ADS)

    Bond, Andrew D.; Litim, Daniel F.

    2017-11-01

    We explain how asymptotic safety arises in four-dimensional supersymmetric gauge theories. We provide asymptotically safe supersymmetric gauge theories together with their superconformal fixed points, R charges, phase diagrams, and UV-IR connecting trajectories. Strict perturbative control is achieved in a Veneziano limit. Consistency with unitarity and the a theorem is established. We find that supersymmetry enhances the predictivity of asymptotically safe theories.

  18. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users about the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error of all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
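
    The two ECDF-based statistics advocated here are straightforward to compute from a reference set of errors. A minimal sketch follows, using synthetic errors; the threshold and the 95% confidence level are arbitrary illustrative choices, and the bootstrap gives the standard error on each statistic.

```python
# ECDF-based benchmarking statistics from a sample of model errors.
import numpy as np

rng = np.random.default_rng(0)
errors = rng.normal(0.5, 2.0, size=500)      # synthetic signed model errors
abs_err = np.abs(errors)

eta = 1.0                                    # chosen error threshold
p_below = np.mean(abs_err < eta)             # (1) P(|error| < eta)
q95 = np.quantile(abs_err, 0.95)             # (2) max amplitude at 95% confidence

# bootstrap standard errors of both statistics over the reference set
boot = rng.choice(abs_err, size=(2000, abs_err.size), replace=True)
se_p = np.std(np.mean(boot < eta, axis=1), ddof=1)
se_q = np.std(np.quantile(boot, 0.95, axis=1), ddof=1)

print(f"P(|err| < {eta}) = {p_below:.3f} +/- {se_p:.3f}")
print(f"Q95(|err|)       = {q95:.2f} +/- {se_q:.2f}")
```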

  19. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  20. Exceeding the Asymptotic Limit of Polymer Drag Reduction.

    PubMed

    Choueiri, George H; Lopez, Jose M; Hof, Björn

    2018-03-23

    The drag of turbulent flows can be drastically decreased by adding small amounts of high molecular weight polymers. While drag reduction initially increases with polymer concentration, it eventually saturates to what is known as the maximum drag reduction (MDR) asymptote; this asymptote is generally attributed to the dynamics being reduced to a marginal yet persistent state of subdued turbulent motion. Contrary to this accepted view, we show that, for an appropriate choice of parameters, polymers can reduce the drag beyond the suggested asymptotic limit, eliminating turbulence and giving way to laminar flow. At higher polymer concentrations, however, the laminar state becomes unstable, resulting in a fluctuating flow with the characteristic drag of the MDR asymptote. Our findings indicate that the asymptotic state is hence dynamically disconnected from ordinary turbulence.

  1. Exceeding the Asymptotic Limit of Polymer Drag Reduction

    NASA Astrophysics Data System (ADS)

    Choueiri, George H.; Lopez, Jose M.; Hof, Björn

    2018-03-01

    The drag of turbulent flows can be drastically decreased by adding small amounts of high molecular weight polymers. While drag reduction initially increases with polymer concentration, it eventually saturates to what is known as the maximum drag reduction (MDR) asymptote; this asymptote is generally attributed to the dynamics being reduced to a marginal yet persistent state of subdued turbulent motion. Contrary to this accepted view, we show that, for an appropriate choice of parameters, polymers can reduce the drag beyond the suggested asymptotic limit, eliminating turbulence and giving way to laminar flow. At higher polymer concentrations, however, the laminar state becomes unstable, resulting in a fluctuating flow with the characteristic drag of the MDR asymptote. Our findings indicate that the asymptotic state is hence dynamically disconnected from ordinary turbulence.

  2. A METHOD TO EXTRACT THE REDSHIFT DISTORTION β PARAMETER IN CONFIGURATION SPACE FROM MINIMAL COSMOLOGICAL ASSUMPTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tocchini-Valentini, Domenico; Barnard, Michael; Bennett, Charles L.

    2012-10-01

    We present a method to extract the redshift-space distortion β parameter in configuration space with a minimal set of cosmological assumptions. We show that a novel combination of the observed monopole and quadrupole correlation functions can efficiently remove the impact of mild nonlinearities and redshift errors. The method offers a series of convenient properties: it does not depend on the theoretical linear correlation function, the mean galaxy density is irrelevant, only convolutions are used, and there is no explicit dependence on linear bias. Analyses based on dark matter N-body simulations and the Fisher matrix demonstrate that errors of a few percent on β are possible with a full-sky, 1 (h⁻¹ Gpc)³ survey centered at a redshift of unity and with negligible shot noise. We also find a baryonic feature in the normalized quadrupole in configuration space that should complicate the extraction of the growth parameter from the linear theory asymptote, but that does not have a major impact on our method.

  3. Smoothing the Marmousi Model

    NASA Astrophysics Data System (ADS)

    Žáček, K.

    The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase of the difference between the smoothed and original models. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times to be at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets to be at the verge of applicability even in models sufficiently smoothed for ray tracing.
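
    The smoothing idea, penalizing second spatial derivatives of slowness through a Sobolev-type norm, can be illustrated in one dimension as a penalized least-squares problem. The sketch below is a toy 1-D version under that assumption; the actual Marmousi smoothing is two-dimensional and also involves a coarser grid.

```python
# 1-D sketch: smooth a slowness model m0 by minimizing
# ||m - m0||^2 + lam * ||D2 m||^2, a simple Sobolev-type penalty on
# second differences. Solved exactly via (I + lam * D2'D2) m = m0.
import numpy as np

def smooth(m0, lam):
    n = m0.size
    D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
          + np.diag(np.ones(n - 1), -1))[1:-1]       # interior 2nd differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, m0)

x = np.linspace(0, 1, 200)
m0 = 1.0 / (1.5 + 0.5 * np.sign(np.sin(12 * x)))     # blocky toy slowness model
m_smooth = smooth(m0, lam=50.0)
print("max |smoothed - original|:", np.abs(m_smooth - m0).max())
```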

  4. A novel adaptive control method for induction motor based on Backstepping approach using dSpace DS 1104 control board

    NASA Astrophysics Data System (ADS)

    Ben Regaya, Chiheb; Farhani, Fethi; Zaafouri, Abderrahmen; Chaari, Abdelkader

    2018-02-01

    This paper presents a new adaptive Backstepping technique to handle the induction motor (IM) rotor resistance tracking problem. The proposed solution improves the robustness of the control system. Given the static error incurred when estimating the rotor resistance with classical methods, and the sensitivity to load torque variation at low speed, a new Backstepping observer enhanced with an integral action on the tracking errors is presented, which is established in two steps. The first step estimates the rotor flux using a Backstepping observer. The second step defines the adaptation mechanism of the rotor resistance based on the estimated rotor flux. The asymptotic stability of the observer is proven by Lyapunov theory. To validate the proposed solution, simulation and experimental benchmarking of a 3 kW induction motor are presented and analyzed. The obtained results show the effectiveness of the proposed solution compared to the model reference adaptive system (MRAS) rotor resistance observer presented in other recent works.

  5. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Relaxation Property and Stability Analysis of the Quasispecies Models

    NASA Astrophysics Data System (ADS)

    Feng, Xiao-Li; Li, Yu-Xiao; Gu, Jian-Zhong; Zhuo, Yi-Zhong

    2009-10-01

    The relaxation property of both the Eigen model and the Crow-Kimura model with a single-peak fitness landscape is studied from a phase transition point of view. We first analyze the eigenvalue spectra of the replication-mutation matrices. For sufficiently long sequences, the near-crossing point between the largest and second-largest eigenvalues locates the error threshold, at which critical slowing down appears. We calculate the critical exponent in the limit of infinite sequence length and compare it with the result from numerical curve fitting at sufficiently long sequences. We find that for both models the relaxation time diverges with exponent 1 at the error (mutation) threshold. Results obtained from both methods agree quite well. The unlimited correlation length further confirms the first-order phase transition. Finally, with linear stability theory, we show that the two model systems are stable for all ranges of mutation rate: the Eigen model is asymptotically stable in terms of mutant classes, and the Crow-Kimura model is completely stable.
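
    The eigenvalue analysis described here can be reproduced in miniature. The sketch below builds the Eigen-model replication-mutation matrix W, with entries Q_ij f_j, for a short binary sequence on a single-peak landscape and tracks the gap between its two largest eigenvalues as the per-site error rate grows; the sequence length and fitness values are illustrative choices.

```python
# Eigen model on a single-peak landscape: watch the near-crossing of the
# two largest eigenvalues of W (W_ij = Q_ij * f_j) as the error rate grows.
import numpy as np
from itertools import product

L = 8                                              # sequence length (toy size)
seqs = np.array(list(product([0, 1], repeat=L)))
dist = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)   # Hamming distances
f = np.ones(2 ** L)
f[0] = 10.0                                        # single fitness peak

for u in (0.05, 0.15, 0.25, 0.35):                 # per-site copying error rate
    Q = u ** dist * (1 - u) ** (L - dist)          # mutation matrix
    ev = np.sort(np.linalg.eigvals(Q * f).real)[::-1]
    print(f"u = {u:.2f}: lambda1 = {ev[0]:.3f}, lambda2 = {ev[1]:.3f}, "
          f"gap = {ev[0] - ev[1]:.3f}")
```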

  6. Based on interval type-2 fuzzy-neural network direct adaptive sliding mode control for SISO nonlinear systems

    NASA Astrophysics Data System (ADS)

    Lin, Tsung-Chih

    2010-12-01

    In this paper, a novel direct adaptive interval type-2 fuzzy-neural tracking controller equipped with sliding mode and a Lyapunov synthesis approach is proposed to handle training data corrupted by noise or rule uncertainties for SISO nonlinear systems involving external disturbances. By employing adaptive fuzzy-neural control theory, update laws are derived for approximating the uncertain nonlinear dynamical system. Meanwhile, the sliding mode control method and the Lyapunov stability criterion are incorporated into the adaptive fuzzy-neural control scheme such that the derived controller is robust with respect to unmodeled dynamics, external disturbances and approximation errors. In comparison with conventional methods, the advocated approach not only guarantees closed-loop stability but also ensures that the output tracking error of the overall system converges to zero asymptotically without prior knowledge of the upper bound of the lumped uncertainty. Furthermore, the chattering effect of the control input is substantially reduced by the proposed technique. Finally, a simulation example is given to illustrate the performance of the proposed method.

  7. Adaptive Approximation-Based Regulation Control for a Class of Uncertain Nonlinear Systems Without Feedback Linearizability.

    PubMed

    Wang, Ning; Sun, Jing-Chao; Han, Min; Zheng, Zhongjiu; Er, Meng Joo

    2017-09-06

    In this paper, for a general class of uncertain nonlinear (cascade) systems, including unknown dynamics, which are not feedback linearizable and cannot be solved by existing approaches, an innovative adaptive approximation-based regulation control (AARC) scheme is developed. Within the framework of adding a power integrator (API), by deriving adaptive laws for output weights and prediction error compensation pertaining to a single-hidden-layer feedforward network (SLFN) from the Lyapunov synthesis, a series of SLFN-based approximators are explicitly constructed to dominate the completely unknown dynamics. By virtue of significant advancements on the API technique, an adaptive API methodology is eventually established in combination with SLFN-based adaptive approximators, and it contributes a recursive mechanism to the AARC scheme. As a consequence, the output regulation error asymptotically converges to the origin, and all other signals of the closed-loop system are uniformly ultimately bounded. Simulation studies and comprehensive comparisons with backstepping- and API-based approaches demonstrate that the proposed AARC scheme achieves remarkable performance and superiority in dealing with unknown dynamics.

  8. Nonlinear Tracking Control of a Conductive Supercoiled Polymer Actuator.

    PubMed

    Luong, Tuan Anh; Cho, Kyeong Ho; Song, Min Geun; Koo, Ja Choon; Choi, Hyouk Ryeol; Moon, Hyungpil

    2018-04-01

    Artificial muscle actuators made from commercial nylon fishing lines have recently been introduced and shown to be a new type of actuator with high performance. However, the actuators also exhibit significant nonlinearities, which make them difficult to control, especially in precise trajectory-tracking applications. In this article, we present a nonlinear mathematical model of a conductive supercoiled polymer (SCP) actuator driven by Joule heating for model-based feedback control. Our efforts include modeling of the hysteresis behavior of the actuator. Based on the nonlinear model, we design a sliding mode controller for SCP actuator-driven manipulators. The system with the proposed control law is proven to be asymptotically stable using Lyapunov theory. The control performance of the proposed method is evaluated experimentally and compared with that of a proportional-integral-derivative (PID) controller on a one-degree-of-freedom SCP actuator-driven manipulator. Experimental results show that the proposed controller's performance is superior to that of a PID controller: for example, tracking errors are nearly 10 times smaller, and it is more robust to external disturbances such as sensor noise and actuator modeling error.

  9. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents:
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
      3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
      3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
      3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
      3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
      4.1. The case of a non-degenerate minimum point ([137], I)
      4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
      5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
      5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
      5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
      5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
      6.1. General situations
      6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
      6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
      7.1. Homogeneous fields and fields with constant dispersion
      7.2. Finitely many maximum points of dispersion
      7.3. Manifold of maximum points of dispersion
      7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
      8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
      8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ^2
      8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
      8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
      8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  10. The proposed coding standard at GSFC

    NASA Technical Reports Server (NTRS)

    Morakis, J. C.; Helgert, H. J.

    1977-01-01

    As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
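
    As a generic illustration of the block-code error-correction mode that such standards cover, the sketch below encodes a message with a textbook Hamming(7,4) code and corrects a single flipped bit; this is a standard classroom example, not the specific code set adopted in the GSFC standards.

```python
# Hamming(7,4): encode 4 data bits, flip one bit, locate and correct it.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2
received = codeword.copy()
received[2] ^= 1                        # single-bit channel error

syndrome = H @ received % 2             # nonzero syndrome = error detected
err_pos = np.where((H.T == syndrome).all(axis=1))[0][0]
received[err_pos] ^= 1                  # correct the flipped bit
print("decoded:", received[:4], "ok:", bool((received[:4] == msg).all()))
```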

  11. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared on simulated datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier and its accuracy. The R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
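
    The learning-curve idea behind the IPL correction can be sketched by fitting err(n) = a·n^(-b) + c to cross-validation error rates at several training sizes and extrapolating. The data points below are synthetic; the authors' 'MLbias' package implements the actual bias-correction procedure.

```python
# Fit an inverse power law learning curve and extrapolate the error rate.
import numpy as np
from scipy.optimize import curve_fit

n   = np.array([20, 30, 40, 50, 60])                # training-set sizes
err = np.array([0.34, 0.29, 0.26, 0.245, 0.235])    # CV error rates (synthetic)

def ipl(n, a, b, c):
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(ipl, n, err, p0=(1.0, 0.5, 0.2), maxfev=10000)
print(f"extrapolated error at n = 200: {ipl(200, a, b, c):.3f} "
      f"(asymptote c = {c:.3f})")
```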

  12. Relative entropy of entanglement and restricted measurements.

    PubMed

    Piani, M

    2009-10-16

    We introduce variants of relative entropy of entanglement based on the optimal distinguishability from unentangled states by means of restricted measurements. In this way we are able to prove that the standard regularized entropy of entanglement is strictly positive for all multipartite entangled states. This implies that the asymptotic creation of a multipartite entangled state by means of local operations and classical communication always requires the consumption of a nonlocal resource at a strictly positive rate.

  13. Consistency Properties for Growth Model Parameters Under an Infill Asymptotics Domain

    DTIC Science & Technology

    2010-09-01

    The Gompertz model, introduced in 1825 [15], was initially used for actuarial projections. Winsor's 1932 reparameterization of the Gompertz curve in [38] is given by f(t; K, a, b... these assumptions it is possible to construct a pathological example which, while mathematically interesting, is of no practical use to a practitioner... Abramowitz, Milton and Irene A. Stegun. Handbook of Mathematical Functions. Washington D.C.: National Bureau of Standards, 1972. [2] Allgower, E. L

  14. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
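
    A minimal sketch of the comparison: a tiny interval class evaluates f(x, y) = xy + x over input intervals, next to first-order error propagation. The function and tolerances are arbitrary examples; INTLAB itself provides far more machinery (outward rounding, vectors, matrices).

```python
# Interval arithmetic versus first-order error propagation for f(x,y) = x*y + x.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

x, dx = 2.0, 0.01
y, dy = 3.0, 0.02
X, Y = Interval(x - dx, x + dx), Interval(y - dy, y + dy)
print("interval enclosure:", X * Y + X)

# first-order propagation: df = |df/dx|*dx + |df/dy|*dy, df/dx = y + 1, df/dy = x
df = abs(y + 1) * dx + abs(x) * dy
print(f"propagation bound:  [{x*y + x - df:.4f}, {x*y + x + df:.4f}]")
```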

  15. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
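
    The propagation-of-errors versus Monte Carlo comparison can be sketched for a simplified rating R = mean(REAT) - 2·SD(REAT); the actual NRR includes spectral corrections, so this only illustrates the approach, and the attenuation statistics are made up.

```python
# Analytic (propagated) vs Monte Carlo standard error of a simplified rating.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 30.0, 6.0, 20       # attenuation mean/SD (dB), panel size

# propagation: Var(xbar) = s^2/n and Var(s) ~ s^2 / (2(n-1)) for normal data,
# so Var(R) ~ s^2/n + 4 * s^2 / (2(n-1)) since R = xbar - 2 s
se_analytic = np.sqrt(sigma**2 / n + 4 * sigma**2 / (2 * (n - 1)))

reat = rng.normal(mu, sigma, size=(100_000, n))     # simulated subject panels
ratings = reat.mean(axis=1) - 2 * reat.std(axis=1, ddof=1)
print(f"analytic SE: {se_analytic:.3f} dB, Monte Carlo SE: {ratings.std(ddof=1):.3f} dB")
```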

  16. 78 FR 17155 - Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.

  17. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    PubMed

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error introduced during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest, which rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  18. Universal properties of knotted polymer rings.

    PubMed

    Baiesi, M; Orlandini, E

    2012-09-01

    By performing Monte Carlo sampling of N-step self-avoiding polygons embedded on different Bravais lattices, we explore the robustness of universality in the entropic, metric, and geometrical properties of knotted polymer rings. In particular, by simulating polygons with N up to 10^5 we furnish a sharp estimate of the asymptotic values of the knot probability ratios and show their independence of the lattice type. This universal feature was previously suggested, although with different estimates of the asymptotic values. In addition, we show that the scaling behavior of the mean-squared radius of gyration of polygons depends on their knot type only through its correction to scaling. Finally, as a measure of the geometrical self-entanglement of the self-avoiding polygons, we consider the standard deviation of the writhe distribution and estimate its power-law behavior in the large-N limit. The estimates of the power exponent depend neither on the lattice nor on the knot type, strongly supporting an extension of the universality property to some features of the geometrical entanglement.

  19. Compressible, multiphase semi-implicit method with moment of fluid interface representation

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Arienti, Marco

    2014-09-16

    A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials, enabling the large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large “impedance mismatch.”

  20. Sheet on a deformable sphere: Wrinkle patterns suppress curvature-induced delamination

    NASA Astrophysics Data System (ADS)

    Hohlfeld, Evan; Davidovitch, Benny

    2015-01-01

    The adhesion of a stiff film onto a curved substrate often generates elastic stresses in the film that eventually give rise to its delamination. Here we predict that delamination of very thin films can be dramatically suppressed through tiny, smooth deformations of the substrate, dubbed here "wrinklogami," that barely affect the macro-scale topography. This "prolamination" effect reflects a surprising capability of smooth wrinkles to suppress compression in elastic films even when spherical or other doubly curved topography is imposed, in a similar fashion to origami folds that enable construction of curved structures from an unstretchable paper. We show that the emergence of a wrinklogami pattern signals a nontrivial isometry of the sheet to its planar, undeformed state, in the doubly asymptotic limit of small thickness and weak tensile load exerted by the adhesive substrate. We explain how such an "asymptotic isometry" concept broadens the standard usage of isometries for describing the response of elastic sheets to geometric constraints and mechanical loads.

  1. Higgsploding universe

    NASA Astrophysics Data System (ADS)

    Khoze, Valentin V.; Spannowsky, Michael

    2017-10-01

    Higgsplosion is a dynamical mechanism that introduces an exponential suppression of quantum fluctuations beyond the Higgsplosion energy scale E* and further guarantees perturbative unitarity in multi-Higgs production processes. By calculating the Higgsplosion scale for spin 0, 1/2, 1 and 2 particles at leading order, we argue that Higgsplosion regulates all n-point functions, thereby embedding the standard model of particle physics and its extensions into an asymptotically safe theory. There are no Landau poles and the Higgs self-coupling stays positive. Asymptotic safety is of particular interest for theories of particle physics that include quantum gravity. We argue that in a Higgsploding theory one cannot probe shorter and shorter length scales by increasing the energy of the collision beyond the Higgsplosion energy, and there is a minimal length set by r* ~ 1/E* that can be probed. We further show that Higgsplosion is consistent and not in conflict with models of inflation and the existence of axions. There is also the possibility of testing Higgsplosion experimentally at future high energy experiments.

  2. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  3. Finite-time fault tolerant attitude stabilization control for rigid spacecraft.

    PubMed

    Huo, Xing; Hu, Qinglei; Xiao, Bing

    2014-03-01

    A sliding mode based finite-time control scheme is presented to address the problem of attitude stabilization for rigid spacecraft in the presence of actuator faults and external disturbances. More specifically, a nonlinear observer is first proposed to reconstruct the amplitude of the actuator faults and external disturbances. It is proved that precise reconstruction with zero observer error is achieved in finite time. Then, together with the system states, the reconstructed information is used to synthesize a nonsingular terminal sliding mode attitude controller. The attitude and the angular velocity are governed to zero with finite-time convergence. A numerical example is presented to demonstrate the effectiveness of the proposed scheme. © 2013 Published by ISA on behalf of ISA.

  4. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Adaptive control of an exoskeleton robot with uncertainties on kinematics and dynamics.

    PubMed

    Brahmi, Brahim; Saad, Maarouf; Ochoa-Luna, Cristobal; Rahman, Mohammad H

    2017-07-01

    In this paper, we propose a new adaptive control technique based on nonlinear sliding mode control (JSTDE), taking into account kinematic and dynamic uncertainties. This approach is applied to an exoskeleton robot with uncertain kinematics and dynamics. The adaptation design is based on Time Delay Estimation (TDE). The proposed strategy does not require well-defined dynamic and kinematic models of the robot system. The update laws are designed using a Lyapunov function to solve the adaptation problem systematically, proving closed-loop stability and ensuring asymptotic convergence of the output tracking errors. Experimental results show the effectiveness and feasibility of the JSTDE technique in dealing with the variation of the unknown nonlinear dynamics and kinematics of the exoskeleton model.

  6. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
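
    For a single qubit, the LRE idea reduces to a transparent special case: the Bloch vector r in ρ = (I + r·σ)/2 is linear in the Pauli expectations, and because that design is orthogonal the least-squares estimate coincides with the empirical expectations. A sketch with synthetic measurement data:

```python
# Single-qubit state reconstruction from Pauli measurements (LRE special case).
import numpy as np

rng = np.random.default_rng(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

r_true = np.array([0.3, -0.4, 0.5])                 # true Bloch vector
rho = 0.5 * (np.eye(2) + sum(r * P for r, P in zip(r_true, paulis)))

N = 10_000                                          # shots per Pauli basis
r_hat = np.empty(3)
for k, P in enumerate(paulis):
    p_up = 0.5 * (1 + np.trace(rho @ P).real)       # Prob(+1 outcome)
    ups = rng.binomial(N, p_up)
    r_hat[k] = (2 * ups - N) / N                    # empirical expectation = LS estimate

print("estimated Bloch vector:", np.round(r_hat, 3))
print("squared error:", float(np.sum((r_hat - r_true) ** 2)))
```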

  7. Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.

    PubMed

    Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng

    2011-10-01

    This paper proposes a novel H∞ filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H∞ filter, which is homogenous polynomially parameter-dependent on the membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H∞ performance of the filtering error system. Second, relaxed conditions for H∞ performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H∞ filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.

  8. Driven Metadynamics: Reconstructing Equilibrium Free Energies from Driven Adaptive-Bias Simulations

    PubMed Central

    2013-01-01

    We present a novel free-energy calculation method that constructively integrates two distinct classes of nonequilibrium sampling techniques, namely, driven (e.g., steered molecular dynamics) and adaptive-bias (e.g., metadynamics) methods. By employing nonequilibrium work relations, we design a biasing protocol with an explicitly time- and history-dependent bias that uses on-the-fly work measurements to gradually flatten the free-energy surface. The asymptotic convergence of the method is discussed, and several relations are derived for free-energy reconstruction and error estimation. Isomerization reaction of an atomistic polyproline peptide model is used to numerically illustrate the superior efficiency and faster convergence of the method compared with its adaptive-bias and driven components in isolation. PMID:23795244
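
    A minimal sketch of the kind of nonequilibrium work relation such methods employ: the Jarzynski equality exp(-ΔF/kT) = ⟨exp(-W/kT)⟩, checked numerically for Gaussian work values, for which it predicts ΔF = μ - σ²/(2kT). This is the generic relation only, not the paper's specific time- and history-dependent biasing protocol.

```python
# Numerical check of the Jarzynski equality for a Gaussian work distribution.
import numpy as np

rng = np.random.default_rng(3)
kT, mu, s = 1.0, 5.0, 2.0
W = rng.normal(mu, s, size=1_000_000)       # simulated nonequilibrium work values

dF_est = -kT * np.log(np.mean(np.exp(-W / kT)))
dF_gauss = mu - s**2 / (2 * kT)             # closed form for Gaussian work
print(f"Jarzynski estimate: {dF_est:.3f}, Gaussian prediction: {dF_gauss:.3f}")
```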

  9. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  10. Improved BDF Relaying Scheme Using Time Diversity over Atmospheric Turbulence and Misalignment Fading Channels

    PubMed Central

    García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2014-01-01

    A novel bit-detect-and-forward (BDF) relaying scheme based on repetition coding at the relay is proposed, significantly improving robustness to impairments characteristic of free-space optical (FSO) communications, such as misalignment between transmitter and receiver and fluctuations in the irradiance of the transmitted optical beam due to atmospheric turbulence. Closed-form asymptotic bit-error-rate (BER) expressions are derived for a 3-way FSO communication setup. By fully exploiting the potential time diversity available in the turbulent relay channel, a markedly better performance is achieved, showing greater robustness to the relay location since a high diversity gain is provided regardless of the source-destination link distance. PMID:24587711

  11. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by a hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by LaSalle's Invariance Principle extended to infinite-dimensional systems. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.

  12. Dynamical system analysis of interacting models

    NASA Astrophysics Data System (ADS)

    Carneiro, S.; Borges, H. A.

    2018-01-01

    We perform a dynamical system analysis of a cosmological model with a linear dependence between the vacuum density and the Hubble parameter, with constant-rate creation of dark matter. We show that the de Sitter spacetime is an asymptotically stable critical point, the future limit of any expanding solution. Our analysis also shows that the Minkowski spacetime is an unstable critical point, which eventually collapses to a singularity. In this way, such a prescription for the vacuum decay not only predicts the correct future de Sitter limit, but also forbids the existence of a stable Minkowski universe. We also study the effect of matter creation on the growth of structures and their peculiar velocities, showing that it is within the current errors of redshift-space distortion observations.

  13. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737

  14. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
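
    The comparison step at the heart of this design, flagging projections that differ across parallel versions by more than ±5%, is simple to script. A sketch with illustrative output names and values (not the study's actual projections):

```python
# Flag material differences (>5%) between parallel model versions.
import numpy as np

outputs = {
    "named_single_cells": np.array([1030.0, 452.0, 212.0]),
    "column_row_refs":    np.array([1300.0, 455.0, 210.0]),
    "named_matrices":     np.array([1000.0, 450.0, 210.0]),   # reference version
}
reference = outputs["named_matrices"]

for name, vals in outputs.items():
    pct = 100.0 * (vals - reference) / reference
    print(f"{name:20s} pct diff: {np.round(pct, 1)}  material: {abs(pct) > 5.0}")
```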

  15. Improved Filon-type asymptotic methods for highly oscillatory differential equations with multiple time scales

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Wu, Xinyuan

    2014-11-01

    In this paper we consider multi-frequency highly oscillatory second-order differential equations x''(t) + Mx(t) = f(t, x(t), x'(t)), where the high-frequency oscillations are generated by the linear part Mx(t), and M is positive semi-definite (not necessarily nonsingular). It is known that Filon-type methods are an effective approach to numerically solving highly oscillatory problems. Unfortunately, however, existing Filon-type asymptotic methods fail to apply to highly oscillatory second-order differential equations when M is singular. We study and propose an efficient improvement on the existing Filon-type asymptotic methods, so that the improved Filon-type asymptotic methods can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method which can be performed at lower cost. Accompanying numerical results show the remarkable efficiency of the improved methods.

  16. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    ERIC Educational Resources Information Center

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  17. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing ... more errors on plates 1-15.
    MULTIFUNCTION VISION TESTER:
      Keystone Orthoscope - Any error.
      OPTEC 2000 - Any error.
      Titmus Vision Tester - Any error.
      Titmus II Vision Tester - Any error.
    (3) In administering any of ...

  18. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing ... more errors on plates 1-15.
    MULTIFUNCTION VISION TESTER:
      Keystone Orthoscope - Any error.
      OPTEC 2000 - Any error.
      Titmus Vision Tester - Any error.
      Titmus II Vision Tester - Any error.
    (3) In administering any of ...

  19. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
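
    A sketch of tuning-parameter selection with a high-dimensional BIC of the form HBIC(λ) = log(RSS/n) + |S|·C_n·log(p)/n, taking C_n = log(log n). For brevity a lasso path stands in for the non-convex (SCAD/CCCP) path, so this illustrates the selection criterion under that substitution rather than the authors' full algorithm.

```python
# High-dimensional BIC over a penalized regression path (lasso as stand-in).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 100, 400
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                      # sparse true signal
y = X @ beta + rng.normal(size=n)

Cn = np.log(np.log(n))
best = None
for lam in np.geomspace(0.01, 1.0, 30):
    b = Lasso(alpha=lam, max_iter=20000).fit(X, y).coef_
    rss = np.sum((y - X @ b) ** 2)
    hbic = np.log(rss / n) + np.count_nonzero(b) * Cn * np.log(p) / n
    if best is None or hbic < best[0]:
        best = (hbic, lam, np.flatnonzero(b))

print(f"selected lambda = {best[1]:.3f}, selected support = {best[2]}")
```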

  20. A family of four stages embedded explicit six-step methods with eliminated phase-lag and its derivatives for the numerical solution of the second order problems

    NASA Astrophysics Data System (ADS)

    Simos, T. E.

    2017-11-01

    A family of four-stage, high algebraic order, embedded explicit six-step methods for the numerical solution of second-order initial or boundary-value problems with periodic and/or oscillating solutions is studied in this paper. The free parameters of the newly proposed methods are calculated by solving the linear system of equations produced by requiring the vanishing of the phase-lag of the methods and of the phase-lag's derivatives. For the newly obtained methods we investigate:
    • The local truncation error (LTE) of the methods.
    • The asymptotic form of the LTE, obtained using the radial Schrödinger equation as the model problem.
    • The comparison of the asymptotic forms of the LTEs for several methods of the same family; this comparison leads to conclusions on the efficiency of each method of the family.
    • The stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs.
    • The application of the new family of embedded finite difference pairs to the numerical solution of several second-order problems, such as the radial Schrödinger equation and astronomical problems; these applications lead to conclusions on the efficiency of the methods of the new family.

  1. Validity of using large-density asymptotics for studying reaction-infiltration instability in fluid-saturated rocks

    NASA Astrophysics Data System (ADS)

    Zhao, Chongbin; Hobbs, B. E.; Ord, A.

    2018-04-01

    Reaction-infiltration instability, in which chemical reactions can dissolve minerals and therefore create preferential pore-fluid flow channels in fluid-saturated rocks, may play an important role in controlling groundwater quality in groundwater hydrology. Although this topic has been studied for many years, a recent debate contends that the use of large-density asymptotics in the previous studies is invalid. However, there is a crucial conceptual mistake in this debate, which leads to results and conclusions that are inconsistent with the fundamental laws of physics. It is well known that among distance, time and velocity there are only two independent variables, but in the debate they are treated as three independent variables, a procedure that is the main source of the physically unrealistic results and conclusions. In this paper, we discuss the results and conclusions related to the debate, with emphasis on the issues leading to the corresponding errors. In particular, we demonstrate that there is an unappreciated constraint condition between the dimensional/dimensionless distance, time and velocity in the debate. By using this constraint condition, it can be confirmed that as the ratio of the reactant concentration in the incoming fluid stream to the mineral concentration approaches zero, the dimensionless transport parameter, H, automatically approaches infinity. Therefore, it is further confirmed that the previous work conducted by Chadam and others remains valid.

  2. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843
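
    As a rough illustration of the tuning step, the sketch below applies a high-dimensional BIC of the form log(RSS/n) + df·Cn·log(p)/n, with Cn = log(log n) as one admissible slowly diverging choice, to a penalized solution path. The lasso path from scikit-learn stands in for the calibrated CCCP path, so this is a sketch of the selection idea, not the paper's algorithm; all data and constants are invented.

    ```python
    # A minimal sketch of tuning-parameter selection with a high-dimensional
    # BIC; the lasso path stands in for the calibrated CCCP path.
    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(0)
    n, p, s = 200, 1000, 5
    X = rng.standard_normal((n, p))
    beta = np.zeros(p); beta[:s] = 2.0            # sparse truth
    y = X @ beta + rng.standard_normal(n)

    alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

    Cn = np.log(np.log(n))                        # one admissible choice
    hbic = []
    for k in range(len(alphas)):
        b = coefs[:, k]
        rss = np.sum((y - X @ b) ** 2)
        df = np.count_nonzero(b)                  # model size along the path
        hbic.append(np.log(rss / n) + df * Cn * np.log(p) / n)

    best = int(np.argmin(hbic))
    print("selected lambda:", alphas[best],
          "support size:", np.count_nonzero(coefs[:, best]))
    ```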

  3. Using a higher criticism statistic to detect modest effects in a genome-wide study of rheumatoid arthritis

    PubMed Central

    2009-01-01

    In high-dimensional studies such as genome-wide association studies, the correction for multiple testing needed to control the total type I error rate reduces the power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there was no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects. PMID:20018032
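
    For concreteness, the following is a minimal sketch of the standard Donoho-Jin higher criticism statistic computed from a vector of p-values. The workshop analysis may use a variant (a different search range or an empirical null, as the abstract suggests), so the search fraction alpha0 below is an assumption.

    ```python
    # A minimal sketch of the Donoho-Jin higher criticism statistic.
    import numpy as np

    def higher_criticism(pvals, alpha0=0.10):
        """HC* = max over the smallest alpha0*N p-values of
        sqrt(N) * (i/N - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
        p = np.sort(np.asarray(pvals))
        N = p.size
        i = np.arange(1, N + 1)
        hc = np.sqrt(N) * (i / N - p) / np.sqrt(p * (1 - p))
        kmax = max(1, int(alpha0 * N))
        return np.max(hc[:kmax])

    rng = np.random.default_rng(1)
    null_p = rng.uniform(size=10_000)   # global null: HC* stays O(1)-ish
    print(higher_criticism(null_p))
    ```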

  4. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible-joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT-based MRAC algorithms had several problems. The original algorithm guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants, a very restrictive condition that most systems do not satisfy. Further developments expanded the class of plants that could be controlled; however, they introduced a steady-state error in the response. These problems motivated modifications to the algorithms so that they can control a wider class of plants while asymptotically tracking the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned above. The simulation results are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses show that the adaptively controlled processes are resistant to sudden changes in load.
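
    As background for the model-reference idea, here is a minimal Lyapunov-rule direct MRAC sketch for a scalar first-order plant. It is not the CGT-based algorithm developed in the project; the plant, reference model, and gain values are invented for illustration.

    ```python
    # Standard Lyapunov-rule direct MRAC for a first-order plant
    # dy/dt = a*y + b*u (b > 0), reference model dym/dt = am*ym + bm*r,
    # control u = th1*r - th2*y. Not the report's CGT-based algorithms.
    a, b = -1.0, 2.0               # plant (unknown to the controller)
    am, bm = -4.0, 4.0             # stable reference model
    gamma, dt, T = 2.0, 1e-3, 20.0

    y = ym = th1 = th2 = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0 if t % 4 < 2 else -1.0       # square-wave reference
        u = th1 * r - th2 * y
        e = y - ym
        # Lyapunov-based update laws: with V = e^2/2 + parameter-error
        # terms, these give dV/dt = am*e^2 <= 0.
        th1 += -gamma * e * r * dt
        th2 += +gamma * e * y * dt
        y += (a * y + b * u) * dt            # Euler integration
        ym += (am * ym + bm * r) * dt

    print("final tracking error:", y - ym)
    ```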

  5. Cookbook asymptotics for spiral and scroll waves in excitable media.

    PubMed

    Margerit, Daniel; Barkley, Dwight

    2002-09-01

    Algebraic formulas predicting the frequencies and shapes of waves in a reaction-diffusion model of excitable media are presented in the form of four recipes. The formulas themselves are based on a detailed asymptotic analysis (published elsewhere) of the model equations at leading order and first order in the asymptotic parameter. The importance of the first order contribution is stressed throughout, beginning with a discussion of the Fife limit, Fife scaling, and Fife regime. Recipes are given for spiral waves and detailed comparisons are presented between the asymptotic predictions and the solutions of the full reaction-diffusion equations. Recipes for twisted scroll waves with straight filaments are given and again comparisons are shown. The connection between the asymptotic results and filament dynamics is discussed, and one of the previously unknown coefficients in the theory of filament dynamics is evaluated in terms of its asymptotic expansion.

  6. Asymptotic proportionality (weak ergodicity) and conditional asymptotic equality of solutions to time-heterogeneous sublinear difference and differential equations

    NASA Astrophysics Data System (ADS)

    Thieme, Horst R.

    The concepts of asymptotic proportionality and conditional asymptotic equality presented here aim at making global asymptotic stability statements for time-heterogeneous difference and differential equations. For such non-autonomous problems (apart from special cases), no prominent special solutions (equilibria, periodic solutions) exist that are natural candidates for the asymptotic behaviour of arbitrary solutions. One way out of this dilemma is to look for conditions under which any two solutions to the problem (with different initial conditions) behave in a similar or even the same way as time tends to infinity. We study a general sublinear difference equation in an ordered Banach space and, for illustration, time-heterogeneous versions of several well-known differential equations modelling the spread of gonorrhea in a heterogeneous population, the spread of a vector-borne infectious disease, and the dynamics of a logistically growing, spatially diffusing population.

  7. Cookbook asymptotics for spiral and scroll waves in excitable media

    NASA Astrophysics Data System (ADS)

    Margerit, Daniel; Barkley, Dwight

    2002-09-01

    Algebraic formulas predicting the frequencies and shapes of waves in a reaction-diffusion model of excitable media are presented in the form of four recipes. The formulas themselves are based on a detailed asymptotic analysis (published elsewhere) of the model equations at leading order and first order in the asymptotic parameter. The importance of the first order contribution is stressed throughout, beginning with a discussion of the Fife limit, Fife scaling, and Fife regime. Recipes are given for spiral waves and detailed comparisons are presented between the asymptotic predictions and the solutions of the full reaction-diffusion equations. Recipes for twisted scroll waves with straight filaments are given and again comparisons are shown. The connection between the asymptotic results and filament dynamics is discussed, and one of the previously unknown coefficients in the theory of filament dynamics is evaluated in terms of its asymptotic expansion.

  8. Assessment of the reliability of standard automated perimetry in regions of glaucomatous damage.

    PubMed

    Gardiner, Stuart K; Swanson, William H; Goren, Deborah; Mansberger, Steven L; Demirel, Shaban

    2014-07-01

    Visual field testing uses high-contrast stimuli in areas of severe visual field loss. However, retinal ganglion cells saturate with high-contrast stimuli, suggesting that the probability of detecting perimetric stimuli may not increase indefinitely as contrast increases. Driven by this concept, this study examines the lower limit of perimetric sensitivity for reliable testing by standard automated perimetry. Evaluation of a diagnostic test. A total of 34 participants with moderate to severe glaucoma; mean deviation at their last clinic visit averaged -10.90 dB (range, -20.94 to -3.38 dB). A total of 75 of the 136 locations tested had a perimetric sensitivity of ≤ 19 dB. Frequency-of-seeing curves were constructed at 4 nonadjacent visual field locations by the Method of Constant Stimuli (MOCS), using 35 stimulus presentations at each of 7 contrasts. Locations were chosen a priori and included at least 2 with glaucomatous damage but a sensitivity of ≥ 6 dB. Cumulative Gaussian curves were fit to the data, first assuming a 5% false-negative rate and subsequently allowing the asymptotic maximum response probability to be a free parameter. The strength of the relation (R²) between perimetric sensitivity (mean of last 2 clinic visits) and MOCS sensitivity (from the experiment) was computed for all locations with perimetric sensitivity within ± 4 dB of each selected value, at 0.5 dB intervals. Bins centered at sensitivities ≥ 19 dB always had R² > 0.1. All bins centered at sensitivities ≤ 15 dB had R² < 0.1, an indication that sensitivities are unreliable. No consistent conclusions could be drawn between 15 and 19 dB. At 57 of the 81 locations with perimetric sensitivity < 19 dB, including 49 of the 63 locations ≤ 15 dB, the fitted asymptotic maximum response probability was < 80%, consistent with the hypothesis of response saturation. At 29 of these locations the asymptotic maximum was < 50%, and so contrast sensitivity (50% response rate) is undefined. Clinical visual field testing may be unreliable when visual field locations have sensitivity below approximately 15 to 19 dB because of a reduction in the asymptotic maximum response probability. Researchers and clinicians may have difficulty detecting worsening sensitivity in these visual field locations, and this difficulty may occur commonly in patients with glaucoma with moderate to severe glaucomatous visual field loss.
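
    The frequency-of-seeing fit described above can be sketched with simulated data: a cumulative Gaussian whose upper asymptote is a free parameter, so that a fitted maximum below 50% leaves the contrast sensitivity undefined. All numbers below are illustrative assumptions, not the study's data.

    ```python
    # A minimal sketch (simulated data): fit a frequency-of-seeing curve
    # with the asymptotic maximum response probability as a free parameter.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def fos(c, mu, sigma, pmax):
        """Cumulative-Gaussian psychometric function, free upper asymptote."""
        return pmax * norm.cdf((c - mu) / sigma)

    contrasts = np.linspace(0.5, 2.0, 7)         # 7 stimulus contrasts
    n_trials = 35                                # MOCS presentations/level
    rng = np.random.default_rng(2)
    true = dict(mu=1.2, sigma=0.2, pmax=0.6)     # a saturating observer
    p_seen = rng.binomial(n_trials, fos(contrasts, **true)) / n_trials

    popt, _ = curve_fit(fos, contrasts, p_seen,
                        p0=[1.0, 0.3, 0.9], bounds=([0, 0.01, 0], [3, 2, 1]))
    print("fitted mu, sigma, pmax:", popt)
    # If the fitted pmax < 0.5, the 50%-response contrast is undefined,
    # matching the abstract's point about unreliable sensitivities.
    ```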

  9. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
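
    As a minimal illustration of the resampling approach mentioned here, the sketch below computes a delete-one jackknife standard error for the sample mean; real survey applications form replicates over design strata rather than single observations, so this shows only the basic idea.

    ```python
    # Delete-one jackknife standard error for a generic estimator,
    # sketched for the sample mean (where it matches the classical SE).
    import numpy as np

    def jackknife_se(x, estimator=np.mean):
        x = np.asarray(x)
        n = x.size
        # Leave-one-out replicates of the estimator
        reps = np.array([estimator(np.delete(x, i)) for i in range(n)])
        return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

    rng = np.random.default_rng(3)
    x = rng.normal(10.0, 2.0, size=100)
    print(jackknife_se(x), x.std(ddof=1) / np.sqrt(len(x)))  # nearly equal
    ```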

  10. Errors in quantitative backscattered electron analysis of bone standardized by energy-dispersive x-ray spectrometry.

    PubMed

    Vajda, E G; Skedros, J G; Bloebaum, R D

    1998-10-01

    Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.

  11. Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Spitzer, Cary R.

    1992-01-01

    Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a database for evaluating the performance of a differential GPS (DGPS)/inertial navigation system (INS) that used GPS Coarse/Acquisition-code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, together with various aircraft parameter data, were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors, computed relative to the laser tracker system, and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for the DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and the standard deviation of the x-position errors of the DGPS/INS and MLS/INS systems were found to be of similar magnitude, while the standard deviations of the y- and z-position errors were significantly larger for the DGPS/INS than for the MLS/INS.
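
    A small sketch of the summary statistics reported above, applied to a hypothetical error series; all values are invented.

    ```python
    # Mean and SD describe position-error bias and scatter; RMS is the
    # usual summary for guidance errors. Synthetic data, meters.
    import numpy as np

    rng = np.random.default_rng(4)
    err = rng.normal(0.5, 2.0, size=600)     # hypothetical y-position errors

    mean, sd = err.mean(), err.std(ddof=1)
    rms = np.sqrt(np.mean(err ** 2))
    print(f"mean={mean:.2f} m  sd={sd:.2f} m  rms={rms:.2f} m")
    ```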

  12. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
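
    One way to make the procedure concrete: under a simple strong true score model (a plain binomial here; the article's models are more general), the conditional SEM of a scale score is the conditional standard deviation of the transformed raw score. The raw-to-scale table below is invented.

    ```python
    # Conditional SEM of a scale score s(x) at true score tau: the SD of
    # s(X) under P(X = x | tau), here with X ~ Binomial(n_items, tau).
    import numpy as np
    from scipy.stats import binom

    n_items = 40
    raw = np.arange(n_items + 1)
    scale = np.round(20 + 0.5 * raw)         # hypothetical raw-to-scale table

    def csem_scale(tau):
        p = binom.pmf(raw, n_items, tau)     # P(X = x | tau)
        m = np.sum(p * scale)                # conditional mean scale score
        return np.sqrt(np.sum(p * (scale - m) ** 2))

    for tau in (0.3, 0.5, 0.7, 0.9):
        print(tau, round(csem_scale(tau), 2))
    ```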

  13. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, both frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the resulting credible risk classes are interpreted.
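
    For reference, the classical limited-fluctuation full-credibility standard for claim frequency that the abstract builds on can be sketched as below; the coverage probability p and tolerance k are conventional textbook choices, not values from the paper.

    ```python
    # Limited-fluctuation credibility for Poisson claim frequency.
    from scipy.stats import norm

    def full_credibility_standard(p=0.90, k=0.05):
        """Expected claim count needed for full credibility of frequencies."""
        z = norm.ppf((1 + p) / 2)
        return (z / k) ** 2

    def credibility_factor(n_claims, p=0.90, k=0.05):
        """Square-root rule: Z = min(1, sqrt(n / n_full))."""
        lam_full = full_credibility_standard(p, k)
        return min(1.0, (n_claims / lam_full) ** 0.5)

    print(full_credibility_standard())   # ~1082 expected claims
    print(credibility_factor(400))       # partial credibility Z ~ 0.61
    ```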

  14. Entropy of black holes in N=2 supergravity

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.

    2018-07-01

    Using the formalism of isolated horizons, we construct the space of solutions of asymptotically flat extremal black holes in N=2 pure supergravity in 4 dimensions. We prove that the laws of black hole mechanics hold for these black holes. Further, restricting to the constant-area phase space, we show that the spherical horizons admit a Chern-Simons theory. The standard way of quantizing this topological theory and counting states confirms that the entropy is indeed proportional to the area of the horizon.

  15. The cosmological constant and the energy of gravitational radiation

    NASA Astrophysics Data System (ADS)

    Chruściel, Piotr T.; Ifsits, Lukas

    2016-06-01

    We propose a definition of mass for characteristic hypersurfaces in asymptotically vacuum space-times with nonvanishing cosmological constant Λ ∈ ℝ*, generalizing the definition of Trautman and Bondi for Λ = 0. We show that our definition reduces to some standard definitions in several situations. We establish a balance formula linking the characteristic mass and a suitably defined renormalized volume of the null hypersurface, generalizing the positivity identity proved by Chruściel and Paetz when Λ = 0.

  16. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    NASA Astrophysics Data System (ADS)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
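
    The telescoping-sum idea behind MLMC can be sketched on a toy problem: couple fine and coarse Euler-Maruyama paths through shared Brownian increments and sum the level corrections. The SDE dX = -X dt + dW, the refinement factor M = 2, and the level and sample-size choices below are all invented for illustration, far simpler than an operational dispersion model.

    ```python
    # A minimal MLMC sketch for E[X_T] of the toy SDE dX = -X dt + dW.
    import numpy as np

    rng = np.random.default_rng(5)
    T, X0 = 1.0, 1.0

    def euler_pair(level, n_paths, M=2, n0=4):
        """Coupled fine/coarse Euler paths; returns P_l - P_{l-1} samples."""
        nf = n0 * M**level
        dtf = T / nf
        Xf = np.full(n_paths, X0)
        Xc = np.full(n_paths, X0)
        for k in range(nf // M):
            dWc = 0.0
            for m in range(M):                   # M fine steps...
                dW = rng.normal(0.0, np.sqrt(dtf), n_paths)
                Xf += -Xf * dtf + dW
                dWc += dW                        # ...share their increments
            Xc += -Xc * (M * dtf) + dWc          # ...with one coarse step
        return Xf - Xc if level > 0 else Xf      # level 0 has no coarse part

    # Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with fewer paths on the (expensive, low-variance) fine levels.
    levels, n_paths = 4, [40_000, 10_000, 2_500, 625]
    est = sum(euler_pair(l, n_paths[l]).mean() for l in range(levels))
    # Agreement is up to finest-level bias plus Monte Carlo error.
    print("MLMC estimate:", est, " exact mean:", X0 * np.exp(-T))
    ```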

  17. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.
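
    As a stand-in for the intensity-based calling step that the framework integrates with association testing, here is a toy EM for a two-component Gaussian mixture; the authors' likelihood is considerably richer (case-control sampling, differential measurement errors), so this only illustrates the E/M iteration.

    ```python
    # Toy EM for a two-component Gaussian mixture of intensities.
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(0.0, 0.5, 300),    # cluster 1
                        rng.normal(2.0, 0.5, 700)])   # cluster 2

    pi, mu, sd = 0.5, np.array([-0.5, 1.0]), np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: posterior probability of component 2 per observation
        d1 = np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        d2 = np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        w = pi * d2 / ((1 - pi) * d1 + pi * d2)
        # M-step: update mixing weight, means, and SDs
        pi = w.mean()
        mu = np.array([np.sum((1 - w) * x) / np.sum(1 - w),
                       np.sum(w * x) / np.sum(w)])
        sd = np.sqrt(np.array(
            [np.sum((1 - w) * (x - mu[0]) ** 2) / np.sum(1 - w),
             np.sum(w * (x - mu[1]) ** 2) / np.sum(w)]))
    print("weight, means, sds:", pi, mu, sd)
    ```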

  18. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample

    PubMed Central

    Wang, Ching-Yun; Song, Xiao

    2017-01-01

    Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women’s Health Initiative. PMID:27546625
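
    The "best linear combination" idea admits a compact sketch: for an estimator theta1 and a mean-zero contrast D (here, the difference of two naive estimators), the variance-minimizing weight is c* = -Cov(theta1, D)/Var(D), and adding c*·D leaves the mean unchanged while reducing variance. The simulated draws below are illustrative, not a Cox-model fit.

    ```python
    # Variance-minimizing linear combination of an estimator and a
    # mean-zero, correlated contrast. Simulated sampling draws.
    import numpy as np

    rng = np.random.default_rng(7)
    n_rep = 20_000
    theta1 = 1.0 + rng.normal(0, 0.30, n_rep)        # consistent but noisy
    noise = rng.normal(0, 0.25, n_rep)
    D = noise - 0.8 * (theta1 - 1.0)                 # mean zero, correlated

    c_star = -np.cov(theta1, D)[0, 1] / np.var(D)
    combined = theta1 + c_star * D                   # still unbiased: E[D]=0
    print("var before:", theta1.var(), " var after:", combined.var())
    ```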

  19. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. When presenting data, one should use adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are often used interchangeably to express variability, though they measure different parameters. The SEM quantifies the uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data around the mean. As readers are generally interested in the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
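
    A small numeric illustration of the distinction drawn above, with hypothetical data:

    ```python
    # SD describes the spread of the data; SEM = SD/sqrt(n) describes the
    # precision of the sample mean; the 95% CI is built from the SEM.
    import numpy as np

    rng = np.random.default_rng(8)
    x = rng.normal(120.0, 15.0, size=50)   # e.g., systolic BP, 50 subjects

    sd = x.std(ddof=1)
    sem = sd / np.sqrt(x.size)
    ci = (x.mean() - 1.96 * sem, x.mean() + 1.96 * sem)
    print(f"SD={sd:.1f}  SEM={sem:.1f}  95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
    ```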

  20. Conditional standard errors of measurement for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

    PubMed

    Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun

    2006-02-01

    A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
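
    Underlying the article's derivation is the IRT identity that the conditional SEM is the reciprocal square root of the test information; a minimal 2PL sketch with invented item parameters is given below. The article's additional step, propagating this through the nonlinear composite-score transformation, is not reproduced here.

    ```python
    # Under a 2PL model, I(theta) = sum a_i^2 P_i (1 - P_i) and the
    # conditional SEM of the ability estimate is 1/sqrt(I(theta)).
    import numpy as np

    rng = np.random.default_rng(9)
    a = rng.uniform(0.8, 2.0, 30)          # discriminations, 30 items
    b = rng.normal(0.0, 1.0, 30)           # difficulties

    def csem(theta):
        P = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        info = np.sum(a ** 2 * P * (1 - P))
        return 1.0 / np.sqrt(info)

    for theta in (-2, -1, 0, 1, 2):
        print(theta, round(csem(theta), 3))
    ```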
