Sample records for classical measurement error

  1. A Rasch Perspective

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Smith, Everett V., Jr.

    2007-01-01

    Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
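
    The truncated abstract rests on the classical test theory decomposition X = T + E, with reliability defined as var(T)/var(X). A minimal numerical sketch of that decomposition (all values invented, not from the article):

      import numpy as np

      rng = np.random.default_rng(0)
      T = rng.normal(50, 10, 100_000)      # true scores
      X1 = T + rng.normal(0, 5, 100_000)   # observed = true + random error
      X2 = T + rng.normal(0, 5, 100_000)   # a parallel form of the test

      print(T.var() / X1.var())            # reliability ~ 100/125 = 0.80
      print(np.corrcoef(X1, X2)[0, 1])     # parallel-forms estimate, also ~0.80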

  2. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
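
    A toy contrast between the two error types discussed above, for simple linear regression only (the study's mixed-model, autocorrelation, and mixture structure is not reproduced; all numbers are invented):

      import numpy as np

      rng = np.random.default_rng(1)
      n, beta = 100_000, 0.5
      x = rng.normal(0, 1, n)                    # true exposure
      y = beta * x + rng.normal(0, 1, n)

      # Classical error: observed = true + noise, so the slope attenuates
      w = x + rng.normal(0, 1, n)
      print(np.polyfit(w, y, 1)[0])              # ~ beta/2 = 0.25

      # Berkson error: true = assigned + noise, slope stays unbiased here
      x_assigned = rng.normal(0, 1, n)
      y_b = beta * (x_assigned + rng.normal(0, 1, n)) + rng.normal(0, 1, n)
      print(np.polyfit(x_assigned, y_b, 1)[0])   # ~ beta = 0.5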

  3. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    NASA Astrophysics Data System (ADS)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
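
    The instrumental variable identity at the heart of this abstract can be sketched in a few lines for a linear Gaussian toy model (the dissertation's Monte Carlo EM likelihood machinery is not shown; variable roles and values are assumptions):

      import numpy as np

      rng = np.random.default_rng(2)
      n, beta = 200_000, 0.3
      x = rng.normal(0, 1, n)                 # true dose
      y = beta * x + rng.normal(0, 1, n)      # response
      w = x + rng.normal(0, 1, n)             # physical-dosimetry surrogate
      z = x + rng.normal(0, 1, n)             # biodosimeter used as instrument

      naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)    # attenuated
      iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]      # consistent for beta
      s2u = np.var(w, ddof=1) - np.cov(z, w)[0, 1]      # classical error variance
      print(naive, iv, s2u)                   # ~0.15, ~0.30, ~1.0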

  4. A toolkit for measurement error correction, with a focus on nutritional epidemiology

    PubMed Central

    Keogh, Ruth H; White, Ian R

    2014-01-01

    Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
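
    A minimal regression calibration sketch with two replicate exposure measurements, the simplest case in the toolkit described above (illustrative values only; the moment reconstruction and multiple imputation methods are not shown):

      import numpy as np

      rng = np.random.default_rng(3)
      n, beta = 50_000, 0.4
      x = rng.normal(2, 1, n)                  # true intake
      y = beta * x + rng.normal(0, 1, n)
      w1 = x + rng.normal(0, 0.8, n)           # diet diary, occasion 1
      w2 = x + rng.normal(0, 0.8, n)           # diet diary, occasion 2

      s2u = np.var(w1 - w2, ddof=1) / 2        # error variance from replicates
      wbar = (w1 + w2) / 2
      lam = (np.var(wbar, ddof=1) - s2u / 2) / np.var(wbar, ddof=1)
      x_hat = wbar.mean() + lam * (wbar - wbar.mean())   # calibrated E[X | W]

      print(np.polyfit(wbar, y, 1)[0])         # naive, attenuated (~0.30)
      print(np.polyfit(x_hat, y, 1)[0])        # regression calibration (~0.40)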

  5. Multiple indicators, multiple causes measurement error models

    DOE PAGES

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...

    2014-06-25

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.

  6. Multiple Indicators, Multiple Causes Measurement Error Models

    PubMed Central

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

    Multiple Indicators, Multiple Causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood-based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535

  7. Multiple indicators, multiple causes measurement error models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.

  8. Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212

  9. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
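
    Spearman's classical correction, with a partial correction mimicked by disattenuating only one of the two variables (a standard textbook formula with made-up inputs; the article's specific modification is not reproduced here):

      def disattenuate(r_xy, rel_x=1.0, rel_y=1.0):
          """Correct an observed correlation for unreliability in x and/or y."""
          return r_xy / (rel_x * rel_y) ** 0.5

      print(disattenuate(0.42, rel_x=0.80, rel_y=0.70))  # full correction, ~0.56
      print(disattenuate(0.42, rel_x=0.80))              # partial: x only, ~0.47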

  10. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

    Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to be the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states as well as reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.

  11. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
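
    The two regression directions contrasted in this abstract can be set up on fabricated spectra as follows (a sketch only: with measurement error on the spectra, the two estimators pursue the different prediction targets the paper analyzes):

      import numpy as np

      rng = np.random.default_rng(4)
      S = np.abs(rng.normal(1.0, 0.3, (2, 10)))      # pure-component spectra
      C = rng.uniform(0, 1, (40, 2))                 # known concentrations
      A = C @ S + 0.01 * rng.normal(size=(40, 10))   # noisy mixture spectra

      # Classical least squares: fit A = C S, then invert the fitted spectra
      S_hat = np.linalg.lstsq(C, A, rcond=None)[0]
      C_cls = np.linalg.lstsq(S_hat.T, A.T, rcond=None)[0].T

      # Inverse least squares: regress concentration directly on the spectra
      b = np.linalg.lstsq(A, C[:, 0], rcond=None)[0]
      print(C[0, 0], C_cls[0, 0], A[0] @ b)          # truth vs CLS vs ILS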

  12. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  13. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  14. The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest

    ERIC Educational Resources Information Center

    Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher

    2009-01-01

    Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…

  15. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the modelled error amount for CO, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
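
    A stripped-down version of the simulation idea described above, with classical-type multiplicative error only (no semivariograms, IQR scaling, or real pollutant data; statsmodels is assumed to be available):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n, b = 5000, 0.2
      z = rng.normal(0, 1, n)                    # log-scale exposure series
      y = rng.poisson(np.exp(1.0 + b * z))       # daily ED visit counts

      z_err = z + rng.normal(0, 0.7, n)          # multiplicative (log-additive) error
      fit = sm.GLM(y, sm.add_constant(z_err),
                   family=sm.families.Poisson()).fit()
      print(fit.params[1])                       # attenuated well below b = 0.2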

  16. Reliability of a Longitudinal Sequence of Scale Ratings

    ERIC Educational Resources Information Center

    Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony

    2009-01-01

    Reliability captures the influence of error on a measurement and, in the classical setting, is defined as one minus the ratio of the error variance to the total variance. Laenen, Alonso, and Molenberghs ("Psychometrika" 73:443-448, 2007) proposed an axiomatic definition of reliability and introduced the R_T coefficient, a measure of…

  17. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.

  18. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application, under different levels of measurement error variability, was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  19. High-Threshold Low-Overhead Fault-Tolerant Classical Computation and the Replacement of Measurements with Unitary Quantum Gates.

    PubMed

    Cruikshank, Benjamin; Jacobs, Kurt

    2017-07-21

    von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.

  20. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from the determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  21. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs.

    PubMed

    Schmidt, Frank L; Le, Huy; Ilies, Remus

    2003-06-01

    On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
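
    Coefficient alpha is easy to compute, which helps explain its near-universal use; the article's point is that a single-occasion statistic like this cannot register transient error, so the coefficient of equivalence and stability additionally requires a retest on a second occasion (not simulated in this toy sketch; data below are invented):

      import numpy as np

      def cronbach_alpha(scores):                 # scores: persons x items
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(6)
      t = rng.normal(0, 1, (300, 1))              # stable trait
      items = t + rng.normal(0, 1, (300, 8))      # 8 items with random error
      print(cronbach_alpha(items))                # ~0.89 for this toy matrix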

  22. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  23. Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.

    PubMed

    Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J

    2001-09-01

    In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.

  24. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    PubMed

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.

  25. Random measurement error: Why worry? An example of cardiovascular risk factors.

    PubMed

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
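
    A toy version of the article's warning: classical error in a confounder, rather than the exposure, can push the adjusted estimate away from the null (all values invented; the true exposure effect here is zero):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200_000
      c = rng.normal(0, 1, n)                     # confounder
      x = 0.8 * c + rng.normal(0, 1, n)           # exposure, confounded by c
      y = c + rng.normal(0, 1, n)                 # outcome; exposure effect = 0
      c_obs = c + rng.normal(0, 1, n)             # confounder measured with error

      def adjusted_slope(outcome, exposure, confounder):
          X = np.column_stack([np.ones(n), exposure, confounder])
          return np.linalg.lstsq(X, outcome, rcond=None)[0][1]

      print(adjusted_slope(y, x, c))       # ~0.00: adjustment works
      print(adjusted_slope(y, x, c_obs))   # ~0.30: residual confounding inflates it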

  26. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  27. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of the inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of an intermediate parameter. An actual experiment verifies that the method can identify all error parameters of the HINS and has accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible. PMID:29695041

  28. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of the inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of an intermediate parameter. An actual experiment verifies that the method can identify all error parameters of the HINS and has accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible.

  29. Activation of zero-error classical capacity in low-dimensional quantum systems

    NASA Astrophysics Data System (ADS)

    Park, Jeonghoon; Heo, Jun

    2018-06-01

    Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.

  30. Demonstrating the Difference between Classical Test Theory and Item Response Theory Using Derived Test Data

    ERIC Educational Resources Information Center

    Magno, Carlo

    2009-01-01

    The present report demonstrates the difference between classical test theory (CTT) and item response theory (IRT) approaches using actual test data for chemistry junior high school students. The CTT and IRT were compared across two samples and two forms of test on their item difficulty, internal consistency, and measurement errors. The specific…

  31. A contemporary approach to the problem of determining physical parameters according to the results of measurements

    NASA Technical Reports Server (NTRS)

    Elyasberg, P. Y.

    1979-01-01

    The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem was approached with the assumption that the probabilities of error were known, as well as without knowledge of the distribution of the probabilities of error. The advantages of the newer approach are discussed.

  32. Quantum Steering Inequality with Tolerance for Measurement-Setting Errors: Experimentally Feasible Signature of Unbounded Violation

    NASA Astrophysics Data System (ADS)

    Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena

    2017-01-01

    Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence—steer—Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenging problem. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form where tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing the mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and development of quantum technologies, e.g., random access codes.

  33. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1 + R)^(-1) and R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is a normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the true doses are lognormally distributed), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were taken from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
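
    Of the correction methods listed above, SIMEX is the most compact to sketch; a bare-bones linear-model version of the simulate-and-extrapolate idea follows (the paper tunes SIMEX for the logistic dose-response model; all values here are invented):

      import numpy as np

      rng = np.random.default_rng(8)
      n, beta, s2u = 20_000, 1.0, 0.5
      x = rng.normal(0, 1, n)                     # true dose
      w = x + rng.normal(0, np.sqrt(s2u), n)      # classical error, known variance
      y = beta * x + rng.normal(0, 1, n)

      lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      est = [np.mean([np.polyfit(w + rng.normal(0, np.sqrt(l * s2u), n), y, 1)[0]
                      for _ in range(20)])        # naive slope with extra error
             for l in lams]

      # Extrapolate the trend back to lambda = -1, i.e. zero measurement error
      print(np.polyval(np.polyfit(lams, est, 2), -1.0))  # ~0.9 vs naive ~0.67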

  34. Errata report on Herbert Goldstein's Classical Mechanics: Second edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.; Hoffman, F.M.

    This report describes errors in Herbert Goldstein's textbook Classical Mechanics, Second Edition (Copyright 1980, ISBN 0-201-02918-9). Some of the errors in current printings of the text were corrected in the second printing; however, after communicating with Addison Wesley, the publisher for Classical Mechanics, it was discovered that the corrected galley proofs had been lost by the printer and that no one had complained of any errors in the eleven years since the second printing. The errata sheet corrects errors from all printings of the second edition.

  35. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  36. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  37. Public classical communication in quantum cryptography: Error correction, integrity, and authentication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeev, A. V.; Pomozov, D. I.; Makkaveev, A. P.

    2007-05-15

    Quantum cryptography systems combine two communication channels: a quantum and a classical one. (They can be physically implemented in the same fiber-optic link, which is employed as a quantum channel when one-photon states are transmitted and as a classical one when it carries classical data traffic.) Both channels are supposed to be insecure and accessible to an eavesdropper. Error correction in raw keys, interferometer balancing, and other procedures are performed by using the public classical channel. A discussion of the requirements to be met by the classical channel is presented.

  38. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  39. Reliability of Total Test Scores When Considered as Ordinal Measurements

    ERIC Educational Resources Information Center

    Biswas, Ajoy Kumar

    2006-01-01

    This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…

  40. Experimental evaluation of nonclassical correlations between measurement outcomes and target observable in a quantum measurement

    NASA Astrophysics Data System (ADS)

    Iinuma, Masataka; Suzuki, Yutaro; Nii, Taiki; Kinoshita, Ryuji; Hofmann, Holger F.

    2016-03-01

    In general, it is difficult to evaluate measurement errors when the initial and final conditions of the measurement make it impossible to identify the correct value of the target observable. Ozawa proposed a solution based on the operator algebra of observables which has recently been used in experiments investigating the error-disturbance trade-off of quantum measurements. Importantly, this solution makes surprisingly detailed statements about the relations between measurement outcomes and the unknown target observable. In the present paper, we investigate this relation by performing a sequence of two measurements on the polarization of a photon, so that the first measurement commutes with the target observable and the second measurement is sensitive to a complementary observable. While the initial measurement can be evaluated using classical statistics, the second measurement introduces the effects of quantum correlations between the noncommuting physical properties. By varying the resolution of the initial measurement, we can change the relative contribution of the nonclassical correlations and identify their role in the evaluation of the quantum measurement. It is shown that the most striking deviation from classical expectations is obtained at the transition between weak and strong measurements, where the competition between different statistical effects results in measurement values well outside the range of possible eigenvalues.

  41. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome informs the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  42. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theory remains the most popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved plenty of results. However, most of the existing methods make use only of the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which puts forward higher performance requirements for the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain K_k^2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of the uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).

  3. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  4. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust because it groups errors together based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
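
    The baseline the authors extend is a linear error model fit between reciprocal error and transfer resistance. The Python sketch below shows that baseline fit and a per-electrode-group variant in the spirit of the abstract; the column names, synthetic data, and group labels are invented for illustration and are not the paper's dataset or code.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        n = 1000
        df = pd.DataFrame({"R_normal": rng.lognormal(0.0, 1.0, n),
                           "electrodes": rng.integers(0, 8, n)})  # fake group label
        df["R_reciprocal"] = df["R_normal"] * (1 + 0.03 * rng.standard_normal(n))
        df["R_mean"] = 0.5 * (df["R_normal"] + df["R_reciprocal"])
        df["err"] = (df["R_normal"] - df["R_reciprocal"]).abs()

        # Conventional linear error model: |err| = a + b * |R|.
        b, a = np.polyfit(df["R_mean"], df["err"], 1)

        # Grouped variant: one linear fit per electrode group, so that
        # electrode-dependent, correlated errors get their own error level.
        group_fits = {g: np.polyfit(sub["R_mean"], sub["err"], 1)
                      for g, sub in df.groupby("electrodes")}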

  5. Power of one nonclean qubit

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi

    2017-04-01

    The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and the others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to yield a perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model, where the classically efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classically efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded-error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classically efficient simulation of the one nonclean qubit model.

  6. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
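
    The interleaving step is simple to make concrete: write codeword bits row-wise into a matrix and transmit column-wise, so that a contiguous burst of channel errors lands in many different codewords after de-interleaving. A minimal block-interleaver sketch (the depth and message length are arbitrary choices, not the paper's parameters):

        import numpy as np

        def interleave(bits, depth):
            """Row-wise write, column-wise read of a depth-row block."""
            bits = np.asarray(bits)
            cols = bits.size // depth
            return bits[:depth * cols].reshape(depth, cols).T.ravel()

        def deinterleave(bits, depth):
            bits = np.asarray(bits)
            cols = bits.size // depth
            return bits.reshape(cols, depth).T.ravel()

        msg = np.random.randint(0, 2, 64)
        assert np.array_equal(deinterleave(interleave(msg, 8), 8), msg)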

  7. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  8. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
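
    For contrast, the regression-calibration comparator named in the abstract can be written down directly from external-calibration summary statistics. A minimal sketch with invented summary values; this is the simpler comparator, not the authors' multiple-imputation procedure.

        import numpy as np

        # Summary statistics from an external calibration sample in which both
        # the true covariate X and its error-prone measurement W were recorded
        # (numbers are illustrative).
        mean_w, mean_x = 4.1, 4.0
        var_w, cov_wx = 2.5, 2.0

        # Best linear predictor E[X | W] = alpha + lam * W.
        lam = cov_wx / var_w
        alpha = mean_x - lam * mean_w

        # In the main study only W (with Y and Z) is observed; replace W by its
        # calibrated value before fitting the outcome regression.
        w_main = np.random.normal(mean_w, np.sqrt(var_w), 500)
        x_hat = alpha + lam * w_main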

  9. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  10. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table in the CNC, enhancing the machine's accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to that of well-established techniques. In this paper the method is compared with the laser interferometric system with regard to reliability, cost, and time efficiency.

  11. Making classical ground-state spin computing fault-tolerant.

    PubMed

    Crosson, I J; Bacon, D; Brown, K R

    2010-09-01

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.

  12. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    PubMed

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
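
    The SIMEX recipe itself is short: add extra simulated error to the error-prone variable at several multiples λ of the error variance, re-fit the naive estimator, and extrapolate the trend back to λ = -1, the no-error point. The sketch below applies it to a linear-regression slope, where the attenuation is easy to verify; the paper's extension to Cox hazard ratios needs survival-model machinery not shown here.

        import numpy as np

        def simex(w, y, err_var, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=100, seed=0):
            """Quadratic-extrapolation SIMEX for a least-squares slope."""
            rng = np.random.default_rng(seed)
            lams, betas = [0.0], [np.polyfit(w, y, 1)[0]]       # naive fit
            for lam in lambdas:
                b = [np.polyfit(w + rng.normal(0.0, np.sqrt(lam * err_var), w.size),
                                y, 1)[0] for _ in range(n_sim)]
                lams.append(lam)
                betas.append(np.mean(b))
            coef = np.polyfit(lams, betas, 2)                   # quadratic trend
            return np.polyval(coef, -1.0)                       # extrapolate back

        rng = np.random.default_rng(1)
        x = rng.normal(size=2000)                 # true covariate
        w = x + rng.normal(size=2000)             # classical error, variance 1
        y = x + rng.normal(scale=0.2, size=2000)  # true slope = 1
        print(np.polyfit(w, y, 1)[0], simex(w, y, err_var=1.0))
        # naive slope ~0.5; the SIMEX estimate is much closer to 1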

  13. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value of all Hall sensors is regarded as the final measurement. The calculation model is established in the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is conducted, and a Hall-sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742
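
    The averaging principle is a discrete Ampère's law: the mean tangential field over n points on a circle, times the loop circumference, approximates μ0 times the enclosed current, while the contribution of an external interfering wire largely cancels. A simulation sketch with illustrative geometry and currents (not the paper's parameters):

        import numpy as np

        MU0 = 4e-7 * np.pi

        def tangential_field(sensors, wire_xy, current):
            """Tangential B-component at each sensor from a long straight wire
            perpendicular to the sensor plane."""
            d = sensors - wire_xy
            b = MU0 * current / (2 * np.pi * (d**2).sum(axis=1))
            bx, by = -b * d[:, 1], b * d[:, 0]        # field is along (-dy, dx)
            theta = np.arctan2(sensors[:, 1], sensors[:, 0])
            return -bx * np.sin(theta) + by * np.cos(theta)

        R, I_true, n = 0.1, 1000.0, 8                 # array radius, current, sensors
        ang = 2 * np.pi * np.arange(n) / n
        sensors = R * np.column_stack([np.cos(ang), np.sin(ang)])

        # Slightly off-center primary wire plus an interfering wire at 2.5 R,
        # the spacing for which the abstract quotes <0.06% error.
        bt = (tangential_field(sensors, np.array([0.01, 0.0]), I_true)
              + tangential_field(sensors, np.array([2.5 * R, 0.0]), 500.0))

        I_est = bt.mean() * 2 * np.pi * R / MU0       # discrete Ampere's law
        print(abs(I_est - I_true) / I_true)           # relative error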

  14. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value of all Hall sensors is regarded as the final measurement. The calculation model is established in the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is conducted, and a Hall-sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.

  15. A new method of measuring gravitational acceleration in an undergraduate laboratory program

    NASA Astrophysics Data System (ADS)

    Wang, Qiaochu; Wang, Chang; Xiao, Yunhuan; Schulte, Jurgen; Shi, Qingfan

    2018-01-01

    This paper presents a high-accuracy method for measuring gravitational acceleration in an undergraduate laboratory program. The experiment is based on water in a cylindrical vessel rotating about its vertical axis at constant speed. The water surface forms a paraboloid whose focal length is related to the rotational period and the gravitational acceleration. This experimental setup avoids the classical sources of error in determining the local value of gravitational acceleration that are so prevalent in the common simple-pendulum and inclined-plane experiments. The presented method combines multiple physics concepts such as kinematics, classical mechanics, and geometric optics, offering the opportunity for lateral as well as project-based learning.
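
    The relation behind the setup is standard: a liquid rotating at angular speed ω = 2π/T forms the surface z = ω²r²/(2g), a parabola z = r²/(4f) with focal length f = g/(2ω²), so g = 8π²f/T². A worked example with invented measured values:

        import numpy as np

        T = 1.20   # rotation period, s (illustrative)
        f = 0.179  # optically measured focal length, m (illustrative)
        g = 8 * np.pi**2 * f / T**2
        print(g)   # ~9.81 m/s^2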

  16. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
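
    The attenuation factor reported above is, in effect, the slope from regressing the true physical activity level on the questionnaire value, and the sample-size inflation it implies is roughly the inverse squared correlation. A toy sketch of both quantities, with simulated values chosen only to mimic the reported ranges:

        import numpy as np

        rng = np.random.default_rng(0)
        truth = rng.normal(1.7, 0.15, 433)                        # true PAL (invented)
        quest = 0.27 * truth + 1.2 + rng.normal(0, 0.10, 433)     # questionnaire PAL

        r = np.corrcoef(quest, truth)[0, 1]
        lam = np.cov(quest, truth)[0, 1] / np.var(quest, ddof=1)  # attenuation factor

        # A slope of disease on questionnaire PAL is attenuated by lam, and the
        # sample size needed to keep power rises by roughly 1 / r**2.
        print(r, lam, 1 / r**2)   # r ~0.38, lam ~0.5, inflation ~7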

  17. Measurement error in environmental epidemiology and the shape of exposure-response curves.

    PubMed

    Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E

    2011-09-01

    Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
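
    The flattening effect is easy to reproduce numerically: classical error in the exposure smears a threshold-bearing response into an apparently linear one. A minimal simulation sketch (all parameters invented):

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.uniform(0, 10, 20000)             # true exposure
        response = np.clip(x - 5.0, 0.0, None)    # true threshold at x = 5
        w = x + rng.normal(0, 2.0, x.size)        # classical measurement error

        # Binned mean response against *measured* exposure: the threshold washes
        # out and the curve looks roughly linear down to low exposures.
        bins = np.linspace(0, 10, 21)
        idx = np.digitize(w, bins)
        smoothed = [response[idx == k].mean()
                    for k in range(1, bins.size) if np.any(idx == k)]
        print(np.round(smoothed, 2))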

  18. Coarse initial orbit determination for a geostationary satellite using single-epoch GPS measurements.

    PubMed

    Kim, Ghangho; Kim, Chongwon; Kee, Changdon

    2015-04-01

    A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under the sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite's state, even when it is impossible to apply the classical single-point solutions (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. However, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the calculated maximum position error was found to be less than approximately 40 km or even several kilometers within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adapted to reduce the probability of divergence that can be caused by large errors in the initial state.

  19. Coarse Initial Orbit Determination for a Geostationary Satellite Using Single-Epoch GPS Measurements

    PubMed Central

    Kim, Ghangho; Kim, Chongwon; Kee, Changdon

    2015-01-01

    A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under the sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite’s state, even when it is impossible to apply the classical single-point solutions (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. However, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the calculated maximum position error was found to be less than approximately 40 km or even several kilometers within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adapted to reduce the probability of divergence that can be caused by large errors in the initial state. PMID:25835299

  20. Classical simulation of quantum error correction in a Fibonacci anyon code

    NASA Astrophysics Data System (ADS)

    Burton, Simon; Brell, Courtney G.; Flammia, Steven T.

    2017-02-01

    Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 × 128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.

  1. HUMAN EYE OPTICS: Determination of positions of optical elements of the human eye

    NASA Astrophysics Data System (ADS)

    Galetskii, S. O.; Cherezova, T. Yu

    2009-02-01

    An original method for noninvasively determining the positions of elements of intraocular optics is proposed. The analytic dependence of the measurement error on the optical-scheme parameters, and the restriction on the distance to the element being measured, are determined within the framework of the proposed method. It is shown that the method can be efficiently used for determining the position of elements in the classical Gullstrand eye model and in personalised eye models. The positions of six optical surfaces of the Gullstrand eye model and four optical surfaces of the personalised eye model can be determined with an error of less than 0.25 mm.

  2. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  3. Measuring Viscosities of Gases at Atmospheric Pressure

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Hoshang, Chegini

    1987-01-01

    Variant of general capillary method for measuring viscosities of unknown gases based on use of thermal mass-flowmeter section for direct measurement of pressure drops. In technique, flowmeter serves dual role, providing data for determining volume flow rates and serving as well-characterized capillary-tube section for measurement of differential pressures across it. New method simple, sensitive, and adaptable for absolute or relative viscosity measurements of low-pressure gases. Suited for very complex hydrocarbon mixtures where limitations of classical theory and compositional errors make theoretical calculations less reliable.
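
    The classical relation behind capillary viscometry is the Hagen-Poiseuille law, η = πr⁴ΔP/(8LQ). A worked example with illustrative values in the range of air (not the paper's calibration data):

        import math

        r = 0.25e-3   # capillary radius, m
        L = 0.50      # capillary length, m
        dP = 120.0    # measured pressure drop, Pa
        Q = 2.0e-8    # volume flow rate, m^3/s
        eta = math.pi * r**4 * dP / (8 * L * Q)
        print(eta)    # ~1.8e-5 Pa*s, about the viscosity of air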

  4. Concerning the Video Drift Method to Measure Double Stars

    NASA Astrophysics Data System (ADS)

    Nugent, Richard L.; Iverson, Ernest W.

    2015-05-01

    Classical methods to measure position angles and separations of double stars rely on just a few measurements, obtained either visually or photographically. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations of the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion is one of the larger sources of error in any exposure/measurement method, and is on the order of 0.5-1.5 arcseconds. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.

  5. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model

    PubMed Central

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.

    2014-01-01

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625

  6. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures that are based on the dynamical evolution of quantum states also require the processing of classical information obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear-optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved very quickly by employing electro-optical modulators (EOMs), which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and we present their application in the context of photonic quantum computing. This includes the working principles and the characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  7. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the individual-level R² (R²_indiv) or Kendall's τ, and the trial-level R² (R²_trial). We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, R²_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure R²_trial. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also optionally allows adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The prediction of speech intelligibility in classrooms using computer models

    NASA Astrophysics Data System (ADS)

    Dance, Stephen; Dentoni, Roger

    2005-04-01

    Two classrooms were measured and modeled using the industry-standard CATT model and the Web-based model CISM. Sound levels, reverberation times, and speech intelligibility were predicted in these rooms using data for 7 octave bands. It was found that overall sound levels could be predicted to within 2 dB by both models. However, overall reverberation time was accurately predicted by CATT (14% prediction error) but not by CISM (41% prediction error); this compares to a 30% prediction error using classical theory. As for STI, CATT predicted to within 11%, CISM to within 3%, and Sabine to within 28% of the measured value. It should be noted that CISM took approximately 15 seconds to calculate, while CATT took 15 minutes. CISM is freely available online at www.whyverne.co.uk/acoustics/Pages/cism/cism.html

  9. Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.

    PubMed

    Cotton, Sue M; Crewther, David P; Crewther, Sheila G

    2005-08-01

    The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and the standard error of measurement (SEM) in classical test theory are also briefly reviewed. This is followed by illustrations of how the SEM and test reliability can aid the interpretation of a simple discrepancy-based formula for DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects the reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
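
    The classical-test-theory quantities at issue are easy to make concrete: SEM = SD * sqrt(1 - reliability), and the standard error of a difference between two observed scores combines the two SEMs in quadrature. A sketch with illustrative reliabilities and score scales:

        import math

        sd_iq, rel_iq = 15.0, 0.95   # IQ test (illustrative values)
        sd_rd, rel_rd = 15.0, 0.90   # reading achievement test

        sem_iq = sd_iq * math.sqrt(1 - rel_iq)
        sem_rd = sd_rd * math.sqrt(1 - rel_rd)

        # Standard error of the observed difference between the two scores.
        se_diff = math.sqrt(sem_iq**2 + sem_rd**2)
        print(se_diff, 1.96 * se_diff)
        # ~5.8 and ~11.4: observed discrepancies smaller than about 11 points
        # are indistinguishable from measurement error at the 95% level.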

  10. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model providing the classical expression for the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and statistical measurement accuracies are hence provided. The following sources of error are considered: the effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise, and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance for the operation of the ITER machine.
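
    The classical expression referred to above relates the coil output to the current through a mutual inductance: for an ideal coil with n turns per metre and turn area A, v(t) = -M dI/dt with M = μ0 n A, and the current is recovered by integrating the voltage. A numerical sketch with invented parameters (a drift term could be added to the integrator to mimic the integration drift listed among the error sources):

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        MU0 = 4e-7 * np.pi
        n_turns, A = 2000.0, 1e-4          # turns per metre, turn area in m^2
        M = MU0 * n_turns * A              # mutual inductance of an ideal coil

        t = np.linspace(0.0, 0.1, 10001)
        i_true = 1e4 * np.sin(2 * np.pi * 50 * t)   # 10 kA, 50 Hz test current
        v = -M * np.gradient(i_true, t)             # ideal coil output voltage

        i_est = -cumulative_trapezoid(v, t, initial=0.0) / M
        print(np.max(np.abs(i_est - i_true)))       # small discretization error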

  11. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  12. Continuous quantum measurements and the action uncertainty principle

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of a gravitational field. A stronger (and more widely applicable) form of the AUP (for ideal measurements performed in the quantum regime) is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand, correspondingly, for the measurement output and for the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  13. Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.

    PubMed

    Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna

    2016-06-27

    This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.

  14. Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants

    PubMed Central

    Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna

    2016-01-01

    This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated. PMID:27355949

  15. Quantum illumination for enhanced detection of Rayleigh-fading targets

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

    Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver, which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase, fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017)], whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation, outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(M κ̄ N_S/N_B) when M κ̄ N_S/N_B ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, N_S ≪ 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.

  16. Classical experiments revisited: smartphones and tablet PCs as experimental tools in acoustics and optics

    NASA Astrophysics Data System (ADS)

    Klein, P.; Hirth, M.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-07-01

    Smartphones and tablets are used as experimental tools and for quantitative measurements in two traditional laboratory experiments for undergraduate physics courses. The Doppler effect is analyzed and the speed of sound is determined with an accuracy of about 5% using ultrasonic frequency and two smartphones, which serve as rotating sound emitter and stationary sound detector. Emphasis is put on the investigation of measurement errors in order to judge experimentally derived results and to sensitize undergraduate students to the methods of error estimates. The distance dependence of the illuminance of a light bulb is investigated using an ambient light sensor of a mobile device. Satisfactory results indicate that the spectrum of possible smartphone experiments goes well beyond those already published for mechanics.
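
    For the rotating-emitter Doppler experiment, the extreme received frequencies f_max = f0 c/(c - v) and f_min = f0 c/(c + v) determine the speed of sound independently of the emitted frequency: c = v (f_max + f_min)/(f_max - f_min). A worked example with invented readings:

        v = 5.0                          # m/s, known tangential speed of the phone
        f_max, f_min = 20295.9, 19712.6  # Hz, extremes read off the spectrum
        c = v * (f_max + f_min) / (f_max - f_min)
        print(c)                         # ~343 m/s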

  17. A Comparison of Three Types of Test Development Procedures Using Classical and Latent Trait Methods.

    ERIC Educational Resources Information Center

    Benson, Jeri; Wilson, Michael

    Three methods of item selection were used to select sets of 38 items from a 50-item verbal analogies test and the resulting item sets were compared for internal consistency, standard errors of measurement, item difficulty, biserial item-test correlations, and relative efficiency. Three groups of 1,500 cases each were used for item selection. First…

  18. On the Benefits of Latent Variable Modeling for Norming Scales: The Case of the "Supports Intensity Scale-Children's Version"

    ERIC Educational Resources Information Center

    Seo, Hyojeong; Little, Todd D.; Shogren, Karrie A.; Lang, Kyle M.

    2016-01-01

    Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used…

  19. Characterizing Measurement Error in Test Scores across Studies: A Tutorial on Conducting "Reliability Generalization" Analyses.

    ERIC Educational Resources Information Center

    Henson, Robin K.; Thompson, Bruce

    Given the potential value of reliability generalization (RG) studies in the development of cumulative psychometric knowledge, the purpose of this paper is to provide a tutorial on how to conduct such studies and to serve as a guide for researchers wishing to use this methodology. After some brief comments on classical test theory, the paper…

  20. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    NASA Astrophysics Data System (ADS)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  1. Analysis and improvement of the quantum image matching

    NASA Astrophysics Data System (ADS)

    Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin

    2017-11-01

    We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper-left corner of the matched area plays a part in the following steps. That is to say, the paper matches only one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper-left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.

  2. Magnetometer-augmented IMU simulator: in-depth elaboration.

    PubMed

    Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel

    2015-03-04

    The location of objects is a growing research topic due, for instance, to the expansion of civil drones and intelligent vehicles. This expansion was made possible by the development of microelectromechanical systems (MEMS): inexpensive, miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements given a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be determined analytically. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests.
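
    The simulator's two-stage recipe, an error-free measurement derived from kinematics followed by a realistic error model, can be sketched for a single accelerometer axis. The function name, trajectory, and error parameters below are illustrative assumptions; the full simulator also covers rate gyroscopes and the world magnetic model.

        import numpy as np

        def simulate_accelerometer(pos, t, bias=0.05, noise_sd=0.02, seed=0):
            """1-axis accelerometer: differentiate the trajectory twice for the
            perfect measurement, then add a constant bias and white noise."""
            rng = np.random.default_rng(seed)
            vel = np.gradient(pos, t)
            acc = np.gradient(vel, t)        # error-free "perfect measurement"
            return acc + bias + rng.normal(0.0, noise_sd, acc.size)

        t = np.linspace(0.0, 10.0, 1001)
        pos = 0.5 * 0.3 * t**2               # constant 0.3 m/s^2 trajectory
        meas = simulate_accelerometer(pos, t)
        print(meas.mean())                   # ~0.35 = true 0.3 plus bias 0.05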

  3. Magnetometer-Augmented IMU Simulator: In-Depth Elaboration

    PubMed Central

    Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel

    2015-01-01

    The location of objects is a growing research topic due, for instance, to the expansion of civil drones and intelligent vehicles. This expansion was made possible by the development of microelectromechanical systems (MEMS): inexpensive, miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements given a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be determined analytically. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests. PMID:25746095

  4. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  5. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimate of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
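
    A minimal sketch of the Yule-Walker step described above: fit an AR(p) (p-order Gauss-Markov) model to the increments of a position trace and forecast the next position. The synthetic trace and all parameter values are illustrative assumptions, not the paper's datasets:

    ```python
    import numpy as np

    def yule_walker(x, p):
        """Estimate AR(p) coefficients phi_1..phi_p via the Yule-Walker equations."""
        x = np.asarray(x, float) - np.mean(x)
        n = len(x)
        r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])  # autocovariances
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
        return np.linalg.solve(R, r[1:])

    # Illustrative 1-D trace: position increments follow an AR(1) process.
    rng = np.random.default_rng(1)
    inc = np.zeros(500)
    e = 0.3 * rng.standard_normal(500)
    for t in range(1, 500):
        inc[t] = 0.6 * inc[t - 1] + e[t]      # correlated "positioning errors"
    pos = np.cumsum(1.0 + inc)

    p = 3
    d = np.diff(pos)                           # approximately stationary increments
    phi = yule_walker(d, p)                    # first coefficient should be near 0.6
    mean_d = d.mean()
    next_d = mean_d + phi @ (d[-p:][::-1] - mean_d)   # one-step-ahead forecast
    print(phi, "predicted next position:", pos[-1] + next_d)
    ```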

  6. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  7. Quantum-classical boundary for precision optical phase estimation

    NASA Astrophysics Data System (ADS)

    Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo

    2017-12-01

    Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20 % reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4 % . We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.

  8. Effect of illumination on colour vision testing with Farnsworth-Munsell 100 hue test: customized colour vision booth versus room illumination.

    PubMed

    Zahiruddin, Kowser; Banu, Shaj; Dharmarajan, Ramya; Kulothungan, Vaitheeswaran; Vijayan, Deepa; Raman, Rajiv; Sharma, Tarun

    2010-06-01

    To evaluate a customized, portable Farnsworth-Munsell 100 (FM 100) hue viewing booth for compliance with colour vision testing standards and to compare it with room illumination in subjects with normal colour vision (trichromats), subjects with acquired colour vision defects (secondary to diabetes mellitus), and subjects with congenital colour vision defects (dichromats). Discrete wavelengths of the tube in the customized booth were measured with a spectrometer using the normal incidence method and were compared with the spectral distribution of sunlight. Forty-eight subjects were recruited for the study and were divided into 3 groups: Group 1, Normal Trichromats (30 eyes); Group 2, Congenital Colour Vision Defects (16 eyes); and Group 3, Diabetes Mellitus (20 eyes). The FM 100 hue test performance was compared under two illumination conditions, booth illumination and room illumination. Total error scores of the classical method in Group 2 (mean+/-SD) for room and booth illumination were 243.05+/-85.96 and 149.85+/-54.50, respectively (p=0.0001). Group 2 demonstrated lower correlation (r=0.50, 0.55), lower reliability (Cronbach's alpha, 0.625, 0.662), and greater variability (Bland & Altman value, 10.5) in total error scores for the classical method and the moment of inertia method between the two illumination conditions when compared to the other two groups. The customized booth demonstrated illumination meeting CIE standards. The total error scores were overestimated by the classical and moment of inertia methods in all groups for room illumination compared with booth illumination; however, the overestimation was more significant in the diabetes group.
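
    For orientation, the "classical" FM 100 total error score referred to above sums, for each cap, the absolute differences from its two neighbours in the subject's arrangement and subtracts the minimum attainable value of 2 per cap. The sketch below treats the 85 caps as one sequence with fixed anchor values at the ends, a simplification of the real four-box test with pilot caps:

    ```python
    def fm100_total_error_score(arrangement, lo_anchor=0, hi_anchor=86):
        """Classical FM 100 scoring: each cap's score is the sum of absolute
        differences from its two neighbours; a perfect arrangement scores 2
        per cap, so that amount is subtracted to give the total error score."""
        seq = [lo_anchor] + list(arrangement) + [hi_anchor]
        total = sum(abs(seq[i] - seq[i - 1]) + abs(seq[i] - seq[i + 1])
                    for i in range(1, len(seq) - 1))
        return total - 2 * len(list(arrangement))

    print(fm100_total_error_score(range(1, 86)))   # perfect ordering -> 0
    swapped = list(range(1, 86))
    swapped[9], swapped[10] = swapped[10], swapped[9]
    print(fm100_total_error_score(swapped))        # transposing caps 10 and 11 -> 4
    ```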

  9. A Measurable Difference: Bridge Versus Loop

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Trig-Tek, Inc.'s Model 251A ACL-8 Anderson Current Loop (ACL) Conditioner is an eight-channel device designed to condition variable-resistance sensor signals from strain gauges and RTDs (resistance temperature devices). It uses NASA's patented ACL technology instead of the classic Wheatstone bridge. The electronic measurement circuit delivers accuracy far beyond previous methods and prevents errors caused by variation in the wires that connect sensors to data collection equipment. This is the first license to market a NASA Dryden Flight Research Center patent.

  10. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with our approach, which we call the relaxed filtering method and which takes the occurrence of the Hurst phenomenon into account, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
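
    A minimal sketch of the classical rescaled-range estimator H_R mentioned above; the window sizes and the synthetic white-noise test series are illustrative assumptions:

    ```python
    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Classical rescaled-range (R/S) estimate of the Hurst coefficient H.

        For each window size n, average the R/S statistic over disjoint
        windows, then fit log(R/S) ~ H * log(n).
        """
        x = np.asarray(x, float)
        N = len(x)
        sizes = np.unique(np.logspace(np.log10(min_chunk),
                                      np.log10(N // 2), 10).astype(int))
        log_n, log_rs = [], []
        for n in sizes:
            rs = []
            for start in range(0, N - n + 1, n):
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())    # cumulative departures from the mean
                r = dev.max() - dev.min()        # range R
                s = w.std()                      # standard deviation S
                if s > 0:
                    rs.append(r / s)
            if rs:
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(rs)))
        H, _ = np.polyfit(log_n, log_rs, 1)
        return H

    rng = np.random.default_rng(2)
    print(hurst_rs(rng.standard_normal(4096)))   # ~0.5 for uncorrelated noise
    ```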

  11. On the Benefits of Latent Variable Modeling for Norming Scales: The Case of the "Supports Intensity Scale--Children's Version"

    ERIC Educational Resources Information Center

    Seo, Hyojeong; Little, Todd D.; Shogren, Karrie A.; Lang, Kyle M.

    2016-01-01

    Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used…

  12. [Discussion on six errors of formulas corresponding to syndromes in using the classic formulas].

    PubMed

    Bao, Yan-ju; Hua, Bao-jin

    2012-12-01

    The theory of formulas corresponding to syndromes is one of the characteristics of the Treatise on Cold Damage and Miscellaneous Diseases (Shanghan Zabing Lun) and one of the main principles in applying classic prescriptions. Following the principle of formulas corresponding to syndromes is important for achieving therapeutic effect. However, some medical practitioners find that the actual clinical effect is far less than expected. Six errors in the use of classic prescriptions and the theory of formulas corresponding to syndromes are the most important causes to be considered: paying attention only to the local syndromes while neglecting the whole; paying attention only to formulas corresponding to syndromes while neglecting the pathogenesis; paying attention only to syndromes while neglecting the pulse diagnosis; paying attention only to unilateral prescriptions while neglecting combined prescriptions; paying attention only to classic prescriptions while neglecting modern formulas; and paying attention only to the formulas while neglecting the drug dosage. Therefore, in the clinical application of classic prescriptions and the theory of formulas corresponding to syndromes, it is necessary to consider not only the patient's clinical syndromes but also the combination of the main syndrome and its pathogenesis. In addition, comprehensive syndrome differentiation, modern formulas, current prescriptions, combined prescriptions, and appropriate drug dosage all contribute to avoiding clinical errors and improving clinical effects.

  13. The Quantum Socket: Wiring for Superconducting Qubits - Part 3

    NASA Astrophysics Data System (ADS)

    Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    The implementation of a quantum computer requires quantum error correction codes, which make it possible to correct errors occurring on physical quantum bits (qubits). Ensembles of physical qubits will be grouped to form logical qubits with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits, so a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction. However, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics is another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware to superconducting qubit hardware, where both are operated at millikelvin temperatures.

  14. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
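
    The paper generates its orthogonal modeling functions from the data themselves; the sketch below illustrates only the generic least-squares response-surface step with a plain quadratic basis and a synthetic response. The factor names and all numbers are assumptions for illustration:

    ```python
    import numpy as np

    # Two hypothetical factors (angle of attack, elevator deflection) and a
    # synthetic pitching-moment response with additive noise.
    rng = np.random.default_rng(3)
    alpha = rng.uniform(-5, 15, 60)
    delta = rng.uniform(-10, 10, 60)
    cm = (0.02 - 0.011 * alpha - 0.004 * delta + 1e-4 * alpha**2
          + 0.001 * rng.standard_normal(60))

    # Quadratic basis, least-squares fit, and RMS fit error as % of mean response.
    X = np.column_stack([np.ones_like(alpha), alpha, delta,
                         alpha**2, alpha * delta, delta**2])
    coef, *_ = np.linalg.lstsq(X, cm, rcond=None)
    resid = cm - X @ coef
    print("RMS fit error, % of mean response:",
          100 * np.sqrt(np.mean(resid**2)) / abs(cm.mean()))
    ```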

  15. A confirmation of the general relativistic prediction of the Lense-Thirring effect.

    PubMed

    Ciufolini, I; Pavlis, E C

    2004-10-21

    An important early prediction of Einstein's general relativity was the advance of the perihelion of Mercury's orbit, whose measurement provided one of the classical tests of Einstein's theory. The advance of the orbital point-of-closest-approach also applies to a binary pulsar system and to an Earth-orbiting satellite. General relativity also predicts that the rotation of a body like Earth will drag the local inertial frames of reference around it, which will affect the orbit of a satellite. This Lense-Thirring effect has hitherto not been detected with high accuracy, but its detection with an error of about 1 per cent is the main goal of Gravity Probe B--an ongoing space mission using orbiting gyroscopes. Here we report a measurement of the Lense-Thirring effect on two Earth satellites: it is 99 +/- 5 per cent of the value predicted by general relativity; the uncertainty of this measurement includes all known random and systematic errors, but we allow for a total +/- 10 per cent uncertainty to include underestimated and unknown sources of error.

  16. Classical vs. evolved quenching parameters and procedures in scintillation measurements: The importance of ionization quenching

    NASA Astrophysics Data System (ADS)

    Bagán, H.; Tarancón, A.; Rauret, G.; García, J. F.

    2008-07-01

    The quenching parameters used to model detection efficiency variations in scintillation measurements have not evolved since the 1970s. Meanwhile, computer capabilities have increased enormously and ionization quenching has appeared in practical measurements using plastic scintillation. This study compares the results obtained in activity quantification by plastic scintillation of 14C samples that contain colour and ionization quenchers, using classical (SIS, SCR-limited, SCR-non-limited, SIS(ext), SQP(E)) and evolved (MWA-SCR and WDW) parameters and following three calibration approaches: single step, which does not take into account the quenching mechanism; two steps, which takes into account the quenching phenomena; and multivariate calibration. Two-step calibration (ionization followed by colour) yielded the lowest relative errors, which means that each quenching phenomenon must be specifically modelled. In addition, the sample activity was quantified more accurately when the evolved parameters were used. Multivariate calibration-PLS also yielded better results than those obtained using the classical parameters, which confirms that the quenching phenomena must be taken into account. The detection limits for each calibration method and each parameter were close to those obtained theoretically using the Currie approach.

  17. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.

  18. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  19. Conservative classical and quantum resolution limits for incoherent imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-06-01

    I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 6, 031033 (2016)] remains superior to direct imaging for sufficiently high photon numbers.

  20. Study on elevated-temperature flow behavior of Ni-Cr-Mo-B ultra-heavy-plate steel via experiment and modelling

    NASA Astrophysics Data System (ADS)

    Gao, Zhi-yu; Kang, Yu; Li, Yan-shuai; Meng, Chao; Pan, Tao

    2018-04-01

    Elevated-temperature flow behavior of a novel Ni-Cr-Mo-B ultra-heavy-plate steel was investigated by conducting hot compressive deformation tests on a Gleeble-3800 thermo-mechanical simulator over a temperature range of 1123 K–1423 K, with strain rates from 0.01 s⁻¹ to 10 s⁻¹ and a height reduction of 70%. Based on the experimental results, a classic strain-compensated Arrhenius-type model, a new revised strain-compensated Arrhenius-type model, and a classic modified Johnson-Cook constitutive model were developed for predicting the high-temperature deformation behavior of the steel. The predictability of these models was comparatively evaluated in terms of statistical parameters including the correlation coefficient (R), average absolute relative error (AARE), root mean square error (RMSE), normalized mean bias error (NMBE), and relative error. The statistical results indicate that the new revised strain-compensated Arrhenius-type model accurately predicts the elevated-temperature flow stress of the steel over the entire range of process conditions. The predictions of the classic modified Johnson-Cook model, however, do not agree well with the experimental values; the classic strain-compensated Arrhenius-type model tracks the deformation behavior more accurately than the modified Johnson-Cook model, but less accurately than the new revised strain-compensated Arrhenius-type model. In addition, the reasons for the differences in predictability of these models are discussed in detail.
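
    The statistical parameters used for the comparison can be computed as below; the flow-stress numbers are made up for illustration, not taken from the paper:

    ```python
    import numpy as np

    def fit_statistics(measured, predicted):
        """Statistical measures commonly used to compare constitutive models."""
        m = np.asarray(measured, float)
        p = np.asarray(predicted, float)
        r = np.corrcoef(m, p)[0, 1]                    # correlation coefficient R
        aare = np.mean(np.abs((m - p) / m)) * 100      # average absolute relative error, %
        rmse = np.sqrt(np.mean((m - p) ** 2))          # root mean square error
        nmbe = np.sum(p - m) / np.sum(m) * 100         # normalized mean bias error, %
        return r, aare, rmse, nmbe

    # Illustrative flow-stress values (MPa).
    measured = np.array([120.0, 95.0, 80.0, 60.0, 45.0])
    predicted = np.array([118.0, 97.5, 78.0, 62.0, 44.0])
    print(fit_statistics(measured, predicted))
    ```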

  1. An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería

    NASA Astrophysics Data System (ADS)

    Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús

    2017-06-01

    The optical quality of a heliostat basically quantifies the difference between the scattering effects of the actual solar radiation reflected on its optical surface and the so-called canonical dispersion, that is, the one reflected on an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; so, any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface is welcome. Those error sources are responsible for the final optical quality value, with different degrees of influence. For the constructor of heliostats it is extremely useful to know the value of the classical sources of error and their weight in the overall optical quality of a heliostat, such as facet geometry or focal length, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length, and facet misalignment, and also the possible dependence of these effects on mechanical and/or meteorological factors. The goal of the present paper is to unfold these optical quality error sources by exploring the reflecting surface of the heliostat directly with the help of a laser-scanner device and linking the result with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.

  2. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. This formula provides a new family of protocols, the private father protocol, under the resource inequality framework that includes private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.

  3. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation method in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate in simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic measurement system through direct contact.
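
    A minimal sketch of MUSIC-based heart-rate estimation as described above: eigendecompose a snapshot covariance matrix, keep the noise subspace, and scan a pseudospectrum over the cardiac band. The sampling rate, window length, and test signal are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    def music_spectrum(x, fs, model_order, n_lags=64, freqs=None):
        """MUSIC pseudospectrum of a real 1-D signal.

        The covariance matrix is built from overlapping snapshots; the
        `model_order` largest eigenvectors span the signal subspace and
        the remaining ones span the noise subspace.
        """
        x = np.asarray(x, float) - np.mean(x)
        snaps = np.array([x[i:i + n_lags] for i in range(len(x) - n_lags)]).T
        R = snaps @ snaps.T / snaps.shape[1]
        w, v = np.linalg.eigh(R)                # eigenvalues in ascending order
        noise = v[:, :n_lags - model_order]     # noise-subspace eigenvectors
        if freqs is None:
            freqs = np.linspace(0.5, 3.5, 1000)  # 30-210 beats per minute
        k = np.arange(n_lags)
        p = [1.0 / np.linalg.norm(noise.T @ np.exp(-2j * np.pi * f / fs * k)) ** 2
             for f in freqs]                     # peaks where steering vector is
        return np.asarray(freqs), np.array(p)    # orthogonal to the noise subspace

    # Illustrative: a 1.2 Hz (72 bpm) "cardiac" tone in noise, 10 s at 50 Hz.
    fs = 50.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(4)
    sig = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(len(t))
    f, p = music_spectrum(sig, fs, model_order=2)
    print("estimated heart rate: %.1f bpm" % (60 * f[np.argmax(p)]))
    ```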

  4. InGaAs tunnel diodes for the calibration of semi-classical and quantum mechanical band-to-band tunneling models

    NASA Astrophysics Data System (ADS)

    Smets, Quentin; Verreck, Devin; Verhulst, Anne S.; Rooyackers, Rita; Merckling, Clément; Van De Put, Maarten; Simoen, Eddy; Vandervorst, Wilfried; Collaert, Nadine; Thean, Voon Y.; Sorée, Bart; Groeseneken, Guido; Heyns, Marc M.

    2014-05-01

    Promising predictions have been made for III-V tunnel field-effect transistors (TFETs), but there is still uncertainty about the parameters used in the band-to-band tunneling models. Therefore, two simulators are calibrated in this paper: the first uses a semi-classical tunneling model based on Kane's formalism, and the second is a quantum mechanical simulator implemented with an envelope function formalism. The calibration is done for In0.53Ga0.47As using several p+/intrinsic/n+ diodes with different intrinsic region thicknesses. The dopant profile is determined by SIMS and capacitance-voltage measurements. Error bars are used based on statistical and systematic uncertainties in the measurement techniques. The obtained parameters are in close agreement with theoretically predicted values and validate the semi-classical and quantum mechanical models. Finally, the models are applied to predict the input characteristics of In0.53Ga0.47As n- and p-line TFETs, with the n-line TFET showing competitive performance compared to the MOSFET.
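
    For reference, Kane-type semi-classical band-to-band tunneling models are commonly written in the form below; the abstract does not give the simulator's exact parameterization, so take the symbols and exponents as an assumption rather than the paper's equation:

    ```latex
    % Kane-type band-to-band tunneling generation rate: A and B are the
    % calibrated material parameters, F the local electric field, E_g the
    % band gap, and P = 2 (direct) or 2.5 (phonon-assisted tunneling).
    G_{\mathrm{BTBT}} = A \,\frac{F^{P}}{\sqrt{E_g}}
        \exp\!\left(-\frac{B\,E_g^{3/2}}{F}\right)
    ```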

  5. Noise management to achieve superiority in quantum information systems

    NASA Astrophysics Data System (ADS)

    Nemoto, Kae; Devitt, Simon; Munro, William J.

    2017-06-01

    Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'.

  6. Noise management to achieve superiority in quantum information systems.

    PubMed

    Nemoto, Kae; Devitt, Simon; Munro, William J

    2017-08-06

    Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'. © 2017 The Author(s).

  7. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis with Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on quantitative analysis with CLS and WLS. Results indicated that for wavenumbers with low absorbance, the bias error significantly affects the total error, such that CLS performs better than WLS; for wavenumbers with high absorbance, the noise significantly affects the total error, and WLS proves better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS is presented to tackle the bias error from other components. SWLS without the modification gives the lowest SEP in all cases, but not the lowest bias and RSS; the modification reduces the bias and yields a lower RSS than CLS, especially for small components.
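
    A minimal sketch contrasting the CLS and WLS estimators discussed above, on synthetic two-component spectra with heteroscedastic noise; the spectral shapes, concentrations, and variance model are all assumptions for illustration:

    ```python
    import numpy as np

    def cls_fit(A, y):
        """Classical least squares: minimize ||A c - y||^2."""
        return np.linalg.lstsq(A, y, rcond=None)[0]

    def wls_fit(A, y, noise_var):
        """Weighted least squares: weight each wavenumber by 1/variance."""
        w = 1.0 / np.asarray(noise_var)
        Aw = A * w[:, None]
        return np.linalg.solve(A.T @ Aw, Aw.T @ y)   # (A'WA) c = A'Wy

    # Illustrative two-component "spectra" on a pseudo wavenumber axis.
    rng = np.random.default_rng(5)
    wn = np.linspace(0, 1, 200)
    pure = np.column_stack([np.exp(-((wn - 0.3) / 0.05) ** 2),
                            np.exp(-((wn - 0.7) / 0.05) ** 2)])
    true_c = np.array([0.8, 0.3])
    noise_var = 1e-6 + 1e-4 * pure @ true_c          # variance grows with absorbance
    y = pure @ true_c + rng.standard_normal(200) * np.sqrt(noise_var)
    print("CLS:", cls_fit(pure, y), "WLS:", wls_fit(pure, y, noise_var))
    ```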

  8. Bayesian inversions of a dynamic vegetation model in four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; François, L.

    2015-01-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreement between measured and simulated data during calibration is comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we show that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in model parameter posterior distributions that are overall similar to those obtained by fixing these variances beforehand, while slightly improving model performance. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.

  9. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized via the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the result of 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, so it is more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during the calibration process, and the accuracy is improved significantly.

  10. Correcting for deformation in skin-based marker systems.

    PubMed

    Alexander, E J; Andriacchi, T P

    2001-03-01

    A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. It was tested using simulation trials with systematic and random components of deformation error introduced into the marker position vectors, and was found to substantially outperform methods that require rigid-body assumptions. The method was also tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in the limb segment pose estimate by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method demonstrates that, by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.

  11. Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    2017-06-14

    There are wide applications for zonal reconstruction methods in slope-based metrology due to their good capability of reconstructing the local details of a surface profile. It was noticed in the literature that large reconstruction errors occur when using zonal reconstruction methods designed for rectangular geometry to process slopes in a quadrilateral geometry, which is a more general geometry with phase measuring deflectometry. In this paper, we present a new idea for the zonal methods for quadrilateral geometry. Instead of employing the intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification on the classical zonal methods is addressed. Finally, the new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset in an arbitrary aperture, and the low computational complexity comparable with the classical zonal method. Of course, the accuracy of the new methods is much higher when integrating the slopes in quadrilateral geometry.

  12. Focusing in Arthurs-Kelly-type joint measurements with correlated probes.

    PubMed

    Bullock, Thomas J; Busch, Paul

    2014-09-19

    Joint approximate measurement schemes of position and momentum provide us with a means of inferring pieces of complementary information if we allow for the irreducible noise required by quantum theory. One such scheme is given by the Arthurs-Kelly model, where information about a system is extracted via indirect probe measurements, assuming separable uncorrelated probes. Here, following Di Lorenzo [Phys. Rev. Lett. 110, 120403 (2013)], we extend this model to both entangled and classically correlated probes, achieving full generality. We show that correlated probes can produce more precise joint measurement outcomes than the same probes can achieve if applied alone to realize a position or momentum measurement. This phenomenon of focusing may be useful where one tries to optimize measurements with limited physical resources. Contrary to Di Lorenzo's claim, we find that there are no violations of Heisenberg's error-disturbance relation in these generalized Arthurs-Kelly models. This is simply due to the fact that, as we show, the measured observable of the system under consideration is covariant under phase space translations and as such is known to obey a tight joint measurement error relation.

  13. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987

  14. Dysfunctional error-related processing in female psychopathy

    PubMed Central

    Steele, Vaughn R.; Edwards, Bethany G.; Bernat, Edward M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Neurocognitive studies of psychopathy have predominantly focused on male samples. Studies have shown that female psychopaths exhibit similar affective deficits as their male counterparts, but results are less consistent across cognitive domains including response modulation. As such, there may be potential gender differences in error-related processing in psychopathic personality. Here we investigate response-locked event-related potential (ERP) components [the error-related negativity (ERN/Ne) related to early error-detection processes and the error-related positivity (Pe) involved in later post-error processing] in a sample of incarcerated adult female offenders (n = 121) who performed a response inhibition Go/NoGo task. Psychopathy was assessed using the Hare Psychopathy Checklist-Revised (PCL-R). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Consistent with previous research performed in psychopathic males, female psychopaths exhibited specific deficiencies in the neural correlates of post-error processing (as indexed by reduced Pe amplitude) but not in error monitoring (as indexed by intact ERN/Ne amplitude). Specifically, psychopathic traits reflecting interpersonal and affective dysfunction remained significant predictors of both time-domain and PCA measures reflecting reduced Pe mean amplitude. This is the first evidence to suggest that incarcerated female psychopaths exhibit similar dysfunctional post-error processing as male psychopaths. PMID:26060326

  15. Contextual Advantage for State Discrimination

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.

    2018-02-01

    Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.

  16. Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro

    2018-05-01

    The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.

  17. Merging bottom-up and top-down precipitation products using a stochastic error model

    NASA Astrophysics Data System (ADS)

    Maggioni, Viviana; Massari, Christian; Brocca, Luca; Ciabatta, Luca

    2017-04-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and the forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). Recently, Brocca et al. (2014) proposed an alternative approach (SM2RAIN) that allows rainfall to be estimated from space by using satellite soil moisture observations. In contrast with classical satellite precipitation products, which sense cloud properties to retrieve the instantaneous precipitation, this new bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation fallen within the interval between two satellite passes. As a result, the nature of the measurement is different from and complementary to that of classical precipitation products, and could provide a valid additional perspective to improve current satellite rainfall estimates via appropriate integration between the products (i.e., SM2RAIN plus a classical satellite rainfall product). However, whether SM2RAIN is able to improve the performance of any state-of-the-art satellite rainfall product depends upon an adequate quantification and characterization of the relative errors of the products. In this study, the stochastic rainfall error model SREM2D (Hossain and Anagnostou 2006) is used to characterize the retrieval error of both SM2RAIN and a state-of-the-art satellite precipitation product (3B42RT). The error characterization serves an optimal integration between SM2RAIN and 3B42RT, enhancing the capability of the resulting integrated product (SM2RAIN+3B42RT) in operational hydrology. The study, conducted in Italy over a 5-yr period (2010-2014) using a dense network of rain gauges (about 3000) as a benchmark, demonstrates that the integration improves the correlation and the root mean squared error of SM2RAIN+3B42RT with respect to the parent products. This suggests a potential benefit of merging SM2RAIN-derived rainfall with state-of-the-art satellite precipitation estimates to create a product characterized by higher accuracy and better performance when used in the context of operational hydrology. References: Brocca, L.; Ciabatta, L.; Massari, C.; Moramarco, T.; Hahn, S.; Hasenauer, S.; Kidd, R.; Dorigo, W.; Wagner, W.; Levizzani, V. Soil as a natural rain gauge: Estimating global rainfall from satellite soil moisture data. J. Geophys. Res. Atmos. 2014, 119, 5128-5141. Hossain, F.; Anagnostou, E. N. A two-dimensional satellite rainfall error model. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1511-1522.

  18. SPSS and SAS programs for generalizability theory analyses.

    PubMed

    Mushquash, Christopher; O'Connor, Brian P

    2006-08-01

    The identification and reduction of measurement errors is a major challenge in psychological testing. Most investigators rely solely on classical test theory for assessing reliability, whereas most experts have long recommended using generalizability theory instead. One reason for the common neglect of generalizability theory is the absence of analytic facilities for this purpose in popular statistical software packages. This article provides a brief introduction to generalizability theory, describes easy-to-use SPSS, SAS, and MATLAB programs for conducting the recommended analyses, and provides an illustrative example, using data (N = 329) for the Rosenberg Self-Esteem Scale. Program output includes variance components, relative and absolute errors, generalizability coefficients, coefficients for D studies, and graphs of D study results.
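
    For a flavor of the recommended analyses, the sketch below estimates variance components and a generalizability coefficient for the simplest one-facet persons x items design; the data are simulated, and the cited programs cover far more designs than this:

    ```python
    import numpy as np

    def g_study_pxi(scores):
        """Variance components for a one-facet persons x items G study.

        `scores` is an (n_persons, n_items) array; components follow the
        usual expected-mean-square equations for a fully crossed design.
        """
        n_p, n_i = scores.shape
        grand = scores.mean()
        p_means = scores.mean(axis=1)
        i_means = scores.mean(axis=0)
        ms_p = n_i * np.sum((p_means - grand) ** 2) / (n_p - 1)
        ms_i = n_p * np.sum((i_means - grand) ** 2) / (n_i - 1)
        resid = scores - p_means[:, None] - i_means[None, :] + grand
        ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
        var_p = max((ms_p - ms_res) / n_i, 0.0)   # person (universe-score) variance
        var_i = max((ms_i - ms_res) / n_p, 0.0)   # item variance
        var_pi = ms_res                           # interaction + residual error
        g_coef = var_p / (var_p + var_pi / n_i)   # relative G coefficient
        return var_p, var_i, var_pi, g_coef

    rng = np.random.default_rng(6)
    data = rng.normal(3.0, 1.0, (50, 10)) + rng.normal(0, 0.5, (50, 1))
    print(g_study_pxi(data))
    ```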

  19. Quantum proofs can be verified using only single-qubit measurements

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Nagaj, Daniel; Schuch, Norbert

    2016-02-01

    Quantum Merlin Arthur (QMA) is the class of problems which, though potentially hard to solve, have a quantum solution that can be verified efficiently using a quantum computer. It thus forms a natural quantum version of the classical complexity class NP (and its probabilistic variant MA, Merlin-Arthur games), where the verifier has only classical computational resources. In this paper, we study what happens when we restrict the quantum resources of the verifier to the bare minimum: individual measurements on single qubits received as they come, one by one. We find that despite this grave restriction, it is still possible to soundly verify any problem in QMA for the verifier with the minimum quantum resources possible, without using any quantum memory or multiqubit operations. We provide two independent proofs of this fact, based on measurement-based quantum computation and the local Hamiltonian problem. The former construction also applies to QMA1, i.e., QMA with one-sided error.

  20. Nucleation theory - Is replacement free energy needed? [Error analysis of capillary approximation]

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1982-01-01

    It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.

  1. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
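
    For orientation, the classical Bland-Altman agreement interval that the paper takes as its starting point can be computed in a few lines (a minimal sketch with invented data; the tolerance, predictive and correlated-errors-in-variables machinery of the paper is not reproduced here).

        import numpy as np

        x = np.array([10.1, 12.3, 9.8, 14.2, 11.0, 13.5])   # method X
        y = np.array([10.6, 12.0, 10.3, 14.9, 11.4, 13.1])  # method Y, same subjects

        d = y - x
        bias = d.mean()                       # mean difference between methods
        half_width = 1.96 * d.std(ddof=1)     # 95% limits of agreement
        print(f"bias = {bias:.2f}, LoA = [{bias - half_width:.2f}, "
              f"{bias + half_width:.2f}]")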

  2. Security of quantum key distribution with multiphoton components

    PubMed Central

    Yin, Hua-Lei; Fu, Yao; Mao, Yingqiu; Chen, Zeng-Bing

    2016-01-01

    Most qubit-based quantum key distribution (QKD) protocols extract the secure key merely from the single-photon component of the attenuated lasers. However, with the Scarani-Acin-Ribordy-Gisin 2004 (SARG04) QKD protocol, an unconditionally secure key can be extracted from the two-photon component by modifying the classical post-processing procedure of the BB84 protocol. Employing the merits of the SARG04 QKD protocol and six-state preparation, one can extract a secure key from the components of one up to four photons. In this paper, we provide the exact relations between the secure key rate and the bit error rate in a six-state SARG04 protocol with single-photon, two-photon, three-photon, and four-photon sources. By restricting the mutual information between the phase error and bit error, we obtain a higher secure bit error rate threshold for the multiphoton components than previous works. In addition, we compare the performance of the six-state SARG04 with other prepare-and-measure QKD protocols using decoy states. PMID:27383014

  3. Verifiable fault tolerance in measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Hayashi, Masahito

    2017-09-01

    Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. A further problem is that quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses on fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is achieved by a constant-time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify the experimental quantum error correction.

  4. Evolution of modern approaches to express uncertainty in measurement

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu; Sommer, Klaus-Dieter; Kessel, Rüdiger

    2007-12-01

    An object of this paper is to discuss the logical development of the concept of uncertainty in measurement and the methods for its quantification from the classical error analysis to the modern approaches based on the Guide to the Expression of Uncertainty in Measurement (GUM). We review authoritative literature on error analysis and then discuss its limitations which motivated the experts from the International Committee for Weights and Measures (CIPM), the International Bureau of Weights and Measures (BIPM) and various national metrology institutes to develop specific recommendations which form the basis of the GUM. We discuss the new concepts introduced by the GUM and their merits and limitations. The limitations of the GUM led the BIPM Joint Committee on Guides in Metrology to develop an alternative approach—the draft Supplement 1 to the GUM (draft GUM-S1). We discuss the draft GUM-S1 and its merits and limitations. We hope this discussion will lead to a more effective use of the GUM and the draft GUM-S1 and stimulate investigations leading to further improvements in the methods to quantify uncertainty in measurement.

  5. Rasch-family models are more valuable than score-based approaches for analysing longitudinal patient-reported outcomes with missing data.

    PubMed

    de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique

    2016-10-01

    The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and Rasch model in terms of bias, control of the type I error and power of the test of time effect. The type I error was controlled for classical test theory and Rasch model whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, Rasch model remained unbiased and displayed higher power than classical test theory. Rasch model performed better than the classical test theory approach regarding the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items mainly for power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.

  6. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

    A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
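
    The quoted ~11% threshold can be reproduced numerically; the sketch below assumes C̄(ρ)/2 = 1/2 bit (as for the standard BB84 input ensemble, an assumption here) and solves the transcendental equation for the binary entropy H.

        from math import log2
        from scipy.optimize import brentq

        def H(q):
            """Binary Shannon entropy in bits."""
            return -q * log2(q) - (1 - q) * log2(1 - q)

        # Solve H(Q_c) = 1/2 for the critical bit error rate Q_c.
        q_c = brentq(lambda q: H(q) - 0.5, 1e-9, 0.5)
        print(f"Q_c = {q_c:.4f}")   # ~0.1100, i.e. about 11%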

  7. One-dimensional angular-measurement-based stitching interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present one-dimensional stitching interferometry based on angular measurement for high-precision mirror metrology. The tilt error introduced by the stage motion during the stitching process is measured by an extra angular measurement device. The local profile measured by the interferometer in a single field of view is corrected using the measured angle before the piston adjustment in the stitching process. Compared to the classical software stitching technique, the angle-measuring stitching technique is more reliable and accurate in profiling the mirror surface at the nanometer level. Experimental results demonstrate the feasibility of the proposed stitching technique. Based on our measurements, the typical repeatability within a 200 mm scanning range is 0.5 nm RMS or less.

  8. One-dimensional angular-measurement-based stitching interferometry

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2018-04-05

    In this paper, we present one-dimensional stitching interferometry based on angular measurement for high-precision mirror metrology. The tilt error introduced by the stage motion during the stitching process is measured by an extra angular measurement device. The local profile measured by the interferometer in a single field of view is corrected using the measured angle before the piston adjustment in the stitching process. Compared to the classical software stitching technique, the angle-measuring stitching technique is more reliable and accurate in profiling the mirror surface at the nanometer level. Experimental results demonstrate the feasibility of the proposed stitching technique. Based on our measurements, the typical repeatability within a 200 mm scanning range is 0.5 nm RMS or less.

  9. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
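
    The unrestricted benchmark referred to above is the Helstrom minimum-error probability; a minimal sketch for two equiprobable pure qubit states (illustrative states, not the paper's bipartite examples):

        import numpy as np

        def helstrom_error(rho0, rho1, p0=0.5):
            """Minimum error probability: (1 - ||p1*rho1 - p0*rho0||_1) / 2."""
            gamma = (1 - p0) * rho1 - p0 * rho0
            trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
            return 0.5 * (1.0 - trace_norm)

        ket0 = np.array([1.0, 0.0])
        ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)      # overlap 1/sqrt(2)
        print(helstrom_error(np.outer(ket0, ket0),
                             np.outer(ketp, ketp)))     # ~0.146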

  10. The Soil Sink for Nitrous Oxide: Trivial Amount but Challenging Question

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.; Savage, K. E.; Sihi, D.

    2015-12-01

    Net uptake of atmospheric nitrous oxide (N2O) has been observed sporadically for many years. Such observations have often been discounted as measurement error or noise, but they were reported frequently enough to gain some acceptance as valid. The advent of fast response field instruments with good sensitivity and precision has permitted confirmation that some soils can be small sinks of N2O. With regard to "closing the global N2O budget," the soil sink is trivial, because it is smaller than the error terms of most other budget components. Although not important from a global budget perspective, the existence of a soil sink for atmospheric N2O presents a fascinating challenge for understanding the physical, chemical, and biological processes that explain the sink. Reduction of N2O by classical biological denitrification requires reducing conditions generally found in wet soil, and yet we have measured the N2O sink in well drained soils, where we also simultaneously measure a sink for atmospheric methane (CH4). Co-occurrence of N2O reduction and CH4 oxidation would require a broad range of microsite conditions within the soil, spanning high and low oxygen concentrations. Abiotic sinks for N2O or other biological processes that consume N2O could exist, but have not yet been identified. We are attempting to simulate processes of diffusion of N2O, CH4, and O2 from the atmosphere and within a soil profile to determine if classical biological N2O reduction and CH4 oxidation at rates consistent with measured fluxes are plausible.

  11. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator representing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists of computing inversions for all possible error distributions, weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.

  12. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
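
    The effect is easy to observe numerically; the sketch below (illustrative integrand, scrambled Sobol points standing in for a quasi-random ensemble) compares plain Monte Carlo and Quasi-Monte Carlo errors at equal sample size.

        import numpy as np
        from scipy.stats import qmc

        N, d = 2 ** 12, 2
        f = lambda u: np.prod(np.sin(np.pi * u), axis=1)  # integrand on [0,1]^d
        exact = (2.0 / np.pi) ** d                        # analytic integral

        mc_est = f(np.random.default_rng(1).random((N, d))).mean()
        qmc_est = f(qmc.Sobol(d, seed=1).random(N)).mean()
        print(abs(mc_est - exact), abs(qmc_est - exact))  # QMC error is smaller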

  13. Adaptive output feedback control of uncertain nonlinear systems using single-hidden-layer neural networks.

    PubMed

    Hovakimyan, N; Nardi, F; Calise, A; Kim, Nakwan

    2002-01-01

    We consider adaptive output feedback control of uncertain nonlinear systems, in which both the dynamics and the dimension of the regulated system may be unknown. However, the relative degree of the regulated output is assumed to be known. Given a smooth reference trajectory, the problem is to design a controller that forces the system measurement to track it with bounded errors. The classical approach requires a state observer. Finding a good observer for an uncertain nonlinear system is not an obvious task. We argue that it is sufficient to build an observer for the output tracking error. Ultimate boundedness of the error signals is shown through Lyapunov's direct method. The theoretical results are illustrated in the design of a controller for a fourth-order nonlinear system of relative degree two and a high-bandwidth attitude command system for a model R-50 helicopter.

  14. Optimal subsystem approach to multi-qubit quantum state discrimination and experimental investigation

    NASA Astrophysics Data System (ADS)

    Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun

    2018-02-01

    Quantum computing is a significant computing capability which is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect outcomes cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimal measuring method, which requires fewer repetitions and a lower error rate. However, extending current measurement approaches, mainly aimed at quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as the need for global operations, which carry considerable costs in the experimental realm. Therefore, in this study, we propose an optimal subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method with the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.

  15. δ13C and δ18O isotopic composition of CaCO3 measured by continuous flow isotope ratio mass spectrometry: statistical evaluation and verification by application to Devils Hole core DH-11 calcite.

    PubMed

    Révész, Kinga M; Landwehr, Jurate M

    2002-01-01

    A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 ± 20 µg) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H3PO4/CaCO3) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H3PO4/CaCO3 reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 °C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was

  16. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion "to feel" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  17. Simultaneous estimation of aquifer thickness, conductivity, and BC using borehole and hydrodynamic data with geostatistical inverse direct method

    NASA Astrophysics Data System (ADS)

    Gao, F.; Zhang, Y.

    2017-12-01

    A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of the measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitudes reaching up to +/- 10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a +/- 5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness becomes less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations, such as thicknesses, Darcy fluxes, or flow rates. The data requirements of the new inversion method are thus not much different from those of interpreting classic well tests. Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers, while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.

  18. Aniseikonia quantification: error rate of rule of thumb estimation.

    PubMed

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: for Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (or 13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: in Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.
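
    As a purely hypothetical illustration of the comparison reported above, the sketch below takes the "Rule of Thumb" to be roughly 1% of aniseikonia per diopter of anisometropia (an assumption; the record does not state the exact rule used) and contrasts it with invented eikonometer readings.

        def rule_of_thumb(aniso_diopters, pct_per_diopter=1.0):
            """Hypothetical Rule of Thumb: ~1% aniseikonia per diopter."""
            return pct_per_diopter * abs(aniso_diopters)

        for aniso, eikonometer_pct in [(1.5, 1.0), (2.0, 4.5), (0.75, 0.5)]:
            print(f"RoT {rule_of_thumb(aniso):.1f}% "
                  f"vs eikonometer {eikonometer_pct:.1f}%")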

  19. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  20. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and the harmonic decomposition method is then utilized to estimate the compensation coefficients. The proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of the magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT obtained after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° achieved by the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
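
    A classical (linear) harmonic-decomposition calibration, of the kind the paper's nonlinear algorithm is compared against, can be sketched as follows: the heading error is modelled as a truncated Fourier series of the heading and the compensation coefficients are recovered by least squares (synthetic data; the paper's nonlinear error terms are not included).

        import numpy as np

        theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)  # compass swing
        true_coef = np.array([0.5, 1.2, -0.8, 0.3, 0.1])       # degrees, synthetic
        M = np.column_stack([np.ones_like(theta),
                             np.sin(theta), np.cos(theta),
                             np.sin(2 * theta), np.cos(2 * theta)])
        delta = M @ true_coef + np.random.default_rng(2).normal(0, 0.05, theta.size)

        coef, *_ = np.linalg.lstsq(M, delta, rcond=None)
        print(np.round(coef, 2))   # recovered compensation coefficients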

  1. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of freedom of each term.
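
    For a budget composed of sums and differences of independent terms, classical first-order propagation reduces to adding standard errors in quadrature; a minimal sketch with invented budget terms:

        import numpy as np

        terms = {"river_in": (120.0, 8.0), "atmos_in": (15.0, 3.0),
                 "burial": (-40.0, 6.0), "export": (-70.0, 9.0)}  # (estimate, SE)

        net = sum(v for v, _ in terms.values())
        se_net = np.sqrt(sum(se ** 2 for _, se in terms.values()))
        print(f"net budget = {net:.1f} +/- {se_net:.1f}")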

  2. Measurement of latent cognitive abilities involved in concept identification learning.

    PubMed

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B

    2015-01-01

    We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with the number of concepts learned, and latent set-shifting ability was negatively correlated with the number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.

  3. Quantum memory receiver for superadditive communication using binary coherent states

    NASA Astrophysics Data System (ADS)

    Klimek, Aleksandra; Jachura, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2016-11-01

    We propose a simple architecture based on multimode quantum memories for collective readout of classical information keyed using a pair of coherent states, exemplified by the well-known binary phase shift keying format. Such a configuration enables demonstration of the superadditivity effect in classical communication over quantum channels, where the transmission rate becomes enhanced through joint detection applied to multiple channel uses. The proposed scheme relies on the recently introduced idea to prepare Hadamard sequences of input symbols that are mapped by a linear optical transformation onto the pulse position modulation format [Guha, S. Phys. Rev. Lett. 2011, 106, 240502]. We analyze two versions of readout based on direct detection and an optional Dolinar receiver which implements the minimum-error measurement for individual detection of a binary coherent state alphabet.

  4. Quantum memory receiver for superadditive communication using binary coherent states.

    PubMed

    Klimek, Aleksandra; Jachura, Michał; Wasilewski, Wojciech; Banaszek, Konrad

    2016-11-12

    We propose a simple architecture based on multimode quantum memories for collective readout of classical information keyed using a pair of coherent states, exemplified by the well-known binary phase shift keying format. Such a configuration enables demonstration of the superadditivity effect in classical communication over quantum channels, where the transmission rate becomes enhanced through joint detection applied to multiple channel uses. The proposed scheme relies on the recently introduced idea to prepare Hadamard sequences of input symbols that are mapped by a linear optical transformation onto the pulse position modulation format [Guha, S. Phys. Rev. Lett. 2011, 106, 240502]. We analyze two versions of readout based on direct detection and an optional Dolinar receiver which implements the minimum-error measurement for individual detection of a binary coherent state alphabet.

  5. Intraindividual variability in inhibitory function in adults with ADHD--an ex-Gaussian approach.

    PubMed

    Gmehlin, Dennis; Fuermaier, Anselm B M; Walther, Stephan; Debelak, Rudolf; Rentrop, Mirjam; Westermann, Celina; Sharma, Anuradha; Tucha, Lara; Koerts, Janneke; Tucha, Oliver; Weisbrod, Matthias; Aschenbrenner, Steffen

    2014-01-01

    Attention deficit disorder (ADHD) is commonly associated with inhibitory dysfunction contributing to typical behavioral symptoms like impulsivity or hyperactivity. However, some studies analyzing intraindividual variability (IIV) of reaction times in children with ADHD (cADHD) question a predominance of inhibitory deficits. IIV is a measure of the stability of information processing and provides evidence that longer reaction times (RT) in inhibitory tasks in cADHD are due to only a few prolonged responses, which may indicate deficits in sustained attention rather than inhibitory dysfunction. We wanted to find out whether a slowing in inhibitory functioning in adults with ADHD (aADHD) is due to isolated slow responses. Computing classical RT measures (mean RT, SD), ex-Gaussian parameters of IIV (which allow a better separation of reaction time (mu), variability (sigma) and abnormally slow responses (tau) than classical measures) as well as errors of omission and commission, we examined response inhibition in a well-established GoNogo task in a sample of aADHD subjects without medication and healthy controls matched for age, gender and education. We did not find higher numbers of commission errors in aADHD, while the number of omissions was significantly increased compared with controls. In contrast to increased mean RT, the distributional parameter mu did not document a significant slowing in aADHD. However, subjects with aADHD were characterized by increased IIV throughout the entire RT distribution as indicated by the parameters sigma and tau as well as the SD of reaction time. Moreover, we found a significant correlation between tau and the number of omission errors. Our findings question a primacy of inhibitory deficits in aADHD and provide evidence for attentional dysfunction. The present findings may have theoretical implications for etiological models of ADHD as well as more practical implications for neuropsychological testing in aADHD.
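
    The ex-Gaussian decomposition used here is straightforward to reproduce; the sketch below simulates reaction times as a Gaussian (mu, sigma) plus an exponential tail (tau) and recovers the three parameters by maximum likelihood (scipy's exponnorm parameterizes the shape as K = tau/sigma).

        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(3)
        mu, sigma, tau = 0.45, 0.05, 0.15                   # seconds (synthetic)
        rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        K, loc, scale = exponnorm.fit(rt)                   # ML fit
        print(f"mu={loc:.3f}  sigma={scale:.3f}  tau={K * scale:.3f}")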

  6. Intraindividual Variability in Inhibitory Function in Adults with ADHD – An Ex-Gaussian Approach

    PubMed Central

    Gmehlin, Dennis; Fuermaier, Anselm B. M.; Walther, Stephan; Debelak, Rudolf; Rentrop, Mirjam; Westermann, Celina; Sharma, Anuradha; Tucha, Lara; Koerts, Janneke; Tucha, Oliver; Weisbrod, Matthias; Aschenbrenner, Steffen

    2014-01-01

    Objective: Attention deficit disorder (ADHD) is commonly associated with inhibitory dysfunction contributing to typical behavioral symptoms like impulsivity or hyperactivity. However, some studies analyzing intraindividual variability (IIV) of reaction times in children with ADHD (cADHD) question a predominance of inhibitory deficits. IIV is a measure of the stability of information processing and provides evidence that longer reaction times (RT) in inhibitory tasks in cADHD are due to only a few prolonged responses, which may indicate deficits in sustained attention rather than inhibitory dysfunction. We wanted to find out whether a slowing in inhibitory functioning in adults with ADHD (aADHD) is due to isolated slow responses. Methods: Computing classical RT measures (mean RT, SD), ex-Gaussian parameters of IIV (which allow a better separation of reaction time (mu), variability (sigma) and abnormally slow responses (tau) than classical measures) as well as errors of omission and commission, we examined response inhibition in a well-established GoNogo task in a sample of aADHD subjects without medication and healthy controls matched for age, gender and education. Results: We did not find higher numbers of commission errors in aADHD, while the number of omissions was significantly increased compared with controls. In contrast to increased mean RT, the distributional parameter mu did not document a significant slowing in aADHD. However, subjects with aADHD were characterized by increased IIV throughout the entire RT distribution as indicated by the parameters sigma and tau as well as the SD of reaction time. Moreover, we found a significant correlation between tau and the number of omission errors. Conclusions: Our findings question a primacy of inhibitory deficits in aADHD and provide evidence for attentional dysfunction. The present findings may have theoretical implications for etiological models of ADHD as well as more practical implications for neuropsychological testing in aADHD. PMID:25479234

  7. Zero Thermal Noise in Resistors at Zero Temperature

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
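
    The two competing spectral densities discussed above are easy to compare numerically; a minimal sketch of Nyquist's classical formula, S_v = 4kTR, against the Callen-Welton form, S_v = 4Rhf[1/2 + 1/(exp(hf/kT) - 1)], whose zero-point term survives as T approaches zero.

        import numpy as np

        k = 1.380649e-23    # Boltzmann constant, J/K
        h = 6.62607015e-34  # Planck constant, J s

        def S_nyquist(R, T):
            return 4.0 * k * T * R

        def S_callen_welton(R, T, f):
            return 4.0 * R * h * f * (0.5 + 1.0 / np.expm1(h * f / (k * T)))

        R, f = 50.0, 1e9    # 50 ohm resistor observed at 1 GHz
        for T in (300.0, 1.0, 0.001):
            print(T, S_nyquist(R, T), S_callen_welton(R, T, f))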

  8. A fiber Bragg grating sensor system for estimating the large deflection of a lightweight flexible beam

    NASA Astrophysics Data System (ADS)

    Peng, Te; Yang, Yangyang; Ma, Lina; Yang, Huayong

    2016-10-01

    A sensor system based on fiber Bragg gratings (FBGs) is presented for estimating the deflection of a lightweight flexible beam, including the tip position and the tip rotation angle. In this paper, the classical problem of the deflection of a lightweight flexible beam of linear elastic material is analysed. We present the differential equation governing the behavior of the physical system and show that this equation, although straightforward in appearance, is in fact rather difficult to solve due to the presence of a non-linear term. We used epoxy glue to attach the FBG sensors at specific locations on the upper and lower surfaces of the beam in order to obtain local strain measurements. A quasi-distributed FBG static strain sensor network is designed and established. The estimation results from the FBG sensors are also compared to reference displacements from ANSYS simulations and to experimental results obtained in the laboratory in the static case. The errors of the estimation by the FBG sensors are analysed for further error correction and design options. When the load weight is 20 g, the precision is highest: the two position errors are 0.19% and 0.14%, and the rotation error eθ is 1.23%.

  9. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase estimation error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  10. Bayesian inversions of a dynamic vegetation model at four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; Francois, L.

    2015-05-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB (CARbon Assimilation In the Biosphere) dynamic vegetation model (DVM) with 10 unknown parameters, using the DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis) Markov chain Monte Carlo (MCMC) sampler. We focus on comparing model inversions, considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred together with the model parameters. Agreement between measured and simulated data during calibration is comparable with previous studies, with root mean square errors (RMSEs) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1 and 0.50 to 1.28 mm day-1, respectively. For the calibration period, using a homoscedastic eddy covariance residual error model resulted in a better agreement between measured and modelled data than using a heteroscedastic residual error model. However, a model validation experiment showed that CARAIB models calibrated considering heteroscedastic residual errors perform better. Posterior parameter distributions derived from using a heteroscedastic model of the residuals thus appear to be more robust. This is the case even though the classical linear heteroscedastic error model assumed herein did not fully remove the heteroscedasticity of the GPP residuals. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides the residual error treatment, differences in model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics.

  11. Evolutionary dynamics of a smoothed war of attrition game.

    PubMed

    Iyer, Swami; Killingback, Timothy

    2016-05-07

    In evolutionary game theory the War of Attrition game is intended to model animal contests which are decided by non-aggressive behavior, such as the length of time that a participant will persist in the contest. The classical War of Attrition game assumes that no errors are made in the implementation of an animal's strategy. However, it is inevitable in reality that such errors must sometimes occur. Here we introduce an extension of the classical War of Attrition game which includes the effect of errors in the implementation of an individual's strategy. This extension of the classical game has the important feature that the payoff is continuous, and as a consequence admits evolutionary behavior that is fundamentally different from that possible in the original game. We study the evolutionary dynamics of this new game in well-mixed populations both analytically using adaptive dynamics and through individual-based simulations, and show that there are a variety of possible outcomes, including simple monomorphic or dimorphic configurations which are evolutionarily stable and cannot occur in the classical War of Attrition game. In addition, we study the evolutionary dynamics of this extended game in a variety of spatially and socially structured populations, as represented by different complex network topologies, and show that similar outcomes can also occur in these situations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Absolute emission cross sections for electron capture reactions of C2+, N3+, N4+ and O3+ ions in collisions with Li(2s) atoms

    NASA Astrophysics Data System (ADS)

    Rieger, G.; Pinnington, E. H.; Ciubotariu, C.

    2000-12-01

    Absolute photon emission cross sections following electron capture reactions have been measured for C2+, N3+, N4+ and O3+ ions colliding with Li(2s) atoms at keV energies. The results are compared with calculations using the extended classical over-the-barrier model by Niehaus. We explore the limits of our experimental method and present a detailed discussion of experimental errors.

  13. Fisher information and asymptotic normality in system identification for quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin

    2011-06-15

    This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ = 0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.

  14. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement have, up to now, no substitute.
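
    Two of the mixture rules compared in the paper can be written down directly; a minimal sketch with illustrative complex permittivities for a volume fraction v of inclusions (eps_i) in a host (eps_h):

        import numpy as np

        def lichtenecker(eps_h, eps_i, v):
            """Logarithmic rule: ln(eps) = (1-v) ln(eps_h) + v ln(eps_i)."""
            return np.exp((1 - v) * np.log(eps_h) + v * np.log(eps_i))

        def maxwell_garnett(eps_h, eps_i, v):
            """Maxwell Garnett rule for dilute spherical inclusions."""
            num = eps_i + 2 * eps_h + 2 * v * (eps_i - eps_h)
            den = eps_i + 2 * eps_h - v * (eps_i - eps_h)
            return eps_h * num / den

        eps_h, eps_i, v = 2.6 - 0.02j, 12.0 - 30.0j, 0.15   # illustrative values
        print(lichtenecker(eps_h, eps_i, v))
        print(maxwell_garnett(eps_h, eps_i, v))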

  15. The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics.

    NASA Astrophysics Data System (ADS)

    Wan, S.; He, W.

    2016-12-01

    The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted by the computer automatically. Thereby, a new approach is proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in effect, it combines statistics and dynamics to a certain extent.
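
    The twin-experiment setup can be sketched directly; below, a standard Lorenz (1963) model serves as the forecast model while a perturbed version (a hypothetical periodic forcing on the first equation, standing in for the paper's "periodic evolutionary function") generates the synthetic observations.

        import numpy as np

        def lorenz_rhs(t, xyz, forcing=0.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = xyz
            return np.array([sigma * (y - x) + forcing * np.sin(2 * np.pi * t / 5.0),
                             x * (rho - z) - y,
                             x * y - beta * z])

        def rk4(rhs, x0, t0, t1, dt, **kw):
            """Fixed-step fourth-order Runge-Kutta integration."""
            x, t = np.asarray(x0, dtype=float), t0
            while t < t1 - 1e-12:
                k1 = rhs(t, x, **kw)
                k2 = rhs(t + dt / 2, x + dt / 2 * k1, **kw)
                k3 = rhs(t + dt / 2, x + dt / 2 * k2, **kw)
                k4 = rhs(t + dt, x + dt * k3, **kw)
                x, t = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt
            return x

        x0 = [1.0, 1.0, 1.0]
        truth = rk4(lorenz_rhs, x0, 0.0, 2.0, 0.01, forcing=2.0)  # "reality"
        model = rk4(lorenz_rhs, x0, 0.0, 2.0, 0.01)               # imperfect model
        print(truth - model)   # the structural model error to be estimated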

  16. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques within the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is always possible, even with a very limited sample size. The user of the T-Method is required to clearly understand the population data trend, since this method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and classical methods break down entirely. There exist robust parameter estimates that provide satisfactory results when the data contain outliers, as well as when the data are free of them. Among these are the robust location and scale estimators known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), which serve as robust counterparts to the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method, as well as shed light on the robustness of the T-Method itself. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared to the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, the T-Method does much better, while for a higher sample size with extreme outliers the T-Method also shows better prediction than the alternatives. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB (or the normal mean and standard deviation) into it, since doing so changes the percentage errors only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
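
    The two robust estimators named above have simple direct implementations; a minimal sketch (the 1.048 consistency factor makes the Shamos estimate comparable to a normal standard deviation):

        import numpy as np
        from itertools import combinations

        def hodges_lehmann(x):
            """Median of pairwise averages: robust location estimate."""
            return np.median([(a + b) / 2.0 for a, b in combinations(x, 2)])

        def shamos(x):
            """Median of pairwise absolute differences: robust scale estimate."""
            return 1.048 * np.median([abs(a - b) for a, b in combinations(x, 2)])

        x = np.array([4.1, 3.9, 4.3, 4.0, 9.5, 4.2])   # one extreme outlier
        print(hodges_lehmann(x), shamos(x))            # barely moved by the outlier
        print(x.mean(), x.std(ddof=1))                 # strongly distorted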

  17. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
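
    The underlying phase-retrieval step can be sketched with a Hilbert transform (a minimal illustration on a synthetic Lorentzian line; the paper's phase-detrending and scaling corrections are not reproduced here).

        import numpy as np
        from scipy.signal import hilbert

        def kk_phase_retrieval(I_cars, I_nrb):
            """Kramers-Kronig-type retrieval of the complex CARS spectrum."""
            ratio = I_cars / I_nrb
            phase = np.imag(hilbert(0.5 * np.log(ratio)))  # KK phase estimate
            return np.sqrt(ratio) * np.exp(1j * phase)

        # Synthetic resonance on a constant nonresonant background (NRB = 1).
        w = np.linspace(-50.0, 50.0, 1024)
        chi = 1.0 + 5.0 / (20.0 - w - 1j * 2.0)
        retrieved = kk_phase_retrieval(np.abs(chi) ** 2, np.ones_like(w))

        mid = slice(256, 768)   # compare away from the FFT window edges
        print(np.max(np.abs(retrieved[mid].imag - chi[mid].imag)))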

  18. Some effects of finite spatial resolution on skin friction measurements in turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Westphal, Russell V.

    1988-01-01

    The effects of finite spatial resolution often cause serious errors in measurements in turbulent boundary layers, with particularly large effects for measurements of fluctuating skin friction and velocities within the sublayer. However, classical analyses of finite spatial resolution effects have generally not accounted for the substantial inhomogeneity and anisotropy of near-wall turbulence. The present study has made use of results from recent computational simulations of wall-bounded turbulent flows to examine spatial resolution effects for measurements made at a wall using both single-sensor probes and those employing two sensing volumes in a V shape. Results are presented to show the effects of finite spatial resolution on a variety of quantities deduced from the skin friction field.

  19. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, which used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as the practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
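
    For reference, the classic RC computations under discussion can be sketched as follows, assuming the standard Jacobson-Truax error term and a practice-adjusted (Chelune-style) variant; variable names are illustrative, and the criticized WSD-based model is deliberately not reproduced.

    ```python
    import numpy as np

    def rc_jacobson_truax(x1, x2, sd_baseline, r_xx):
        """Classic RC index: observed change divided by the standard error of
        the difference, S_diff = sqrt(2) * SEM, with SEM = SD1 * sqrt(1 - r_xx)."""
        sem = sd_baseline * np.sqrt(1.0 - r_xx)
        s_diff = np.sqrt(2.0) * sem
        return (x2 - x1) / s_diff

    def rc_practice_adjusted(x1, x2, mean_practice, sd_baseline, r_xx):
        """Chelune-style variant: subtract the group mean practice effect from
        the observed change before standardizing."""
        sem = sd_baseline * np.sqrt(1.0 - r_xx)
        s_diff = np.sqrt(2.0) * sem
        return (x2 - x1 - mean_practice) / s_diff

    # |RC| > 1.96 flags change beyond measurement error at the 5% level.
    print(rc_jacobson_truax(x1=100.0, x2=112.0, sd_baseline=15.0, r_xx=0.9))
    print(rc_practice_adjusted(100.0, 112.0, mean_practice=5.0,
                               sd_baseline=15.0, r_xx=0.9))
    ```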

  20. Intrinsic measurement errors for the speed of light in vacuum

    NASA Astrophysics Data System (ADS)

    Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.

    2017-09-01

    The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c = 299 792 458 m s⁻¹. Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.

  1. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in the social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of the standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
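
    A minimal sketch of Huber-type M-estimation for a moderation model, using statsmodels' RLM as a stand-in; note this is a single-level robust regression, not the paper's two-level model or its R program, and the data are simulated with heavy-tailed errors purely for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    x = rng.normal(size=n)   # predictor
    z = rng.normal(size=n)   # moderator
    # Moderation: the effect of x on y depends on z through the x*z term.
    y = 0.5 * x + 0.3 * z + 0.4 * x * z + rng.standard_t(df=3, size=n)

    X = sm.add_constant(np.column_stack([x, z, x * z]))
    ols_fit = sm.OLS(y, X).fit()                                # classical, normal-theory
    huber_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # Huber-type M-estimation
    print(ols_fit.params)    # can be pulled around by the heavy tails
    print(huber_fit.params)  # downweights large residuals
    ```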

  2. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes using classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t⁻²) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  3. Residential magnetic fields predicted from wiring configurations: II. Relationships To childhood leukemia.

    PubMed

    Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M

    1999-10-01

    Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classical errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odds ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (P for trend=.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appear to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated with both wire code and the field's time-averaged magnitude. Copyright 1999 Wiley-Liss, Inc.

  4. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    NASA Astrophysics Data System (ADS)

    Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.

    2017-11-01

    In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to the finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to the case of short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
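
    For orientation, the original fine-resolution zero-crossing idea can be sketched as below, assuming Taylor's frozen-flow hypothesis and the isotropic relation eps = 15 * nu * <u'^2> / lambda^2, with the Liepmann scale standing in for the Taylor microscale. The paper's actual contribution, corrections for finite sampling and spectral cutoffs, is omitted from this sketch.

    ```python
    import numpy as np

    def dissipation_from_crossings(u, fs, U_mean, nu=1.5e-5):
        """Estimate the TKE dissipation rate from zero crossings of a velocity
        fluctuation signal u (m/s) sampled at fs (Hz), advected at U_mean."""
        u = u - np.mean(u)
        # Zero crossings per metre of (frozen) flow.
        crossings = np.count_nonzero(np.signbit(u[1:]) != np.signbit(u[:-1]))
        record_length = len(u) / fs * U_mean        # metres, via Taylor's hypothesis
        n_l = crossings / record_length
        # Liepmann scale, taken here as a proxy for the Taylor microscale.
        taylor_lambda = 1.0 / (np.pi * n_l)
        # Isotropic small-scale estimate of the dissipation rate.
        return 15.0 * nu * np.var(u) / taylor_lambda**2

    # Example with synthetic band-limited noise (illustrative only):
    rng = np.random.default_rng(5)
    u = np.convolve(rng.normal(size=20000), np.ones(8) / 8, mode="same")
    print(dissipation_from_crossings(u, fs=1000.0, U_mean=55.0))
    ```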

  5. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used in place of the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real-data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with the existing robust LDR.
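
    Below is a sketch of the two-group linear discriminant rule with pluggable location/scatter estimators. The defaults give the classical LDR; swapping in robust estimators (such as an MVV-type estimator, not implemented here) would yield an RLDR of the kind the paper compares.

    ```python
    import numpy as np

    def fit_ldr(X1, X2, location=lambda X: np.mean(X, axis=0),
                scatter=lambda X: np.cov(X.T)):
        """Two-group linear discriminant rule with pluggable estimators.
        X1, X2 are (n_i, p) samples from the two training groups."""
        m1, m2 = location(X1), location(X2)
        n1, n2 = len(X1), len(X2)
        S = ((n1 - 1) * scatter(X1) + (n2 - 1) * scatter(X2)) / (n1 + n2 - 2)
        w = np.linalg.solve(S, m1 - m2)              # discriminant direction
        c = w @ (m1 + m2) / 2.0                      # midpoint cutoff
        return lambda x: np.where(x @ w >= c, 1, 2)  # allocate to group 1 or 2

    rng = np.random.default_rng(2)
    X_a = rng.normal([0.0, 0.0], 1.0, size=(60, 2))
    X_b = rng.normal([2.0, 1.0], 1.0, size=(60, 2))
    classify = fit_ldr(X_a, X_b)
    print(classify(np.array([[0.1, -0.2], [2.2, 1.1]])))  # expect [1 2]
    ```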

  6. Quantum and classical noise in practical quantum-cryptography systems based on polarization-entangled photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelletto, S.; Degiovanni, I.P.; Rastello, M.L.

    2003-02-01

    Quantum-cryptography key distribution (QCKD) experiments have been recently reported using polarization-entangled photons. However, in any practical realization, quantum systems suffer from either unwanted or induced interactions with the environment and the quantum measurement system, showing up as quantum and, ultimately, statistical noise. In this paper, we investigate how an ideal polarization entanglement in spontaneous parametric down-conversion (SPDC) suffers quantum noise in its practical implementation as a secure quantum system, yielding errors in the transmitted bit sequence. Since all SPDC-based QCKD schemes rely on the measurement of coincidence to assert the bit transmission between the two parties, we bundle up the overall quantum and statistical noise in an exhaustive model to calculate the accidental coincidences. This model predicts the quantum-bit error rate and the sifted key and allows comparisons between different security criteria of the hitherto proposed QCKD protocols, resulting in an objective assessment of performances and advantages of different systems.

  7. δ13C and δ18O isotopic composition of CaCO3 measured by continuous flow isotope ratio mass spectrometry: statistical evaluation and verification by application to Devils Hole core DH-11 calcite

    USGS Publications Warehouse

    Revesz, Kinga M.; Landwehr, Jurate M.

    2002-01-01

    A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 ± 20 µg) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H3PO4/CaCO3) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H3PO4/CaCO3 reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 °C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was ≤0.1 and ≤0.2 per mill or ‰, respectively, although later analysis showed that materials from one specific standard required reaction time between 34 and 54 h for δ18O to achieve this level of precision. Aliquot screening methods were shown to further minimize the total error. The accuracy and precision of the new method were analyzed and confirmed by statistical analysis. The utility of the method was verified by analyzing calcite from Devils Hole, Nevada, for which isotope-ratio values had previously been obtained by the classical method. Devils Hole core DH-11 recently had been re-cut and re-sampled, and isotope-ratio values were obtained using the new method. The results were comparable with those obtained by the classical method with correlation = +0.96 for both isotope ratios. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, and two cutting errors that occurred during re-sampling then were confirmed independently. This result indicates that the new method is a viable alternative to the classical reaction method. In particular, the new method requires less sample material permitting finer resolution and allows automation of some processes resulting in considerable time savings. 

  8. Signed reward prediction errors drive declarative learning

    PubMed Central

    Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; “better-than-expected” signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli. PMID:29293493

  9. Signed reward prediction errors drive declarative learning.

    PubMed

    De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning-a quintessentially human form of learning-remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.

  10. Quantifying Human Response: Linking metrological and psychometric characterisations of Man as a Measurement Instrument

    NASA Astrophysics Data System (ADS)

    Pendrill, L. R.; Fisher, William P., Jr.

    2013-09-01

    A better understanding of how to characterise human response is essential to improved person-centred care and other situations where human factors are crucial. Challenges to introducing classical metrological concepts such as measurement uncertainty and traceability when characterising Man as a Measurement Instrument include the failure of many statistical tools when applied to ordinal measurement scales and a lack of metrological references in, for instance, healthcare. The present work attempts to link metrological and psychometric (Rasch) characterisation of Man as a Measurement Instrument in a study of elementary tasks, such as counting dots, where one knows independently the expected value because the measurement object (collection of dots) is prepared in advance. The analysis is compared and contrasted with recent approaches to this problem by others, for instance using signal error fidelity.

  11. RR Lyrae stars in eclipsing systems -- historical candidates

    NASA Astrophysics Data System (ADS)

    Liška, J.; Skarka, M.; Hájková, P.; Auer, R. F.

    2016-03-01

    The discovery of binary systems among RR Lyrae stars is one of the challenges of present-day astronomy. So far, no classical RR Lyrae star has been clearly confirmed to be part of an eclipsing system. For this reason we studied two RR Lyrae stars, VX Her and RW Ari, in which changes attributed to eclipses were detected in the 1960s and 1970s. In this paper our preliminary results based on the analysis of new photometric measurements are presented, together with results from a detailed analysis of the original measurements. A new possible eclipsing system, RZ Cet, was identified in the archive data. For all three stars, our analysis indicates errors in the measurement and reduction of the old data rather than real changes.

  12. Secure Communication via a Recycling of Attenuated Classical Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, IV, Amos M.

    We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus, making continuous secure two-way communication via one-time pads practical.

  13. Secure Communication via a Recycling of Attenuated Classical Signals

    DOE PAGES

    Smith, IV, Amos M.

    2017-01-12

    We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus, making continuous secure two-way communication via one-time pads practical.

  14. Word-Synchronous Optical Sampling of Periodically Repeated OTDM Data Words for True Waveform Visualization

    NASA Astrophysics Data System (ADS)

    Benkler, Erik; Telle, Harald R.

    2007-06-01

    An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details that cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.

  15. Surface code quantum communication.

    PubMed

    Fowler, Austin G; Wang, David S; Hill, Charles D; Ladd, Thaddeus D; Van Meter, Rodney; Hollenberg, Lloyd C L

    2010-05-07

    Quantum communication typically involves a linear chain of repeater stations, each capable of reliable local quantum computation and connected to their nearest neighbors by unreliable communication links. The communication rate of existing protocols is low as two-way classical communication is used. By using a surface code across the repeater chain and generating Bell pairs between neighboring stations with probability of heralded success greater than 0.65 and fidelity greater than 0.96, we show that two-way communication can be avoided and quantum information can be sent over arbitrary distances with arbitrarily low error at a rate limited only by the local gate speed. This is achieved by using the unreliable Bell pairs to measure nonlocal stabilizers and feeding heralded failure information into post-transmission error correction. Our scheme also applies when the probability of heralded success is arbitrarily low.

  16. One-dimensional stitching interferometry assisted by a triple-beam interferometer

    DOE PAGES

    Xue, Junpeng; Huang, Lei; Gao, Bo; ...

    2017-04-13

    In this work, we propose the use of a triple-beam interferometer in stitching interferometry to measure both the distance and the tilt for all sub-apertures before the stitching process. The relative piston between two neighboring sub-apertures is then calculated by using the data in the overlapping area. Comparisons are made between our method and the classical least-squares stitching method. Our method can improve the accuracy and repeatability of the classical stitching method when a large number of sub-aperture topographies are taken into account. Our simulations and experiments on flat and spherical mirrors indicate that the proposed method can decrease the influence of interferometer error on the stitched result. The comparison of the stitching system with Fizeau interferometry data agrees to about 2 nm root mean square, and the repeatability is within ±2.5 nm peak-to-valley.

  17. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.

  18. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, Frederick D.; Chen, Yongping; Singha, Kamini

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error.

  19. Determination of layer ordering using sliding-window Fourier transform of x-ray reflectivity data

    NASA Astrophysics Data System (ADS)

    Smigiel, E.; Knoll, A.; Broll, N.; Cornet, A.

    1998-01-01

    X-ray reflectometry allows the determination of the thickness, density and roughness of thin layers on a substrate, from several Angstroms to some hundred nanometres. The thickness is determined by simulation with trial-and-error methods after extracting initial values of the layer thicknesses from the result of a classical Fast Fourier Transform (FFT) of the reflectivity data. However, the ordering information of the layers is lost during the classical FFT, so the order of the layers must be known a priori. In this paper, it is shown that the order of the layers can be obtained by a sliding-window Fourier transform, the so-called Gabor representation. This joint time-frequency analysis allows the direct determination of the order of the layers and, therefore, the use of a more appropriate starting model for refining simulations. A simulated and a measured example demonstrate the utility of this method.
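
    As a generic illustration of the sliding-window Fourier transform itself (not of the reflectivity physics), scipy's STFT can serve as a stand-in for the Gabor representation, showing how positional information that a global FFT discards is retained.

    ```python
    import numpy as np
    from scipy.signal import stft

    # Toy signal: two oscillation frequencies occupying different parts of the
    # record, which a single global FFT would show only as two peaks carrying
    # no information about where in the record each one occurs.
    fs = 4096
    q = np.arange(fs) / fs
    sig = np.where(q < 0.5, np.sin(2 * np.pi * 80 * q), np.sin(2 * np.pi * 140 * q))

    f, t, Z = stft(sig, fs=fs, nperseg=256)  # sliding-window (Gabor) transform
    dominant = f[np.abs(Z).argmax(axis=0)]   # dominant frequency vs. position
    print(dominant[:5], dominant[-5:])       # ~80 early in the record, ~140 late
    ```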

  20. Balancing the books - a statistical theory of prospective budgets in Earth System science

    NASA Astrophysics Data System (ADS)

    O'Kane, J. Philip

    An honest declaration of the error in a mass, momentum or energy balance, ɛ, simply raises the question of its acceptability: "At what value of ɛ is the attempted balance to be rejected?" Answering this question requires a reference quantity against which to compare ɛ. This quantity must be a mathematical function of all the data used in making the balance. To deliver this function, a theory grounded in a workable definition of acceptability is essential. A distinction must be drawn between a retrospective balance and a prospective budget in relation to any natural space-filling body. Balances look to the past; budgets look to the future. The theory is built on the application of classical sampling theory to the measurement and closure of a prospective budget. It satisfies R.A. Fisher's "vital requirement that the actual and physical conduct of experiments should govern the statistical procedure of their interpretation". It provides a test, which rejects, or fails to reject, the hypothesis that the closing error on the budget, when realised, was due to sampling error only. By increasing the number of measurements, the discrimination of the test can be improved, controlling both the precision and accuracy of the budget and its components. The cost-effective design of such measurement campaigns is discussed briefly. This analysis may also show when campaigns to close a budget on a particular space-filling body are not worth the effort for either scientific or economic reasons. Other approaches, such as those based on stochastic processes, lack this finality, because they fail to distinguish between different types of error in the mismatch between a set of realisations of the process and the measured data.

  1. Rotational quenching of H2O by He: mixed quantum/classical theory and comparison with quantum results.

    PubMed

    Ivanov, Mikhail; Dubernet, Marie-Lise; Babikov, Dmitri

    2014-04-07

    The mixed quantum/classical theory (MQCT) formulated in the space-fixed reference frame is used to compute quenching cross sections of several rotationally excited states of the water molecule by impact of a He atom in a broad range of collision energies, and is tested against full-quantum calculations on the same potential energy surface. In the current implementation of the MQCT method, there are two major sources of errors: one affects results at energies below 10 cm⁻¹, while the other shows up at energies above 500 cm⁻¹. Namely, when the collision energy E is below the state-to-state transition energy ΔE, the MQCT method becomes less accurate due to its intrinsic classical approximation, although employment of the average-velocity principle (scaling of the collision energy in order to satisfy microscopic reversibility) helps dramatically. At higher energies, MQCT is expected to be accurate, but in the current implementation, in order to make calculations computationally affordable, we had to cut off the basis set size. This can be avoided by using a more efficient body-fixed formulation of MQCT. Overall, the errors of the MQCT method are within 20% of the full-quantum results almost everywhere through the four-orders-of-magnitude range of collision energies, except near resonances, where the errors are somewhat larger.

  2. Preparation and measurement of three-qubit entanglement in a superconducting circuit.

    PubMed

    Dicarlo, L; Reed, M D; Sun, L; Johnson, B R; Chow, J M; Gambetta, J M; Frunzio, L; Girvin, S M; Devoret, M H; Schoelkopf, R J

    2010-09-30

    Traditionally, quantum entanglement has been central to foundational discussions of quantum mechanics. The measurement of correlations between entangled particles can have results at odds with classical behaviour. These discrepancies grow exponentially with the number of entangled particles. With the ample experimental confirmation of quantum mechanical predictions, entanglement has evolved from a philosophical conundrum into a key resource for technologies such as quantum communication and computation. Although entanglement in superconducting circuits has been limited so far to two qubits, the extension of entanglement to three, eight and ten qubits has been achieved among spins, ions and photons, respectively. A key question for solid-state quantum information processing is whether an engineered system could display the multi-qubit entanglement necessary for quantum error correction, which starts with tripartite entanglement. Here, using a circuit quantum electrodynamics architecture, we demonstrate deterministic production of three-qubit Greenberger-Horne-Zeilinger (GHZ) states with fidelity of 88 per cent, measured with quantum state tomography. Several entanglement witnesses detect genuine three-qubit entanglement by violating biseparable bounds by 830 ± 80 per cent. We demonstrate the first step of basic quantum error correction, namely the encoding of a logical qubit into a manifold of GHZ-like states using a repetition code. The integration of this encoding with decoding and error-correcting steps in a feedback loop will be the next step for quantum computing with integrated circuits.

  3. Identification of natural frequencies and modal damping ratios of aerospace structures from response data

    NASA Technical Reports Server (NTRS)

    Michalopoulos, C. D.

    1976-01-01

    An analysis of one- and multi-degree-of-freedom systems with classical damping is presented. Definition and minimization of error functions for each system are discussed. Systems with classical and nonclassical normal modes are studied, and results for first-order perturbation are given. An alternative method of matching power spectral densities is provided, and numerical results are reviewed.

  4. A Piece of Paper Falling Faster than Free Fall

    ERIC Educational Resources Information Center

    Vera, F.; Rivera, R.

    2011-01-01

    We report a simple experiment that clearly demonstrates a common error in the explanation of the classic experiment where a small piece of paper is put over a book and the system is let fall. This classic demonstration is used in introductory physics courses to show that after eliminating the friction force with the air, the piece of paper falls…

  5. Cycloplegic refraction is the gold standard for epidemiological studies.

    PubMed

    Morgan, Ian G; Iribarren, Rafael; Fotouhi, Akbar; Grzybowski, Andrzej

    2015-09-01

    Many studies on children have shown that lack of cycloplegia is associated with slight overestimation of myopia and marked errors in estimates of the prevalence of emmetropia and hyperopia. Non-cycloplegic refraction is particularly problematic for studies of associations with risk factors. The consensus around the importance of cycloplegia in children left undefined at what age, if any, cycloplegia became unnecessary. It was often implicitly assumed that cycloplegia is not necessary beyond childhood or early adulthood, and thus, the protocol for the classical studies of refraction in older adults did not include cycloplegia. Now that population studies of refractive error are beginning to fill the gap between schoolchildren and older adults, whether cycloplegia is required for measuring refractive error in this age range, needs to be defined. Data from the Tehran Eye Study show that, without cycloplegia, there are errors in the estimation of myopia, emmetropia and hyperopia in the age range 20-50, just as in children. Similar results have been reported in an analysis of data from the Beaver Dam Offspring Eye Study. If the only important outcome measure of a particular study is the prevalence of myopia, then cycloplegia may not be crucial in some cases. But, without cycloplegia, measurements of other refractive categories as well as spherical equivalent are unreliable. In summary, the current evidence suggests that cycloplegic refraction should be considered as the gold standard for epidemiological studies of refraction, not only in children, but in adults up to the age of 50. © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  6. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  7. Non-linear quantum-classical scheme to simulate non-equilibrium strongly correlated fermionic many-body dynamics

    PubMed Central

    Kreula, J. M.; Clark, S. R.; Jaksch, D.

    2016-01-01

    We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673

  8. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  9. Viète's Formula and an Error Bound without Taylor's Theorem

    ERIC Educational Resources Information Center

    Boucher, Chris

    2018-01-01

    This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
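
    Viète's product is easy to evaluate numerically. A short sketch in the standard nested-radical formulation, printing the error against math.pi, follows.

    ```python
    import math

    def viete_pi(n_factors):
        """Approximate pi with Viete's product:
        2/pi = prod_k a_k, where a_1 = sqrt(2)/2 and a_{k+1} = sqrt((1+a_k)/2)."""
        a = math.sqrt(2.0) / 2.0
        product = a
        for _ in range(n_factors - 1):
            a = math.sqrt((1.0 + a) / 2.0)
            product *= a
        return 2.0 / product

    for n in (5, 10, 20):
        approx = viete_pi(n)
        print(n, approx, abs(approx - math.pi))
    ```

    The error shrinking by roughly a factor of four per extra factor reflects the exponential convergence that the note's error bound captures.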

  10. Three-step semiquantum secure direct communication protocol

    NASA Astrophysics Data System (ADS)

    Zou, XiangFu; Qiu, DaoWen

    2014-09-01

    Quantum secure direct communication is the direct communication of secret messages without the need to establish a shared secret key first. In the existing schemes, quantum secure direct communication is possible only when both parties are quantum. In this paper, we construct a three-step semiquantum secure direct communication (SQSDC) protocol based on single-photon sources in which the sender Alice is classical. In a semiquantum protocol, a person is termed classical if he (she) can measure, prepare and send quantum states only in the fixed orthogonal quantum basis {|0>, |1>}. The security of the proposed SQSDC protocol is guaranteed by the complete robustness of semiquantum key distribution protocols and the unconditional security of classical one-time pad encryption. Therefore, the proposed SQSDC protocol is also completely robust. Complete robustness indicates that nonzero information acquired by an eavesdropper Eve on the secret message implies a nonzero probability that the legitimate participants can find errors on the bits tested by this protocol. In the proposed protocol, we suggest a method to check Eve's disturbance in the photons' return phase, such that Alice does not need to publicly announce any positions or coded bit values after the photon transmission is completed. Moreover, the proposed SQSDC protocol can be implemented with existing techniques. Compared with many quantum secure direct communication protocols, the proposed SQSDC protocol has two merits: first, the sender needs only classical capabilities; second, no additional classical information is needed to check Eve's disturbance after the transmission of quantum states.

  11. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance.

    PubMed

    Wong, Aaron L; Shelhamer, Mark

    2014-05-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.

  12. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    PubMed

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
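
    A toy demonstration (illustrative numbers, not the paper's data) of why the two classical aggregates err in opposite directions when estimates are right-skewed around a known true value:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    true_value = 500
    # Numerosity estimates are typically right-skewed (lognormal-like), with
    # a slight individual underestimation bias (the -0.1 shift is assumed).
    estimates = rng.lognormal(mean=np.log(true_value) - 0.1, sigma=0.5, size=300)

    print("true value:     ", true_value)
    print("arithmetic mean:", estimates.mean())      # skew pulls it upward
    print("median:         ", np.median(estimates))  # tracks the biased center
    print("geometric mean: ", np.exp(np.log(estimates).mean()))
    ```

    The paper's corrected aggregation measures go further by mapping measured individual bias to collective bias, which this sketch does not reproduce.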

  13. Development of a refractive error quality of life scale for Thai adults (the REQ-Thai).

    PubMed

    Sukhawarn, Roongthip; Wiratchai, Nonglak; Tatsanavivat, Pyatat; Pitiyanuwat, Somwung; Kanato, Manop; Srivannaboon, Sabong; Guyatt, Gordon H

    2011-08-01

    To develop a scale for measuring refractive-error quality of life (QOL) in Thai adults. The full survey comprised 424 respondents from 5 medical centers in Bangkok and from 3 medical centers in Chiangmai, Songkla and KhonKaen provinces. Participants were emmetropes and persons with refractive correction with visual acuity of 20/30 or better. An item reduction process was employed combining 3 methods: expert opinion, the impact method and item-total correlation. Classical reliability testing and validity testing, including convergent, discriminative and construct validity, were performed. The developed questionnaire comprised 87 items in 6 dimensions: 1) quality of vision, 2) visual function, 3) social function, 4) psychological function, 5) symptoms and 6) refractive correction problems. Items are rated on a 5-level Likert scale. The Cronbach's alpha coefficients of its dimensions ranged from 0.756 to 0.979. All validity tests showed the instrument to be valid; construct validity was confirmed by confirmatory factor analysis. A short-version questionnaire comprising 48 items with good reliability and validity was also developed. This is the first validated instrument for measuring refractive-error quality of life in Thai adults developed with strong research methodology and a large sample size.

  14. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data, without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial-volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  15. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because the ANN is a data-driven approach that can be trained to map past values of a time series. In this study the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
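
    The three accuracy measures named above are standard; a minimal sketch (the numbers are made-up stand-ins for actual and forecast gold prices) could look like this:

    ```python
    import numpy as np

    def mad(actual, forecast):
        return np.mean(np.abs(actual - forecast))          # mean absolute deviation

    def rmse(actual, forecast):
        return np.sqrt(np.mean((actual - forecast) ** 2))  # root mean square error

    def mape(actual, forecast):
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))  # in percent

    actual = np.array([1210.0, 1225.0, 1198.0, 1240.0])
    forecast = np.array([1205.0, 1230.0, 1210.0, 1228.0])
    print(mad(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
    ```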

  16. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A particle filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
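
    For reference, here is a textbook bootstrap (SIR) particle filter sketched for a scalar state; the paper's filter instead propagates a six-element orbital state through orbital dynamics, which this toy example omits.

    ```python
    import numpy as np

    def bootstrap_particle_filter(y, f, h, q_sample, r_logpdf, x0_particles, seed=0):
        """Generic bootstrap (SIR) particle filter for a scalar state.
        y: observations; f: state transition; h: measurement map;
        q_sample(n): draws n process-noise samples;
        r_logpdf(e): measurement log-likelihood of innovation e."""
        rng = np.random.default_rng(seed)
        particles = np.asarray(x0_particles, dtype=float).copy()
        n = particles.size
        estimates = []
        for yk in y:
            particles = f(particles) + q_sample(n)       # propagate the cloud
            logw = r_logpdf(yk - h(particles))           # weight by likelihood
            w = np.exp(logw - logw.max())
            w /= w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]  # resample
            estimates.append(particles.mean())           # any moment of the PDF
        return np.array(estimates)

    # Toy usage: random-walk state observed in noise (illustrative only).
    rng0 = np.random.default_rng(1)
    truth = np.cumsum(rng0.normal(0.0, 0.1, 100))
    obs = truth + rng0.normal(0.0, 0.5, 100)
    rngq = np.random.default_rng(2)
    est = bootstrap_particle_filter(
        obs, f=lambda x: x, h=lambda x: x,
        q_sample=lambda n: rngq.normal(0.0, 0.1, n),
        r_logpdf=lambda e: -0.5 * (e / 0.5) ** 2,
        x0_particles=np.zeros(500))
    ```

    Because the whole particle cloud is carried forward, any functional of the posterior (tail probabilities included) can be read off, which is the property the abstract emphasizes over second-moment filters.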

  17. Simultaneous classical communication and quantum key distribution using continuous variables*

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2016-10-01

    Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10⁻⁹ and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  18. Estimating the Error of an Analog Quantum Simulator by Additional Measurements

    NASA Astrophysics Data System (ADS)

    Schwenk, Iris; Zanker, Sebastian; Reiner, Jan-Michael; Leppäkangas, Juha; Marthaler, Michael

    2017-12-01

    We study an analog quantum simulator coupled to a reservoir with a known spectral density. The reservoir perturbs the quantum simulation by causing decoherence. The simulator is used to measure an operator average, which cannot be calculated by any classical means. Since we cannot predict the result, it is difficult to estimate the effect of the environment. In particular, it is difficult to resolve whether the perturbation is small or whether the actual result of the simulation is in fact very different from the ideal system we intend to study. Here, we show that in specific systems a measurement of additional correlators can be used to verify the reliability of the quantum simulation. The procedure requires only additional measurements on the quantum simulator itself. We demonstrate the method theoretically in the case of a single spin connected to a bosonic environment.

  19. Coherent Rayleigh-Brillouin scattering measurements of bulk viscosity of polar and nonpolar gases, and kinetic theory.

    PubMed

    Meijer, A S; de Wijn, A S; Peters, M F E; Dam, N J; van de Water, W

    2010-10-28

    We investigate coherent Rayleigh-Brillouin spectroscopy as an efficient process to measure the bulk viscosity of gases at gigahertz frequencies. Scattered spectral distributions are measured using a Fizeau spectrometer. We discuss the statistical error due to the fluctuating mode structure of the pump laser used. Experiments were done for both polar and nonpolar gases, and the bulk viscosity was obtained from the spectra using the Tenti S6 model. Results are compared to simple classical kinetic models of molecules with internal degrees of freedom. At the extremely high (gigahertz) frequencies of our experiment, most internal vibrational modes remain frozen and the bulk viscosity is dominated by the rotational degrees of freedom. Our measurements show that the molecular dipole moments have unexpectedly little influence on the bulk viscosity at room temperature and moderate pressure.

  20. Reconstruction of finite-valued sparse signals

    NASA Astrophysics Data System (ADS)

    Keiper, Sandra; Kutyniok, Gitta; Lee, Dae Gwan; Pfander, Götz

    2017-08-01

    The need to reconstruct discrete-valued sparse signals from few measurements, that is, to solve an underdetermined system of linear equations, appears frequently in science and engineering. Such signals appear, for example, in error-correcting codes as well as massive Multiple-Input Multiple-Output (MIMO) channels and wideband spectrum sensing. A particular example is given by wireless communications, where the transmitted signals are sequences of bits, i.e., with entries in {0, 1}. Whereas classical compressed sensing algorithms do not incorporate the additional knowledge of the discrete nature of the signal, classical lattice decoding approaches do not utilize sparsity constraints. In this talk, we present an approach that incorporates a discrete-values prior into basis pursuit. In particular, we address finite-valued sparse signals, i.e., sparse signals with entries in a finite alphabet. We will introduce an equivalent null-space characterization and show that the phase transition takes place earlier than with the classical basis pursuit approach. We will further discuss robustness of the algorithm and show that the nonnegative case is very different from the bipolar one. One of our findings is that the positioning of the zero in the alphabet - i.e., whether it is a boundary element or not - is crucial.

  1. Deviations from Vegard's law in semiconductor thin films measured with X-ray diffraction and Rutherford backscattering: The Ge1-ySny and Ge1-xSix cases

    NASA Astrophysics Data System (ADS)

    Xu, Chi; Senaratne, Charutha L.; Culbertson, Robert J.; Kouvetakis, John; Menéndez, José

    2017-09-01

    The compositional dependence of the lattice parameter in Ge1-ySny alloys has been determined from combined X-ray diffraction and Rutherford Backscattering (RBS) measurements of a large set of epitaxial films with compositions in the 0 < y < 0.14 range. In view of contradictory prior results, a critical analysis of this method has been carried out, with emphasis on nonlinear elasticity corrections and systematic errors in popular RBS simulation codes. The approach followed is validated by showing that measurements of Ge1-xSix films yield a bowing parameter θGeSi =-0.0253(30) Å, in excellent agreement with the classic work by Dismukes. When the same methodology is applied to Ge1-ySny alloy films, it is found that the bowing parameter θGeSn is zero within experimental error, so that the system follows Vegard's law. This is in qualitative agreement with ab initio theory, but the value of the experimental bowing parameter is significantly smaller than the theoretical prediction. Possible reasons for this discrepancy are discussed in detail.
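
    For reference, the fitted relation is Vegard's law plus a quadratic bowing term, a(y) = (1 - y) a_A + y a_B + theta y (1 - y). The sketch below evaluates this with the bowing parameter reported above; the sign convention and the bulk lattice parameters are our assumptions, not values taken from the paper.

```python
# Lattice parameter of an A(1-x)B(x) alloy with a quadratic bowing term.
# The abstract reports theta_GeSi = -0.0253(30) Angstrom and theta_GeSn ~ 0.
A_GE, A_SI = 5.6579, 5.4310   # bulk lattice parameters in Angstrom (literature values)

def lattice_parameter(x, a_A, a_B, theta=0.0):
    """Vegard's law plus a bowing correction theta * x * (1 - x)."""
    return (1.0 - x) * a_A + x * a_B + theta * x * (1.0 - x)

# Deviation from the linear (Vegard) interpolation at x = 0.5 for Ge(1-x)Si(x):
x = 0.5
dev = lattice_parameter(x, A_GE, A_SI, theta=-0.0253) - lattice_parameter(x, A_GE, A_SI)
print(f"deviation at x={x}: {dev:.4f} Angstrom")   # about -0.0063 Angstrom
```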

  2. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. This model conforms to an ellipsoid constraint; the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by different matrix decomposition methods is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to prevent rounding or other errors from seriously affecting the computation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, a simulation experiment at a signal-to-noise ratio of 50 dB indicates that the heading error declines from ±1° when calibrated by classical ellipsoid fitting to ±0.2° when calibrated by the constant intersection angle method. A physical experiment shows that the heading error is further corrected from ±0.8° with classical ellipsoid fitting to ±0.3° with the constant intersection angle method. PMID:24831110
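
    The Tikhonov step can be sketched generically: regularized least squares, x = (A^T A + lambda I)^(-1) A^T b, tames an ill-conditioned system at the price of some bias. A, b, and lambda below are placeholders for illustration, not the paper's quantities.

```python
# Tikhonov-regularized least squares, of the kind used above to stabilize
# the estimation of R when the condition number is very large. Generic sketch.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# An ill-conditioned toy system: tiny singular values amplify rounding noise.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((50, 3)))
Vt = np.linalg.qr(rng.standard_normal((3, 3)))[0]
A = U @ np.diag([1.0, 1e-4, 1e-6]) @ Vt
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 1e-6 * rng.standard_normal(50)

print("naive   :", np.linalg.lstsq(A, b, rcond=None)[0])   # noisy along weak direction
print("tikhonov:", tikhonov_solve(A, b, lam=1e-10))        # stabilized, slightly biased
```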

  3. Accurate frequency domain measurement of the best linear time-invariant approximation of linear time-periodic systems including the quantification of the time-periodic distortions

    NASA Astrophysics Data System (ADS)

    Louarroudi, E.; Pintelon, R.; Lataire, J.

    2014-10-01

    Time-periodic (TP) phenomena occurring, for instance, in wind turbines, helicopters, anisotropic shaft-bearing systems, and cardiovascular/respiratory systems are often not addressed when classical frequency response function (FRF) measurements are performed. As the traditional FRF concept is based on linear time-invariant (LTI) system theory, it is only approximately valid for systems with varying dynamics. Accordingly, the quantification of any deviation from this ideal LTI framework is more than welcome. The “measure of deviation” allows us to define the notion of the best LTI (BLTI) approximation, which yields the best - in the mean-square sense - LTI description of a linear time-periodic (LTP) system. By taking the TP effects into consideration, it is shown in this paper that the variability of the BLTI measurement can be reduced significantly compared with that of classical FRF estimators. From a single experiment, the proposed identification methods can handle (non-)linear time-periodic [(N)LTP] systems in open loop with a quantification of (i) the noise and/or the NL distortions, (ii) the TP distortions and (iii) the transient (leakage) errors. Besides, a geometrical interpretation of the BLTI approximation is provided, leading to a framework called vector FRF analysis. The theory presented is supported by numerical simulations as well as real measurements mimicking the well-known mechanical Mathieu oscillator.
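
    For orientation, the classical FRF estimator that the BLTI framework generalizes can be written in a few lines: the H1 estimate H(f) = S_uy(f)/S_uu(f) from input/output records. The plant, excitation, and spectral settings below are illustrative assumptions, not the paper's setup.

```python
# Classical H1 FRF estimate for a toy LTI plant (the baseline generalized
# by the BLTI framework). All signals and parameters are illustrative.
import numpy as np
from scipy.signal import csd, welch, lfilter

fs = 1000
rng = np.random.default_rng(9)
u = rng.standard_normal(20_000)               # broadband excitation
y = lfilter([0.1], [1.0, -0.9], u)            # first-order LTI plant
y = y + 0.01 * rng.standard_normal(y.size)    # measurement noise

f, Suu = welch(u, fs=fs, nperseg=1024)        # input auto-spectrum
_, Suy = csd(u, y, fs=fs, nperseg=1024)       # input-output cross-spectrum
H1 = Suy / Suu                                # classical H1 FRF estimator

w = 2.0 * np.pi * f / fs
H_true = 0.1 / (1.0 - 0.9 * np.exp(-1j * w))  # exact FRF of the toy plant
print(np.round(np.abs(H1[1:4]), 3), np.round(np.abs(H_true[1:4]), 3))
```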

  4. Limitations of shallow nets approximation.

    PubMed

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is attained for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. An optimized BP neural network based on genetic algorithm for static decoupling of a six-axis force/torque sensor

    NASA Astrophysics Data System (ADS)

    Fu, Liyue; Song, Aiguo

    2018-02-01

    In order to improve the measurement precision of a six-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a genetic algorithm (the GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple the six-axis force/torque sensor. Compared with traditional decoupling algorithms, namely computing the pseudo-inverse of the calibration matrix and the classical BP algorithm, the decoupling results validate the good decoupling performance of the GA-BP algorithm, and the coupling errors are reduced.
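
    For context, the traditional baseline mentioned above (a linear decoupling matrix from the pseudo-inverse of calibration data) fits in a few lines; the synthetic cross-coupling matrix below is an illustrative assumption, not the sensor's actual characteristics.

```python
# Pseudo-inverse decoupling baseline for a six-axis force/torque sensor,
# with synthetic calibration data. Sketch only.
import numpy as np

rng = np.random.default_rng(2)
F_true = rng.uniform(-10, 10, size=(200, 6))        # applied forces/torques
C = np.eye(6) + 0.05 * rng.standard_normal((6, 6))  # assumed cross-coupling
V = F_true @ C.T + 0.01 * rng.standard_normal((200, 6))  # measured voltages

# Least-squares decoupling matrix D such that F ~ V @ D.T
D = (np.linalg.pinv(V) @ F_true).T
F_hat = V @ D.T
print("max residual coupling error:", np.abs(F_hat - F_true).max())
```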

  6. Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels

    NASA Astrophysics Data System (ADS)

    Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis

    2013-01-01

    We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly, and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light is shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.

  7. Interpolating moving least-squares methods for fitting potential energy surfaces: using classical trajectories to explore configuration space.

    PubMed

    Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L

    2009-04-14

    We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis→trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
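
    To make the fitting step concrete, here is a one-dimensional IMLS sketch: a local polynomial fit whose weights diverge at the data points, so the fitted surface interpolates them. The weight exponent and basis degree are illustrative choices, not those of the paper.

```python
# One-dimensional interpolating moving least squares (IMLS). Sketch only.
import numpy as np

def imls_eval(x, xd, yd, degree=2, p=4, eps=1e-12):
    """Evaluate the IMLS fit at scalar x from data (xd, yd)."""
    w = 1.0 / (np.abs(x - xd) ** p + eps)    # singular weights -> interpolation
    B = np.vander(xd, degree + 1)            # local polynomial basis
    W = np.diag(w)
    coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ yd)
    return np.polyval(coef, x)

xd = np.linspace(0.0, np.pi, 7)
yd = np.sin(xd)                              # stand-in for ab initio energies
print(imls_eval(1.0, xd, yd), np.sin(1.0))   # fitted vs exact value
```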

  8. Assessment of the Derivative-Moment Transformation method for unsteady-load estimation

    NASA Astrophysics Data System (ADS)

    Mohebbian, Ali; Rival, David

    2011-11-01

    It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces instead. However, measuring the acceleration term within the volume of interest using PIV can be limited by optical access, reflections, and shadows. Therefore, in this study an alternative approach, termed the Derivative-Moment Transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency was found to be the determination of pressure in the wake. The effect of control-volume size was investigated, suggesting that smaller domains work best by minimizing the error associated with the pressure field. When the control-volume size is increased, the number of calculations necessary for the pressure-gradient integration grows, in turn substantially increasing the error propagation.

  9. Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors

    NASA Astrophysics Data System (ADS)

    Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.

    2016-12-01

    The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic which remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programming and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
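
    The simplicity of Walsh-function generation alluded to above can be seen from a sequency-ordered Hadamard construction; the indexing convention below is one common choice, not necessarily the authors'.

```python
# Walsh functions as rows of a Hadamard matrix, re-sorted by the number of
# sign changes (sequency). Minimal sketch of why real-time generation is easy.
import numpy as np
from scipy.linalg import hadamard

def walsh_matrix(n):
    """Return the n x n Walsh matrix (n a power of 2), rows ordered by sequency."""
    H = hadamard(n)
    sign_changes = np.count_nonzero(np.diff(H, axis=1) != 0, axis=1)
    return H[np.argsort(sign_changes)]

W = walsh_matrix(8)
print(W[3])   # the sequency-3 Walsh function sampled on 8 points
```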

  10. Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Tan, Vincent Y. F.

    2015-08-01

    We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.

  11. Multi-frequency bioelectrical impedance: a comparison between the Cole-Cole modelling and Hanai equations with the classical impedance index approach.

    PubMed

    Deurenberg, P; Andreoli, A; de Lorenzo, A

    1996-01-01

    Total body water and extracellular water were measured by deuterium oxide and bromide dilution respectively in 23 healthy males and 25 healthy females. In addition, total body impedance was measured at 17 frequencies, ranging from 1 kHz to 1350 kHz. Modelling programs were used to extrapolate impedance values to frequency zero (extracellular resistance) and frequency infinity (total body water resistance). Impedance indexes (height2/Zf) were computed at all 17 frequencies. The estimation errors of extracellular resistance and total body water resistance were 1% and 3%, respectively. Impedance and impedance index at low frequency were correlated with extracellular water, independent of the amount of total body water. Total body water showed the greatest correlation with impedance and impedance index at high frequencies. Extrapolated impedance values did not show a higher correlation compared to measured values. Prediction formulas from the literature applied to fixed frequencies showed the best mean and individual predictions for both extracellular water and total body water. It is concluded that, at least in healthy individuals with normal body water distribution, modelling impedance data has no advantage over impedance values measured at fixed frequencies, probably due to estimation errors in the modelled data.

  12. Design of a lightweight, tethered, torque-controlled knee exoskeleton.

    PubMed

    Witte, Kirby Ann; Fatschel, Andreas M; Collins, Steven H

    2017-07-01

    Lower-limb exoskeletons show promise for improving gait rehabilitation for those with chronic gait abnormalities due to injury, stroke or other illness. We designed and built a tethered knee exoskeleton with a strong lightweight frame and comfortable, four-point contact with the leg. The device is structurally compliant in select directions, instrumented to measure joint angle and applied torque, and is lightweight (0.76 kg). The exoskeleton is actuated by two off-board motors. Closed loop torque control is achieved using classical proportional feedback control with damping injection in conjunction with iterative learning. We tested torque measurement accuracy and found root mean squared (RMS) error of 0.8 Nm with a max load of 62.2 Nm. Bandwidth was measured to be phase limited at 45 Hz when tested on a rigid test stand and 23 Hz when tested on a person's leg. During bandwidth tests peak extension torques were measured up to 50 Nm. Torque tracking was tested during walking on a treadmill at 1.25 m/s with peak flexion torques of 30 Nm. RMS torque tracking error averaged over a hundred steps was 0.91 Nm. We intend to use this knee exoskeleton to investigate robotic assistance strategies to improve gait rehabilitation and enhance human athletic ability.
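
    A schematic of the control law described above, assuming (our assumption, not the authors' published gains) a P-type iterative-learning update applied once per gait cycle:

```python
# Proportional torque feedback with damping injection plus iterative learning.
# Gains, learning rate, and phase resolution are illustrative assumptions.
import numpy as np

KP, KD, LR = 5.0, 0.05, 0.3      # P gain, damping injection, learning rate
N_PHASE = 500                     # control samples per gait cycle

class TorqueController:
    """Proportional feedback + damping injection + P-type iterative learning."""
    def __init__(self):
        self.ff = np.zeros(N_PHASE)          # feedforward, indexed by gait phase

    def command(self, i, tau_des, tau_meas, joint_vel):
        err = tau_des - tau_meas
        return KP * err - KD * joint_vel + self.ff[i]

    def end_of_cycle(self, err_trace):
        # Fold this cycle's torque-tracking error into next cycle's feedforward.
        self.ff += LR * err_trace
```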

  13. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables itself represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  14. Item Response Theory and Health Outcomes Measurement in the 21st Century

    PubMed Central

    Hays, Ron D.; Morales, Leo S.; Reise, Steve P.

    2006-01-01

    Item response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods. PMID:10982088
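
    As a concrete instance of "standard errors conditional on trait level": under a two-parameter logistic (2PL) model, the test information at trait level theta gives SE(theta) = 1/sqrt(I(theta)). The item parameters below are invented for illustration.

```python
# Conditional standard errors from the 2PL test information function. Sketch.
import numpy as np

def p_2pl(theta, a, b):
    """Probability of endorsing an item with discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def trait_se(theta, a, b):
    """Standard error of the trait estimate, conditional on theta."""
    p = p_2pl(theta, a, b)
    info = np.sum(a**2 * p * (1.0 - p))   # test information I(theta)
    return 1.0 / np.sqrt(info)

a = np.array([1.2, 0.8, 1.5, 1.0])        # illustrative discriminations
b = np.array([-1.0, 0.0, 0.5, 1.5])       # illustrative difficulties
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.0f}: SE={trait_se(theta, a, b):.2f}")
```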

  15. Calibrating ion density profile measurements in ion thruster beam plasma

    NASA Astrophysics Data System (ADS)

    Zhang, Zun; Tang, Haibin; Ren, Junxue; Zhang, Zhe; Wang, Joseph

    2016-11-01

    The ion thruster beam plasma is characterized by a high directed ion velocity (~10^4 m/s) and low plasma density (~10^15 m^-3). Interpretation of measurements of such a plasma based on classical Langmuir probe theory can yield a large experimental error. This paper presents an indirect method to calibrate ion density determination in an ion thruster beam plasma using a Faraday probe, a retarding potential analyzer, and a Langmuir probe. This new method is applied to the plasma emitted from a 20-cm-diameter Kaufman ion thruster. The results show that the ion density calibrated by the new method can be as much as 40% less than that obtained without any ion current density and ion velocity calibration.
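
    In schematic form, the indirect calibration combines the Faraday-probe current density J with the beam velocity inferred from the retarding potential analyzer, giving n = J/(e v). The numbers and the assumption of singly charged xenon are ours, not the paper's measurements.

```python
# Ion density from Faraday-probe current density and RPA beam energy. Sketch.
import numpy as np

E_CHARGE = 1.602e-19       # C
M_XE = 2.18e-25            # kg, xenon ion mass (singly charged ions assumed)

J = 8.0                    # A/m^2, Faraday-probe ion current density (example)
V_beam = 900.0             # V, beam potential from the RPA (example)

v_ion = np.sqrt(2.0 * E_CHARGE * V_beam / M_XE)   # directed ion velocity
n_ion = J / (E_CHARGE * v_ion)                    # calibrated ion density
print(f"v = {v_ion:.3e} m/s, n = {n_ion:.3e} m^-3")  # ~1e4 m/s, ~1e15 m^-3
```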

  16. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
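
    As a concrete, simplified instance of Hamiltonian averaging, the energy is estimated as a coefficient-weighted sum of independently sampled Pauli expectations; truncation simply drops the smallest-magnitude terms. The coefficients and expectation values below are made up for illustration.

```python
# Hamiltonian averaging <H> = sum_i c_i <P_i> from finite shot counts, with
# truncation of small-coefficient terms as a cost reduction. Sketch only.
import numpy as np

rng = np.random.default_rng(3)
coeffs = np.array([0.7, -0.4, 0.1, 0.02, -0.005])   # illustrative Pauli weights
exact = np.array([0.9, -0.2, 0.5, 0.1, 0.3])        # true <P_i> (stand-ins)

def estimate_term(p_exact, shots):
    """Sample +/-1 outcomes with mean p_exact; return the empirical mean."""
    outcomes = rng.choice([1.0, -1.0], size=shots,
                          p=[(1 + p_exact) / 2, (1 - p_exact) / 2])
    return outcomes.mean()

shots = 2000
full = sum(c * estimate_term(p, shots) for c, p in zip(coeffs, exact))
keep = np.abs(coeffs) > 0.05                        # truncate small terms
trunc = sum(c * estimate_term(p, shots) for c, p in zip(coeffs[keep], exact[keep]))
print(f"full: {full:.3f}  truncated: {trunc:.3f}  exact: {coeffs @ exact:.3f}")
```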

  17. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two real-life benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
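
    A plug-in sketch in the spirit of the first method (ensemble-averaged posteriors): this illustrative variant estimates the Bayes error as the expected residual probability mass E[1 - max_c p(c|x)], and is not the authors' exact estimator.

```python
# Bayes-error estimate from ensemble-averaged posterior estimates. Sketch.
import numpy as np

def bayes_error_estimate(posteriors):
    """posteriors: array (n_classifiers, n_samples, n_classes)."""
    avg = posteriors.mean(axis=0)             # ensemble-averaged posteriors
    avg = avg / avg.sum(axis=1, keepdims=True)
    return np.mean(1.0 - avg.max(axis=1))     # residual mass off the top class

# Toy check: three noisy copies of a known two-class posterior field.
rng = np.random.default_rng(4)
p1 = rng.uniform(0.0, 1.0, size=5000)         # true P(class 1 | x) per sample
true_bayes = np.mean(np.minimum(p1, 1 - p1))  # 0.25 for uniform posteriors
noisy = np.stack([np.clip(np.stack([1 - p1, p1], axis=1)
                          + 0.05 * rng.standard_normal((5000, 2)), 1e-6, None)
                  for _ in range(3)])
print(f"estimate: {bayes_error_estimate(noisy):.3f}  true: {true_bayes:.3f}")
```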

  18. A dark mode in scanning thermal microscopy

    NASA Astrophysics Data System (ADS)

    Ramiandrisoa, Liana; Allard, Alexandre; Joumani, Youssef; Hay, Bruno; Gomés, Séverine

    2017-12-01

    The need for high lateral spatial resolution in thermal science using Scanning Thermal Microscopy (SThM) has pushed researchers to look for ever tinier probes. SThM probes have consequently become more and more sensitive to the size effects that occur within the probe, the sample, and their interaction. Reducing the tip size further results in a very small heat flux exchanged between the probe and the sample. Measuring this flux, which is exploited to characterize the sample's thermal properties, then requires accurate thermal management of the probe-sample system and the reduction of any phenomenon parasitic to this system. Classical experimental methodologies must therefore be constantly questioned if relevant and interpretable results are to be obtained. In this paper, we demonstrate and estimate the influence on SThM measurements of the laser in the optical force-detection system of the common SThM setup based on atomic-force microscopy equipment. We highlight the bias induced by overheating due to the laser illumination on measurements performed with thermoresistive probes (a palladium probe from Kelvin Nanotechnology). To address this issue, we propose a new experimental procedure based on a metrological approach to the measurement: an SThM "dark mode." Comparison with the classical procedure using the laser shows that errors between 14% and 37% can occur in the experimental data exploited to determine the heat flux transferred from the hot probe to the sample.

  19. The Quantum Socket: Wiring for Superconducting Qubits - Part 1

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Bejanin, J. H.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Mariantoni, M.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    Quantum systems with ten superconducting quantum bits (qubits) have been realized, making it possible to show basic quantum error correction (QEC) algorithms. However, a truly scalable architecture has not been developed yet. QEC requires a two-dimensional array of qubits, restricting any interconnection to external classical systems to the third axis. In this talk, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket employs three-dimensional wires and makes it possible to connect classical electronics with quantum circuits more densely and accurately than methods based on wire bonding. The three-dimensional wires are based on spring-loaded pins engineered to ensure compatibility with quantum computing applications. Extensive design work and machining were required, with a focus on material quality to prevent magnetic impurities. Microwave simulations were undertaken to optimize the design, focusing on the interface between the micro-connector and an on-chip coplanar waveguide pad. Simulations revealed good performance from DC to 10 GHz and were later confirmed against experimental measurements.

  20. A quantum theory account of order effects and conjunction fallacies in political judgments.

    PubMed

    Yearsley, James M; Trueblood, Jennifer S

    2017-09-06

    Are our everyday judgments about the world around us normative? Decades of research in the judgment and decision-making literature suggest the answer is no. If people's judgments do not follow normative rules, then what rules, if any, do they follow? Quantum probability theory is a promising new approach to modeling human behavior that is at odds with normative, classical rules. One key advantage of using quantum theory is that it explains multiple types of judgment errors using the same basic machinery, unifying what have previously been thought of as disparate phenomena. In this article, we test predictions from quantum theory related to the co-occurrence of two classic judgment phenomena, order effects and conjunction fallacies, using judgments about real-world events (related to the U.S. presidential primaries). We also show that our data obey two a priori, parameter-free constraints derived from quantum theory. Further, we examine two factors that moderate the effects, cognitive thinking style (as measured by the Cognitive Reflection Test) and political ideology.

  1. Statistical properties of four effect-size measures for mediation models.

    PubMed

    Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C

    2018-02-01

    This project examined the performance of classical and Bayesian estimators of four effect-size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
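
    A minimal sketch of a percentile-bootstrap interval for the standardized effect size ab/s_Y in a single-mediator model; the data-generating coefficients below are arbitrary.

```python
# Percentile bootstrap for the standardized indirect effect ab / s_Y. Sketch.
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)              # mediator model: a = 0.5
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)    # outcome model:  b = 0.4

def std_indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of M on X
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]      # slope of Y on M, given X
    return a * b / y.std(ddof=1)                  # ab / s_Y

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # resample cases with replacement
    boots.append(std_indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"ab/s_Y = {std_indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```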

  2. Measurement Issues in Health Disparities Research

    PubMed Central

    Ramírez, Mildred; Ford, Marvella E; Stewart, Anita L; A Teresi, Jeanne

    2005-01-01

    Background Racial and ethnic disparities in health and health care have been documented; the elimination of such disparities is currently part of a national agenda. To meet this national objective, measures must accurately identify the true prevalence of the construct of interest across diverse groups. Measurement error might lead to biased results, e.g., estimates of prevalence, magnitude of risks, and differences in mean scores. Addressing measurement issues in the assessment of health status may contribute to a better understanding of health issues in cross-cultural research. Objective To provide a brief overview of issues regarding measurement in diverse populations. Findings Approaches used to assess the magnitude and nature of bias in measures when applied to diverse groups include qualitative analyses and classic psychometric studies, as well as more modern psychometric methods. These approaches should be applied sequentially and/or iteratively during the development of measures. Conclusions Investigators performing comparative studies face the challenge of addressing measurement equivalence, which is crucial for obtaining accurate results in cross-cultural comparisons. PMID:16179000

  3. A Hierarchical Modulation Coherent Communication Scheme for Simultaneous Four-State Continuous-Variable Quantum Key Distribution and Classical Communication

    NASA Astrophysics Data System (ADS)

    Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang

    2018-06-01

    We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we believe our hierarchical modulation scheme can be used to upgrade digital communication systems and extend their functionality in the future.
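
    One way to picture the hierarchical constellation (our illustrative reading, not the protocol's actual parameters): each pulse carries a large QPSK displacement encoding the classical bits plus a small four-state displacement for the QKD layer.

```python
# Hierarchical constellation sketch: coarse QPSK layer + fine four-state layer.
# Amplitudes and geometry are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
n = 8
classical_bits = rng.integers(0, 4, n)          # 2 classical bits per pulse
qkd_states = rng.integers(0, 4, n)              # four-state QKD modulation

A_CLASSICAL, A_QKD = 20.0, 0.5                  # coarse vs fine amplitudes
qpsk = A_CLASSICAL * np.exp(1j * (np.pi / 4 + np.pi / 2 * classical_bits))
fine = A_QKD * np.exp(1j * (np.pi / 4 + np.pi / 2 * qkd_states))
field = qpsk + fine                             # complex amplitude per pulse
print(np.round(field, 2))
```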

  4. Characterizing quantum channels with non-separable states of classical light

    NASA Astrophysics Data System (ADS)

    Ndagano, Bienvenu; Perez-Garcia, Benjamin; Roux, Filippus S.; McLaren, Melanie; Rosales-Guzman, Carmelo; Zhang, Yingwen; Mouane, Othmane; Hernandez-Aranda, Raul I.; Konrad, Thomas; Forbes, Andrew

    2017-04-01

    High-dimensional entanglement with spatial modes of light promises increased security and information capacity over quantum channels. Unfortunately, entanglement decays due to perturbations, corrupting quantum links that cannot be repaired without performing quantum tomography on the channel. Paradoxically, the channel tomography itself is not possible without a working link. Here we overcome this problem with a robust approach to characterize quantum channels by means of classical light. Using free-space communication in a turbulent atmosphere as an example, we show that the state evolution of classically entangled degrees of freedom is equivalent to that of quantum entangled photons, thus providing new physical insights into the notion of classical entanglement. The analysis of quantum channels by means of classical light in real time unravels stochastic dynamics in terms of pure state trajectories, and thus enables precise quantum error correction in short- and long-haul optical communication, in both free space and fibre.

  5. Simultaneous classical communication and quantum key distribution using continuous variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Bing

    Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  6. Simultaneous classical communication and quantum key distribution using continuous variables

    DOE PAGES

    Qi, Bing

    2016-10-26

    Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.

  7. Classical non-homologous end-joining pathway utilizes nascent RNA for error-free double-strand break repair of transcribed genes

    PubMed Central

    Chakraborty, Anirban; Tapryal, Nisha; Venkova, Tatiana; Horikoshi, Nobuo; Pandita, Raj K.; Sarker, Altaf H.; Sarkar, Partha S.; Pandita, Tej K.; Hazra, Tapas K.

    2016-01-01

    DNA double-strand breaks (DSBs) leading to loss of nucleotides in the transcribed region can be lethal. Classical non-homologous end-joining (C-NHEJ) is the dominant pathway for DSB repair (DSBR) in adult mammalian cells. Here we report that during such DSBR, mammalian C-NHEJ proteins form a multiprotein complex with RNA polymerase II and preferentially associate with the transcribed genes after DSB induction. Depletion of C-NHEJ factors significantly abrogates DSBR in transcribed but not in non-transcribed genes. We hypothesized that nascent RNA can serve as a template for restoring the missing sequences, thus allowing error-free DSBR. We indeed found pre-mRNA in the C-NHEJ complex. Finally, when a DSB-containing plasmid with several nucleotides deleted within the E. coli lacZ gene was allowed time to repair in lacZ-expressing mammalian cells, a functional lacZ plasmid could be recovered from control but not C-NHEJ factor-depleted cells, providing important mechanistic insights into C-NHEJ-mediated error-free DSBR of the transcribed genome. PMID:27703167

  8. Inflation of the type I error: investigations on regulatory recommendations for bioequivalence of highly variable drugs.

    PubMed

    Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin

    2015-01-01

    We investigated different evaluation strategies for bioequivalence trials with highly variable drugs (HVDs) in terms of their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of the EMA were compared. Simulation studies were performed based on the assumption of a single-dose drug administration while varying the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the EMA approach, we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept, the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted as bioequivalent that would not have been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.

  9. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10^-4).

  10. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466

  11. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10^-4).

  12. Quantum biological channel modeling and capacity calculation.

    PubMed

    Djordjevic, Ivan B

    2012-12-10

    Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There have been many attempts to explain the structure of the genetic code and the transfer of information from DNA to protein using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the determination of the quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of the biological channel based on codon base kets (basis vectors), and determine a quantum channel model suitable for studying the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself, as it represents an imperfect storage of genetic information, (ii) replication errors introduced during the DNA replication process, (iii) transcription errors introduced during DNA-to-mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against the corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance for the future study of quantum DNA error correction, the development of a quantum mechanical model of aging, the development of quantum mechanical models for tumors/cancer, and the study of intracellular dynamics in general.

  13. Theta EEG dynamics of the error-related negativity.

    PubMed

    Trujillo, Logan T; Allen, John J B

    2007-03-01

    The error-related negativity (ERN) is a response-locked brain potential (ERP) occurring 80-100 ms following response errors. This report contrasts three views of the genesis of the ERN, testing the classic view that time-locked phasic bursts give rise to the ERN against the view that the ERN arises from a pure phase-resetting of ongoing theta (4-7 Hz) EEG activity and the view that the ERN is generated - at least in part - by a phase-resetting and amplitude enhancement of ongoing theta EEG activity. Time-domain ERP analyses were augmented with time-frequency investigations of phase-locked and non-phase-locked spectral power, and inter-trial phase coherence (ITPC) computed from individual EEG trials, examining time courses and scalp topographies. Simulations based on the assumptions of the classic, pure phase-resetting, and phase-resetting plus enhancement views, using parameters from each subject's empirical data, were used to contrast the time-frequency findings that could be expected if one or more of these hypotheses adequately modeled the data. Error responses produced larger amplitude activity than correct responses in time-domain ERPs immediately following responses, as expected. Time-frequency analyses revealed that significant error-related post-response increases in total spectral power (phase- and non-phase-locked), phase-locked power, and ITPC were primarily restricted to the theta range, with this effect located over midfrontocentral sites, with a temporal distribution from approximately 150-200 ms prior to the button press and persisting up to 400 ms post-button press. The increase in non-phase-locked power (total power minus phase-locked power) was larger than phase-locked power, indicating that the bulk of the theta event-related dynamics were not phase-locked to the response. Results of the simulations revealed a good fit for data simulated according to the phase-locking with amplitude enhancement perspective, and a poor fit for data simulated according to the classic view and the pure phase-resetting view. Error responses produce not only phase-locked increases in theta EEG activity, but also increases in non-phase-locked theta, both of which share a similar topography. The findings are thus consistent with the notion advanced by Luu et al. [Luu P, Tucker DM, Makeig S. Frontal midline theta and the error-related negativity: neurophysiological mechanisms of action regulation. Clin Neurophysiol 2004;115:1821-35] that the ERN emerges, at least in part, from a phase-resetting and phase-locking of ongoing theta-band activity, in the context of a general increase in theta power following errors.
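
    The ITPC used here has a compact definition: the magnitude of the across-trial mean of unit phase vectors at each time point. A minimal numpy sketch, with Morlet parameters chosen for illustration rather than taken from the study:

```python
# Inter-trial phase coherence (ITPC) at one frequency via Morlet convolution.
import numpy as np

def itpc(trials, fs, freq, n_cycles=5.0):
    """trials: array (n_trials, n_times). ITPC time course at one frequency."""
    sigma = n_cycles / (2.0 * np.pi * freq)            # Gaussian width, seconds
    t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
    phase = np.angle(analytic)
    return np.abs(np.exp(1j * phase).mean(axis=0))     # 0 = random, 1 = locked

# Toy demo: 6 Hz (theta) activity with small phase jitter across 40 trials.
fs = 250
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(6)
trials = np.array([np.cos(2 * np.pi * 6 * t + 0.3 * rng.standard_normal())
                   + 0.5 * rng.standard_normal(t.size) for _ in range(40)])
print("peak ITPC at 6 Hz:", round(itpc(trials, fs, 6.0).max(), 2))
```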

  14. Accounting for unknown foster dams in the genetic evaluation of embryo transfer progeny.

    PubMed

    Suárez, M J; Munilla, S; Cantet, R J C

    2015-02-01

    Animals born by embryo transfer (ET) are usually not included in the genetic evaluation of beef cattle for preweaning growth if the recipient dam is unknown. This is primarily to avoid potential bias in the estimation of the unknown age of dam. We present a method that allows including records of calves with unknown age of dam. Assumptions are as follows: (i) foster cows belong to the same breed being evaluated, (ii) there is no correlation between the breeding value (BV) of the calf and the maternal BV of the recipient cow, and (iii) cows of all ages are used as recipients. We examine the issue of bias for the fixed level of unknown age of dam (AOD) and propose an estimator of the effect based on classical measurement error theory (MEM) and a Bayesian approach. Using stochastic simulation under random mating or selection, the MEM estimating equations were compared with BLUP in two situations as follows: (i) full information (FI); (ii) missing AOD information on some dams. Predictions of breeding value (PBV) from the FI situation had the smallest empirical average bias followed by PBV obtained without taking measurement error into account. In turn, MEM displayed the highest bias, although the differences were small. On the other hand, MEM showed the smallest MSEP, for either random mating or selection, followed by FI, whereas ignoring measurement error produced the largest MSEP. As a consequence from the smallest MSEP with a relatively small bias, empirical accuracies of PBV were larger for MEM than those for full information, which in turn showed larger accuracies than the situation ignoring measurement error. It is concluded that MEM equations are a useful alternative for analysing weaning weight data when recipient cows are unknown, as it mitigates the effects of bias in AOD by decreasing MSEP. © 2014 Blackwell Verlag GmbH.

  15. Executive functioning in schizophrenia: Unique and shared variance with measures of fluid intelligence.

    PubMed

    Martin, A K; Mowry, B; Reutens, D; Robinson, G A

    2015-10-01

    Patients with schizophrenia often display deficits on tasks thought to measure "executive" processes. Recently, it has been suggested that reductions in fluid intelligence test performance entirely explain the deficits reported for patients with focal frontal lesions on classical executive tasks. For patients with schizophrenia, it is unclear whether deficits on executive tasks are entirely accounted for by fluid intelligence and representative of a common general process, or are best accounted for by distinct contributions to the cognitive profile of schizophrenia. In the current study, 50 patients with schizophrenia and 50 age-, sex- and premorbid-intelligence-matched controls were assessed using a broad neuropsychological battery, including tasks considered sensitive to executive abilities, namely the Hayling Sentence Completion Test (HSCT), word fluency, the Stroop test, digit-span backwards, and spatial working memory. Fluid intelligence was measured using both the Matrix Reasoning subtest from the Wechsler Abbreviated Scale of Intelligence (WASI) and a composite score derived from a number of cognitive tests. Patients with schizophrenia were impaired on all cognitive measures compared with controls, except smell identification and the optimal betting and risk-taking measures from the Cambridge Gambling Task. After introducing fluid intelligence as a covariate, significant differences remained for HSCT suppression errors and classical executive function tests such as the Stroop test and semantic/phonemic word fluency, regardless of which fluid intelligence measure was included. Fluid intelligence does not entirely explain impaired performance on all tests considered as reflecting "executive" processes. For schizophrenia, these measures should remain part of a comprehensive neuropsychological assessment alongside a measure of fluid intelligence. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. The effect of signal to noise ratio on accuracy of temperature measurements for Brillouin lidar in water

    NASA Astrophysics Data System (ADS)

    Liang, Kun; Niu, Qunjie; Wu, Xiangkui; Xu, Jiaqi; Peng, Li; Zhou, Bo

    2017-09-01

    A lidar system with a Fabry-Pérot etalon and an intensified charge-coupled device can be used to obtain the scattering spectrum of the ocean and retrieve oceanic temperature profiles. However, the spectrum can be polluted by noise, resulting in measurement error. To analyze the effect of signal-to-noise ratio (SNR) on the accuracy of temperature measurements for Brillouin lidar in water, the theoretical model and characteristics of the SNR are investigated. Noise spectra with different SNRs are measured repeatedly in both simulation and experiment. The results show that accuracy is related to SNR and, considering the balance between time consumption and quality, the average of five measurements is adopted for real remote sensing under pulse laser conditions of wavelength 532 nm, pulse energy 650 mJ, repetition rate 10 Hz, pulse width 8 ns and linewidth 0.003 cm^-1 (90 MHz). Measuring with the Brillouin linewidth gives better accuracy at lower temperatures (<15 °C), while measuring with the Brillouin shift is more appropriate at higher temperatures (>15 °C), based on the classical retrieval model we adopt. The experimental results show that the temperature error is 0.71 °C and 0.06 °C based on the shift and the linewidth, respectively, when the image SNR is in the range of 3.2-3.9 dB.

  17. Static properties of hydrostatic thrust gas bearings with curved surfaces.

    NASA Technical Reports Server (NTRS)

    Rehsteiner, F. H.; Cannon, R. H., Jr.

    1971-01-01

    The classical treatment of circular, hydrostatic, orifice-regulated thrust gas bearings, in which perfectly plane bearing plates are assumed, is extended to include axisymmetric, but otherwise arbitrary, plate profiles. Plate curvature has a strong influence on bearing load capability, static stiffness, tilting stiffness, and side force per unit misalignment angle. By a suitable combination of gas inlet impedance and concave plate profile, the static stiffness can be made almost constant over a wide load range, and to remain positive at the closure load. Extensive measurements performed with convex and concave plates agree with theory to within the experimental error throughout and demonstrate the practical feasibility of using curved plates.

  18. Preliminary development of digital signal processing in microwave radiometers

    NASA Technical Reports Server (NTRS)

    Stanley, W. D.

    1980-01-01

    Topics covered involve a number of closely related tasks including: the development of several control loop and dynamic noise model computer programs for simulating microwave radiometer measurements; computer modeling of an existing stepped frequency radiometer in an effort to determine its optimum operational characteristics; investigation of the classical second order analog control loop to determine its ability to reduce the estimation error in a microwave radiometer; investigation of several digital signal processing unit designs; initiation of efforts to develop required hardware and software for implementation of the digital signal processing unit; and investigation of the general characteristics and peculiarities of digital processing noiselike microwave radiometer signals.

  19. New approach for identifying the zero-order fringe in variable wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek

    2016-12-01

    The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects when the optical path difference in the interference system exceeds a few wavelengths. The classical approach, which consists in measuring the deflection of interference fringes, fails because of strong edge effects. Broken continuity of the interference fringes prevents correct identification of the zero-order fringe, which leads to significant errors. This family of methods was originally proposed by Professor Pluta in the 1980s, but at that time image-processing facilities and computers were hardly available. Automated devices open up a completely new approach to the classical measurement procedures. The Institute team has taken that opportunity and transformed the technique into fully automated measurement devices of industry-grade, commercial quality. The method itself has been modified, and new solutions and algorithms have simultaneously extended the field of application. This has concerned both the construction aspects of the systems and software development in the context of creating computerized instruments. The VAWI collection of instruments now constitutes the core of the Institute's commercial offer. It is practically applicable in industrial environments for measuring textile and optical fibers and strips of thin films, and for testing wave plates and nonlinear effects in different materials. This paper describes new algorithms for identifying the zero-order fringe, which increase the performance of the system as a whole, and presents some examples of measurements of optical elements.

  20. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
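
    The two standard estimators the abstract contrasts can be written down directly: the "classical" estimator regresses Y on X and inverts the fitted line, while the "inverse" estimator regresses X on Y. A sketch with an arbitrary true line:

```python
# Classical vs inverse estimators for univariate linear calibration. Sketch.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 30)                               # reference standards
y = 2.0 + 0.5 * x + 0.1 * rng.standard_normal(x.size)    # instrument readings

# Classical estimator: fit y = a + b x, then invert: x0_hat = (y0 - a) / b.
b, a = np.polyfit(x, y, 1)
y0 = 4.6                                                  # new observed reading
x0_classical = (y0 - a) / b

# Inverse estimator: fit x = c + d y directly, then x0_hat = c + d y0.
d, c = np.polyfit(y, x, 1)
x0_inverse = c + d * y0

print(f"classical: {x0_classical:.3f}, inverse: {x0_inverse:.3f}")
```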

  1. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  2. Low aerial imagery - an assessment of georeferencing errors and the potential for use in environmental inventory

    NASA Astrophysics Data System (ADS)

    Smaczyński, Maciej; Medyńska-Gulij, Beata

    2017-06-01

    Unmanned aerial vehicles are increasingly being used in close range photogrammetry. Real-time observation of the Earth's surface and the photogrammetric images obtained are used as material for surveying and environmental inventory. The following study was conducted on a small area (approximately 1 ha), for which the classical method of topographic mapping is not accurate enough, while the geodetic method of topographic surveying is an overly precise measurement technique for the purpose of inventorying natural environment components. The study therefore proposes using unmanned aerial vehicle technology and tying the obtained images to a control point network established with GNSS technology. Georeferencing the acquired images and using them to create a photogrammetric model of the studied area enabled calculations that yielded a total root mean square error below 9 cm. Comparing the real lengths of the vectors connecting the control points with their lengths calculated from the photogrammetric model confirmed the calculated RMSE and proved the usefulness of UAV technology in observing terrain components for the purpose of environmental inventory. Such environmental components include elements of road infrastructure and green areas, but also changes in the location of moving pedestrians and vehicles, as well as other changes in the natural environment that are not registered on classical base maps or topographic maps.
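    The reported consistency check reduces to a root mean square error between surveyed and model-derived distances. A minimal sketch (the numbers are placeholders, not the study's data):

      import numpy as np

      surveyed_m = np.array([12.31, 25.07, 40.52, 18.96])  # GNSS-derived lengths (m)
      model_m    = np.array([12.27, 25.15, 40.45, 19.02])  # lengths from the UAV model (m)

      rmse = np.sqrt(np.mean((surveyed_m - model_m) ** 2))
      print(f"RMSE = {rmse * 100:.1f} cm")  # the study reports a total RMSE below 9 cm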

  3. Experimental determination of single CdSe nanowire absorption cross sections through photothermal imaging.

    PubMed

    Giblin, Jay; Syed, Muhammad; Banning, Michael T; Kuno, Masaru; Hartland, Greg

    2010-01-26

    Absorption cross sections (σ_abs) of single branched CdSe nanowires (NWs) have been measured by photothermal heterodyne imaging (PHI). Specifically, PHI signals from isolated gold nanoparticles (NPs) with known cross sections were compared to those of individual CdSe NWs excited at 532 nm. This allowed us to determine average NW absorption cross sections at 532 nm of σ_abs = (3.17 ± 0.44) x 10^-11 cm²/μm (standard error reported). This agrees well with a theoretical value obtained using a classical electromagnetic analysis (σ_abs = 5.00 x 10^-11 cm²/μm) and also with prior ensemble estimates. Furthermore, NWs exhibit significant absorption polarization sensitivities consistent with prior NW excitation polarization anisotropy measurements. This has enabled additional estimates of the absorption cross sections parallel (σ_abs,∥) and perpendicular (σ_abs,⊥) to the NW growth axis, as well as the corresponding NW absorption anisotropy (ρ_abs). Resulting values of σ_abs,∥ = (5.6 ± 1.1) x 10^-11 cm²/μm, σ_abs,⊥ = (1.26 ± 0.21) x 10^-11 cm²/μm, and ρ_abs = 0.63 ± 0.04 (standard errors reported) are again in good agreement with theoretical predictions. These measurements all indicate sizable NW absorption cross sections and ultimately suggest the possibility of future direct single NW absorption studies.

  4. The Doppler effect and the redshift in rational mechanics: applications and experimental verifications.

    PubMed

    Loiseaus, J

    1968-07-01

    The shifts toward the red of the galaxy NGC 5668 for a beam at 21 cm, z measured in radioastronomy with a frequency meter and z′ measured in optics with a spectrograph, are not equal; it follows that the speed of light from a galaxy, c′, is not equal to the speed c measured on earth from a stationary source. The empirical Doppler formula cannot be explained in classical mechanics, since it is in contradiction with it; in the theory of relativity, c′ = c by postulate and z′ = z. If the universe is represented on a three-dimensional non-Euclidean space (H) with Euclidean connection, embedded in a four-dimensional Riemannian space (E), a certain universal time, like that of an astronomer, can be defined and its course calculated: it will necessarily coincide with atomic clock time, but c′ ≠ c and z′ ≠ z, so the Doppler formula is not exact. However, c′ and c, as well as z′ and z, are so close in all the experiments carried out on earth, even when an artificial satellite is used, that the errors made in using the Doppler formula are well below the experimental errors.

  5. Measurement of process variables in solid-state fermentation of wheat straw using FT-NIR spectroscopy and synergy interval PLS algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan

    2012-11-01

    The feasibility of rapid determination of the process variables (i.e. pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier transform near infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated according to the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient (R). The measurement results of the optimal model were as follows: RMSECV = 0.0776, Rc = 0.9777, RMSEP = 0.0963, and Rp = 0.9686 for the pH model; RMSECV = 1.3544% w/w, Rc = 0.8871, RMSEP = 1.4946% w/w, and Rp = 0.8684 for the moisture content model. Finally, compared with classical PLS and iPLS models, the siPLS model showed superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry.
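    The synergy-interval idea is straightforward to prototype: split the spectrum into subintervals and search combinations of a few intervals for the one whose PLS model minimizes RMSECV. The sketch below uses scikit-learn with synthetic stand-in data (interval counts and factor numbers are illustrative, not the paper's settings):

      from itertools import combinations
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 200))                  # 60 spectra x 200 wavelengths
      y = X[:, 40:60].mean(axis=1) + 0.05 * rng.normal(size=60)  # stand-in for, e.g., pH

      n_intervals, n_joint, n_factors = 10, 2, 3
      intervals = np.array_split(np.arange(X.shape[1]), n_intervals)

      best = (np.inf, None)
      for combo in combinations(range(n_intervals), n_joint):
          cols = np.concatenate([intervals[i] for i in combo])
          pls = PLSRegression(n_components=n_factors)
          y_cv = cross_val_predict(pls, X[:, cols], y, cv=5).ravel()
          rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
          if rmsecv < best[0]:
              best = (rmsecv, combo)

      print("best RMSECV %.4f with intervals %s" % best)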

  6. Legitimate Techniques for Improving the R-Square and Related Statistics of a Multiple Regression Model

    DTIC Science & Technology

    1981-01-01

    ...explanatory variable has been omitted. Ramsey (1974) has developed a rather interesting test for detecting specification errors using estimates of the... Kennedy, Peter (1979), A Guide to Econometrics, Cambridge, MA: The MIT Press. Ramsey, J.B. (1974), "Classical Model Selection Through Specification Error Tests," in P. Zarembka, Ed., Frontiers in Econometrics, New York: Academic Press. Theil, Henri (1971), Principles of Econometrics, New York: John Wiley.

  7. Heritability of refractive error and ocular biometrics: the Genes in Myopia (GEM) twin study.

    PubMed

    Dirani, Mohamed; Chamberlain, Matthew; Shekar, Sri N; Islam, Amirul F M; Garoufalis, Pam; Chen, Christine Y; Guymer, Robyn H; Baird, Paul N

    2006-11-01

    A classic twin study was undertaken to assess the contribution of genes and environment to the development of refractive errors and ocular biometrics in a twin population. A total of 1224 twins (345 monozygotic [MZ] and 267 dizygotic [DZ] twin pairs) aged between 18 and 88 years were examined. All twins completed a questionnaire covering medical history, education, and zygosity. Objective refraction was measured in all twins, and biometric measurements were obtained using partial coherence interferometry. Intrapair correlations for spherical equivalent and ocular biometrics were significantly higher in the MZ than in the DZ twin pairs (P < 0.05) when refraction was considered as a continuous variable. A significant gender difference in the variation of spherical equivalent and ocular biometrics was found (P < 0.05). A genetic model specifying additive, dominant, and unique environmental factors with sex limitation was the best fit for all measured variables. Heritabilities of spherical equivalent of 88% and 75% were found in the men and women, respectively, whereas those of axial length were 94% and 92%, respectively. Additive genetic effects accounted for a greater proportion of the variance in spherical equivalent, whereas the variance in ocular biometrics, particularly axial length, was explained mostly by dominant genetic effects. Genetic factors, both additive and dominant, play a significant role in refractive error (myopia and hypermetropia) as well as in ocular biometrics, particularly axial length. The sex limitation ADE model (additive genetic, nonadditive genetic, and environmental components) provided the best-fit genetic model for all parameters.

  8. Characterizing quantum supremacy in near-term devices

    NASA Astrophysics Data System (ADS)

    Boixo, Sergio; Isakov, Sergei V.; Smelyanskiy, Vadim N.; Babbush, Ryan; Ding, Nan; Jiang, Zhang; Bremner, Michael J.; Martinis, John M.; Neven, Hartmut

    2018-06-01

    A critical question for quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of supercomputers. Such a demonstration of what is referred to as quantum supremacy requires a reliable evaluation of the resources required to solve tasks with classical approaches. Here, we propose the task of sampling from the output distribution of random quantum circuits as a demonstration of quantum supremacy. We extend previous results in computational complexity to argue that this sampling task must take exponential time in a classical computer. We introduce cross-entropy benchmarking to obtain the experimental fidelity of complex multiqubit dynamics. This can be estimated and extrapolated to give a success metric for a quantum supremacy demonstration. We study the computational cost of relevant classical algorithms and conclude that quantum supremacy can be achieved with circuits in a two-dimensional lattice of 7 × 7 qubits and around 40 clock cycles. This requires an error rate of around 0.5% for two-qubit gates (0.05% for one-qubit gates), and it would demonstrate the basic building blocks for a fault-tolerant quantum computer.
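    The cross-entropy benchmark scores samples by the ideal output probabilities of the circuit. As an illustration, the widely used linear variant of this idea is easy to state (this particular form is an assumption for the sketch, not necessarily the paper's exact estimator): F = 2^n * <P_ideal(x_i)> - 1, which is 0 for a uniform sampler and 1 for an ideal one.

      import numpy as np

      def linear_xeb(ideal_probs, samples, n_qubits):
          """ideal_probs: dict bitstring -> ideal circuit probability;
          samples: list of measured bitstrings."""
          mean_p = np.mean([ideal_probs[s] for s in samples])
          return (2 ** n_qubits) * mean_p - 1.0

      # Toy 2-qubit check: a perfectly uniform sampler scores ~0.
      probs = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
      print(linear_xeb(probs, ["00", "11", "01", "10"], n_qubits=2))  # -> 0.0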

  9. Quantum fingerprinting with coherent states and a constant mean number of photons

    NASA Astrophysics Data System (ADS)

    Arrazola, Juan Miguel; Lütkenhaus, Norbert

    2014-06-01

    We present a protocol for quantum fingerprinting that is ready to be implemented with current technology and is robust to experimental errors. The basis of our scheme is an implementation of the signal states in terms of a coherent state in a superposition of time-bin modes. Experimentally, this requires only the ability to prepare coherent states of low amplitude and to interfere them in a balanced beam splitter. The states used in the protocol are arbitrarily close in trace distance to states of O(log₂ n) qubits, thus exhibiting an exponential separation in abstract communication complexity compared to the classical case. The protocol uses a number of optical modes that is proportional to the size n of the input bit strings but a total mean photon number that is constant and independent of n. Given the expended resources, our protocol achieves a task that is provably impossible using classical communication only. In fact, even in the presence of realistic experimental errors and loss, we show that there exists a large range of input sizes for which our quantum protocol transmits an amount of information that can be more than two orders of magnitude smaller than that of a classical fingerprinting protocol.

  10. Reversibility and stability of information processing systems

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1984-01-01

    Classical and quantum models of dynamically reversible computers are considered. Instabilities in the evolution of the classical 'billiard ball computer' are analyzed and shown to result in a one-bit increase of entropy per step of computation. 'Quantum spin computers', on the other hand, are not only microscopically, but also operationally reversible. Readout of the output of quantum computation is shown not to interfere with this reversibility. Dissipation, while avoidable in principle, can be used in practice along with redundancy to prevent errors.

  11. Photon losses depending on polarization mixedness

    NASA Astrophysics Data System (ADS)

    Memarzadeh, L.; Mancini, S.

    2010-01-01

    We introduce a quantum channel describing photon losses depending on the degree of polarization mixedness. This can be regarded as a model of quantum channel with correlated errors between discrete and continuous degrees of freedom. We consider classical information over a continuous alphabet encoded on weak coherent states as well as classical information over a discrete alphabet encoded on single photons using dual rail representation. In both cases we study the one-shot capacity of the channel and its behaviour in terms of correlation between losses and polarization mixedness.

  12. Note: Wide-operating-range control for thermoelectric coolers.

    PubMed

    Peronio, P; Labanca, I; Ghioni, M; Rech, I

    2017-11-01

    A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.

  13. Note: Wide-operating-range control for thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Peronio, P.; Labanca, I.; Ghioni, M.; Rech, I.

    2017-11-01

    A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.

  14. On the convergence of a discrete Kirchhoff triangle method valid for shells of arbitrary shape

    NASA Astrophysics Data System (ADS)

    Bernadou, Michel; Eiroa, Pilar Mato; Trouve, Pascal

    1994-10-01

    In a recent paper by the same authors, we have thoroughly described how to extend to the case of general shells the well known DKT (discrete Kirchhoff triangle) methods which are now classically used to solve plate problems. In that paper we have also detailed how to realize the implementation and reported some numerical results obtained for classical benchmarks. The aim of this paper is to prove the convergence of a closely related method and to obtain corresponding error estimates.

  15. Dopamine neurons share common response function for reward prediction error

    PubMed Central

    Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige

    2016-01-01

    Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically-identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found striking homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we could describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal. PMID:26854803

  16. Cophasing techniques for extremely large telescopes

    NASA Astrophysics Data System (ADS)

    Devaney, Nicholas; Schumacher, Achim

    2004-07-01

    The current designs of the majority of ELTs envisage that at least the primary mirror will be segmented. Phasing of the segments is therefore a major concern, and a lot of work is underway to determine the most suitable techniques. The techniques which have been developed are either wave optics generalizations of classical geometric optics tests (e.g. Shack-Hartmann and curvature sensing) or direct interferometric measurements. We present a review of the main techniques proposed for phasing and outline their relative merits. We consider problems which are specific to ELTs, e.g. vignetting of large parts of the primary mirror by the secondary mirror spiders, and the need to disentangle phase errors arising in different segmented mirrors. We present improvements in the Shack-Hartmann and curvature sensing techniques which allow greater precision and range. Finally, we describe a piston plate which simulates segment phasing errors and show the results of laboratory experiments carried out to verify the precision of the Shack-Hartmann technique.

  17. Wavefront error measurement of the concave ellipsoidal mirrors of the METIS coronagraph on ESA Solar Orbiter mission

    NASA Astrophysics Data System (ADS)

    Sandri, P.

    2017-12-01

    The paper describes the alignment technique developed for the wavefront error measurement of ellipsoidal mirrors with a central hole. With a classic setup at the finite conjugates, a good alignment of uncoated mirrors cannot be based on identifying and materializing the retro-reflected spot by the naked eye, because the intensity of the spot retro-reflected by the mirror under test is ≈1E-3 of the intensity of the laser beam injected by the interferometer. We present a technique, based on an autocollimator with adjustable focus position and a small polished flat surface on the rear side of the mirror, that achieves an accurate alignment in the finite-conjugate setup even at low intensity. The technique has successfully been used for the optical test of the concave ellipsoidal mirrors of the METIS coronagraph of the ESA Solar Orbiter mission. The method also proves advantageous in terms of precision and time saving when the mirrors are reflective-coated and integrated into their mechanical hardware.

  18. Past observable dynamics of a continuously monitored qubit

    NASA Astrophysics Data System (ADS)

    García-Pintos, Luis Pedro; Dressel, Justin

    2017-12-01

    Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process, such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit, this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.

  19. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    PubMed

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
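    A classical amplitude-based nonlinear prediction error of the kind used as the baseline here can be sketched in a few lines (one common variant, with delay embedding and a nearest-neighbor predictor; refinements such as the Theiler window are omitted for brevity, and the data are synthetic):

      import numpy as np

      def nonlinear_prediction_error(x, dim=3, lag=1, horizon=1):
          n = len(x) - (dim - 1) * lag - horizon
          emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
          targets = x[(dim - 1) * lag + horizon :][:n]
          errs = []
          for i in range(n):
              d = np.linalg.norm(emb - emb[i], axis=1)
              d[i] = np.inf                 # exclude the self-match
              j = int(np.argmin(d))         # nearest neighbor in embedding space
              errs.append(targets[j] - targets[i])
          return np.sqrt(np.mean(np.square(errs)))

      x = np.sin(0.3 * np.arange(500)) + 0.05 * np.random.default_rng(2).normal(size=500)
      print(nonlinear_prediction_error(x))  # low values indicate deterministic structure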

  20. Detecting determinism with improved sensitivity in time series: Rank-based nonlinear predictability score

    NASA Astrophysics Data System (ADS)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G.

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  1. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    NASA Astrophysics Data System (ADS)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of the function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series will exhibit significant errors. In order to reduce these errors, an improved format of the interval extension with the first-order Taylor series is developed here, taking the monotonicity of the function into account. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in practical applications; the only input data needed are the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function from the method developed in this paper is more accurate than the one obtained by the classic method.
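    A minimal sketch of the contrast described above (an assumed reconstruction for illustration, not the authors' code): the classic first-order extension bounds f over [xc - dx, xc + dx] by f(xc) ± |f'(xc)|·dx, while for a function that is monotonic on the interval, evaluating f at the endpoints gives tighter bounds.

      import numpy as np

      def classic_extension(f, dfdx, xc, dx):
          half_width = abs(dfdx(xc)) * dx
          return f(xc) - half_width, f(xc) + half_width

      def monotonic_extension(f, xc, dx):
          lo, hi = f(xc - dx), f(xc + dx)   # valid when f is monotonic on the interval
          return min(lo, hi), max(lo, hi)

      print(classic_extension(np.sin, np.cos, xc=0.5, dx=0.4))
      print(monotonic_extension(np.sin, xc=0.5, dx=0.4))  # tighter: sin is monotonic here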

  2. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  3. Using generalizability theory to develop clinical assessment protocols.

    PubMed

    Preuss, Richard A

    2013-04-01

    Clinical assessment protocols must produce data that are reliable, with a clinically attainable minimal detectable change (MDC). In a reliability study, generalizability theory has 2 advantages over classical test theory. These advantages provide information that allows assessment protocols to be adjusted to match individual patient profiles. First, generalizability theory allows the user to simultaneously consider multiple sources of measurement error variance (facets). Second, it allows the user to generalize the findings of the main study across the different study facets and to recalculate the reliability and MDC based on different combinations of facet conditions. In doing so, clinical assessment protocols can be chosen based on minimizing the number of measures that must be taken to achieve a realistic MDC, using repeated measures to minimize the MDC, or simply based on the combination that best allows the clinician to monitor an individual patient's progress over a specified period of time.
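    The paper's point about recalculating reliability and the MDC can be made concrete with the standard formulas (a common operationalization, not quoted from the paper): SEM = SD·sqrt(1 - reliability) and MDC95 = 1.96·sqrt(2)·SEM.

      import math

      def minimal_detectable_change(sd, reliability, z=1.96):
          sem = sd * math.sqrt(1.0 - reliability)   # standard error of measurement
          return z * math.sqrt(2.0) * sem           # 95% MDC for a test-retest design

      # Raising reliability (e.g., by averaging repeated measures) shrinks the MDC,
      # which is the trade-off a generalizability (decision) study quantifies.
      print(minimal_detectable_change(sd=5.0, reliability=0.80))
      print(minimal_detectable_change(sd=5.0, reliability=0.92))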

  4. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  5. Study of the Fine-Scale Structure of Cumulus Clouds.

    NASA Astrophysics Data System (ADS)

    Rodi, Alfred R.

    Small cumulus clouds are studied using data from an instrumented aircraft. Two aspects of the role of turbulence and mixing in these clouds are examined: (1) the effect of mixing on the droplet size distribution, and (2) the effect of turbulence on the spread of ice crystal plumes artificially generated with cloud seeding agents. The data were collected in the course of the Bureau of Reclamation's High Plains Cooperative Experiment (HIPLEX) in Montana in the summers of 1978-80 by the University of Wyoming King Air aircraft. The shape of the cloud droplet spectrum as measured by the Particle Measuring Systems (PMS) Forward Scattering Spectrometer Probe (FSSP) is found to be very sensitive to entrainment of dry environmental air into the cloud. The narrowest cloud droplet spectra, the highest droplet concentrations, and the largest droplets are found in the cloud parcels that are least affected by entrainment. The most dilute regions of cloud exhibit the broadest spectra, which are frequently bimodal. A procedure for measuring cloud inhomogeneity from FSSP data is developed. The data show that the clouds are extremely inhomogeneous in structure. Current models of inhomogeneous mixing are shown to be inadequate in explaining droplet spectrum effects. However, the inhomogeneous models characterize the data far better than classical models of droplet spectrum evolution. High resolution measurements of ice crystals from the PMS two dimensional imaging probe are used to characterize the spread of the ice crystal plume in seeded clouds. Plume spread is found to be a very complicated process which is in some cases dominated by organized motions in the cloud. As a result, classical diffusion theory is often inadequate to predict plume growth. The turbulent diffusion that occurs is shown to be best modeled using the relative diffusion concept of Richardson. Procedures for adapting aircraft data to the relative diffusion model are developed, including techniques for converting the aircraft Eulerian data into estimates of Lagrangian correlations. Predictions of the model are compared with observations of plume growth. A detailed analysis of errors in the air motion sensing system on the aircraft is presented. A procedure is developed to estimate the errors due to aircraft gyroscope sensitivity to horizontal accelerations.

  6. Anonymous broadcasting of classical information with a continuous-variable topological quantum code

    NASA Astrophysics Data System (ADS)

    Menicucci, Nicolas C.; Baragiola, Ben Q.; Demarie, Tommaso F.; Brennen, Gavin K.

    2018-03-01

    Broadcasting information anonymously becomes more difficult as surveillance technology improves, but remarkably, quantum protocols exist that enable provably traceless broadcasting. The difficulty is making scalable entangled resource states that are robust to errors. We propose an anonymous broadcasting protocol that uses a continuous-variable surface-code state that can be produced using current technology. High squeezing enables large transmission bandwidth and strong anonymity, and the topological nature of the state enables local error mitigation.

  7. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
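    The boosting-and-extrapolating step can be illustrated in a few lines (an illustration of the general idea, not the authors' implementation): measure an observable at several artificially amplified noise levels and extrapolate back to zero noise.

      import numpy as np

      noise_scale = np.array([1.0, 2.0, 3.0])     # noise boosted 1x, 2x, 3x
      expectation = np.array([0.82, 0.68, 0.55])  # measured observable (placeholder data)

      coeffs = np.polyfit(noise_scale, expectation, deg=1)
      zero_noise_estimate = np.polyval(coeffs, 0.0)  # extrapolate to scale = 0
      print(zero_noise_estimate)                     # ~0.95 for these numbers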

  8. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σ_j, σ_k).

  9. Experimental multiplexing of quantum key distribution with classical optical communication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Liu-Jun; Chen, Luo-Kan; Ju, Lei

    2015-02-23

    We demonstrate the realization of quantum key distribution (QKD) when combined with classical optical communication and synchronous signals within a single optical fiber. In the experiment, the classical communication sources use Fabry-Pérot (FP) lasers, which are implemented extensively in optical access networks. To perform QKD, multistage band-stop filtering techniques are developed, and a wavelength-division multiplexing scheme is designed for the multi-longitudinal-mode FP lasers. We have managed to maintain sufficient isolation among the quantum channel, the synchronous channel and the classical channels to guarantee good QKD performance. Finally, the quantum bit error rate remains below a level of 2% across the entire practical application range. The proposed multiplexing scheme can ensure low classical light loss, and enables QKD over fiber lengths of up to 45 km simultaneously when the fibers are populated with bidirectional FP laser communications. Our demonstration paves the way for application of QKD to current optical access networks, where FP lasers are widely used by the end users.

  10. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  11. Quantification of immobilized Candida antarctica lipase B (CALB) using ICP-AES combined with Bradford method.

    PubMed

    Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L

    2017-02-01

    The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). CALB was then adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from protein in the CALB solution) by means of atomic emission by inductively coupled plasma (AE-ICP). Four different protocols were applied combining AE-ICP and classical Bradford assays, besides carbon, hydrogen and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in the CALB solution was quantified. These errors were calculated taking as "true protein content values" the amounts of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.

  13. Estimating suspended sediment load with multivariate adaptive regression spline, teaching-learning based optimization, and artificial bee colony models.

    PubMed

    Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat

    2018-05-23

    The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurement of sediment input, but this can be very costly and cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including artificial bee colony (ABC), the teaching-learning-based optimization algorithm (TLBO), and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected SSL data were used as model inputs, with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce the model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the two test gauging stations, which was lower than the errors for the other models. Error values were even lower (7% to 15%) using another dataset. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Combining experimental and simulation data of molecular processes via augmented Markov models.

    PubMed

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few k_BT, which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.

  15. Underestimation of length by subjects in motion.

    PubMed

    Harte, D B

    1975-10-01

    To check a prior observation, in the present experiment, subjects made estimates of the lengths of both the guidelines and the spaces between guidelines on automotive highways so the magnitude of the illusion could be more accurately determined. Ten males and ten females were individually tested at 0 and 60 mph. At 60 mph, spaces were estimated with an error of 85%; lines were estimated with an error of 72%. Combining data for both stimuli, an error of 78% results, which corresponds to underestimation by a factor of 4.67. This illusory effect is considerably greater than that of the moon illusion, considered by many the most powerful of the classical illusions.

  16. Differences in the accommodation stimulus response curves of adult myopes and emmetropes: a summary and update.

    PubMed

    Schmid, Katrina L; Strang, Niall C

    2015-11-01

    To provide a summary of the classic paper "Differences in the accommodation stimulus response curves of adult myopes and emmetropes" published in Ophthalmic and Physiological Optics in 1998, and to provide an update on the topic of accommodation errors in myopia. The accommodation responses of 33 participants (10 emmetropes, 11 early onset myopes and 12 late onset myopes) aged 18-31 years were measured using the Canon Autoref R-1 free space autorefractor, using three methods to vary the accommodation demand: decreasing distance (4 m to 0.25 m), negative lenses (0 to -4 D at 4 m) and positive lenses (+4 to 0 D at 0.25 m). We observed that the greatest accommodation errors occurred for the negative lens method, whereas minimal errors were observed using positive lenses. Adult progressing myopes had greater lags of accommodation than stable myopes at higher demands induced by negative lenses. Progressing myopes had shallower response gradients than the emmetropes and stable myopes; however, the reduction in gradient was much smaller than that observed in children using similar methods. This paper has often been cited as evidence that accommodation responses at near may be reduced primarily in adults with progressing myopia and not in stable myopes, and/or that challenging accommodation stimuli (negative lenses with monocular viewing) are required to generate larger accommodation errors. As an analogy, animals reared with hyperopic errors develop axial elongation and myopia. Retinal defocus signals are presumably passed to the retinal pigment epithelium and choroid and then ultimately the sclera to modify eye length. A number of lens treatments that act to slow myopia progression may partially work through reducing accommodation errors. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  17. Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.

    PubMed

    Hele, Timothy J H; Ananth, Nandini

    2016-12-22

    We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.

  18. Design considerations for a suboptimal Kalman filter

    NASA Astrophysics Data System (ADS)

    Difilippo, D. J.

    1995-06-01

    In designing a suboptimal Kalman filter, the designer must decide how to simplify the system error model without causing the filter estimation errors to increase to unacceptable levels. Deletion of certain error states and decoupling of error state dynamics are the two principal model simplifications that are commonly used in suboptimal filter design. For the most part, the decisions as to which error states can be deleted or decoupled are based on the designer's understanding of the physics of the particular system. Consequently, the details of a suboptimal design are usually unique to the specific application. In this paper, the process of designing a suboptimal Kalman filter is illustrated for the case of an airborne transfer-of-alignment (TOA) system used for synthetic aperture radar (SAR) motion compensation. In this application, the filter must continuously transfer the alignment of an onboard Doppler-damped master inertial navigation system (INS) to a strapdown navigator that processes information from a less accurate inertial measurement unit (IMU) mounted on the radar antenna. The IMU is used to measure spurious antenna motion during the SAR imaging interval, so that compensating phase corrections can be computed and applied to the radar returns, thereby preventing the image degradation that would otherwise result from such motions. The principles of SAR are described in many references. The primary function of the TOA Kalman filter in a SAR motion compensation system is to control strapdown navigator attitude errors and, to a lesser degree, velocity and heading errors. Unlike a classical navigation application, absolute positional accuracy is not important. The motion compensation requirements for SAR imaging are discussed in some detail. This TOA application is particularly appropriate as a vehicle for discussing suboptimal filter design, because the system contains features that can be exploited to allow both deletion and decoupling of error states. In Section 2, a high-level background description of a SAR motion compensation system that incorporates a TOA Kalman filter is given. The optimal TOA filter design is presented in Section 3 with some simulation results to indicate potential filter performance. In Section 4, the suboptimal Kalman filter configuration is derived. Simulation results are also shown in this section to allow comparison between suboptimal and optimal filter performance. Conclusions are contained in Section 5.
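    For orientation, the textbook discrete Kalman filter step that underlies both the optimal and suboptimal designs is shown below (a generic sketch; the TOA filter described above uses an application-specific error model, and a suboptimal design would shrink x and block-diagonalize P by deleting or decoupling error states):

      import numpy as np

      def kalman_step(x, P, F, Q, H, R, z):
          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with measurement z
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      x, P = np.zeros(2), np.eye(2)                    # toy 2-state filter
      F = np.array([[1.0, 1.0], [0.0, 1.0]])
      Q, R = 0.01 * np.eye(2), np.array([[0.1]])
      H = np.array([[1.0, 0.0]])                       # observe the first state only
      x, P = kalman_step(x, P, F, Q, H, R, z=np.array([0.5]))
      print(x)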

  19. The complexity of personality: advantages of a genetically sensitive multi-group design.

    PubMed

    Hahn, Elisabeth; Spinath, Frank M; Siedler, Thomas; Wagner, Gert G; Schupp, Jürgen; Kandler, Christian

    2012-03-01

    Findings from many behavioral genetic studies utilizing the classical twin design suggest that genetic and non-shared environmental effects play a significant role in human personality traits. This study focuses on the methodological advantages of extending the sampling frame to include multiple dyads of relatives. We investigated the sensitivity of heritability estimates to the inclusion of sibling pairs, mother-child pairs and grandparent-grandchild pairs from the German Socio-Economic Panel Study in addition to a classical German twin sample consisting of monozygotic and dizygotic twins. The resulting dataset contained 1,308 pairs: 202 monozygotic and 147 dizygotic twin pairs, along with 419 sibling pairs, 438 mother-child dyads, and 102 grandparent-grandchild dyads. This genetically sensitive multi-group design allowed the simultaneous testing of additive and non-additive genetic, common and specific environmental effects, including cultural transmission and twin-specific environmental influences. Using manifest and latent modeling of phenotypes (i.e., controlling for measurement error), we compare results from the extended sample with those from the twin sample alone and discuss implications for future research.

  20. Research Prototype: Automated Analysis of Scientific and Engineering Semantics

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units, physical, and mathematical quantity). Also, the procedure implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state of the art scientific codes.
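    The property-propagation idea can be illustrated with a toy unit-tracking scheme (an invented representation for illustration; the prototype's internal design is not described here): carry SI unit exponents with each quantity and let arithmetic propagate or reject them.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Quantity:
          value: float
          units: tuple  # exponents of (m, kg, s)

          def __mul__(self, other):
              return Quantity(self.value * other.value,
                              tuple(a + b for a, b in zip(self.units, other.units)))

          def __add__(self, other):
              if self.units != other.units:
                  raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
              return Quantity(self.value + other.value, self.units)

      velocity = Quantity(3.0, (1, 0, -1))   # m/s
      time = Quantity(2.0, (0, 0, 1))        # s
      print(velocity * time)                 # distance, units (1, 0, 0)
      # velocity + time would raise TypeError: the kind of error such a tool flags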

  1. Robust iterative learning contouring controller with disturbance observer for machine tool feed drives.

    PubMed

    Simba, Kenneth Renny; Bui, Ba Dinh; Msukwa, Mathew Renny; Uchiyama, Naoki

    2018-04-01

    In feed drive systems, particularly machine tools, the contour error is more significant than the individual axial tracking errors from the viewpoint of enhancing precision in manufacturing and production systems. The contour error must be within the permissible tolerance of given products. In machining complex or sharp-corner products, large contour errors occur mainly owing to discontinuous trajectories and the existence of nonlinear uncertainties. Therefore, it is indispensable to design robust controllers that can enhance the tracking ability of feed drive systems. In this study, an iterative learning contouring controller consisting of a classical proportional-derivative (PD) controller and a disturbance observer is proposed. The proposed controller was evaluated experimentally using a typical sharp-corner trajectory, and its performance was compared with that of conventional controllers. The results revealed that the maximum contour error can be reduced by about 37% on average. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
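    The learning part of such a controller can be illustrated with the standard P-type iterative learning control update (a textbook law used here as an assumption, not the paper's exact controller): the feedforward input for the next trial is the current input plus a learning gain times the tracking error, so the repetitive part of the error contracts trial by trial.

      import numpy as np

      def ilc_update(u, e, learning_gain=0.5):
          """u: input sequence of the current trial; e: tracking error sequence."""
          return u + learning_gain * e

      # Toy static plant y = 0.8*u tracking a ramp; the error shrinks each trial.
      ref = np.linspace(0.0, 1.0, 50)
      u = np.zeros_like(ref)
      for trial in range(5):
          y = 0.8 * u
          e = ref - y
          u = ilc_update(u, e)
          print(trial, np.abs(e).max())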

  2. Silver-Ion Solid Phase Extraction Separation of Classical, Aromatic, Oxidized, and Heteroatomic Naphthenic Acids from Oil Sands Process-Affected Water.

    PubMed

    Huang, Rongfu; Chen, Yuan; Gamal El-Din, Mohamed

    2016-06-21

    The separation of classical, aromatic, oxidized, and heteroatomic (sulfur-containing) naphthenic acid (NA) species from unprocessed and ozone-treated oil sands process-affected water (OSPW) was performed using silver-ion (Ag-ion) solid phase extraction (SPE) without requiring pre-methylation of the NAs. OSPW samples before SPE, and the SPE fractions, were characterized using ultra performance liquid chromatography ion mobility time-of-flight mass spectrometry (UPLC-IM-TOFMS) to corroborate the separation of distinct NA species. The mass spectrum identification applied a mass tolerance of ±1.5 mDa, because the mass errors of the NAs were measured to lie within this range, allowing O2S-NAs to be distinguished from O2-NAs. Moreover, the separated NA species facilitated tandem mass spectrometry (MS/MS) characterization of NA compounds, owing to the removal of the matrix and the simplified composition. MS/MS results showed that classical, aromatic, oxidized, and sulfur-containing NA compounds were eluted into individual SPE fractions. Overall, the results indicated that the separation of NA species using Ag-ion SPE is a valuable method for extracting individual NA species, which are of great interest for environmental toxicology and wastewater treatment research, to conduct species-specific studies. Furthermore, the separated NA species, obtained at the milligram level, could be widely used as standard materials for environmental monitoring of NAs from various contamination sites.

  3. Parametric models to compute tryptophan fluorescence wavelengths from classical protein simulations.

    PubMed

    Lopez, Alvaro J; Martínez, Leandro

    2018-02-26

    Fluorescence spectroscopy is an important method to study protein conformational dynamics and solvation structures. Tryptophan (Trp) residues are the most important and practical intrinsic probes for protein fluorescence because of the variability of their fluorescence wavelengths: Trp residues emit at wavelengths ranging from 308 to 360 nm depending on the local molecular environment. Fluorescence involves electronic transitions, so its computational modeling is a challenging task. We show that it is possible to predict the emission wavelength of a Trp residue from classical molecular dynamics simulations by computing the solvent-accessible surface area or the electrostatic interaction between the indole group and the rest of the system. Linear parametric models are obtained that predict the maximum emission wavelengths with standard errors of the order of 5 nm. In a set of 19 proteins with emission wavelengths ranging from 308 to 352 nm, the best model predicts the maximum emission wavelength with a standard error of 4.89 nm and a quadratic Pearson correlation coefficient of 0.81. These models can be used for the interpretation of fluorescence spectra of proteins with multiple Trp residues, or for which local Trp environmental variability exists and can be probed by classical molecular dynamics simulations. © 2018 Wiley Periodicals, Inc.
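
    The parametric models are linear regressions of emission wavelength on an MD-derived descriptor. A sketch of the idea with invented data (the SASA and wavelength values below are hypothetical, not the paper's):

        import numpy as np

        # Regress maximum emission wavelength on a descriptor from classical MD
        # (here, solvent-accessible surface area of the indole group).
        sasa = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])        # nm^2, made up
        lam = np.array([312.0, 321.0, 330.0, 338.0, 346.0, 353.0])   # nm, made up

        slope, intercept = np.polyfit(sasa, lam, 1)
        pred = slope * sasa + intercept
        resid = lam - pred
        se = np.sqrt(np.sum(resid**2) / (len(lam) - 2))   # standard error of estimate
        r2 = np.corrcoef(lam, pred)[0, 1] ** 2
        print(f"lambda_max = {slope:.1f}*SASA + {intercept:.1f}, SE = {se:.2f} nm, R^2 = {r2:.2f}")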

  4. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors have made it impossible to reach the mGal level with such IMUs, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods tailored to aerogravity may help to bridge this gap and improve their performance. Based on simulations, a quantitative analysis is presented of how much IMU sensor errors, such as biases, scale factors, cross-couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. The latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  5. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has proven successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free-stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes solver with immersed-boundary capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights are conveyed about how to specify the modeling-error covariance matrix and about its impact on estimator performance. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).

  6. Performance Characterization of an Instrument.

    ERIC Educational Resources Information Center

    Salin, Eric D.

    1984-01-01

    Describes an experiment designed to teach students to apply the same statistical awareness to instrumentation they commonly apply to classical techniques. Uses propagation of error techniques to pinpoint instrumental limitations and breakdowns and to demonstrate capabilities and limitations of volumetric and gravimetric methods. Provides lists of…
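
    The propagation-of-error technique the abstract refers to is the standard quadrature rule: for Q = f(x1, ..., xn) with independent uncertainties, sigma_Q^2 = sum_i (df/dx_i)^2 sigma_i^2, so for products and quotients the relative errors add in quadrature. A small sketch with invented volumetric values:

        import math

        def rel_err_product(*pairs):
            """Relative error of a product/quotient of independent quantities,
            given (value, sigma) pairs: relative errors add in quadrature."""
            return math.sqrt(sum((s / v) ** 2 for v, s in pairs))

        # Example (hypothetical values): moles delivered = volume * concentration
        v, sv = 25.00, 0.02      # mL, volumetric uncertainty
        c, sc = 0.1012, 0.0004   # mol/L, concentration uncertainty
        Q = v * c
        sQ = Q * rel_err_product((v, sv), (c, sc))
        print(f"Q = {Q:.4f} +/- {sQ:.4f}")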

  7. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
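
    Privacy amplification is typically implemented with 2-universal hashing, for example multiplication of the error-corrected key by a random binary Toeplitz matrix. The sketch below shows that standard construction, not necessarily this system's exact procedure.

        import numpy as np

        # Sketch of privacy amplification: compress the reconciled key with a
        # random binary Toeplitz matrix (a 2-universal hash family). The output
        # length would be chosen from the parameter-estimation step.

        def toeplitz_hash(key_bits, out_len, seed=0):
            rng = np.random.default_rng(seed)
            n = len(key_bits)
            diag = rng.integers(0, 2, size=out_len + n - 1)  # defines the matrix
            T = np.empty((out_len, n), dtype=np.uint8)
            for i in range(out_len):
                T[i] = diag[i:i + n][::-1]   # T[i, j] = diag[(i - j) + n - 1]
            return T.dot(key_bits) % 2

        raw_key = np.random.default_rng(1).integers(0, 2, size=256)
        final_key = toeplitz_hash(raw_key, out_len=128)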

  8. Clustering of reads with alignment-free measures and quality values.

    PubMed

    Comin, Matteo; Leoni, Andrea; Schimd, Michele

    2015-01-01

    The data volume generated by Next-Generation Sequencing (NGS) technologies is growing at a pace that is now challenging the storage and data processing capacities of modern computer systems. In this context an important aspect is the reduction of data complexity by collapsing redundant reads in a single cluster to improve the run time, memory requirements, and quality of post-processing steps like assembly and error correction. Several alignment-free measures, based on k-mers counts, have been used to cluster reads. Quality scores produced by NGS platforms are fundamental for various analysis of NGS data like reads mapping and error detection. Moreover future-generation sequencing platforms will produce long reads but with a large number of erroneous bases (up to 15 %). In this scenario it will be fundamental to exploit quality value information within the alignment-free framework. To the best of our knowledge this is the first study that incorporates quality value information and k-mers counts, in the context of alignment-free measures, for the comparison of reads data. Based on this principles, in this paper we present a family of alignment-free measures called D (q) -type. A set of experiments on simulated and real reads data confirms that the new measures are superior to other classical alignment-free statistics, especially when erroneous reads are considered. Also results on de novo assembly and metagenomic reads classification show that the introduction of quality values improves over standard alignment-free measures. These statistics are implemented in a software called QCluster (http://www.dei.unipd.it/~ciompin/main/qcluster.html).
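
    One way to fold quality values into k-mer counting is to weight each k-mer occurrence by the probability that all of its bases were called correctly, derived from Phred scores. The sketch below follows that spirit, and the classical D2 inner-product form, rather than the paper's exact D^q definitions.

        from collections import defaultdict

        # Each k-mer occurrence contributes p_correct = prod(1 - 10^(-q/10))
        # over its bases, instead of a hard count of 1.

        def weighted_kmer_counts(read, quals, k):
            counts = defaultdict(float)
            p_ok = [1.0 - 10 ** (-q / 10.0) for q in quals]
            for i in range(len(read) - k + 1):
                w = 1.0
                for p in p_ok[i:i + k]:
                    w *= p
                counts[read[i:i + k]] += w
            return counts

        def d2_score(c1, c2):
            """Inner product of two k-mer count vectors (classical D2 form)."""
            return sum(v * c2.get(kmer, 0.0) for kmer, v in c1.items())

        c1 = weighted_kmer_counts("ACGTACGT", [30, 30, 30, 20, 20, 30, 30, 30], k=3)
        c2 = weighted_kmer_counts("ACGTTCGT", [30] * 8, k=3)
        print(d2_score(c1, c2))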

  9. Verbal suppression and strategy use: a role for the right lateral prefrontal cortex?

    PubMed

    Robinson, Gail A; Cipolotti, Lisa; Walker, David G; Biggs, Vivien; Bozzali, Marco; Shallice, Tim

    2015-04-01

    Verbal initiation, suppression and strategy generation/use are cognitive processes widely held to be supported by the frontal cortex. The Hayling Test was designed to tap these cognitive processes within the same sentence completion task. There are few studies specifically investigating the neural correlates of the Hayling Test, but it has primarily been used to detect frontal lobe damage. This study investigates the components of the Hayling Test in a large sample of patients with unselected focal frontal (n = 60) and posterior (n = 30) lesions. Patients and controls (n = 40) matched for education, age and sex were administered the Hayling Test as well as background cognitive tests. The standard Hayling Test clinical measures (initiation response time, suppression response time, suppression errors and overall score), composite error scores and strategy-based responses were calculated. Lesions were analysed by classical frontal/posterior subdivisions as well as by a finer-grained frontal localization method and a specific contrast method that is somewhat analogous to voxel-based lesion mapping methods. Thus, patients with right lateral, left lateral and superior medial lesions were compared to controls, and patients with right lateral lesions were compared to all other patients. The results show that all four standard Hayling Test clinical measures are sensitive to frontal lobe damage, although only the suppression error and overall scores were specific to the frontal region. Although all frontal patients produced blatant suppression errors, a specific right lateral frontal effect was revealed for producing errors that were subtly wrong. In addition, frontal patients overall produced fewer correct responses indicative of developing an appropriate strategy, but only the right lateral group showed a significant deficit. This problem in strategy attainment and implementation could explain, at least in part, the suppression error impairment. Contrary to previous studies, there was no specific frontal effect for verbal initiation. Overall, our results support a role for the right lateral frontal region in verbal suppression and, for the first time, in strategy generation/use. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and the forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation fallen within the interval between two satellite overpasses. As a result, the nature of the measurement is different from and complementary to that of classical precipitation products and could provide a valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated, including probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D to characterize the error in precipitation by SM2RAIN would be highly useful for the merging and integration steps in its algorithm, i.e., the merging of multiple soil-moisture-derived products (e.g., SMAP, SMOS, ASCAT) and the integration of soil-moisture-derived and state-of-the-art satellite precipitation products (e.g., GPM IMERG).
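
    The categorical error properties mentioned above come from a rain/no-rain contingency table between the satellite estimate and a reference. A sketch with a synthetic threshold and data (all values illustrative):

        import numpy as np

        def categorical_scores(sat, ref, thr=0.1):
            """Probability of detection and false alarm ratio at threshold thr."""
            hit = np.sum((sat >= thr) & (ref >= thr))
            false = np.sum((sat >= thr) & (ref < thr))
            miss = np.sum((sat < thr) & (ref >= thr))
            pod = hit / (hit + miss)      # probability of detection
            far = false / (hit + false)   # false alarm ratio
            return pod, far

        rng = np.random.default_rng(3)
        ref = rng.gamma(0.4, 2.0, size=1000) * (rng.random(1000) < 0.3)  # gauge
        sat = ref * rng.lognormal(0.0, 0.5, size=1000) \
              + rng.normal(0, 0.05, 1000).clip(0)                       # satellite
        print(categorical_scores(sat, ref))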

  11. Measurement of process variables in solid-state fermentation of wheat straw using FT-NIR spectroscopy and synergy interval PLS algorithm.

    PubMed

    Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan

    2012-11-01

    The feasibility of rapid determination of process variables (i.e., pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier-transform near-infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated by the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP), and the correlation coefficient (R). The measurement results of the optimal models were as follows: RMSECV=0.0776, Rc=0.9777, RMSEP=0.0963, and Rp=0.9686 for the pH model; RMSECV=1.3544% w/w, Rc=0.8871, RMSEP=1.4946% w/w, and Rp=0.8684 for the moisture content model. Finally, compared with classical PLS and iPLS models, the siPLS model revealed superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry. Copyright © 2012 Elsevier B.V. All rights reserved.
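
    A sketch of how the reported metrics are computed, using full-spectrum PLS on synthetic "spectra" (siPLS itself adds a search over spectral subintervals and factor numbers, omitted here; data and component count are illustrative).

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 200))    # 60 samples x 200 wavelengths (toy)
        y = 0.8 * X[:, 10] + 0.3 * X[:, 50] + rng.normal(scale=0.1, size=60)

        pls = PLSRegression(n_components=5)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
        r = np.corrcoef(y, y_cv)[0, 1]
        print(f"RMSECV = {rmsecv:.4f}, R = {r:.4f}")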

  12. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space, determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity lower than that of the classical iterative Karnik-Mendel algorithm and of other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form centroid formula, achieving lower root-mean-square error and computational overhead than existing methods. Computer simulations for this real-time control application indicate that a parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in the system (plant) states, the proposed IT2 FS-based scheme outperforms its type-1 counterpart with respect to peak overshoot and root-mean-square error in the plant response.
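
    For reference, a compact sketch of the classical iterative Karnik-Mendel (KM) procedure that the closed-form formula replaces: the centroid of an IT2 FS is the interval [y_l, y_r] over all embedded sets, found by iterating on a switch point. Membership functions and the discretization grid below are illustrative.

        import numpy as np

        def km_bound(x, lo, hi, left=True):
            """Karnik-Mendel iteration for one centroid bound of an IT2 FS."""
            w = (lo + hi) / 2.0
            y = np.dot(x, w) / np.sum(w)
            while True:
                k = int(np.clip(np.searchsorted(x, y) - 1, 0, len(x) - 2))
                if left:   # upper memberships below the switch point -> min centroid
                    w = np.concatenate([hi[:k + 1], lo[k + 1:]])
                else:      # mirrored choice -> max centroid
                    w = np.concatenate([lo[:k + 1], hi[k + 1:]])
                y_new = np.dot(x, w) / np.sum(w)
                if abs(y_new - y) < 1e-12:
                    return y_new
                y = y_new

        x = np.linspace(0, 10, 101)
        hi = np.exp(-0.5 * ((x - 5) / 2.0) ** 2)   # upper membership (toy FOU)
        lo = 0.6 * hi                              # lower membership
        y_l, y_r = km_bound(x, lo, hi, True), km_bound(x, lo, hi, False)
        print(f"centroid interval [{y_l:.3f}, {y_r:.3f}], span = {y_r - y_l:.3f}")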

  13. A methodology for design of a linear referencing system for surface transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonderohe, A.; Hepworth, T.

    1997-06-01

    The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.

  14. Notes on power of normality tests of error terms in regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Střelec, Luboš

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e., the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. The aim of this contribution is therefore to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
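
    A sketch of the basic workflow: fit a regression and apply two classical normality tests to the residuals. The heavy-tailed error distribution is chosen deliberately so the tests have something to detect; failing to reject does not prove normality, but rejection warns that t/F-based inference may mislead.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 100)
        y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=100)   # heavy-tailed errors

        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)

        w_stat, w_p = stats.shapiro(resid)          # Shapiro-Wilk
        jb_stat, jb_p = stats.jarque_bera(resid)    # Jarque-Bera
        print(f"Shapiro-Wilk p = {w_p:.4f}, Jarque-Bera p = {jb_p:.4f}")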

  15. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    NASA Astrophysics Data System (ADS)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to the usual quantities, leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well-known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e. where the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for the related variance estimators, even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (±σ) for both heteroscedastic and homoscedastic data. Conversely, samples related to different methods produce discrepant results, due to the presence of (still undetected) systematic errors, which implies that no definitive statement can be made at present. A comparison is also made between different expressions of regression line slope and intercept variance estimators, where fractional discrepancies are found not to exceed a few percent, growing to about 20% in the presence of large-dispersion data. An extension of the formalism to structural models is left to a forthcoming paper.
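
    Subcase (R) above has a particularly simple form: the reduced major-axis slope is sign(r) times the ratio of sample standard deviations, and it treats errors in X and Y symmetrically. A sketch comparing it with ordinary least squares, whose slope is attenuated by errors in X (data synthetic):

        import numpy as np

        rng = np.random.default_rng(2)
        x_true = np.linspace(0, 1, 50)
        x = x_true + rng.normal(scale=0.05, size=50)              # errors in X
        y = 1.0 + 2.0 * x_true + rng.normal(scale=0.05, size=50)  # errors in Y

        r = np.corrcoef(x, y)[0, 1]
        slope_rma = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
        intercept_rma = y.mean() - slope_rma * x.mean()
        slope_ols = np.polyfit(x, y, 1)[0]   # biased toward zero here
        print(f"OLS slope = {slope_ols:.3f}, RMA slope = {slope_rma:.3f}")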

  16. Fact or fallacy? Immunisation arguments in the New Zealand print media.

    PubMed

    Petousis-Harris, Helen A; Goodyear-Smith, Felicity A; Kameshwar, Kamya; Turner, Nikki

    2010-10-01

    To explore New Zealand's four major daily newspapers' coverage of immunisation with regard to errors of fact and fallacy in the construction of immunisation-related arguments. All articles from 2002 to 2007 were assessed for errors of fact and logic. Fact was defined as that which was supported by the most current evidence-based medical literature. Errors of logic were assessed using a classical taxonomy broadly based on Aristotle's classifications. Numerous errors of both fact and logic were identified, predominantly used by anti-immunisation proponents, but occasionally by health authorities. The proportion of media articles reporting exclusively fact changes over the life of a vaccine: new vaccines incur little fallacious reporting, while established vaccines generate inaccurate claims. Fallacious arguments can be deconstructed and classified into a classical taxonomy including non sequitur and argumentum ad hominem. Most media 'balance' given to immunisation relies on 'he said, she said' arguments using quotes from opposing spokespersons, with a failure to verify the scientific validity of both the material and the source. Health professionals and media need training so that recognising and critiquing public health arguments becomes accepted practice: stronger public relations strategies should hold poor-quality articles to journalists' code of ethics, and the health sector needs to be proactive in predicting and pre-empting the expected responses to the introduction of new public health initiatives such as a new vaccine. © 2010 The Authors. Journal Compilation © 2010 Public Health Association of Australia.

  17. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters owing to the expected linearization error. With a derivativeless filter such as the unscented Kalman filter or the divided difference filter, the filtering of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.
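
    For context, a sketch of the classical first-order Gauss-Markov error model that the wavelet network replaces: x_{k+1} = exp(-dt/tau) x_k + w_k, with the driving-noise variance chosen so the process has steady-state variance sigma^2. Parameter values are illustrative.

        import numpy as np

        def gauss_markov(sigma, tau, dt, n, seed=0):
            """Simulate a first-order Gauss-Markov sensor-error process."""
            rng = np.random.default_rng(seed)
            phi = np.exp(-dt / tau)
            q = sigma**2 * (1.0 - phi**2)   # discrete driving-noise variance
            x = np.zeros(n)
            for k in range(n - 1):
                x[k + 1] = phi * x[k] + rng.normal(scale=np.sqrt(q))
            return x

        # e.g. a slowly drifting gyro bias, one hour at 1 Hz (toy parameters)
        bias_drift = gauss_markov(sigma=0.05, tau=300.0, dt=1.0, n=3600)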

  18. Three-Dimensional Wiring for Extensible Quantum Computing: The Quantum Socket

    NASA Astrophysics Data System (ADS)

    Béjanin, J. H.; McConkey, T. G.; Rinehart, J. R.; Earnest, C. T.; McRae, C. R. H.; Shiri, D.; Bateman, J. D.; Rohanizadegan, Y.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.; Mariantoni, M.

    2016-10-01

    Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error-correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and the measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum socket is based on spring-mounted microwires (the three-dimensional wires) that push directly on a microfabricated chip, making electrical contact. A small wire cross section (approximately 1 mm), nearly nonmagnetic components, and functionality at low temperatures make the quantum socket ideal for operating solid-state qubits. The wires have a coaxial geometry and operate over a frequency range from dc to 8 GHz, with a contact resistance of approximately 150 mΩ, an impedance mismatch of approximately 10 Ω, and minimal cross talk. As a proof of principle, we fabricate and use a quantum socket to measure high-quality superconducting resonators at a temperature of approximately 10 mK. Quantum error-correction codes such as the surface code will benefit greatly from the quantum socket, which will make it possible to address qubits located on a two-dimensional lattice. The present implementation of the socket could be readily extended to accommodate a quantum processor with a (10 × 10)-qubit lattice, which would allow for the realization of a simple quantum memory.

  19. Complement dependent cytotoxicity (CDC) activity of a humanized anti Lewis-Y antibody: FACS-based assay versus the 'classical' radioactive method -- qualification, comparison and application of the FACS-based approach.

    PubMed

    Nechansky, A; Szolar, O H J; Siegl, P; Zinoecker, I; Halanek, N; Wiederkum, S; Kircheis, R

    2009-05-01

    The fully humanized Lewis-Y carbohydrate-specific monoclonal antibody (mAb) IGN311 is currently being tested in a passive immunotherapy approach in a clinical phase I trial, and regulatory requirements therefore demand qualified assays for product analysis. To demonstrate the functionality of its Fc region, the capacity of IGN311 to mediate complement-dependent cytotoxicity (CDC) against human breast cancer cells was evaluated. The "classical" radioactive method using chromium-51 and a FACS-based assay were established and qualified according to ICH guidelines. Parameters evaluated were specificity, response function, bias, repeatability (intra-day precision), intermediate precision (different operators and times), and linearity (assay range). In the course of a fully nested design, a four-parameter logistic equation was identified as an appropriate calibration model for both methods. For the radioactive assay, the bias ranged from -6.1% to -3.6%. The intermediate precision for future means of duplicate measurements revealed values from 12.5% to 15.9%, and the total error (beta-expectation tolerance interval) of the method was found to be <40%. For the FACS-based assay, the bias ranged from -8.3% to 0.6%, and the intermediate precision for future means of duplicate measurements revealed values from 4.2% to 8.0%. The total error of the method was found to be <25%. The presented data demonstrate that the FACS-based CDC assay is more accurate than the radioactive assay. The elimination of radioactivity and the 'real-time' counting of apoptotic cells further justify the implementation of this method, which was subsequently applied to test the influence of storage at 4 °C and 25 °C ('stability testing') on the potency of the IGN311 drug product. The results demonstrate that the qualified functional assay represents a stability-indicating test method.

  20. 3-D Survey Applied to Industrial Archaeology by Tls Methodology

    NASA Astrophysics Data System (ADS)

    Monego, M.; Fabris, M.; Menin, A.; Achilli, V.

    2017-05-01

    This work describes the three-dimensional survey of the "Ex Stazione Frigorifera Specializzata": initially used for agricultural storage, over the years it was put to different uses until it was completely neglected. The historical relevance and the architectural heritage that this building represents have prompted a recent renovation and functional restoration project. For this purpose a global 3-D survey was necessary, based on the application and integration of different geomatic methodologies (mainly terrestrial laser scanning, classical topography, and GNSS). The point clouds were acquired using different laser scanners, with time-of-flight (TOF) and phase-shift technologies for the distance measurements. The topographic reference network, needed to align the scans in the same system, was measured with a total station. For the complete survey of the building, 122 scans were acquired and 346 targets were measured from 79 vertices of the reference network. Moreover, 3 vertices were measured with GNSS in order to georeference the network. For the detailed survey of the machine room, 14 scans with 23 targets were acquired. The global 3-D model of the building has less than one centimetre of alignment error (for the machine room the alignment error is no greater than 6 mm) and was used to extract products such as longitudinal and transversal sections, plans, architectural perspectives, and virtual scans. The processed data provide complete spatial knowledge of the building, supplying basic information for the restoration project, structural analysis, and the valorization of industrial and architectural heritage.

  1. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error, which may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Bayesian inference is facilitated using high-performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration, which does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  2. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered, in order to avoid complexity. The formulation and solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency is only 0.07% for amplitude A = 1.5, while the MSLP method gives a relative error of 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the MSLP method gives 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  3. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    PubMed

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at a 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability was found for the markers, the physical measurements, and the 3D surface models (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models in the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plans, and prognosis in a more realistic way.

  4. Spherical subjective refraction with a novel 3D virtual reality based system.

    PubMed

    Pujol, Jaume; Ondategui-Parra, Juan Carlos; Badiella, Llorenç; Otero, Carles; Vilaseca, Meritxell; Aldaba, Mikel

    To conduct a clinical validation of a virtual-reality-based experimental system that is able to assess spherical subjective refraction, simplifying the methodology of ocular refraction. For the agreement assessment, spherical refraction measurements were obtained from 104 eyes of 52 subjects using three different methods: subjectively with the experimental prototype (Subj.E) and the classical subjective refraction (Subj.C), and objectively with the WAM-5500 autorefractor (WAM). To evaluate the precision (intra- and inter-observer variability) of each refractive tool independently, 26 eyes were measured on four occasions. With regard to agreement, the mean difference (±SD) for the spherical equivalent (M) between the new experimental subjective method (Subj.E) and the classical subjective refraction (Subj.C) was -0.034 D (±0.454 D). The corresponding 95% limits of agreement (LoA) were (-0.856 D, 0.924 D). In relation to precision, the intra-observer mean difference for the M component was 0.034±0.195 D for the Subj.C, 0.015±0.177 D for the WAM, and 0.072±0.197 D for the Subj.E. Inter-observer variability showed worse precision values, although still clinically valid (below 0.25 D) for all instruments. The spherical equivalent obtained with the new experimental system was precise and in good agreement with the classical subjective routine. The algorithm implemented in this new system and its optical configuration constitute a first valid step towards spherical error correction in a semiautomated way. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  5. Inhibition, Conflict Detection, and Number Conservation

    ERIC Educational Resources Information Center

    Lubin, Amélie; Simon, Grégory; Houdé, Olivier; De Neys, Wim

    2015-01-01

    The acquisition of number conservation is a critical step in children's numerical and mathematical development. Classic developmental studies have established that children's number conservation is often biased by misleading intuitions. However, the precise nature of these conservation errors is not clear. A key question is whether conservation…

  6. Low-Latency Digital Signal Processing for Feedback and Feedforward in Quantum Computing and Communication

    NASA Astrophysics Data System (ADS)

    Salathé, Yves; Kurpiers, Philipp; Karg, Thomas; Lang, Christian; Andersen, Christian Kraglund; Akin, Abdulkadir; Krinner, Sebastian; Eichler, Christopher; Wallraff, Andreas

    2018-03-01

    Quantum computing architectures rely on classical electronics for control and readout. Employing classical electronics in a feedback loop with the quantum system allows us to stabilize states, correct errors, and realize specific feedforward-based quantum computing and communication schemes such as deterministic quantum teleportation. These feedback and feedforward operations are required to be fast compared to the coherence time of the quantum system to minimize the probability of errors. We present a field-programmable-gate-array-based digital signal processing system capable of real-time quadrature demodulation, a determination of the qubit state, and a generation of state-dependent feedback trigger signals. The feedback trigger is generated with a latency of 110 ns with respect to the timing of the analog input signal. We characterize the performance of the system for an active qubit initialization protocol based on the dispersive readout of a superconducting qubit and discuss potential applications in feedback and feedforward algorithms.

  7. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error-restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, a familiar tool for quality-control agents.

  8. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  9. [Measurement properties of self-report questionnaires published in Korean nursing journals].

    PubMed

    Lee, Eun-Hyun; Kim, Chun-Ja; Kim, Eun Jung; Chae, Hyun-Ju; Cho, Soo-Yeon

    2013-02-01

    The purpose of this study was to evaluate measurement properties of self-report questionnaires for studies published in Korean nursing journals. Of 424 Korean nursing articles initially identified, 168 articles met the inclusion criteria. The methodological quality of the measurements used in the studies and interpretability were assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. It consists of items on internal consistency, reliability, measurement error, content validity, construct validity including structural validity, hypothesis testing, cross-cultural validity, and criterion validity, and responsiveness. For each item of the COSMIN checklist, measurement properties are rated on a four-point scale: excellent, good, fair, and poor. Each measurement property is scored with worst score counts. All articles used the classical test theory for measurement properties. Internal consistency (72.6%), construct validity (56.5%), and content validity (38.2%) were most frequently reported properties being rated as 'excellent' by COSMIN checklist, whereas other measurement properties were rarely reported. A systematic review of measurement properties including interpretability of most instruments warrants further research and nursing-focused checklists assessing measurement properties should be developed to facilitate intervention outcomes across Korean studies.

  10. Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence

    NASA Astrophysics Data System (ADS)

    Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen

    2005-11-01

    Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section at low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angles is overcome for vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions at low grazing angle incidence. Our results reduce to those of the classical small perturbation method when the Neumann and Dirichlet boundary conditions are neglected. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.

  11. Stature estimation equations for South Asian skeletons based on DXA scans of contemporary adults.

    PubMed

    Pomeroy, Emma; Mushrif-Tripathy, Veena; Wells, Jonathan C K; Kulkarni, Bharati; Kinra, Sanjay; Stock, Jay T

    2018-05-03

    Stature estimation from the skeleton is a classic anthropological problem, and recent years have seen the proliferation of population-specific regression equations. Many rely on the anatomical reconstruction of stature from archaeological skeletons to derive regression equations based on long bone lengths, but this requires a collection with very good preservation. In some regions, for example South Asia, typical environmental conditions preclude the sufficient preservation of skeletal remains. Large-scale epidemiological studies that include medical imaging of the skeleton by techniques such as dual-energy X-ray absorptiometry (DXA) offer new potential datasets for developing such equations. We derived estimation equations based on known height and bone lengths measured from DXA scans from the Andhra Pradesh Children and Parents Study (Hyderabad, India). Given debates on the most appropriate regression model to use, multiple methods were compared, and the performance of the equations was tested on a published skeletal dataset of individuals with known stature. The equations have standard errors of estimate and prediction errors similar to those derived using anatomical reconstruction or from cadaveric datasets. As measured by the number of significant differences between true and estimated stature, and by the prediction errors, the new equations perform as well as, and generally better than, published equations commonly used on South Asian skeletons or based on Indian cadaveric datasets. This study demonstrates the utility of DXA scans as a data source for developing stature estimation equations and offers a new set of equations for use with South Asian datasets. © 2018 Wiley Periodicals, Inc.
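
    A sketch of deriving one such equation by ordinary least squares of known stature on a DXA-measured bone length, reporting the standard error of estimate (SEE). The data values and coefficients below are invented for illustration, not the study's published equations.

        import numpy as np

        femur = np.array([40.2, 41.5, 42.8, 44.0, 45.3, 46.1, 47.4])           # cm, toy
        stature = np.array([152.0, 155.5, 158.9, 163.0, 166.2, 168.0, 172.1])  # cm, toy

        slope, intercept = np.polyfit(femur, stature, 1)
        pred = slope * femur + intercept
        see = np.sqrt(np.sum((stature - pred) ** 2) / (len(stature) - 2))
        print(f"stature = {slope:.2f} * femur + {intercept:.2f}  (SEE = {see:.2f} cm)")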

  12. Spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data.

    PubMed

    Sayago, Ana; Asuero, Agustin G

    2006-09-14

    A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take into account the transformation for linearization. The method may be considered a useful alternative to methods in which one variable is involved on both sides of the basic equation (i.e. Heller and Schwarzenbach, Likussar and Adsul and Ramanathan). Classical least squares lead in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are: the method gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements and the computation procedure yield the best value of logbeta11 and its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, being in both cases the mean quadratic error at a minimum.

  13. Effective Inertial Frame in an Atom Interferometric Test of the Equivalence Principle

    NASA Astrophysics Data System (ADS)

    Overstreet, Chris; Asenbaum, Peter; Kovachy, Tim; Notermans, Remy; Hogan, Jason M.; Kasevich, Mark A.

    2018-05-01

    In an ideal test of the equivalence principle, the test masses fall in a common inertial frame. A real experiment is affected by gravity gradients, which introduce systematic errors by coupling to initial kinematic differences between the test masses. Here we demonstrate a method that reduces the sensitivity of a dual-species atom interferometer to initial kinematics by using a frequency shift of the mirror pulse to create an effective inertial frame for both atomic species. Using this method, we suppress the gravity-gradient-induced dependence of the differential phase on initial kinematic differences by 2 orders of magnitude and precisely measure these differences. We realize a relative precision of Δg/g ≈ 6×10^-11 per shot, which improves on the best previous result for a dual-species atom interferometer by more than 3 orders of magnitude. By reducing gravity-gradient systematic errors to one part in 10^13, these results pave the way for an atomic test of the equivalence principle at an accuracy comparable with state-of-the-art classical tests.

  14. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
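
    A toy sketch of the estimator class described above, for the Rasch model: the ML estimating equation sum_i (x_i - P_i(theta)) = 0 is replaced by sum_i w(r_i)(x_i - P_i(theta)) = 0, where a Huber-type weight w downweights items with large standardized residuals. The tuning constant H, the grid search, and the data are all illustrative choices, not the paper's exact specification.

        import numpy as np

        def p_rasch(theta, b):
            """Rasch probability of a correct response to items of difficulty b."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        def huber_w(r, H=1.0):
            """Huber-type weight: 1 for small residuals, H/|r| beyond H."""
            r = np.abs(r)
            return np.where(r <= H, 1.0, H / r)

        def robust_theta(x, b, grid=np.linspace(-4, 4, 801)):
            """Solve the weighted estimating equation by grid search."""
            best, best_val = 0.0, np.inf
            for th in grid:
                p = p_rasch(th, b)
                r = (x - p) / np.sqrt(p * (1 - p))   # standardized residuals
                val = abs(np.sum(huber_w(r) * (x - p)))
                if val < best_val:
                    best, best_val = th, val
            return best

        b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
        x = np.array([1, 1, 1, 0, 1])   # the hardest item looks like a lucky guess
        print(robust_theta(x, b))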

  15. Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case

    PubMed Central

    Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique

    2017-01-01

    This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144

  16. Real-time imaging of quantum entanglement.

    PubMed

    Fickler, Robert; Krenn, Mario; Lapkiewicz, Radek; Ramelow, Sven; Zeilinger, Anton

    2013-01-01

    Quantum entanglement is widely regarded as one of the most prominent features of quantum mechanics and quantum information science. Although photonic entanglement is routinely studied in many experiments nowadays, its signature has been out of the grasp of real-time imaging. Here we show that modern technology, namely triggered intensified charge-coupled device (ICCD) cameras, is fast and sensitive enough to image in real time the effect of the measurement of one photon on its entangled partner. To quantitatively verify the non-classicality of the measurements we determine the detected photon number and error margin from the registered intensity image within a certain region. Additionally, the use of the ICCD camera allows us to demonstrate the high flexibility of the setup in creating any desired spatial-mode entanglement, which suggests that visual imaging in quantum optics not only provides a better intuitive understanding of entanglement but will also improve applications of quantum science.

  17. The value of item response theory in clinical assessment: a review.

    PubMed

    Thomas, Michael L

    2011-09-01

    Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. Although IRT has become prevalent in the measurement of ability and achievement, its contributions to clinical domains have been less extensive. Applications of IRT to clinical assessment are reviewed to appraise its current and potential value. Benefits of IRT include comprehensive analyses and reduction of measurement error, creation of computer adaptive tests, meaningful scaling of latent variables, objective calibration and equating, evaluation of test and item bias, greater accuracy in the assessment of change due to therapeutic intervention, and evaluation of model and person fit. The theory may soon reinvent the manner in which tests are selected, developed, and scored. Although challenges remain to the widespread implementation of IRT, its application to clinical assessment holds great promise. Recommendations for research, test development, and clinical practice are provided.

  18. Real-Time Imaging of Quantum Entanglement

    PubMed Central

    Fickler, Robert; Krenn, Mario; Lapkiewicz, Radek; Ramelow, Sven; Zeilinger, Anton

    2013-01-01

    Quantum entanglement is widely regarded as one of the most prominent features of quantum mechanics and quantum information science. Although photonic entanglement is now routinely studied in many experiments, its signature has remained beyond the grasp of real-time imaging. Here we show that modern technology, namely triggered intensified charge-coupled device (ICCD) cameras, is fast and sensitive enough to image in real time the effect of the measurement of one photon on its entangled partner. To quantitatively verify the non-classicality of the measurements, we determine the detected photon number and error margin from the registered intensity image within a certain region. Additionally, the use of the ICCD camera allows us to demonstrate the high flexibility of the setup in creating any desired spatial-mode entanglement, which suggests that visual imaging in quantum optics not only provides a better intuitive understanding of entanglement but will also improve applications of quantum science. PMID:23715056

  19. Questionnaires for Measuring Refractive Surgery Outcomes.

    PubMed

    Kandel, Himal; Khadka, Jyoti; Lundström, Mats; Goggin, Michael; Pesudovs, Konrad

    2017-06-01

    To identify the questionnaires used to assess refractive surgery outcomes, assess the available questionnaires in regard to their psychometric properties, validity, and reliability, and evaluate the performance of the available questionnaires in measuring refractive surgery outcomes. An extensive literature search was conducted in the PubMed, MEDLINE, Scopus, CINAHL, Cochrane, and Web of Science databases to identify articles that described or used at least one questionnaire to assess refractive surgery outcomes. The information on content quality, validity, reliability, responsiveness, and psychometric properties was extracted and analyzed based on an extensive set of quality criteria. Eighty-one articles describing 27 questionnaires (12 refractive error-specific, including 4 refractive surgery-specific; 7 vision-but-non-refractive; and 8 generic) were included in the review. Most articles (56, 69.1%) described refractive error-specific questionnaires. The Quality of Life Impact of Refractive Correction (QIRC), the Quality of Vision (QoV), and the Near Activity Visual Questionnaire (NAVQ) were originally constructed using Rasch analysis; the others were developed using classical test theory. The National Eye Institute Refractive Quality of Life questionnaire was the most frequently used questionnaire, but it does not provide a valid measurement. The QoV, QIRC, and NAVQ are the three best existing questionnaires to assess visual symptoms, quality of life, and activity limitations, respectively. This review identified three superior-quality questionnaires for measuring different aspects of quality of life in refractive surgery. Clinicians and researchers should choose a questionnaire with superior psychometric properties based on the concept being measured. [J Refract Surg. 2017;33(6):416-424.]. Copyright 2017, SLACK Incorporated.

  20. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    PubMed

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., of density dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at reducing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming to quantify the strength of population synchrony while accounting for uncertainty in population size estimates.
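
    The attenuation described above is easy to reproduce. The sketch below simulates two synchronous log-abundance series observed with independent sampling error and shows the downward bias of the classical zero-lag correlation estimator; it is an illustration under assumed parameters, not the authors' Bayesian state-space model.

      import numpy as np

      rng = np.random.default_rng(1)
      T = 500
      common = rng.normal(size=T)                     # shared (Moran-type) signal
      pop1 = common + 0.5 * rng.normal(size=T)        # true log abundances
      pop2 = common + 0.5 * rng.normal(size=T)

      true_sync = np.corrcoef(pop1, pop2)[0, 1]

      for sd in (0.0, 0.5, 1.0):                      # sampling-error standard deviation
          obs1 = pop1 + sd * rng.normal(size=T)       # independent sampling errors
          obs2 = pop2 + sd * rng.normal(size=T)
          naive = np.corrcoef(obs1, obs2)[0, 1]       # classical zero-lag estimator
          print(f"error sd={sd:.1f}: naive synchrony={naive:.2f} (true={true_sync:.2f})")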

  1. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., of density dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming to quantify the strength of population synchrony while accounting for uncertainty in population size estimates. PMID:24489839

  2. Analytical determination of the heat transfer coefficient for gas, liquid and liquid metal flows in the tube based on stochastic equations and equivalence of measures for continuum

    NASA Astrophysics Data System (ADS)

    Dmitrenko, Artur V.

    2017-11-01

    The stochastic equations of continuum are used for determining the heat transfer coefficients. As a result, formulas for the Nusselt (Nu) number that depend on the turbulence intensity and scale, instead of only on the Reynolds (Peclet) number, are proposed for the classic flows of a nonisothermal fluid in a round smooth tube. It is shown that the classical expressions for the heat transfer coefficient Nu, which depend only on the Reynolds number, are recovered from these new general formulas when the well-known experimental data for the initial turbulence are used. It is found that the limitations of classical empirical and semiempirical formulas for heat transfer coefficients, and their deviation from the experimental data, depend on different parameters of the initial fluctuations in the flow for different experiments over a wide range of Reynolds or Peclet numbers. Based on these new dependences, it is possible to explain that the differences between experimental results at fixed Reynolds or Peclet numbers are caused by differences in the flow fluctuations of each experiment, rather than solely by systematic error in the experimental processing. Accordingly, the obtained general dependences of Nu for a smooth round tube can serve as a basis for clarifying the experimental results and empirical formulas used for continuum flows in various power devices. The results show that, both for isothermal and for nonisothermal flows, the transition from a deterministic state to a turbulent one is governed by the physical law of equivalence of measures between them. The theory of stochastic equations and the law of equivalence of measures could also supply the mechanics underlying various phenomena of self-organization and chaos theory.

  3. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
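
    The error-growth phases described above can be illustrated with a toy twin experiment. The sketch below uses the familiar 3-variable Lorenz-63 system (an assumption; the paper uses a 28-variable quasi-geostrophic model) to show exponential growth of a small initial error followed by saturation, estimating the largest Lyapunov exponent from the exponential phase.

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

      t_eval = np.linspace(0.0, 25.0, 2501)
      x0 = np.array([1.0, 1.0, 1.0])
      eps = 1e-8                                       # initial error magnitude
      sol_a = solve_ivp(lorenz, (0, 25), x0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
      sol_b = solve_ivp(lorenz, (0, 25), x0 + eps, t_eval=t_eval, rtol=1e-9, atol=1e-9)

      err = np.linalg.norm(sol_a.y - sol_b.y, axis=0)  # twin-run error norm

      # Fit the exponential phase, log(err) ~ log(eps) + lambda * t, before saturation.
      mask = (err > 10 * eps) & (err < 1.0)
      lam = np.polyfit(t_eval[mask], np.log(err[mask]), 1)[0]
      print(f"estimated largest Lyapunov exponent ~ {lam:.2f} (literature ~0.9)")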

  4. Characterization of addressability by simultaneous randomized benchmarking.

    PubMed

    Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-12-14

    The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.
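
    A rough sketch of the figure of merit just described, using synthetic randomized-benchmarking data: fidelity decays F(m) = A·p^m + B are fitted for a qubit benchmarked alone and simultaneously with its neighbour, and the addressability metric is the difference of the implied average error rates. The decay model is standard, but all numbers here are illustrative rather than the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(m, A, p, B):
          return A * p**m + B

      rng = np.random.default_rng(2)
      m = np.arange(1, 200, 10)

      p_ind, p_sim = 0.995, 0.990          # depolarizing parameters: alone vs together
      f_ind = decay(m, 0.5, p_ind, 0.5) + 0.002 * rng.normal(size=m.size)
      f_sim = decay(m, 0.5, p_sim, 0.5) + 0.002 * rng.normal(size=m.size)

      # Fit p from each decay curve.
      pA, pB = [curve_fit(decay, m, f, p0=(0.5, 0.99, 0.5))[0][1] for f in (f_ind, f_sim)]

      # The increase in average error rate when the neighbouring qubit is driven
      # at the same time quantifies the loss of addressability (cross talk).
      r = lambda p: (1 - p) / 2            # average error per gate for one qubit
      print(f"addressability metric: {r(pB) - r(pA):.2e}")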

  5. Computing with a single qubit faster than the computation quantum speed limit

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.

    2018-02-01

    The possibility to save and process information in fundamentally indistinguishable states is the quantum mechanical resource that is not encountered in classical computing. I demonstrate that, if energy constraints are imposed, this resource can be used to accelerate information-processing without relying on entanglement or any other type of quantum correlations. In fact, there are computational problems that can be solved much faster, in comparison to currently used classical schemes, by saving intermediate information in nonorthogonal states of just a single qubit. There are also error correction strategies that protect such computations.

  6. Construction of the Second Quito Astrolabe Catalogue

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.

    1994-03-01

    A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, additional parameters being introduced in the stepwise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those due to the rigorous classical method. Some improvement in both systematic and random errors is outlined.

  7. Force on an electric/magnetic dipole and classical approach to spin-orbit coupling in hydrogen-like atoms

    NASA Astrophysics Data System (ADS)

    Kholmetskii, A. L.; Missevitch, O. V.; Yarman, T.

    2017-09-01

    We carry out a classical analysis of spin-orbit coupling in hydrogen-like atoms, using modern expressions for the force and energy of an electric/magnetic dipole in an electromagnetic field. We disclose a novel physical meaning of this effect and show that for a laboratory observer the energy of spin-orbit interaction is represented solely by the mechanical energy of the spinning electron (considered as a gyroscope) due to the Thomas precession of its spin. Concurrently, we point out some errors in older and more recent publications on this subject.

  8. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than that of a classical optimization approach, with a relative mean error of 4% on cost function evaluation.
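
    A toy illustration of the interpolate-then-correct idea, for an assumed two-muscle joint with a quadratic activation cost: an offline database of optimal solutions is built on a torque grid, and the online estimate interpolates the database and then rescales the activations so the requested joint torque is met exactly. The moment arms, forces, and correction rule are illustrative assumptions, not the authors' MusIC implementation.

      import numpy as np
      from scipy.optimize import minimize

      r = np.array([0.04, 0.025])                    # moment arms (m), assumed
      f_max = np.array([1000.0, 1500.0])             # max muscle forces (N), assumed

      def optimal_activation(tau):
          """Classical (slow) optimization: min sum(a^2) s.t. torque matched."""
          cons = {"type": "eq", "fun": lambda a: r @ (a * f_max) - tau}
          res = minimize(lambda a: np.sum(a**2), x0=[0.1, 0.1],
                         bounds=[(0, 1), (0, 1)], constraints=cons)
          return res.x

      # Offline: build a database of optimal solutions on a coarse torque grid.
      tau_grid = np.linspace(0.0, 60.0, 13)
      db = np.array([optimal_activation(t) for t in tau_grid])

      def music_estimate(tau):
          """Online: interpolate the database, then rescale so the interpolated
          activations exactly produce the requested joint torque (dynamics)."""
          a = np.array([np.interp(tau, tau_grid, db[:, i]) for i in range(2)])
          produced = r @ (a * f_max)
          return a * (tau / produced) if produced > 0 else a

      tau = 23.7
      print("optimized :", optimal_activation(tau))
      print("MusIC-like:", music_estimate(tau))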

  9. Measurement of Stress Distribution Around a Circular Hole in a Plate Under Bending Moment Using Phase-shifting Method with Reflective Polariscope Arrangement

    NASA Astrophysics Data System (ADS)

    Baek, Tae Hyun

    Photoelasticity is one of the most widely used whole-field optical methods for stress analysis. The technique of birefringent coatings, also called the method of photoelastic coatings, extends the classical procedures of model photoelasticity to the measurement of surface strains in opaque models made of any structural material. The photoelastic phase-shifting method can be used to determine the phase values of isochromatics and isoclinics. In this paper, the photoelastic phase-shifting technique and the conventional Babinet-Soleil compensation method were used to analyze a specimen with a triangular hole and a circular hole under bending. The photoelastic phase-shifting technique is a whole-field measurement, whereas the conventional compensation method is a point measurement. Three groups of results were obtained: by the phase-shifting method with a reflective polariscope arrangement, by the conventional compensation method, and by FEM simulation. The results from the first two methods agree with each other relatively well, considering experimental error. The advantage of the photoelastic phase-shifting method is that it can accurately measure the stress distribution close to the edges of holes.

  10. Quantum chi-squared and goodness of fit testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temme, Kristan; Verstraete, Frank

    2015-01-15

    A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related, and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.

  11. Multiparametric methane sensor for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Borecki, M.; Duk, M.; Kociubiński, A.; Korwin-Pawlowski, M. L.

    2016-12-01

    Today, methane sensors find application mostly in safety alarm installations, gas parameter detection, and air pollution classification. Such sensors and sensor elements exist for industrial and home use. An area of methane sensor application still under development is the monitoring of ground gases. Proper monitoring of soil gases requires reliable, maintenance-free, semi-continuous, long-term examination at a relatively low equipment cost. Sensors for soil monitoring have to operate on a soil probe, so the sensor is exposed to environmental conditions, such as a wide range of temperatures, the full scale of humidity changes, rain, snow, and wind, that are not specified for classical methane sensors. The development of such a sensor is presented in this paper. The presented sensor consists of five commercial non-dispersive infrared (NDIR) methane sensing units, a set of temperature and humidity sensing units, a gas chamber equipped with a micro-fan, automated gas valves, and a microcontroller that controls the measuring procedure. The electronics were installed in a customized 3D-printed housing equipped with self-developed gas valves. The main development effort went into the experimental evaluation of the construction's reliability and into data processing, including safety procedures and functions for hardware error correction. Redundant methane sensing units provide measurement error correction as well as improved measurement accuracy. The humidity and temperature sensors are used for internal compensation of the methane measurements, and to cut the sensor off from the environment when conditions exceed allowable parameters. Results obtained during environmental sensing show that the gas concentration readings are not sensitive to the vertical or horizontal position of the gas chamber; this matters because vertical installation of the sensor on a soil probe is simpler than horizontal installation. Data acquired during six months of environmental monitoring show that error correction of the methane sensing units was essential for maintenance-free sensor operation, despite the safety procedures used.

  12. Robust linear discriminant models to solve financial crisis in banking sectors

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni

    2014-12-01

    Linear discriminant analysis (LDA) is a widely used technique in pattern classification, via an equation that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends highly on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators, and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
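
    A minimal sketch contrasting classical and robust LDA on synthetic "distress"/"non-distress" groups contaminated with a few outliers. MCD location and scatter estimates (via scikit-learn's MinCovDet) stand in for the robust estimators discussed above; the data and hit-ratio evaluation are illustrative assumptions.

      import numpy as np
      from sklearn.covariance import MinCovDet

      rng = np.random.default_rng(3)
      g0 = rng.multivariate_normal([0, 0], np.eye(2), 100)       # non-distress
      g1 = rng.multivariate_normal([3, 3], np.eye(2), 100)       # distress
      g0[:5] = rng.multivariate_normal([20, -20], np.eye(2), 5)  # outliers in g0

      def lda_rule(mean0, mean1, cov):
          """Return a linear score; classify as group 1 when the score is > 0."""
          w = np.linalg.solve(cov, mean1 - mean0)
          b = -0.5 * w @ (mean0 + mean1)
          return lambda X: X @ w + b

      # Classical estimates (sample means / pooled sample covariance).
      cov_pooled = 0.5 * (np.cov(g0.T) + np.cov(g1.T))
      classical = lda_rule(g0.mean(0), g1.mean(0), cov_pooled)

      # Robust estimates (MCD location and scatter per group).
      mcd0 = MinCovDet(random_state=0).fit(g0)
      mcd1 = MinCovDet(random_state=0).fit(g1)
      robust = lda_rule(mcd0.location_, mcd1.location_,
                        0.5 * (mcd0.covariance_ + mcd1.covariance_))

      test = rng.multivariate_normal([3, 3], np.eye(2), 500)     # new distress cases
      for name, rule in [("classical", classical), ("robust", robust)]:
          hit = np.mean(rule(test) > 0)                          # hit ratio
          print(f"{name} LDA hit ratio on distress cases: {hit:.2f}")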

  13. Modelling individual differences in the form of Pavlovian conditioned approach responses: a dual learning systems approach with factored representations.

    PubMed

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B; Robinson, Terry E; Khamassi, Mehdi

    2014-02-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational neuroscience studies may be useful.

  14. Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations

    PubMed Central

    Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B.; Robinson, Terry E.; Khamassi, Mehdi

    2014-01-01

    Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational neuroscience studies may be useful. PMID:24550719

  15. Design of a Torque Current Generator for Strapdown Gyroscopes. Ph.D. Thesis; [and performance prediction

    NASA Technical Reports Server (NTRS)

    Mcknight, R. D.; Blalock, T. V.; Kennedy, E. J.

    1974-01-01

    The design, analysis, and experimental evaluation of an optimum-performance torque current generator for use with strapdown gyroscopes is presented. Among the criteria used to evaluate the design were the following: (1) steady-state accuracy; (2) margins of stability against self-oscillation; (3) temperature variations; (4) aging; (5) static errors, drift errors, and transient errors; (6) classical frequency- and time-domain characteristics; and (7) the equivalent noise at the input of the comparator operational amplifier. The DC feedback loop of the torque current generator was approximated as a second-order system. Stability calculations for gain margins are discussed. Circuit diagrams are shown, and block diagrams showing the implementation of the torque current generator are discussed.

  16. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  17. Probability Theory, Not the Very Guide of Life

    ERIC Educational Resources Information Center

    Juslin, Peter; Nilsson, Hakan; Winman, Anders

    2009-01-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive…

  18. Sex determination from the femur in Portuguese populations with classical and machine-learning classifiers.

    PubMed

    Curate, F; Umbelino, C; Perinha, A; Nogueira, C; Silva, A M; Cunha, E

    2017-11-01

    The assessment of sex is of paramount importance in the establishment of the biological profile of a skeletal individual. Femoral relevance for sex estimation is indisputable, particularly when other exceedingly dimorphic skeletal regions are missing. As such, this study intended to generate population-specific osteometric models for the estimation of sex with the femur and to compare the accuracy of the models obtained through classical and machine-learning classifiers. A set of 15 standard femoral measurements was acquired in a training sample (100 females; 100 males) from the Coimbra Identified Skeletal Collection (University of Coimbra, Portugal) and models for sex classification were produced with logistic regression (LR), linear discriminant analysis (LDA), support vector machines (SVM), and reduced error pruning trees (REPTree). Under cross-validation, univariable sectioning points generated with REPTree correctly estimated sex in 60.0-87.5% of cases (systematic error ranging from 0.0 to 37.0%), while multivariable models correctly classified sex in 84.0-92.5% of cases (bias from 0.0 to 7.0%). All models were assessed in a holdout sample (24 females; 34 males) from the 21st Century Identified Skeletal Collection (University of Coimbra, Portugal), with an allocation accuracy ranging from 56.9 to 86.2% (bias from 4.4 to 67.0%) in the univariable models, and from 84.5 to 89.7% (bias from 3.7 to 23.3%) in the multivariable models. This study makes available a detailed description of sexual dimorphism in femoral linear dimensions in two Portuguese identified skeletal samples, emphasizing the relevance of the femur for the estimation of sex in skeletal remains in diverse conditions of completeness and preservation. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Desert soil clay content estimation using reflectance spectroscopy preprocessed by fractional derivative

    PubMed Central

    Tiyip, Tashpolat; Ding, Jianli; Zhang, Dong; Liu, Wei; Wang, Fei; Tashpolat, Nigara

    2017-01-01

    Effective pretreatment of spectral reflectance is vital to model accuracy in soil parameter estimation. However, the classic integer derivative has some disadvantages, including spectral information loss and the introduction of high-frequency noise. In this paper, the fractional order derivative algorithm was applied to the pretreatment and partial least squares regression (PLSR) was used to assess the clay content of desert soils. Overall, 103 soil samples were collected from the Ebinur Lake basin in the Xinjiang Uighur Autonomous Region of China, and used as data sets for calibration and validation. Following laboratory measurements of spectral reflectance and clay content, the raw spectral reflectance and absorbance data were treated using fractional derivatives of orders from 0.0 to 2.0 (order interval: 0.2). The ratio of performance to deviation (RPD), determination coefficients of calibration (Rc2), root mean square errors of calibration (RMSEC), determination coefficients of prediction (Rp2), and root mean square errors of prediction (RMSEP) were applied to assess the performance of the predicting models. The results showed that models built on the fractional derivative order performed better than when using the classic integer derivative. Comparison of the predictive effects of 22 models for estimating clay content, calibrated by PLSR, showed that those models based on the fractional derivative 1.8 order of spectral reflectance (Rc2 = 0.907, RMSEC = 0.425%, Rp2 = 0.916, RMSEP = 0.364%, and RPD = 2.484 ≥ 2.000) and absorbance (Rc2 = 0.888, RMSEC = 0.446%, Rp2 = 0.918, RMSEP = 0.383% and RPD = 2.511 ≥ 2.000) were most effective. Furthermore, they performed well in quantitative estimations of the clay content of soils in the study area. PMID:28934274
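
    A minimal sketch of the preprocessing pipeline described above: a Grünwald-Letnikov fractional derivative (one standard discretization of fractional differentiation) applied along the band axis of reflectance spectra, followed by PLSR. The synthetic spectra are an assumption; only the 1.8 order matches the paper's best-performing setting.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def fractional_derivative(spectra, alpha, n_terms=30):
          """Grünwald-Letnikov derivative of order alpha along the band axis."""
          w = np.ones(n_terms)
          for k in range(1, n_terms):                  # recursive GL weights
              w[k] = w[k - 1] * (k - 1 - alpha) / k
          out = np.zeros_like(spectra)
          for k in range(n_terms):
              out[:, k:] += w[k] * spectra[:, : spectra.shape[1] - k]
          return out

      rng = np.random.default_rng(4)
      clay = rng.uniform(5, 40, 103)                   # synthetic clay content (%)
      bands = np.linspace(400, 2400, 200)
      # Synthetic reflectance with a clay-dependent absorption feature near 2200 nm.
      spectra = 0.5 - 0.004 * clay[:, None] * np.exp(-((bands - 2200) / 60) ** 2)
      spectra += 0.002 * rng.normal(size=spectra.shape)

      X = fractional_derivative(spectra, alpha=1.8)
      pls = PLSRegression(n_components=5).fit(X[:80], clay[:80])
      pred = pls.predict(X[80:]).ravel()
      rmsep = np.sqrt(np.mean((pred - clay[80:]) ** 2))
      print(f"RMSEP on held-out samples: {rmsep:.2f}%")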

  20. Object permanence in adult common marmosets (Callithrix jacchus): not everything is an "A-not-B" error that seems to be one.

    PubMed

    Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia

    2012-01-01

    In this paper, we describe, in a monkey species, the common marmoset (Callithrix jacchus), a behaviour pattern similar to the "A-not-B" error found in human infants and young apes. In contrast to the classical explanation, it has recently been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object-hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower-level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species, including young non-human primates that commit the A-not-B error.

  1. Portable and Error-Free DNA-Based Data Storage.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica

    2017-07-10

    DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.

  2. Quantum Clique Gossiping.

    PubMed

    Li, Bo; Li, Shuang; Wu, Junfeng; Qi, Hongsheng

    2018-02-09

    This paper establishes a framework of quantum clique gossiping by introducing local clique operations to networks of interconnected qubits. Cliques, i.e., complete subgraphs, are local structures in complex networks which can be used to accelerate classical gossip algorithms. Based on cyclic permutations, clique gossiping leads to collective multi-party qubit interactions. We show that at reduced states, these cliques have the same acceleration effects as they have in classical gossip algorithms. For randomized selection of cliques, this improved rate of convergence is precisely characterized. On the other hand, the rate of convergence at the coherent states of the overall quantum network is proven to be determined by the spectrum of a mean-square error evolution matrix. Remarkably, the use of larger quantum cliques does not necessarily increase the speed of the network density aggregation, suggesting that quantum network dynamics is not entirely decided by its classical topology.

  3. White matter changes in an untreated, newly diagnosed case of classical homocystinuria.

    PubMed

    Brenton, J Nicholas; Matsumoto, Julie A; Rust, Robert S; Wilson, William G

    2014-01-01

    The authors report the case of a 4-year-old boy who developed progressive unilateral weakness and developmental delays prior to his diagnosis of classical homocystinuria. Magnetic resonance imaging (MRI) of the brain demonstrated diffuse white matter changes, raising the concern for a secondary diagnosis causing leukoencephalopathy, since classical homocystinuria is not typically associated with these changes. Other inborn errors of the transsulfuration pathway have been reported as causing these changes. Once begun on therapy for his homocystinuria, his neurologic deficits resolved and his delays rapidly improved. Repeat MRI performed one year after instating therapy showed resolution of his white matter abnormalities. This case illustrates the need to consider homocystinuria and other amino acidopathies in the differential diagnosis of childhood white matter diseases and lends weight to the hypothesis that hypermethioninemia may induce white matter changes.

  4. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).

  5. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  6. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

    Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
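
    The attenuation mechanism at the heart of this study can be reproduced in a few lines. The sketch below fits a Poisson time-series regression to synthetic mortality counts using exposure series contaminated with increasing additive classical error; the effect size and error variances are illustrative, not the study's values.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n_days = 1095                                   # a 3-year daily series
      true_x = rng.normal(30, 8, n_days)              # true daily pollutant level
      beta = 0.01                                     # log-rate increase per unit
      deaths = rng.poisson(np.exp(2.0 + beta * true_x))

      for err_sd in (0.0, 4.0, 8.0):                  # classical error magnitude
          w = true_x + rng.normal(0, err_sd, n_days)  # error-prone exposure proxy
          fit = sm.GLM(deaths, sm.add_constant(w),
                       family=sm.families.Poisson()).fit()
          atten = 100 * (1 - fit.params[1] / beta)
          print(f"error sd={err_sd:.0f}: beta_hat={fit.params[1]:.4f} "
                f"(attenuation ~{atten:.0f}%)")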

  7. Predicting seasonal influenza transmission using functional regression models with temporal dependence.

    PubMed

    Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela

    2018-01-01

    This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of the incidence of influenza is not readily available (for instance, because of delays in communication with health informants) and the prediction must be constructed by correcting for the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators render better estimates of the parameters associated with the regression model than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time-series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) is also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
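
    A minimal sketch of GLS estimation with temporally dependent errors, using statsmodels' GLSAR (AR(1) errors) on synthetic weekly data; the functional-regression and variable-selection machinery of the paper is not reproduced, and all series here are assumptions.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 156                                          # three years of weekly data
      temperature = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 52)

      e = np.zeros(n)                                  # AR(1) residuals, rho = 0.7
      for t in range(1, n):
          e[t] = 0.7 * e[t - 1] + rng.normal(0, 1)
      influenza = 50 - 2.0 * temperature + 5 * e       # incidence rises when cold

      X = sm.add_constant(temperature)
      ols = sm.OLS(influenza, X).fit()                 # ignores the dependence
      gls = sm.GLSAR(influenza, X, rho=1).iterative_fit(maxiter=10)

      print(f"OLS slope:   {ols.params[1]:.3f} (se {ols.bse[1]:.3f})")
      print(f"GLSAR slope: {gls.params[1]:.3f} (se {gls.bse[1]:.3f}, "
            f"rho={gls.model.rho[0]:.2f})")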

  8. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    PubMed

    Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir

    2015-01-01

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement, and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov Chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy⁻¹ (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, 1.31 Gy⁻¹ (95% CI 0.47, 3.31). A Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy⁻¹ (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy⁻¹ (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment methods used. There were also borderline significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modifying effects of age at the time of the accident or age at screening (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.
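
    A minimal sketch of the regression-calibration adjustment named above, for a linear dose-response with multiplicative classical dose error: the observed dose is replaced by an estimate of E[true dose | observed dose] before fitting. The variances, effect size, and the simulated validation subsample are illustrative assumptions; the study's Monte Carlo and Bayesian adjustments are not reproduced.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 11_000
      true_dose = rng.lognormal(mean=-1.5, sigma=1.0, size=n)     # Gy
      obs_dose = true_dose * rng.lognormal(0.0, 0.4, size=n)      # classical error
      risk = 0.01 * (1 + 1.5 * true_dose) + rng.normal(0, 0.02, n)

      def slope(x, y):
          c = np.cov(x, y)
          return c[0, 1] / c[0, 0]

      naive = slope(obs_dose, risk)                    # attenuated estimate

      # Regression calibration: replace the observed dose by E[true | observed],
      # here estimated by regressing true on observed in a (simulated) validation
      # subsample -- in practice this would come from a dosimetry error model.
      coef = np.polyfit(obs_dose[:1000], true_dose[:1000], 1)
      calib = np.polyval(coef, obs_dose)
      corrected = slope(calib, risk)

      print(f"naive slope: {naive:.3f}, regression-calibrated: {corrected:.3f}")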

  9. Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly

    PubMed Central

    Kim, Miyong T.; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B.; Jang, Yuri

    2015-01-01

    The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N=1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin’s Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted. PMID:26049971

  10. Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly.

    PubMed

    Kim, Miyong T; Lee, Ju-Young; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B; Jang, Yuri

    2015-09-01

    The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N = 1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin's Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted.

  11. A quantum extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Emzir, Muhammad F.; Woolley, Matthew J.; Petersen, Ian R.

    2017-06-01

    In quantum physics, a stochastic master equation (SME) estimates the state (density operator) of a quantum system in the Schrödinger picture based on a record of measurements made on the system. In the Heisenberg picture, the SME is a quantum filter. For a linear quantum system subject to linear measurements and Gaussian noise, the dynamics may be described by quantum stochastic differential equations (QSDEs), also known as quantum Langevin equations, and the quantum filter reduces to a so-called quantum Kalman filter. In this article, we introduce a quantum extended Kalman filter (quantum EKF), which applies a commutative approximation and a time-varying linearization to systems of nonlinear QSDEs. We will show that there are conditions under which a filter similar to a classical EKF can be implemented for quantum systems. The boundedness of estimation errors and the filtering problem with ‘state-dependent’ covariances for process and measurement noises are also discussed. We demonstrate the effectiveness of the quantum EKF by applying it to systems that involve multiple modes, nonlinear Hamiltonians, and simultaneous jump-diffusive measurements.
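
    For readers unfamiliar with the classical filter being generalized, the sketch below implements a standard (non-quantum) extended Kalman filter on a toy nonlinear pendulum, showing the predict/linearize/update pattern that the quantum EKF applies to linearized QSDEs; it is an illustration, not a quantum filter.

      import numpy as np

      dt, g, L = 0.01, 9.81, 1.0
      Q = 1e-5 * np.eye(2)                              # process noise covariance
      R = np.array([[0.01]])                            # measurement noise covariance

      def f(x):                                         # nonlinear dynamics (theta, omega)
          return np.array([x[0] + dt * x[1], x[1] - dt * (g / L) * np.sin(x[0])])

      def F_jac(x):                                     # Jacobian of f
          return np.array([[1.0, dt], [-dt * (g / L) * np.cos(x[0]), 1.0]])

      H = np.array([[1.0, 0.0]])                        # we measure the angle only

      rng = np.random.default_rng(8)
      xt = np.array([0.5, 0.0])                         # true state
      x, P = np.array([0.0, 0.0]), np.eye(2)            # filter state and covariance

      for _ in range(1000):
          xt = f(xt) + rng.multivariate_normal(np.zeros(2), Q)
          z = H @ xt + rng.multivariate_normal(np.zeros(1), R)
          # Predict: linearize around the current estimate, then propagate.
          Fk = F_jac(x)
          x, P = f(x), Fk @ P @ Fk.T + Q
          # Update: standard Kalman correction with the linearized model.
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

      print(f"final angle error: {abs(x[0] - xt[0]):.4f} rad")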

  12. Macroscopic superpositions and gravimetry with quantum magnetomechanics.

    PubMed

    Johnsson, Mattias T; Brennen, Gavin K; Twamley, Jason

    2016-11-21

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the mechanical massive resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10⁻¹⁰ Hz⁻¹/², with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude and, unlike classical superconducting interferometers, produce an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared to the ~10 Hz region relevant for current cold atom gravimeters.

  13. Macroscopic superpositions and gravimetry with quantum magnetomechanics

    PubMed Central

    Johnsson, Mattias T.; Brennen, Gavin K.; Twamley, Jason

    2016-01-01

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the mechanical massive resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10⁻¹⁰ Hz⁻¹/², with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude and, unlike classical superconducting interferometers, produce an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared to the ~10 Hz region relevant for current cold atom gravimeters. PMID:27869142

  14. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  15. Preliminary results for RR Lyrae stars and Classical Cepheids from the Vista Magellanic Cloud (VMC) survey

    NASA Astrophysics Data System (ADS)

    Ripepi, V.; Moretti, M. I.; Clementini, G.; Marconi, M.; Cioni, M. R.; Marquette, J. B.; Tisserand, P.

    2012-09-01

    The Vista Magellanic Cloud (VMC, PI M.R. Cioni) survey is collecting K_S-band time series photometry of the system formed by the two Magellanic Clouds (MC) and the "bridge" that connects them. These data are used to build K_S-band light curves of the MC RR Lyrae stars and Classical Cepheids and to determine absolute distances and the 3D geometry of the whole system using the K_S-band period-luminosity (PLK_S), the period-luminosity-color (PLC) and the Wesenheit relations applicable to these types of variables. As an example of the survey's potential we present results from the VMC observations of two fields centered respectively on the South Ecliptic Pole and the 30 Doradus star forming region of the Large Magellanic Cloud. The VMC K_S-band light curves of the RR Lyrae stars in these two regions have very good photometric quality, with typical errors for the individual data points in the range of ~0.02 to 0.05 mag. The Cepheids have excellent light curves (typical errors of ~0.01 mag). The average K_S magnitudes derived for both types of variables were used to derive PLK_S relations that are in general good agreement, within the errors, with the literature data, and show a smaller scatter than previous studies.
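
    A minimal sketch of the kind of fit involved: a PLK_S relation of the form K_S = a log10(P) + b estimated by weighted least squares. The periods, magnitudes and errors below are illustrative placeholders, not VMC measurements.

        import numpy as np

        # Placeholder data: periods (days), mean K_S magnitudes and their errors.
        # Real fits would use the VMC light curves; these numbers are invented.
        period = np.array([2.1, 3.5, 5.2, 8.9, 14.7, 23.0])
        ks_mag = np.array([14.9, 14.3, 13.9, 13.3, 12.8, 12.3])
        ks_err = np.array([0.01, 0.01, 0.02, 0.01, 0.02, 0.01])

        # Weighted least squares for K_S = a*log10(P) + b
        logP = np.log10(period)
        sw = 1.0 / ks_err                        # square roots of the weights
        A = np.vstack([logP, np.ones_like(logP)]).T
        (a, b), *_ = np.linalg.lstsq(A * sw[:, None], ks_mag * sw, rcond=None)
        resid = ks_mag - (a * logP + b)
        print(f"slope a = {a:.3f}, zero point b = {b:.3f}, "
              f"rms scatter = {resid.std():.3f} mag")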

  16. Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers

    PubMed Central

    Zanchi, Marta G.; Pauly, John M.; Scott, Greig C.

    2010-01-01

    A modified Cartesian feedback method called “frequency-offset Cartesian feedback” and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200–1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450

  17. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  18. Classical verification of quantum circuits containing few basis changes

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.; Ouyang, Yingkai; Fitzsimons, Joseph F.

    2018-04-01

    We consider the task of verifying the correctness of quantum computation for a restricted class of circuits which contain at most two basis changes. This contains circuits giving rise to the second level of the Fourier hierarchy, the lowest level for which there is an established quantum advantage. We show that when the circuit has an outcome with probability at least the inverse of some polynomial in the circuit size, the outcome can be checked in polynomial time with bounded error by a completely classical verifier. This verification procedure is based on random sampling of computational paths and is only possible given knowledge of the likely outcome.

  19. Comment on “The two dimensional motion of a particle in an inverse square potential: Classical and quantum aspects” [J. Math. Phys. 54, 053509 (2013)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bietenholz, Wolfgang, E-mail: wolbi@nucleares.unam.mx; Chryssomalakos, Chryssomalis, E-mail: chryss@nucleares.unam.mx; Salgado, Marcelo, E-mail: marcelo@nucleares.unam.mx

    We comment on a fatal flaw in the analysis contained in the work of Martínez-y-Romero et al., [J. Math. Phys. 54, 053509 (2013)], which concerns the motion of a point particle in an inverse square potential, and show that most conclusions reached there are wrong. In particular, the manifestly senseless claim that, in the attractive potential case, no bounded orbits exist for negative energies, is traced to a sign error. Several more mistakes, both in the classical and the quantum cases, are pointed out.

  20. Inertia effects in thin film flow with a corrugated boundary

    NASA Technical Reports Server (NTRS)

    Serbetci, Ilter; Tichy, John A.

    1991-01-01

    An analytical solution is presented for two-dimensional, incompressible film flow between a sinusoidally grooved (or rough) surface and a flat surface. The upper grooved surface is stationary, whereas the lower, smooth surface moves with a constant speed. The Navier-Stokes equations were solved employing both mapping techniques and perturbation expansions. Due to the inclusion of inertia effects, a pressure distribution different from that predicted by classical lubrication theory is obtained. In particular, the amplitude of the pressure distribution of the classical lubrication theory is found to be in error by over 100 percent (for modified Reynolds numbers of 3-4).

  1. Demands on Finite Cognitive Capacity Cause Infants' Perseverative Errors

    ERIC Educational Resources Information Center

    Berger, Sarah E.

    2004-01-01

    This research unites traditionally disparate developmental domains--cognition and locomotion--to examine the classic cognitive issue of the development of inhibition in infancy. In 2 locomotor A-not-B tasks, 13-month-old walking infants inhibited a prepotent response under low task demands (walking on flat ground), but perseverated under increased…

  2. A Strategy for Replacing Sum Scoring

    ERIC Educational Resources Information Center

    Ramsay, James O.; Wiberg, Marie

    2017-01-01

    This article promotes the use of modern test theory in testing situations where sum scores for binary responses are now used. It directly compares the efficiencies and biases of classical and modern test analyses and finds an improvement in the root mean squared error of ability estimates of about 5% for two designed multiple-choice tests and…

  3. STTEP by STEPP in the Spirit of "Umuntu Ungumuntu Ngabantu"

    ERIC Educational Resources Information Center

    van Niekerk, Caroline; Typpo, Maria

    2012-01-01

    In a recent article, "Sttepping in the right direction? Western classical music in an orchestral programme for disadvantaged African youth," "sttepping" was noted as no spelling error. The same applies here; reference is to STTEP Music School, an outreach project at the University of Pretoria. STTEP teaches the playing of…

  4. Methods for Estimating Uncertainty in PMF Solutions: Examples with Ambient Air and Water Quality Data and Guidance on Reporting PMF Results

    EPA Science Inventory

    The new version of EPA’s positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP)...

  5. BMDS: A Collection of R Functions for Bayesian Multidimensional Scaling

    ERIC Educational Resources Information Center

    Okada, Kensuke; Shigemasu, Kazuo

    2009-01-01

    Bayesian multidimensional scaling (MDS) has attracted a great deal of attention because: (1) it provides a better fit than do classical MDS and ALSCAL; (2) it provides estimation errors of the distances; and (3) the Bayesian dimension selection criterion, MDSIC, provides a direct indication of optimal dimensionality. However, Bayesian MDS is not…

  6. An Efficient Quantum Somewhat Homomorphic Symmetric Searchable Encryption

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Wang, Ting; Sun, Zhiwei; Wang, Ping; Yu, Jianping; Xie, Weixin

    2017-04-01

    In 2009, Gentry first introduced a fully homomorphic encryption (FHE) scheme based on ideal lattices. Subsequently, FHE schemes based on the approximate greatest common divisor problem, the learning with errors problem, or the learning with errors over rings problem developed rapidly, though they suffer from low efficiency and offer only computational security. Combining quantum mechanics with homomorphic encryption, Liang proposed a symmetric quantum somewhat homomorphic encryption (QSHE) scheme based on the quantum one-time pad, which is unconditionally secure. It was later converted to a quantum fully homomorphic encryption scheme whose evaluation algorithm requires the secret key. Compared with Liang's QSHE scheme, we propose a more efficient QSHE scheme for classical input states with perfect security, which is used to encrypt classical messages, and the secret key is not required in the evaluation algorithm. Furthermore, an efficient symmetric searchable encryption (SSE) scheme is constructed based on our QSHE scheme. SSE is important in cloud storage, as it allows users to offload search queries to the untrusted cloud; the cloud is then responsible for returning encrypted files that match the (also encrypted) search queries, which protects users' privacy.

  7. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L²-norm for the scalar unknown u and a priori error estimates in the (L²)²-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H¹-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153

  8. Moment-Tensor Spectra of Source Physics Experiments (SPE) Explosions in Granite

    NASA Astrophysics Data System (ADS)

    Yang, X.; Cleveland, M.

    2016-12-01

    We perform frequency-domain moment tensor inversions of Source Physics Experiments (SPE) explosions conducted in granite during Phase I of the experiment. We test the sensitivity of source moment-tensor spectra to factors such as the velocity model, selected dataset and smoothing and damping parameters used in the inversion to constrain the error bound of inverted source spectra. Using source moments and corner frequencies measured from inverted source spectra of these explosions, we develop a new explosion P-wave source model that better describes observed source spectra of these small and over-buried chemical explosions detonated in granite than classical explosion source models derived mainly from nuclear-explosion data. In addition to source moment and corner frequency, we analyze other features in the source spectra to investigate their physical causes.

  9. Only complementary voices tell the truth: a reevaluation of validity in multi-informant approaches of child and adolescent clinical assessments.

    PubMed

    Kaurin, Aleksandra; Egloff, Boris; Stringaris, Argyris; Wessa, Michèle

    2016-08-01

    Multi-informant approaches are thought to be key to clinical assessment. Classical theories of psychological measurement assume that only convergence among different informants' reports allows for an estimate of the true nature and causes of clinical presentations. However, the integration of multiple accounts is fraught with problems because findings in child and adolescent psychiatry do not conform to the fundamental expectation of convergence. Indeed, reports provided by different sources (self, parents, teachers, peers) share little variance. Moreover, in some cases informant divergence may be meaningful and not error variance. In this review, we give an overview of the conceptual and theoretical foundations of valid multi-informant assessment and discuss why our common concepts of validity need reevaluation.

  10. A Practical Model of Quartz Crystal Microbalance in Actual Applications.

    PubMed

    Huang, Xianhe; Bai, Qingsong; Hu, Jianguo; Hou, Dong

    2017-08-03

    A practical model of the quartz crystal microbalance (QCM) is presented, which considers both the Gaussian distribution characteristic of the mass sensitivity and the influence of the electrodes on the mass sensitivity. The equivalent mass sensitivity of 5 MHz and 10 MHz AT-cut QCMs with different sized electrodes was calculated according to this practical model. The equivalent mass sensitivity of this practical model differs from Sauerbrey's mass sensitivity, and the error between them increases sharply as the electrode radius decreases. A series of experiments in which rigid gold films were plated onto QCMs was carried out, and the experimental results proved that this practical model is more accurate than the classical Sauerbrey equation. The practical model based on the equivalent mass sensitivity is convenient and accurate in actual measurements.
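
    For reference, the classical Sauerbrey baseline that the practical model refines assumes a uniform mass sensitivity over the electrode. A minimal sketch with standard AT-cut quartz constants; the frequency shift and electrode area are illustrative numbers:

        import numpy as np

        def sauerbrey_dm(delta_f_hz, f0_hz, area_cm2,
                         rho_q=2.648, mu_q=2.947e11):
            """Classical Sauerbrey mass change (grams) from a frequency shift.

            Assumes uniform mass sensitivity across the electrode -- exactly
            the assumption the practical model replaces with a Gaussian
            profile. rho_q [g/cm^3] and mu_q [g/(cm s^2)] are AT-cut quartz
            density and shear modulus."""
            c_f = 2.0 * f0_hz**2 / np.sqrt(rho_q * mu_q)  # Hz cm^2 / g
            return -delta_f_hz * area_cm2 / c_f

        # 5 MHz crystal, 1 Hz frequency decrease, 0.4 cm^2 electrode
        print(f"{sauerbrey_dm(-1.0, 5e6, 0.4) * 1e9:.1f} ng")  # ~7.1 ng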

  11. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10 day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K-Band Range Rate (KBRR) residuals (1 μm s⁻¹ level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate indices and in-situ observations). Regional TWS solutions show higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions obtained as a linear combination of GFZ, CSR and JPL GRACE solutions).

  12. New Splitting Criteria for Decision Trees in Stationary Data Streams.

    PubMed

    Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek

    2018-06-01

    The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although the Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms which are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. A general division of splitting criteria into two types is proposed. Attributes chosen based on type-I splitting criteria guarantee, with high probability, the highest expected value of the split measure. Type-II criteria ensure that the chosen attribute is the same, with high probability, as would be chosen based on the whole infinite data stream. Moreover, in this paper, two hybrid splitting criteria are proposed, which are combinations of single criteria based on the misclassification error and the Gini index.
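
    The new criteria add stream-specific probabilistic guarantees; the impurity arithmetic they build on is simple. A minimal batch sketch of the Gini and misclassification-error measures and of the impurity decrease of a candidate split (the class distributions below are made up):

        import numpy as np

        def gini(p):
            """Gini index impurity of a class-probability vector."""
            p = np.asarray(p, dtype=float)
            return 1.0 - np.sum(p**2)

        def misclassification(p):
            """Misclassification-error impurity: 1 - max class probability."""
            return 1.0 - np.max(p)

        def split_gain(parent, children, weights, impurity=gini):
            """Impurity decrease of a split; weights are the fractions of
            parent examples routed to each child."""
            return impurity(parent) - sum(w * impurity(c)
                                          for w, c in zip(weights, children))

        parent = [0.6, 0.4]                      # parent class distribution
        children = [[0.9, 0.1], [0.2, 0.8]]      # candidate child distributions
        weights = [0.57, 0.43]
        print(split_gain(parent, children, weights, gini))
        print(split_gain(parent, children, weights, misclassification))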

  13. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high-level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high-level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU-accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
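
    As a ground-truth baseline for the kind of problem such solvers target, a tiny QUBO instance can be checked exactly by brute force. A minimal sketch, independent of the QSPIN API (the matrix is an arbitrary example):

        import itertools
        import numpy as np

        def solve_qubo_brute_force(Q):
            """Minimize x^T Q x over binary vectors x (exact, small n only)."""
            n = Q.shape[0]
            best_x, best_e = None, np.inf
            for bits in itertools.product([0, 1], repeat=n):
                x = np.array(bits)
                e = x @ Q @ x
                if e < best_e:
                    best_x, best_e = x, e
            return best_x, best_e

        # Arbitrary 3-variable instance (upper-triangular coefficients)
        Q = np.array([[-1.0,  2.0,  0.0],
                      [ 0.0, -1.0,  2.0],
                      [ 0.0,  0.0, -1.0]])
        print(solve_qubo_brute_force(Q))  # -> (array([1, 0, 1]), -2.0)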

  14. Inconsistent-handed advantage in episodic memory extends to paragraph-level materials.

    PubMed

    Prichard, Eric C; Christman, Stephen D

    2017-09-01

    Past research using handedness as a proxy for functional access to the right hemisphere demonstrates that individuals who are mixed/inconsistently handed outperform strong/consistently handed individuals when performing episodic recall tasks. However, research has generally been restricted to stimuli presented in a list format. In the present paper, we present two studies in which participants were presented with paragraph-level material and then asked to recall material from the passages. The first study was based on a classic study looking at retroactive interference with prose materials. The second was modelled on a classic experiment looking at perspective taking and the content of memory. In both studies, the classic effects were replicated and the general finding that mixed/inconsistent-handers outperform strong/consistent-handers was replicated. This suggests that considering degree of handedness may be an empirically useful means of reducing error variance in paradigms looking at memory for prose level material.

  15. Stability Assessment and Tuning of an Adaptively Augmented Classical Controller for Launch Vehicle Flight Control

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.

    2014-01-01

    Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large, whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop theoretically based heuristic tuning methods for the adaptive law gain parameters. Classical launch vehicle flight controller design techniques are based on gain scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned, and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it applies only to constant dispersions of the loop-gain, because the GM is based on frequency-domain analysis, which is applicable only to LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system a time-varying system, for which it is well known that the LTI stability criterion is neither necessary nor sufficient when applied to a Linear Time-Varying (LTV) system in a frozen-time fashion. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
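
    The adaptation law is described above only qualitatively. As a loudly hypothetical toy, the rule "raise the loop gain when the tracking error is large, lower it when flex/slosh excitation is detected" might be caricatured as follows; every coefficient and limit here is invented and is not the flight algorithm:

        def aac_loop_gain(k_nominal, track_err, flex_power,
                          k_min=0.5, k_max=2.0, a=0.8, b=1.5):
            """Toy loop-gain adaptation: increase with tracking error,
            decrease with detected flex/slosh power, clipped to a range
            meant to stand in for stability limits (all numbers invented)."""
            k = k_nominal * (1.0 + a * track_err - b * flex_power)
            return min(max(k, k_min), k_max)

        print(aac_loop_gain(1.0, track_err=0.3, flex_power=0.0))  # gain raised
        print(aac_loop_gain(1.0, track_err=0.0, flex_power=0.4))  # gain cut to floor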

  16. How much a quantum measurement is informative?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia

    2014-12-04

    The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information of a quantum ensemble that depends on the measurement. We present some examples where the symmetry of the measurement allows us to derive its informational power analytically.

  17. Physician's error: medical or legal concept?

    PubMed

    Mujovic-Zornic, Hajrija M

    2010-06-01

    This article deals with the common term covering the different physician's errors that often happen in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts or failures to act on the part of a physician or other health care professional, which results in harm to the patient. It is a common term that includes many types of medical errors, especially physician's errors. The author also discusses the concept of the physician's error in particular, which is no longer understood only in the traditional way, as a classic error of doing something manually wrong without the necessary skills (the medical concept), but as an error which violates the patient's basic rights and which has a final legal consequence (the legal concept). In every case the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged are not those of the ordinary reasonable man who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that higher protection of human rights in the area of health equally demands a broader concept of the physician's error, with the accent on its legal subject matter.

  18. Feedback control laws for highly maneuverable aircraft

    NASA Technical Reports Server (NTRS)

    Garrard, William L.; Balas, Gary J.

    1994-01-01

    During the first half of the year, the investigators concentrated their efforts on completing the design of control laws for the longitudinal axis of the HARV. During the second half of the year they concentrated on the synthesis of control laws for the lateral-directional axes. The longitudinal control law design efforts can be briefly summarized as follows. Longitudinal control laws were developed for the HARV using mu synthesis design techniques coupled with dynamic inversion. An inner loop dynamic inversion controller was used to simplify the system dynamics by eliminating the aerodynamic nonlinearities and inertial cross coupling. Models of the errors resulting from uncertainties in the principal longitudinal aerodynamic terms were developed and included in the model of the HARV with the inner loop dynamic inversion controller. This resulted in an inner loop transfer function model which was an integrator with the modeling errors characterized as uncertainties in gain and phase. Outer loop controllers were then designed using mu synthesis to provide robustness to these modeling errors and give the desired response to pilot inputs. Both pitch rate and angle of attack command following systems were designed. The following tasks have been accomplished for the lateral-directional controllers: inner and outer loop dynamic inversion controllers have been designed; an error model based on a linearized perturbation model of the inner loop system was derived; controllers for the inner loop system have been designed, using classical techniques, that control roll rate and Dutch roll response; the inner loop dynamic inversion and classical controllers have been implemented on the six degree of freedom simulation; and a lateral-directional control allocation scheme has been developed based on minimizing the required control effort.

  19. Reexamination of Induction Heating of Primitive Bodies in Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Menzel, Raymond L.; Roberge, Wayne G.

    2013-10-01

    We reexamine the unipolar induction mechanism for heating asteroids originally proposed in a classic series of papers by Sonett and collaborators. As originally conceived, induction heating is caused by the "motional electric field" that appears in the frame of an asteroid immersed in a fully ionized, magnetized solar wind and drives currents through its interior. However, we point out that classical induction heating contains a subtle conceptual error, in consequence of which the electric field inside the asteroid was calculated incorrectly. The problem is that the motional electric field used by Sonett et al. is the electric field in the freely streaming plasma far from the asteroid; in fact, the motional field vanishes at the asteroid surface for realistic assumptions about the plasma density. In this paper we revisit and improve the induction heating scenario by (1) correcting the conceptual error by self-consistently calculating the electric field in and around the boundary layer at the asteroid-plasma interface; (2) considering weakly ionized plasmas consistent with current ideas about protoplanetary disks; and (3) considering more realistic scenarios that do not require a fully ionized, powerful T Tauri wind in the disk midplane. We present exemplary solutions for two highly idealized flows that show that the interior electric field can either vanish or be comparable to the fields predicted by classical induction depending on the flow geometry. We term the heating driven by these flows "electrodynamic heating," calculate its upper limits, and compare them to heating produced by short-lived radionuclides.

  20. Theoretical studies of the potential surface for the F + H2 → HF + H reaction

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Walch, Stephen P.; Langhoff, Stephen R.; Taylor, Peter R.; Jaffe, Richard L.

    1987-01-01

    The F + H2 → HF + H potential energy hypersurface was studied in the saddle point and entrance channel regions. Using a large (5s 5p 3d 2f 1g/4s 3p 2d) atomic natural orbital basis set, a classical barrier height of 1.86 kcal/mole was obtained at the CASSCF/multireference CI (MRCI) level after correcting for basis set superposition error and including a Davidson correction (+Q) for higher excitations. Based upon an analysis of the computed results, the true classical barrier is estimated to be about 1.4 kcal/mole. The location of the bottleneck on the lowest vibrationally adiabatic potential curve was also computed, and the translational energy threshold was determined from a one-dimensional tunneling calculation. Using the difference between the calculated and experimental thresholds to adjust the classical barrier height on the computed surface yields a classical barrier in the range of 1.0 to 1.5 kcal/mole. Combining the results of the direct estimates of the classical barrier height with the empirical values obtained from the approximate calculations of the dynamical threshold, it is predicted that the true classical barrier height is 1.4 ± 0.4 kcal/mole. Arguments are presented in favor of including the relatively large +Q correction obtained when nine electrons are correlated at the CASSCF/MRCI level.

  1. Nonclassicality of Temporal Correlations.

    PubMed

    Brierley, Stephen; Kosowski, Adrian; Markiewicz, Marcin; Paterek, Tomasz; Przysiężna, Anna

    2015-09-18

    The results of spacelike separated measurements are independent of distant measurement settings, a property one might call two-way no-signaling. In contrast, timelike separated measurements are only one-way no-signaling since the past is independent of the future but not vice versa. For this reason some temporal correlations that are formally identical to nonclassical spatial correlations can still be modeled classically. We propose a new formulation of Bell's theorem for temporal correlations; namely, we define nonclassical temporal correlations as the ones which cannot be simulated by propagating in time the classical information content of a quantum system given by the Holevo bound. We first show that temporal correlations between results of any projective quantum measurements on a qubit can be simulated classically. Then we present a sequence of general measurements on a single m-level quantum system that cannot be explained by propagating in time an m-level classical system and using classical computers with unlimited memory.

  2. A systematic uncertainty analysis for liner impedance eduction technology

    NASA Astrophysics Data System (ADS)

    Zhou, Lin; Bodén, Hans

    2015-11-01

    The so-called impedance eduction technology is widely used for obtaining the acoustic properties of liners used in aircraft engines. The measurement uncertainties of this technology are still not well understood, though such understanding is essential for data quality assessment and model validation. A systematic framework based on multivariate analysis is presented in this paper to provide 95 percent confidence interval uncertainty estimates in the process of impedance eduction. The analysis is made using a straightforward single-mode method based on transmission coefficients involving the classic Ingard-Myers boundary condition. The multivariate technique makes it possible to obtain an uncertainty analysis for the possibly correlated real and imaginary parts of the complex quantities. The results show that the errors in impedance results at low frequency mainly depend on the variability of the transmission coefficients, while the accuracy of the mean Mach number is the most important source of error at high frequencies. The effect of the Mach numbers used in the wave dispersion equation and in the Ingard-Myers boundary condition has been separated for comparison of the outcome of impedance eduction. A local Mach number based on friction velocity is suggested as a way to reduce the inconsistencies found when estimating impedance using upstream and downstream acoustic excitation.

  3. Evaluating the validity of the Work Role Functioning Questionnaire (Canadian French version) using classical test theory and item response theory.

    PubMed

    Hong, Quan Nha; Coutu, Marie-France; Berbiche, Djamal

    2017-01-01

    The Work Role Functioning Questionnaire (WRFQ) was developed to assess workers' perceived ability to perform job demands and is used to monitor presenteeism, yet few studies of its validity can be found in the literature. The purpose of this study was to assess the items and factorial composition of the Canadian French version of the WRFQ (WRFQ-CF). Two measurement approaches were used to test the WRFQ-CF: Classical Test Theory (CTT) and non-parametric Item Response Theory (IRT). A total of 352 completed questionnaires were analyzed. Four-factor and three-factor models were tested and showed good fit with 14 items (Root Mean Square Error of Approximation (RMSEA) = 0.06, Standardized Root Mean Square Residual (SRMR) = 0.04, Bentler Comparative Fit Index (CFI) = 0.98) and with 17 items (RMSEA = 0.059, SRMR = 0.048, CFI = 0.98), respectively. Using IRT, 13 problematic items were identified, of which 9 were in common with CTT. This study tested different models, with fewer problematic items found in a three-factor model. Using non-parametric IRT together with CTT for item purification gave complementary results. IRT is still scarcely used and can be an interesting alternative method for enhancing the quality of a measurement instrument. More studies of the WRFQ-CF are needed to refine its items and factorial composition.

  4. Continuous quantum measurement and the quantum to classical transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Tanmoy; Habib, Salman; Jacobs, Kurt

    2003-04-01

    While ultimately they are described by quantum mechanics, macroscopic mechanical systems are nevertheless observed to follow the trajectories predicted by classical mechanics. Hence, in the regime defining macroscopic physics, the trajectories of the correct classical motion must emerge from quantum mechanics, a process referred to as the quantum to classical transition. Extending previous work [Bhattacharya, Habib, and Jacobs, Phys. Rev. Lett. 85, 4852 (2000)], here we elucidate this transition in some detail, showing that once the measurement processes that affect all macroscopic systems are taken into account, quantum mechanics indeed predicts the emergence of classical motion. We derive inequalities that describe the parameter regime in which classical motion is obtained, and provide numerical examples. We also demonstrate two further important properties of the classical limit: first, that multiple observers all agree on the motion of an object, and second, that classical statistical inference may be used to correctly track the classical motion.

  5. Evaluating Students' Conceptual Understanding of Balanced Equations and Stoichiometric Ratios Using a Particulate Drawing

    ERIC Educational Resources Information Center

    Sanger, Michael J.

    2005-01-01

    A total of 156 students were asked to provide free-response balanced chemical equations for a classic multiple-choice particulate-drawing question first used by Nurrenbern and Pickering. The balanced equations and the number of students providing each equation are reported in this study. The most common student errors included a confusion between…

  6. An Old Problem with a New Solution, Raising Classical Questions: A Commentary on Humphry

    ERIC Educational Resources Information Center

    Heene, Moritz

    2011-01-01

    Humphry (this issue) deserves credit for drawing attention to the long-neglected fact that differences in item discrimination parameters are often due to empirical factors and not the product of random error components. In doing so, Humphry offers a psychometrically elegant, coherent, and practically important new model that is more flexible while…

  7. Competence, Expertise, and Accountability: Classical Foundations of the Cult of Expertise.

    ERIC Educational Resources Information Center

    Schwartzman, Roy

    Rhetoricians since Plato's day have been concerned with how much knowledge speakers should possess in order to speak effectively as well as ethically. The expert, like anyone, can err, but the chance of factual error decreases when speakers have a thorough grasp of their subject matter. However, the expertise position can potentially become a…

  8. LH-independent testosterone secretion is mediated by the interaction between GNRH2 and its receptor within porcine testes

    USDA-ARS's Scientific Manuscript database

    Unlike the classical gonadotropin-releasing hormone (GNRH1), the second mammalian isoform (GNRH2) is an ineffective stimulant of gonadotropin release. Species that produce GNRH2 may not maintain a functional GNRH2 receptor (GNRHR2) due to coding errors. A full length GNRHR2 gene has been identified ...

  9. First-principles binary diffusion coefficients for H, H2 and four normal alkanes + N2

    DOE PAGES

    Jasper, Ahren W.; Kamarchik, Eugene; Miller, James A.; ...

    2014-09-30

    Collision integrals related to binary (dilute gas) diffusion are calculated classically for six species colliding with N2. The most detailed calculations make no assumptions regarding the complexity of the potential energy surface, and the resulting classical collision integrals are in excellent agreement with previous semiclassical results for H + N2 and H2 + N2 and with recent experimental results for CnH2n+2 + N2, n = 2-4. The detailed classical results are used to test the accuracy of three simplifying assumptions typically made when calculating collision integrals: (1) approximating the intermolecular potential as isotropic, (2) neglecting the internal structure of the colliders (i.e., neglecting inelasticity), and (3) employing unphysical R⁻¹² repulsive interactions. The effect of anisotropy is found to be negligible for H + N2 and H2 + N2 (in agreement with previous quantum mechanical and semiclassical results for systems involving atomic and diatomic species) but is more significant for larger species at low temperatures. For example, the neglect of anisotropy decreases the diffusion coefficient for butane + N2 by 15% at 300 K. The neglect of inelasticity, in contrast, introduces only very small errors. Approximating the repulsive wall as an unphysical R⁻¹² interaction is a significant source of error at all temperatures for the weakly interacting systems H + N2 and H2 + N2, with errors as large as 40%. For the normal alkanes in N2, which feature stronger interactions, the 12/6 Lennard-Jones approximation is found to be accurate, particularly at temperatures above ~700 K, where it predicts the full-dimensional result to within 5% (although with somewhat different temperature dependence). Overall, the typical practical approach of assuming isotropic 12/6 Lennard-Jones interactions is confirmed to be suitable for combustion applications except for weakly interacting systems, such as H + N2. For these systems, anisotropy and inelasticity can safely be neglected, but a more detailed description of the repulsive wall is required for quantitative predictions. Moreover, a straightforward approach for calculating effective isotropic potentials with realistic repulsive walls is described. An analytic expression for the calculated diffusion coefficient for H + N2 is presented and is estimated to have a 2-sigma error bar of only 0.7%.
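
    A minimal sketch of the two interaction shapes being contrasted: the 12/6 Lennard-Jones potential with its attractive well versus a purely repulsive R⁻¹² wall. The sigma and epsilon values are placeholder magnitudes, not parameters from the paper:

        import numpy as np

        def lj_12_6(r, sigma, eps):
            """Classical 12/6 Lennard-Jones pair potential."""
            sr6 = (sigma / r) ** 6
            return 4.0 * eps * (sr6**2 - sr6)

        def pure_r12(r, sigma, eps):
            """Purely repulsive R^-12 interaction (no attractive well),
            the unphysical form flagged above for weak binders."""
            return 4.0 * eps * (sigma / r) ** 12

        sigma, eps = 3.8, 0.010  # placeholder values (Angstrom, eV)
        for r in (3.6, 4.3, 6.0):
            print(f"r = {r:3.1f} A   LJ = {lj_12_6(r, sigma, eps):+.5f}   "
                  f"R^-12 = {pure_r12(r, sigma, eps):+.5f}")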

  10. Experimental quantum annealing: case study involving the graph isomorphism problem.

    PubMed

    Zick, Kenneth M; Shehab, Omar; French, Matthew

    2015-06-08

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N(2) to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.

  11. Experimental quantum annealing: case study involving the graph isomorphism problem

    PubMed Central

    Zick, Kenneth M.; Shehab, Omar; French, Matthew

    2015-01-01

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers. PMID:26053973
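
    A generic sketch of the majority-voting idea (the study applies voting within a larger pipeline; the samples here are made up): each variable is set to the value that occurs most often across repeated read-outs.

        from collections import Counter

        def majority_vote(readouts):
            """Per-variable majority over repeated annealer read-outs,
            given as a list of equal-length tuples of 0/1 samples."""
            n = len(readouts[0])
            return tuple(Counter(r[i] for r in readouts).most_common(1)[0][0]
                         for i in range(n))

        samples = [(1, 0, 1, 1), (1, 0, 0, 1), (1, 1, 1, 1)]
        print(majority_vote(samples))  # -> (1, 0, 1, 1)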

  12. Structural redundancy of data from wastewater treatment systems. Determination of individual balance equations.

    PubMed

    Spindler, A

    2014-06-15

    Although data reconciliation is intensely applied in process engineering, almost none of its powerful methods are employed for validation of operational data from wastewater treatment plants. This is partly due to some prerequisites that are difficult to meet, including steady state, known variances of process variables and absence of gross errors. However, an algorithm can be derived from the classical approaches to data reconciliation that allows one to find a comprehensive set of equations describing redundancy in the data when measured and unmeasured variables (flows and concentrations) are defined. This is a precondition for methods of data validation based on individual mass balances, such as CUSUM charts. The procedure can also be applied to verify the necessity of existing or additional measurements with respect to the improvement of the data's redundancy. Results are given for a large wastewater treatment plant. The introduction aims at establishing a link between methods known from data reconciliation in process engineering and their application in wastewater treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
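
    Classical linear data reconciliation, on which such redundancy analysis builds, has a closed form: adjust the measurements as little as possible (variance-weighted) while enforcing the balance equations exactly. A minimal sketch with one invented balance node:

        import numpy as np

        def reconcile(x, Sigma, A):
            """Minimize (x - xhat)^T Sigma^{-1} (x - xhat) subject to
            A @ xhat = 0. Closed form:
            xhat = x - Sigma A^T (A Sigma A^T)^{-1} A x."""
            S_At = Sigma @ A.T
            return x - S_At @ np.linalg.solve(A @ S_At, A @ x)

        # One node: inflow - outflow1 - outflow2 = 0
        A = np.array([[1.0, -1.0, -1.0]])
        x = np.array([100.0, 58.0, 45.0])     # raw flows, off balance by -3
        Sigma = np.diag([4.0, 1.0, 1.0])      # measurement error variances
        print(reconcile(x, Sigma, A))         # -> [102.  57.5  44.5], balanced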

  13. Error rates and resource overheads of encoded three-qubit gates

    NASA Astrophysics Data System (ADS)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  14. Optimal quantum error correcting codes from absolutely maximally entangled states

    NASA Astrophysics Data System (ADS)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension…

  15. Performance of quantum annealing on random Ising problems implemented using the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Job, Joshua; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.; USC Collaboration; ETH Collaboration

    2014-03-01

    Detecting a possible speedup of quantum annealing compared to classical algorithms is a pressing task in experimental adiabatic quantum computing. In this talk, we discuss the performance of the D-Wave Two quantum annealing device on Ising spin glass problems. The expected time to solution for the device to solve random instances with up to 503 spins and with specified coupling ranges is evaluated while carefully addressing the issue of statistical errors. We perform a systematic comparison of the expected time to solution between the D-Wave Two and classical stochastic solvers, specifically simulated annealing, and simulated quantum annealing based on quantum Monte Carlo, and discuss the question of speedup.
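
    A common definition of the expected time to solution in such benchmarks combines the anneal time with the number of repetitions needed to see the ground state at least once at a target confidence. A minimal sketch (the 99% target and the example numbers are illustrative):

        import numpy as np

        def time_to_solution(t_anneal_s, p_success, target=0.99):
            """TTS = t_anneal * R, where R = ln(1 - target) / ln(1 - p)
            repetitions give at least one success with probability target."""
            reps = np.log(1.0 - target) / np.log(1.0 - p_success)
            return t_anneal_s * np.ceil(reps)

        # 20 microsecond anneals with a 5% single-run success probability
        print(f"{time_to_solution(20e-6, 0.05) * 1e3:.2f} ms")  # 1.80 ms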

  16. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in 1, 2, or 3 dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
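
    The statement that classical relaxation sharply damps certain error components can be made concrete for weighted Jacobi on the 1D Poisson problem, where the per-sweep damping factor of each Fourier error mode is known in closed form. A minimal sketch:

        import numpy as np

        # Error mode sin(k*pi*x) is multiplied by 1 - 2*omega*sin^2(k*pi*h/2)
        # per weighted-Jacobi sweep: oscillatory modes are crushed, smooth
        # modes survive and must be handled on coarser grids (or by cyclic
        # reduction in 1D).
        n, omega = 63, 2.0 / 3.0
        h = 1.0 / (n + 1)
        k = np.arange(1, n + 1)
        damping = 1.0 - 2.0 * omega * np.sin(k * np.pi * h / 2.0) ** 2
        print(f"smoothest mode factor:  {damping[0]:+.4f}")   # ~ +1, barely damped
        print(f"most oscillatory mode: {damping[-1]:+.4f}")   # ~ -1/3 for omega=2/3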

  17. Functional, structural, and emotional correlates of impaired insight in cocaine addiction

    PubMed Central

    Moeller, Scott J.; Konova, Anna B.; Parvaz, Muhammad A.; Tomasi, Dardo; Lane, Richard D.; Fort, Carolyn; Goldstein, Rita Z.

    2014-01-01

    Context Individuals with cocaine use disorder (CUD) have difficulty monitoring ongoing behavior, possibly stemming from dysfunction of brain regions subserving insight and self-awareness [e.g., anterior cingulate cortex (ACC)]. Objective To test the hypothesis that CUD with impaired insight (iCUD) would show abnormal (A) ACC activity during error processing, assessed with functional magnetic resonance imaging during a classic inhibitory control task; (B) ACC gray matter integrity assessed with voxel-based morphometry; and (C) awareness of one’s own emotional experiences, assessed with the Levels of Emotional Awareness Scale (LEAS). Using a previously validated probabilistic choice task, we grouped 33 CUD according to insight [iCUD: N=15; unimpaired insight CUD: N=18]; we also studied 20 healthy controls, all with unimpaired insight. Design Multimodal imaging design. Setting Clinical Research Center at Brookhaven National Laboratory. Participants Thirty-three CUD and 20 healthy controls. Main Outcome Measure Functional magnetic resonance imaging, voxel-based morphometry, LEAS, and drug use variables. Results Compared with the other two study groups, iCUD showed lower (A) error-induced rostral ACC (rACC) activity as associated with more frequent cocaine use; (B) gray matter within the rACC; and (C) LEAS scores. Conclusions These results point to rACC functional and structural abnormalities, and diminished emotional awareness, in a subpopulation of CUD characterized by impaired insight. Because the rACC has been implicated in appraising the affective/motivational significance of errors and other types of self-referential processing, functional and structural abnormalities in this region could result in lessened concern (frequently ascribed to minimization and denial) about behavioral outcomes that could potentially culminate in increased drug use. Treatments targeting this CUD subgroup could focus on enhancing the salience of errors (e.g., lapses). PMID:24258223

  18. Evaluation of logistic regression models and effect of covariates for case-control study in RNA-Seq analysis.

    PubMed

    Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L

    2017-02-06

    Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
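
    A minimal sketch of the logistic-regression arm of such an analysis on simulated counts; the Firth and Bayes variants are not shown, and all simulation parameters are arbitrary:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        status = np.repeat([0, 1], n // 2)    # controls, then cases
        # Simulated RNA-Seq counts; cases over-express this gene on average
        counts = rng.negative_binomial(5, 0.30 - 0.08 * status)

        # Disease status modeled as a function of (log-transformed) reads
        X = sm.add_constant(np.log1p(counts))
        fit = sm.Logit(status, X).fit(disp=0)
        print(fit.params)     # intercept and slope
        print(fit.pvalues)    # Wald p-values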

  19. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
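
    The abstract does not spell out the modified-least-squares estimator itself. A classical relative that also exploits a known error-variance ratio is Deming regression; the sketch below shows that related technique, not the paper's method:

        import numpy as np

        def deming_slope(x, y, var_ratio):
            """Deming regression slope for errors in both variables;
            var_ratio = var(errors in y) / var(errors in x)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx = np.var(x, ddof=1)
            syy = np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            d = syy - var_ratio * sxx
            return (d + np.sqrt(d**2 + 4.0 * var_ratio * sxy**2)) / (2.0 * sxy)

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
        print(deming_slope(x, y, var_ratio=1.0))  # ~1.00 for this near-linear data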

  20. Prediction of true test scores from observed item scores and ancillary data.

    PubMed

    Haberman, Shelby J; Yao, Lili; Sinharay, Sandip

    2015-05-01

    In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE® General Analytical Writing and until 2009 in the case of TOEFL® iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater®. In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
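
    In the univariate case the best linear predictor of the true score reduces to Kelley's classical formula, which the approach above generalizes to multiple predictors such as e-rater features and related-test scores. A minimal sketch (the scores and the reliability are invented):

        import numpy as np

        def kelley_true_score(observed, reliability):
            """Kelley's regressed true-score estimate:
            tau_hat = reliability * x + (1 - reliability) * group mean."""
            observed = np.asarray(observed, dtype=float)
            return reliability * observed + (1.0 - reliability) * observed.mean()

        scores = np.array([2.0, 3.5, 4.0, 5.5, 6.0])
        print(kelley_true_score(scores, reliability=0.80))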

  1. Blessing of dimensionality: mathematical foundations of the statistical physics of data.

    PubMed

    Gorban, A N; Tyukin, I Y

    2018-04-28

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).
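
    As a hedged numerical illustration of the separability claim (a toy check, not the authors' code), the sketch below draws i.i.d. points on a high-dimensional sphere and tests how often a single point is cut off from all the others by the simple hyperplane <x, y> = alpha <x, x>:

        import numpy as np

        # i.i.d. points on the unit sphere in dimension d; test one-vs-rest
        # linear separability by the hyperplane <x, y> = alpha <x, x>.
        rng = np.random.default_rng(2)
        d, n, alpha, trials = 200, 5_000, 0.8, 50

        separated = 0
        for _ in range(trials):
            pts = rng.normal(size=(n, d))
            pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto sphere
            x, rest = pts[0], pts[1:]
            if np.all(rest @ x < alpha * (x @ x)):             # x alone on one side
                separated += 1
        print(f"fraction separable: {separated / trials:.2f}")

    In low dimension (say d = 3) the separable fraction collapses toward zero, which is the curse-into-blessing reversal the paper describes.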

  2. Blessing of dimensionality: mathematical foundations of the statistical physics of data

    NASA Astrophysics Data System (ADS)

    Gorban, A. N.; Tyukin, I. Y.

    2018-04-01

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue `Hilbert's sixth problem'.

  3. Photogrammetric discharge monitoring of small tropical mountain rivers - A case study at Rivière des Pluies, Réunion island

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Augereau, Emmanuel; Delacourt, Christophe; Bonnier, Julien

    2016-04-01

    Reliable discharge measurements are indispensable for the effective management of natural water resources and floods. Limitations of classical current meter profiling and stage-discharge ratings have stimulated the development of more accurate and efficient gauging techniques. While new discharge measurement technologies such as acoustic Doppler current profilers and large-scale particle image velocimetry (LSPIV) have been developed and tested in numerous studies, the continuous monitoring of small mountain rivers and of discharge dynamics during strong meteorological events remains challenging. More specifically, LSPIV studies often focus on short-term measurements during flood events, and there are still very few studies that address its use for long-term monitoring of small mountain rivers. To fill this gap, this study targets the development and testing of a largely autonomous photogrammetric discharge measurement system, with a special focus on application to a small tropical mountain river with high discharge variability and a mobile riverbed. It proposes several enhancements over previous LSPIV methods regarding camera calibration, more efficient processing in image geometry, the automatic detection of the water level, and the statistical calibration and estimation of the discharge from multiple profiles. To account for changes in the bed topography, the riverbed is surveyed repeatedly during the dry seasons using multi-view photogrammetry or terrestrial laser scanning. The presented case study comprises the analysis of several thousand videos spanning two and a half years (2013-2015) to test the robustness and accuracy of the different processing steps. An analysis of the obtained results suggests that the camera calibration reaches sub-pixel accuracy. The median accuracy of the water-mask detections is F1 = 0.82, whereas the precision is systematically higher than the recall. The resulting underestimation of the water surface area and level leads to a systematic underestimation of the discharge, with error rates of up to 25%. However, the bias can be effectively removed using a least-squares cross-calibration, which reduces the error to an MAE of 6.39% and a maximum error of 16.18%. These error rates are significantly lower than the uncertainties among multiple profiles (30%) and illustrate the importance of spatial averaging over multiple measurements. The study suggests that LSPIV can already be considered a valuable tool for the monitoring of torrential flows, whereas further research is still needed to fully integrate night-time observation and stereo-photogrammetric capabilities.
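
    A hedged sketch of the cross-calibration step (synthetic numbers and made-up variable names, not the study's data) shows how a least-squares fit against a reference gauging removes a systematic low bias of the kind reported above:

        import numpy as np

        rng = np.random.default_rng(3)
        q_ref = rng.uniform(1, 60, size=300)                      # reference discharge, m^3/s
        q_lspiv = 0.80 * q_ref + rng.normal(scale=2.0, size=300)  # ~20% low bias + noise

        a, b = np.polyfit(q_lspiv, q_ref, deg=1)                  # least-squares calibration
        q_corr = a * q_lspiv + b

        mae = lambda q: np.mean(np.abs(q - q_ref) / q_ref) * 100  # mean absolute error, %
        print(f"MAE before: {mae(q_lspiv):.1f} %, after: {mae(q_corr):.1f} %")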

  4. Trial-to-trial adaptation in control of arm reaching and standing posture

    PubMed Central

    Pienciak-Siewert, Alison; Horan, Dylan P.

    2016-01-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888

  5. Trial-to-trial adaptation in control of arm reaching and standing posture.

    PubMed

    Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A

    2016-12-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.
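
    A standard trial-by-trial, error-driven state-space model (a common formalization in this literature, not necessarily the authors' analysis model) makes the proportional scaling concrete: the internal estimate of the perturbation is updated by a fraction of each trial's error, so adaptation grows with error size.

        # trial-to-trial adaptation: x retains a fraction A of its value and
        # adds B times the current sensorimotor error (parameter values assumed)
        A, B = 0.98, 0.2          # retention and error sensitivity
        perturbation = 8.0        # field strength, arbitrary units
        x = 0.0                   # internal estimate of the perturbation

        for trial in range(1, 31):
            error = perturbation - x      # error experienced on this trial
            adaptation = B * error        # trial-to-trial change driven by error
            x = A * x + adaptation
            if trial % 10 == 0:
                print(f"trial {trial:2d}: error = {error:5.2f}")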

  6. Attenuation in invasive blood pressure measurement systems.

    PubMed

    Ercole, A

    2006-05-01

    Poor fidelity invasive arterial blood pressure (IABP) traces are a frequent practical problem. It is common practice to describe any such trace as being 'damped'; the resonance behaviour of IABP measurement systems having been extensively described in the literature. However, as poor quality arterial blood pressure signals are seen even with optimal pressure transduction circuits, this cannot be the sole mechanism. In this commentary the classical lumped-parameter Windkessel model is extended by postulating an additional impedance proximal to the site of IABP measurement. This impedance represents any mechanical obstruction to laminar flow. Equations are presented relating measured and actual arterial blood pressures in terms of the model impedances. The reactive properties of such a partial obstruction may lead to an IABP trace that is superficially similar in appearance to the case of an over-damped measurement system. However, this phenomenon should be termed 'attenuation' rather than 'damping' and is probably more common. The distinction is of practical importance as the behaviour of the measured systolic and diastolic pressures is different -- both are systematically underestimated and the mean arterial pressure is thus not preserved. Furthermore, this error varies inversely with the peripheral vascular resistance of the tissues distal to the measurement point, therefore apparently magnifying the effect of vasodilatation on blood pressure or derived quantities.
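
    One plausible reading of the relation described (a sketch under an assumed series topology, not the commentary's exact equations): with an obstruction impedance Z_o in series with a distal bed of input impedance Z_d, the measured pressure is

        P_m(\omega) = \frac{Z_d(\omega)}{Z_o(\omega) + Z_d(\omega)}\, P_a(\omega),

    so the trace is attenuated rather than damped, and the error grows as the distal impedance falls — consistent with the inverse dependence on peripheral vascular resistance noted above.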

  7. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low cost sensor. To achieve high accuracy, the sensor must be carefully calibrated and all possible errors during the color conversion minimized. After comparing several classical color conversion methods, a regression based color conversion method was selected. Regression is a powerful method for estimating color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measurement data and the output colorimetric data. Using this method, we can increase the accuracy of the regression and thereby improve the accuracy of the color conversion.
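
    A hedged sketch of the two-stage conversion (hypothetical gray-patch data and matrix values; the paper's tables and regression form may differ): a per-channel 1D pre-linearization LUT followed by a linear regression from linearized RGB to XYZ:

        import numpy as np

        rng = np.random.default_rng(4)

        gray_sensor = np.array([5, 30, 90, 160, 230], float)  # sensor response to grays
        gray_lum = np.array([0.0, 0.1, 0.35, 0.65, 1.0])      # known luminances

        def prelinearize(raw):
            # 1D LUT via interpolation: sensor counts -> linear light
            return np.interp(raw, gray_sensor, gray_lum)

        # training patches: raw sensor RGB and reference XYZ from a colorimeter
        raw_rgb = rng.uniform(5, 230, size=(50, 3))
        lin_rgb = prelinearize(raw_rgb)
        M_true = np.array([[0.41, 0.36, 0.18],
                           [0.21, 0.72, 0.07],
                           [0.02, 0.12, 0.95]])
        xyz_ref = lin_rgb @ M_true.T + rng.normal(scale=0.002, size=(50, 3))

        M, *_ = np.linalg.lstsq(lin_rgb, xyz_ref, rcond=None)  # regression stage
        print("max fit error:", np.max(np.abs(lin_rgb @ M - xyz_ref)))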

  8. Fiber-optical method of pyrometric measurement of melts temperature

    NASA Astrophysics Data System (ADS)

    Zakharenko, V. A.; Veprikova, Ya R.

    2018-01-01

    Non-contact measurement of the temperature of metal melts remains an open scientific problem, related to the need to achieve specified measurement errors under uncertainty in the emissivity (blackness) coefficients of the radiating surfaces. The aim of this work is to substantiate a new measurement method in which the influence of the emissivity coefficient is eliminated. The task consisted in calculating the design and material of a special crucible placed in the molten metal, which acts as a blackbody (BB) emitter. The methods are based on the classical description of thermal radiation and on calculations using the Planck function. To solve the problem, the geometry of the crucible was calculated on the basis of the Gouffé method, so that the crucible forms a blackbody emitter when immersed in the melt. The paper describes a pyrometric device based on a fiber optic pyrometer for melt temperature measurement, which implements the proposed method using this special crucible: the melt forms the emitter in the crucible, and the temperature within it is measured by the fiber optic pyrometer. Experimental studies yielded a radiation coefficient ε′ > 0.999, which confirms the theoretical and computational justification given in the article.
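
    The radiance underlying the pyrometric calculation follows directly from the Planck function; a minimal sketch (SI constants rounded; the wavelength and temperature values are illustrative, not from the article):

        import numpy as np

        H = 6.626e-34   # Planck constant, J s
        C = 2.998e8     # speed of light, m/s
        KB = 1.381e-23  # Boltzmann constant, J/K

        def planck_radiance(lam, T, emissivity=1.0):
            """Spectral radiance [W sr^-1 m^-3] at wavelength lam [m], temperature T [K].

            For a cavity crucible with emissivity ~ 0.999, the blackbody
            assumption introduces almost no error.
            """
            return (emissivity * 2 * H * C**2 / lam**5
                    / np.expm1(H * C / (lam * KB * T)))

        # radiance seen by a fiber pyrometer at 0.9 um for an 1800 K melt
        print(planck_radiance(0.9e-6, 1800.0))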

  9. Quantum computer games: quantum minesweeper

    NASA Astrophysics Data System (ADS)

    Gordon, Michal; Gordon, Goren

    2010-07-01

    The computer game of quantum minesweeper is introduced as a quantum extension of the well-known classical minesweeper. Its main objective is to teach the unique concepts of quantum mechanics in a fun way. Quantum minesweeper demonstrates the effects of superposition, entanglement and their non-local characteristics. While in the classical minesweeper the goal of the game is to discover all the mines laid out on a board without triggering them, in the quantum version there are several classical boards in superposition. The goal is to know the exact quantum state, i.e. the precise layout of all the mines in all the superposed classical boards. The player can perform three types of measurement: a classical measurement that probabilistically collapses the superposition; a quantum interaction-free measurement that can detect a mine without triggering it; and an entanglement measurement that provides non-local information. The application of the concepts taught by quantum minesweeper to one-way quantum computing is also presented.

  10. Evaluation of Terrestrial Laser Scanner Accuracy in the Control of Hydrotechnical Structures

    NASA Astrophysics Data System (ADS)

    Muszyński, Zbigniew; Rybak, Jarosław

    2017-12-01

    In many cases of monitoring or load testing of hydrotechnical structures, the measurement results obtained from dial gauges may be affected by random or systematic errors resulting from instability of the reference beam. For example, the measured wall displacement or pile settlement may be increased (or decreased) by displacements of the reference beam due to ground movement. The application of surveying methods such as high-precision levelling, motorized tacheometry or terrestrial laser scanning makes it possible to provide an independent reference measurement free from systematic errors. This is especially important for walls and piles embedded in rivers, where the construction of a reference structure is even more difficult than usual. Constructing an independent reference system is also complicated when horizontal testing of sheet piles or diaphragm walls is considered; in this case, any underestimation of the horizontal displacement of an anchored or strutted construction leads to an understated value of the strut's load. These measurements are even more important during modernization works and repairs of hydrotechnical structures. The purpose of this paper is to discuss the possibilities of using modern measurement methods for monitoring horizontal displacements of an excavation wall. The methods under scrutiny (motorized tacheometry and terrestrial laser scanning) are compared to classical techniques and described in the context of their practical use on an example hydrotechnical structure: a temporary cofferdam made from a sheet pile wall. The research continuously conducted at Wroclaw University of Science and Technology made it possible to collect and summarize measurement results and practical experience. The paper identifies advantages and disadvantages of both analysed methods and compares the obtained measurements of horizontal displacements. In conclusion, some recommendations relevant to engineering practice are formulated.

  11. Research on aspheric focusing lens processing and testing technology in the high-energy laser test system

    NASA Astrophysics Data System (ADS)

    Liu, Dan; Fu, Xiu-hua; Jia, Zong-he; Wang, Zhe; Dong, Huan

    2014-08-01

    In the high-energy laser test system, higher requirements are placed on the surface profile and finish of the optical elements. Taking a focusing aspherical Zerodur lens with a diameter of 100 mm as an example, the surface profile and surface quality of the lens were investigated using a combination of CNC and classical machining methods. Taking profilometer and high-power microscope measurements as a guide, and through testing and simulation analysis, the process parameters were improved continually during manufacturing. Mid- and high-frequency errors were trimmed and reduced so that the surface form gradually converged to the required accuracy. The experimental results show that the final surface accuracy is less than 0.5 μm and the surface finish is □, which fulfils the accuracy requirement of the aspherical focusing lens in the optical system.

  12. Experimental design and statistical methods for improved hit detection in high-throughput screening.

    PubMed

    Malo, Nathalie; Hanley, James A; Carlile, Graeme; Liu, Jing; Pelletier, Jerry; Thomas, David; Nadon, Robert

    2010-09-01

    Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
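
    A minimal sketch of the preprocessing idea — a robust polish that sweeps out row and column biases with trimmed means (the published procedure may differ in details such as iteration count and trimming fraction; the plate data below are toy values):

        import numpy as np
        from scipy.stats import trim_mean

        def trimmed_mean_polish(plate, trim=0.1, n_iter=10):
            """Remove row and column biases from a plate of HTS readings by
            iteratively sweeping out trimmed-mean row and column effects
            (a robust variant of Tukey's median polish).
            """
            resid = plate.astype(float).copy()
            for _ in range(n_iter):
                row_eff = trim_mean(resid, trim, axis=1)
                resid -= row_eff[:, None]
                col_eff = trim_mean(resid, trim, axis=0)
                resid -= col_eff[None, :]
            return resid   # residuals: biological signal + random error

        # toy 8x12 plate with an additive column gradient and two 'hits'
        rng = np.random.default_rng(5)
        plate = rng.normal(size=(8, 12)) + np.linspace(0, 3, 12)[None, :]
        plate[2, 5] += 6.0
        plate[6, 9] += 6.0
        print(np.round(trimmed_mean_polish(plate), 1))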

  13. Theory and applications survey of decentralized control methods

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    A nonmathematical overview is presented of trends in the general area of decentralized control strategies which are suitable for hierarchical systems. Advances in decentralized system theory are closely related to advances in the so-called stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools pertaining to the classical stochastic control problem are outlined. Particular attention is devoted to pitfalls in the mathematical problem formulation for decentralized control. Major conclusions are that any purely deterministic approach to multilevel hierarchical dynamic systems is unlikely to lead to realistic theories or designs, that the flow of measurements and decisions in a decentralized system should not be instantaneous and error-free, and that delays in information exchange in a decentralized system lead to reasonable approaches to decentralized control. A mathematically precise notion of aggregating information is not yet available.

  14. The use of genetic programming to develop a predictor of swash excursion on sandy beaches

    NASA Astrophysics Data System (ADS)

    Passarella, Marinella; Goldstein, Evan B.; De Muro, Sandro; Coco, Giovanni

    2018-02-01

    We use genetic programming (GP), a type of machine learning (ML) approach, to predict the total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Three previously published works with a range of new conditions are added to this data set to extend the range of measured swash conditions. Using this newly compiled data set we demonstrate that a ML approach can reduce the prediction errors compared to well-established parameterizations and therefore it may improve coastal hazards assessment (e.g. coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previous published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.

  15. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks require high performance: tele-surgery, for example, needs high speed and high-precision control to safeguard the patient's health. To obtain such performance, error constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence — i.e., synchronization errors that converge to zero as time goes to infinity — can be achieved with error constrained control alone; finite time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite time control method is developed in this paper for a teleoperation system with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
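
    For orientation, a generic nonsingular terminal sliding surface (not the paper's NFTSM itself) takes the form

        s = e + \frac{1}{\beta}\,\dot{e}^{\,p/q}, \qquad \beta > 0, \quad 1 < p/q < 2,

    on which the error converges to zero in finite time; the NFTSM surface proposed in the paper augments this basic form with transformed synchronization errors to speed convergence and respect the error constraints.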

  16. Signatures of bifurcation on quantum correlations: Case of the quantum kicked top

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.; Santhanam, M. S.

    2017-01-01

    Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.

  17. Finite element modelling versus classic beam theory: comparing methods for stress estimation in a morphologically diverse sample of vertebrate long bones

    PubMed Central

    Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.

    2013-01-01

    Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
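
    The beam-theory estimates compared above come from the classic strength-of-materials formulas,

        \sigma_{\text{axial}} = \frac{F}{A}, \qquad \sigma_{\text{bend}} = \frac{M\,y}{I}, \qquad \tau_{\text{torsion}} = \frac{T\,r}{J},

    where A is the cross-sectional area, I and J are the second and polar moments of area, and y and r are distances from the neutral axis or centroid; the FEA comparison quantifies what these slender-beam idealizations miss for curved, asymmetric diaphyses.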

  18. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  19. Buying a Better Air Force

    DTIC Science & Technology

    2006-03-01

    identify if an explanatory variable may have been omitted due to model misspecification (Ramsey, 1979). The RESET test resulted in failure to...Prob > F 0.0094 This model was also regressed using Huber-White estimators. Again, the Ramsey RESET test was done to ensure relevant...Aircraft. Annapolis, MD: Naval Institute Press, 2004. Ramsey, J. B. “Tests for Specification Errors in Classical Least-Squares Regression Analysis

  20. An Investigation of the Accuracy of Alternative Methods of True Score Estimation in High-Stakes Mixed-Format Examinations.

    ERIC Educational Resources Information Center

    Klinger, Don A.; Rogers, W. Todd

    2003-01-01

    The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…

  1. Target Classification of Canonical Scatterers Using Classical Estimation and Dictionary Based Techniques

    DTIC Science & Technology

    2012-03-22

    shapes tested, when the objective parameter set was confined to a dictionary’s defined parameter space. These physical characteristics included...Hypothesis Testing and Detection Theory...3-D SAR Scattering Models...basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to inherent efficiency and error tolerance. Multiple shape dictionaries

  2. Determination of thiamine HCl and pyridoxine HCl in pharmaceutical preparations using UV-visible spectrophotometry and genetic algorithm based multivariate calibration methods.

    PubMed

    Ozdemir, Durmus; Dinc, Erdal

    2004-07-01

    The simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination was demonstrated using UV-visible spectrophotometry with classical least squares (CLS) and three newly developed genetic algorithm (GA) based multivariate calibration methods. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 microg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm in 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data ranged from <0.01 to 0.43 microg/ml for all four methods. The SEP values for the tablets ranged from 2.91 to 11.51 mg/tablet. A comparison of the GA-selected wavelengths for each component using the GR method is also included.
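
    A sketch of the CLS stage on synthetic spectra (mock pure-component band shapes, not the published data; the GA-based variants additionally select wavelengths): calibration solves A = CK for the pure spectra K, and prediction inverts K for new mixtures.

        import numpy as np

        rng = np.random.default_rng(6)
        wl = np.linspace(200, 330, 1301)                   # nm grid, 0.1 nm steps
        k_thia = np.exp(-0.5 * ((wl - 246) / 12) ** 2)     # mock pure spectra
        k_pyri = np.exp(-0.5 * ((wl - 290) / 10) ** 2)
        K_true = np.vstack([k_thia, k_pyri])

        C_train = rng.uniform(8, 40, size=(30, 2))         # microg/ml, as in the text
        A_train = C_train @ K_true + rng.normal(scale=1e-3, size=(30, len(wl)))

        K_hat, *_ = np.linalg.lstsq(C_train, A_train, rcond=None)     # calibration
        c_pred, *_ = np.linalg.lstsq(K_hat.T, A_train.T, rcond=None)  # prediction
        sec = np.sqrt(np.mean((c_pred.T - C_train) ** 2, axis=0))     # resubstitution SEC
        print("SEC per component (microg/ml):", sec)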

  3. Dynamic Simulation of Human Gait Model With Predictive Capability.

    PubMed

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively classical feedback control acting on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model with nine degrees of freedom (DOF), validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC), which uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait, and the simulation results show that the model produces kinematic output close to experimental data.
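
    In generic form (a standard finite-horizon quadratic cost, not necessarily the paper's exact objective), the MPC controller solves at every sample

        \min_{\Delta u_k,\ldots,\Delta u_{k+N-1}} \; \sum_{i=1}^{N} \lVert \hat{y}_{k+i|k} - r_{k+i} \rVert_Q^2 \; + \; \sum_{i=0}^{N-1} \lVert \Delta u_{k+i} \rVert_R^2,

    applies only the first input, and re-optimizes at the next sample using the updated state, so the predicted rather than the past error drives the joint commands.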

  4. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  5. Synchronizing stochastic circadian oscillators in single cells of Neurospora crassa

    NASA Astrophysics Data System (ADS)

    Deng, Zhaojie; Arsenault, Sam; Caranica, Cristian; Griffith, James; Zhu, Taotao; Al-Omari, Ahmad; Schüttler, Heinz-Bernd; Arnold, Jonathan; Mao, Leidong

    2016-10-01

    The synchronization of stochastic coupled oscillators is a central problem in physics and an emerging problem in biology, particularly in the context of circadian rhythms. Most measurements on the biological clock are made at the macroscopic level of millions of cells. Here measurements are made on the oscillators in single cells of the model fungal system, Neurospora crassa, with droplet microfluidics and the use of a fluorescent recorder hooked up to a promoter on a clock controlled gene-2 (ccg-2). The oscillators of individual cells are stochastic with a period near 21 hours (h), and using a stochastic clock network ensemble fitted by Markov Chain Monte Carlo implemented on general-purpose graphical processing units (or GPGPUs) we estimated that >94% of the variation in ccg-2 expression was stochastic (as opposed to experimental error). To overcome this stochasticity at the macroscopic level, cells must synchronize their oscillators. Using a classic measure of similarity in cell trajectories within droplets, the intraclass correlation (ICC), the synchronization surface ICC is measured on >25,000 cells as a function of the number of neighboring cells within a droplet and of time. The synchronization surface provides evidence that cells communicate, and synchronization varies with genotype.

  6. Cratering in glasses impacted by debris or micrometeorites

    NASA Technical Reports Server (NTRS)

    Wiedlocher, David E.; Kinser, Donald L.

    1993-01-01

    Mechanical strength measurements on five glasses and one glass-ceramic exposed on LDEF revealed no damage exceeding experimental limits of error. The measurement technique subjected less than 5 percent of the sample surface area to stresses above 90 percent of the failure strength. Seven micrometeorite or space debris impacts occurred at locations that were not in the portion of the sample subjected to greater than 90 percent of the applied stress; as a result, the impact events were not detected in the mechanical strength measurements. The physical form and structure of the impact sites were carefully examined to determine the influence of those events upon the stress concentration associated with the impact and the resulting mechanical strength. The size of the impact site, insofar as it determines flaw size for fracture purposes, was examined. Surface topography of the impacts reveals that six of the seven sites display impact melting: the classical melt crater structure is surrounded by a zone of fractured glass. Residual stresses arising from shock compression and from cooling of the fused zone cannot be included in fracture mechanics analyses based on simple flaw size measurements. Strategies for refining estimates of mechanical strength degradation by impact events are presented.

  7. Synchronizing stochastic circadian oscillators in single cells of Neurospora crassa

    PubMed Central

    Deng, Zhaojie; Arsenault, Sam; Caranica, Cristian; Griffith, James; Zhu, Taotao; Al-Omari, Ahmad; Schüttler, Heinz-Bernd; Arnold, Jonathan; Mao, Leidong

    2016-01-01

    The synchronization of stochastic coupled oscillators is a central problem in physics and an emerging problem in biology, particularly in the context of circadian rhythms. Most measurements on the biological clock are made at the macroscopic level of millions of cells. Here measurements are made on the oscillators in single cells of the model fungal system, Neurospora crassa, with droplet microfluidics and the use of a fluorescent recorder hooked up to a promoter on a clock controlled gene-2 (ccg-2). The oscillators of individual cells are stochastic with a period near 21 hours (h), and using a stochastic clock network ensemble fitted by Markov Chain Monte Carlo implemented on general-purpose graphical processing units (or GPGPUs) we estimated that >94% of the variation in ccg-2 expression was stochastic (as opposed to experimental error). To overcome this stochasticity at the macroscopic level, cells must synchronize their oscillators. Using a classic measure of similarity in cell trajectories within droplets, the intraclass correlation (ICC), the synchronization surface ICC is measured on >25,000 cells as a function of the number of neighboring cells within a droplet and of time. The synchronization surface provides evidence that cells communicate, and synchronization varies with genotype. PMID:27786253
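
    A sketch of the one-way intraclass correlation used as the synchrony measure (the study's exact estimator and weighting may differ; the droplet data below are toy values):

        import numpy as np

        def icc_oneway(groups):
            """One-way random-effects intraclass correlation, ICC(1):
            (MSB - MSW) / (MSB + (k - 1) MSW) for equally sized groups.
            """
            groups = np.asarray(groups, float)   # shape: (n_groups, k_members)
            n, k = groups.shape
            grand = groups.mean()
            msb = k * np.sum((groups.mean(axis=1) - grand) ** 2) / (n - 1)
            msw = (np.sum((groups - groups.mean(axis=1, keepdims=True)) ** 2)
                   / (n * (k - 1)))
            return (msb - msw) / (msb + (k - 1) * msw)

        # toy droplets: 3 cells each, sharing a droplet-level signal
        rng = np.random.default_rng(7)
        shared = rng.normal(size=(50, 1))
        cells = shared + rng.normal(scale=0.5, size=(50, 3))
        print(f"ICC = {icc_oneway(cells):.2f}")   # high when cells synchronize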

  8. Corneal topometry by fringe projection: limits and possibilities

    NASA Astrophysics Data System (ADS)

    Windecker, Robert; Tiziani, Hans J.; Thiel, H.; Jean, Benedikt J.

    1996-01-01

    A fast and accurate measurement of corneal topography is an important task, especially since laser-induced corneal reshaping has been used for the correction of ametropia. The classical measuring system uses Placido rings for the measurement and calculation of the topography or local curvatures. Another approach is the projection of a known fringe map imaged onto the surface under a certain angle of incidence. We present a set-up using telecentric illumination and detection units. With a special grating we obtain a synthetic wavelength with a nearly sinusoidal profile. In combination with very fast data acquisition, the topography can be evaluated using a special self-normalizing phase evaluation algorithm, which calculates local Fourier coefficients and corrects errors caused by imperfect illumination or inhomogeneous scattering via fringe normalization. The topography can be determined over 700 by 256 pixels. The set-up is suitable for measuring optically rough silicone replicas of the human cornea as well as the cornea in vivo over a field of 8 mm and more. The resolution is mainly limited by noise and is better than two micrometers. We discuss the principal benefits and drawbacks compared with the standard Placido technique.

  9. Quantum to Classical Transitions via Weak Measurements and Post-Selection

    NASA Astrophysics Data System (ADS)

    Cohen, Eliahu; Aharonov, Yakir

    Alongside its immense empirical success, the quantum mechanical account of physical systems imposes a myriad of divergences from our thoroughly ingrained classical ways of thinking. These divergences, while striking, would have been acceptable if only a continuous transition to the classical domain were at hand. Strangely, this is not quite the case. The difficulties involved in reconciling the quantum with the classical have given rise to different interpretations, each with its own shortcomings. Traditionally, the two domains are sewed together by invoking an ad hoc theory of measurement, which has been incorporated in the axiomatic foundations of quantum theory. This work incorporates a few related tools for addressing the above conceptual difficulties: deterministic operators, weak measurements, and post-selection. Weak measurement, based on a very weak von Neumann coupling, is a unique kind of quantum measurement with numerous theoretical and practical applications. In contrast to other measurement techniques, it allows one to gather a small amount of information regarding the quantum system, with only a negligible probability of collapsing it onto an eigenstate of the measured observable. A single weak measurement yields an almost random outcome, but when performed repeatedly over a large ensemble, the averaged outcome becomes increasingly robust and accurate. Importantly, a long sequence of weak measurements can be thought of as a single projective measurement. We claim in this work that classical variables appearing in the macro-world, such as center of mass, moment of inertia, pressure, and average forces, result from a multitude of quantum weak measurements performed in the micro-world. Here again, the quantum outcomes are highly uncertain, but the law of large numbers obliges their convergence to the definite quantities we know from our everyday lives. By augmenting this description with a final boundary condition and employing the notion of "classical robustness under time-reversal", we draw a quantitative borderline between the classical and quantum regimes. We conclude by analyzing the role of macroscopic systems in amplifying and recording quantum outcomes.

  10. REEXAMINATION OF INDUCTION HEATING OF PRIMITIVE BODIES IN PROTOPLANETARY DISKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menzel, Raymond L.; Roberge, Wayne G., E-mail: menzer@rpi.edu, E-mail: roberw@rpi.edu

    2013-10-20

    We reexamine the unipolar induction mechanism for heating asteroids originally proposed in a classic series of papers by Sonett and collaborators. As originally conceived, induction heating is caused by the 'motional electric field' that appears in the frame of an asteroid immersed in a fully ionized, magnetized solar wind and drives currents through its interior. However, we point out that classical induction heating contains a subtle conceptual error, in consequence of which the electric field inside the asteroid was calculated incorrectly. The problem is that the motional electric field used by Sonett et al. is the electric field in the freely streaming plasma far from the asteroid; in fact, the motional field vanishes at the asteroid surface for realistic assumptions about the plasma density. In this paper we revisit and improve the induction heating scenario by (1) correcting the conceptual error by self-consistently calculating the electric field in and around the boundary layer at the asteroid-plasma interface; (2) considering weakly ionized plasmas consistent with current ideas about protoplanetary disks; and (3) considering more realistic scenarios that do not require a fully ionized, powerful T Tauri wind in the disk midplane. We present exemplary solutions for two highly idealized flows that show that the interior electric field can either vanish or be comparable to the fields predicted by classical induction, depending on the flow geometry. We term the heating driven by these flows 'electrodynamic heating', calculate its upper limits, and compare them to heating produced by short-lived radionuclides.

  11. A robust interpolation method for constructing digital elevation models from remote sensing data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Liu, Fengying; Li, Yanyan; Yan, Changqing; Liu, Guolin

    2016-09-01

    A digital elevation model (DEM) derived from remote sensing data often suffers from outliers due to various causes, such as the physical limitations of sensors and the low contrast of terrain textures. In order to reduce the effect of outliers on DEM construction, a robust algorithm of multiquadric (MQ) methodology based on M-estimators (MQ-M) is proposed. MQ-M adopts an adaptive three-part weight function: the weight is null for large errors, one for small errors, and quadratic in between. A mathematical surface was employed to analyze the robustness of MQ-M, and its performance was compared with those of the classical MQ and a recently developed robust MQ method based on least absolute deviation (MQ-L). Numerical tests show that MQ-M is comparable to the classical MQ and superior to MQ-L when sample points follow normal and Laplace distributions, and that in the presence of outliers the former is more accurate than the latter. A real-world example of DEM construction using stereo images indicates that, compared with classical interpolation methods such as natural neighbor (NN), ordinary kriging (OK), ANUDEM, MQ-L and MQ, MQ-M better preserves subtle terrain features. Substituting MQ-M for the thin plate spline (TPS) in reference-DEM construction assesses its contribution to our recently developed multiresolution hierarchical classification method (MHC): classifying the 15 groups of benchmark datasets provided by the ISPRS Commission demonstrates that MQ-M-based MHC is more accurate than MQ-L-based and TPS-based MHCs. MQ-M has high potential for DEM construction.
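
    One consistent realization of the three-part weight described above (the published parameterization may differ) is

        w(e) = \begin{cases} 1, & |e| \le a \\ \left( \dfrac{b - |e|}{b - a} \right)^{2}, & a < |e| \le b \\ 0, & |e| > b, \end{cases}

    with thresholds a < b tied to a robust scale estimate of the residuals, so gross outliers receive zero weight while small errors are fit as in the classical MQ.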

  12. The Surveillance Error Grid

    PubMed Central

    Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris

    2014-01-01

    Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886

  13. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.

  14. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
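
    In generic screw-theory form (a hedged schematic, not the paper's full derivation), the actual pose deviates from the nominal forward kinematics by a small error twist,

        T_{\text{actual}} \approx \left( I_4 + \hat{e} \right) T_{\text{nominal}}, \qquad \hat{e} = \begin{bmatrix} [\boldsymbol{\varepsilon}]_\times & \boldsymbol{\delta} \\ \mathbf{0}^{\mathsf{T}} & 0 \end{bmatrix},

    where \boldsymbol{\delta} and \boldsymbol{\varepsilon} collect the translational and rotational contributions of the PIGEs and PDGEs; stacking such first-order terms over the axes yields a linear identification model solvable from the laser-tracker measurements.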

  15. Bell’s measure and implementing quantum Fourier transform with orbital angular momentum of classical light

    PubMed Central

    Song, Xinbing; Sun, Yifan; Li, Pengyun; Qin, Hongwei; Zhang, Xiangdong

    2015-01-01

    We perform Bell’s measurement for the non-separable correlation between polarization and orbital angular momentum from the same classical vortex beam. The violation of Bell’s inequality for such a non-separable classical correlation has been demonstrated experimentally. Based on the classical vortex beam and non-quantum entanglement between the polarization and the orbital angular momentum, the Hadamard gates and conditional phase gates have been designed. Furthermore, a quantum Fourier transform has been implemented experimentally. PMID:26369424

  16. Investigating the Macrodispersion Experiment (MADE) site in Columbus, Mississippi, using a three‐dimensional inverse flow and transport model

    USGS Publications Warehouse

    Christiansen Barlebo, Heidi; Hill, Mary C.; Rosbjerg, Dan

    2004-01-01

    Flowmeter-measured hydraulic conductivities from the heterogeneous MADE site have been used predictively in advection-dispersion models. The resulting simulated concentrations failed to reproduce even major plume characteristics, and some have concluded that other mechanisms, such as dual porosity, are important. Here an alternative possibility is investigated: that the small-scale flowmeter measurements are too noisy, and possibly too biased, to use so directly in site-scale models, and that the hydraulic head and transport data are more suitable for site-scale characterization. Using a calibrated finite element model of the site and a new framework to evaluate random and systematic model and measurement errors, the following conclusions are derived. (1) If variations in subsurface fluid velocities like those simulated in this work (0.1 and 2.0 m per day along parallel and reasonably close flow paths) exist, it is likely that classical advection-dispersion processes can explain the measured plume characteristics. (2) The flowmeter measurements are possibly systematically lower than site-scale values, whether considered individually or combined using common averaging methods, and display variability that obscures abrupt changes in hydraulic conductivity that are well supported by changes in hydraulic gradients and important to the simulation of transport.

  17. Assessment of the derivative-moment transformation method for unsteady-load estimation

    NASA Astrophysics Data System (ADS)

    Mohebbian, Ali; Rival, David E.

    2012-08-01

    It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces. However, measuring the acceleration term within the volume of interest using particle image velocimetry (PIV) can be limited by optical access, reflections, and shadows. Therefore, in this study, an alternative approach, termed the derivative-moment transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency, which is more pronounced in the drag direction, was found to be the determination of the pressure and unsteady terms in the wake. The effect of control-volume size was investigated, suggesting that larger domains work best by minimizing the associated error in the determination of the pressure field. When decreasing the control-volume size, wake vortices, which produce high gradients across the control surfaces, are found to substantially increase the level of error. On the other hand, it was shown that for large control volumes, and with realistic spatial resolution, the accuracy of the DMT method would also suffer. Therefore, a delicate compromise is required when selecting control-volume size in future experiments.

  18. Position-based coding and convex splitting for private communication over quantum channels

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2017-10-01

    The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0, 1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
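
    Schematically, the lower bound has the following shape, written here as a hedged sketch in standard one-shot notation; additive terms logarithmic in the smoothing parameters are suppressed, and the exact definition of the alternate smooth max-information follows the paper.

      % Hedged sketch of the stated bound; smoothing parameters are indicative only.
      P^{\varepsilon}(\mathcal{N}) \;\gtrsim\;
        I_H^{\varepsilon_1}(X;B)_{\rho} \;-\; \widetilde{I}_{\max}^{\varepsilon_2}(X;E)_{\rho},
      \qquad \varepsilon_1 + \varepsilon_2 \le \varepsilon .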

  19. Polar codes for achieving the classical capacity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Wilde, Mark

    2012-02-01

    We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by "channel combining" and "channel splitting," in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
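
    The classical ingredient the construction builds on can be sketched directly: the O(N log N) butterfly below implements Arikan's polar transform x = u F^(⊗n) over GF(2) (bit-reversal permutation omitted); the quantum code then applies the same combining structure to channels with quantum outputs.

      import numpy as np

      def polar_transform(u: np.ndarray) -> np.ndarray:
          """Arikan's classical polar transform over GF(2), N a power of two,
          via the O(N log N) butterfly that mirrors 'channel combining'."""
          x = u.copy() % 2
          n, step = x.size, 1
          while step < n:
              for i in range(0, n, 2 * step):
                  # Combine adjacent blocks: (a, b) -> (a XOR b, b)
                  x[i:i + step] ^= x[i + step:i + 2 * step]
              step *= 2
          return x

      u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
      print(polar_transform(u))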

  20. Dynamic action units slip in speech production errors

    PubMed Central

    Goldstein, Louis; Pouplier, Marianne; Chen, Larissa; Saltzman, Elliot; Byrd, Dani

    2008-01-01

    In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors – “slips of the tongue”. The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units – gestures – in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action. PMID:16822494

  1. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE PAGES

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; ...

    2018-02-12

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  2. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    NASA Astrophysics Data System (ADS)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
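
    The classical post-processing step of the quantum subspace expansion can be sketched as follows: the processor measures subspace matrix elements H_ij = ⟨ψ|O_i†HO_j|ψ⟩ and overlaps S_ij = ⟨ψ|O_i†O_j|ψ⟩ for a small set of expansion operators O_i, and a classical computer then solves the generalized eigenproblem Hc = ESc. The matrices below are random stand-ins, not measured data.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 4))
      H = (A + A.T) / 2                  # stand-in for the measured Hamiltonian matrix
      B = rng.standard_normal((4, 4))
      S = B @ B.T + 4 * np.eye(4)        # stand-in overlap matrix (positive definite)

      energies, coeffs = eigh(H, S)      # generalized eigenproblem H c = E S c
      print(energies)                    # lowest value approximates the ground state,
                                         # the rest approximate excited states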

  3. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  4. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

    This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions—baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error-correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms digital ones over a wide signal-to-noise ratio (SNR) range.
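
    A minimal sketch of chaotic analog encoding and brute-force ML decoding is shown below, using the 1-D Bernoulli-shift restriction B(x) = 2x mod 1 of the baker's map; the paper's mirrored and single-input variants, and its MMSE and mixed decoders, differ in detail.

      import numpy as np

      def bakers_map(x: float) -> float:
          # 1-D Bernoulli-shift restriction of the baker's map
          return (2.0 * x) % 1.0

      def encode(source: float, n_samples: int) -> np.ndarray:
          """Transmit the orbit of the source value under the chaotic map."""
          samples, x = np.empty(n_samples), source
          for k in range(n_samples):
              samples[k] = x
              x = bakers_map(x)
          return samples

      def ml_decode(received: np.ndarray, grid_size: int = 10000) -> float:
          """Brute-force ML decoding under white Gaussian noise: pick the source
          whose noiseless orbit is closest in Euclidean distance."""
          candidates = np.linspace(0.0, 1.0, grid_size, endpoint=False)
          orbits = np.array([encode(c, received.size) for c in candidates])
          return candidates[np.argmin(((orbits - received) ** 2).sum(axis=1))]

      tx = encode(0.3141, 5)
      rx = tx + 0.01 * np.random.default_rng(1).standard_normal(5)
      print(ml_decode(rx))   # close to 0.3141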

  5. Entanglement-enhanced Neyman-Pearson target detection using quantum illumination

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

    Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.

  6. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
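
    The robust regression ingredient can be sketched as iteratively reweighted least squares with Huber weights, which downweights gross outliers without explicit data filtering; the fitted model here (error growth linear in time since epoch) and all numbers are illustrative, not the paper's exact formulation.

      import numpy as np

      def huber_irls(A, y, delta=1.345, iters=50):
          """Robust least-squares fit of y ~ A b via iteratively reweighted
          least squares with Huber weights, so occasional bad differenced
          states need no manual outlier removal."""
          b = np.linalg.lstsq(A, y, rcond=None)[0]
          for _ in range(iters):
              r = y - A @ b
              s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
              u = np.abs(r) / (s * delta)
              w = np.where(u <= 1.0, 1.0, 1.0 / u)                      # Huber weights
              b = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * y, rcond=None)[0]
          return b

      # Toy example: along-track error growth vs. time since epoch, with two
      # gross outliers that plain least squares would chase.
      t = np.linspace(0, 5, 40)
      err = 0.1 + 0.4 * t + 0.02 * np.random.default_rng(2).standard_normal(40)
      err[[5, 30]] += 5.0
      A = np.column_stack([np.ones_like(t), t])
      print(huber_irls(A, err))   # close to [0.1, 0.4]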

  7. A System Computational Model of Implicit Emotional Learning

    PubMed Central

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders, and in panic attacks). In this manuscript, starting from the experimental results available in the literature, a computational model of implicit emotional learning based both on prediction-error computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance to extinction of traumatic emotional responses, and (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotion modulation. PMID:27378898
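
    For orientation, the classical-conditioning baseline that such models extend can be sketched as a Rescorla-Wagner delta-rule update, where associative strength moves in proportion to the prediction error; the paper's model adds statistical inference on top of this kind of update, and the parameter values below are arbitrary.

      import numpy as np

      def rescorla_wagner(trials, alpha=0.15, lam=1.0, v0=0.0):
          """Delta-rule learning: V moves toward the target in proportion to
          the prediction error (target - V)."""
          v, history = v0, []
          for us_present in trials:
              target = lam if us_present else 0.0
              v += alpha * (target - v)       # prediction-error-driven update
              history.append(v)
          return np.array(history)

      acquisition = rescorla_wagner([True] * 30)                      # CS-US pairings
      extinction = rescorla_wagner([False] * 30, v0=acquisition[-1])  # CS alone
      print(acquisition[-1], extinction[-1])                          # V rises, then decays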

  8. A System Computational Model of Implicit Emotional Learning.

    PubMed

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders, and in panic attacks). In this manuscript, starting from the experimental results available in the literature, a computational model of implicit emotional learning based both on prediction-error computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance to extinction of traumatic emotional responses, and (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotion modulation.

  9. Spatially multiplexed orbital-angular-momentum-encoded single photon and classical channels in a free-space optical communication link.

    PubMed

    Ren, Yongxiong; Liu, Cong; Pang, Kai; Zhao, Jiapeng; Cao, Yinwen; Xie, Guodong; Li, Long; Liao, Peicheng; Zhao, Zhe; Tur, Moshe; Boyd, Robert W; Willner, Alan E

    2017-12-01

    We experimentally demonstrate spatial multiplexing of an orbital angular momentum (OAM)-encoded quantum channel and a classical Gaussian beam with a different wavelength and orthogonal polarization. Data rates as large as 100 MHz are achieved by encoding on two different OAM states, employing a combination of independently modulated laser diodes and helical phase holograms. The influence of OAM mode spacing, encoding bandwidth, and interference from the co-propagating Gaussian beam on registered photon count rates and quantum bit error rates is investigated. Our results show that the deleterious effects of intermodal crosstalk on system performance become less important for OAM mode spacing Δ≥2 (corresponding to a crosstalk value of less than -18.5 dB). The use of the OAM domain can offer at least 10.4 dB of additional isolation beyond that provided by wavelength and polarization, leading to further suppression of interference from the classical channel.

  10. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  11. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
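
    The mechanism is easy to reproduce in simulation: below, a strong risk factor X truly drives the outcome, an inconsequential factor Z does not, and correlated measurement errors in X and Z shift the estimated coefficient of Z away from zero. All coefficients are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      x = rng.standard_normal(n)                 # true strong risk factor (confounder)
      z = 0.5 * x + rng.standard_normal(n)       # true inconsequential factor (no effect)
      y = 1.0 * x + rng.standard_normal(n)       # outcome depends on x only

      rho = 0.6                                  # correlation between the two errors
      e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
      x_obs, z_obs = x + e[:, 0], z + e[:, 1]

      # OLS on the error-prone covariates: the x coefficient attenuates and the
      # z coefficient is pulled away from its true value of zero.
      X = np.column_stack([np.ones(n), x_obs, z_obs])
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      print(beta)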

  12. The dorsal stream contribution to phonological retrieval in object naming

    PubMed Central

    Faseyitan, Olufunsho; Kim, Junghoon; Coslett, H. Branch

    2012-01-01

    Meaningful speech, as exemplified in object naming, calls on knowledge of the mappings between word meanings and phonological forms. Phonological errors in naming (e.g. GHOST named as ‘goath’) are commonly seen in persisting post-stroke aphasia and are thought to signal impairment in retrieval of phonological form information. We performed a voxel-based lesion-symptom mapping analysis of 1718 phonological naming errors collected from 106 individuals with diverse profiles of aphasia. Voxels in which lesion status correlated with phonological error rates localized to dorsal stream areas, in keeping with classical and contemporary brain-language models. Within the dorsal stream, the critical voxels were concentrated in premotor cortex, pre- and postcentral gyri and supramarginal gyrus with minimal extension into auditory-related posterior temporal and temporo-parietal cortices. This challenges the popular notion that error-free phonological retrieval requires guidance from sensory traces stored in posterior auditory regions and points instead to sensory-motor processes located further anterior in the dorsal stream. In a separate analysis, we compared the lesion maps for phonological and semantic errors and determined that there was no spatial overlap, demonstrating that the brain segregates phonological and semantic retrieval operations in word production. PMID:23171662

  13. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits

    PubMed Central

    Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  14. Symbolic-numeric interface: A review

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1980-01-01

    A survey of the use of a combination of symbolic and numerical calculations is presented. Symbolic calculations primarily refer to the computer processing of procedures from classical algebra, analysis, and calculus. Numerical calculations refer to both numerical mathematics research and scientific computation. This survey is intended to point out a large number of problem areas where a cooperation of symbolic and numerical methods is likely to bear many fruits. These areas include such classical operations as differentiation and integration, such diverse activities as function approximations and qualitative analysis, and such contemporary topics as finite element calculations and computation complexity. It is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.

  15. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyze how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  16. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyze how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  17. Asymptotic state discrimination and a strict hierarchy in distinguishability norms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitambar, Eric; Hsieh, Min-Hsiu

    2014-11-15

    In this paper, we consider the problem of discriminating quantum states by local operations and classical communication (LOCC) when an arbitrarily small amount of error is permitted. This paradigm is known as asymptotic state discrimination, and we derive necessary conditions for when two multipartite states of any size can be discriminated perfectly by asymptotic LOCC. We use this new criterion to prove a gap in the LOCC and separable distinguishability norms. We then turn to the operational advantage of using two-way classical communication over one-way communication in LOCC processing. With a simple two-qubit product state ensemble, we demonstrate a strict majorization of the two-way LOCC norm over the one-way norm.

  18. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement is in high demand in equipment manufacturing. Noted for its high accuracy, scale expandability, and parallel multitask measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding, and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in principle, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out; the dynamic error is quantified, and its volatility and periodicity are characterized. Dynamic error characteristics are shown in detail. These results lay the foundation for further accuracy improvement.

  19. Design and evaluation of a robust dynamic neurocontroller for a multivariable aircraft control problem

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Garg, S.; Merrill, W.

    1992-01-01

    The design of a dynamic neurocontroller with good robustness properties is presented for a multivariable aircraft control problem. The internal dynamics of the neurocontroller are synthesized by a state estimator feedback loop. The neurocontrol is generated by a multilayer feedforward neural network which is trained through backpropagation to minimize an objective function that is a weighted sum of tracking errors, and control input commands and rates. The neurocontroller exhibits good robustness through stability margins in phase and vehicle output gains. By maintaining performance and stability in the presence of sensor failures in the error loops, the structure of the neurocontroller is also consistent with the classical approach of flight control design.

  20. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
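
    One plausible form of such a decomposition, written in classical test-theory notation with X = T_X + E_X and Y = T_Y + E_Y, mutually independent errors, and Ŷ the least-squares linear prediction of Y from the observed X (the paper's specific measures may differ):

      % Hedged sketch; uses \sigma_{XY} = \sigma_{T_X T_Y} and
      % \sigma_X^2 = \sigma_{T_X}^2 + \sigma_{E_X}^2 under independent errors.
      \sigma^2_{Y-\hat{Y}}
        \;=\; \sigma^2_Y - \frac{\sigma_{XY}^2}{\sigma_X^2}
        \;=\; \underbrace{\sigma^2_{E_Y}}_{\text{error in } Y}
        \;+\; \underbrace{\sigma^2_{T_Y}
               - \frac{\sigma_{T_X T_Y}^2}{\sigma_{T_X}^2 + \sigma_{E_X}^2}}_{
               \text{true-score part, inflated by predictor error } \sigma_{E_X}^2}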

  1. Proliferation of Observables and Measurement in Quantum-Classical Hybrids

    NASA Astrophysics Data System (ADS)

    Elze, Hans-Thomas

    2012-01-01

    Following a review of quantum-classical hybrid dynamics, we discuss the ensuing proliferation of observables and relate it to measurements of (would-be) quantum mechanical degrees of freedom performed by (would-be) classical ones (if they were separable). Hybrids consist of coupled classical (CL) and quantum mechanical (QM) objects. Numerous consistency requirements for their description have been discussed and are fulfilled here. We summarize a representation of quantum mechanics in terms of classical analytical mechanics which is naturally extended to QM-CL hybrids. This framework allows for superposition, separable, and entangled states originating in the QM sector, admits the experimenter's "Free Will", and is local and nonsignaling. Presently, we study the set of hybrid observables, which is larger than the Cartesian product of the QM and CL observables of its components, yet smaller than a corresponding product of all-classical observables. Thus, quantumness and classicality infect each other.

  2. Classical Galactosaemia in Ireland: incidence, complications and outcomes of treatment.

    PubMed

    Coss, K P; Doran, P P; Owoeye, C; Codd, M B; Hamid, N; Mayne, P D; Crushell, E; Knerr, I; Monavari, A A; Treacy, E P

    2013-01-01

    Newborn screening for the inborn error of metabolism classical galactosaemia prevents life-threatening complications in the neonatal period. It does not, however, influence the development of long-term complications, and the complex pathophysiology of this rare disease remains poorly understood. The objective of this study was to report the development of a healthcare database (using Distiller Version 2.1) to review the epidemiology of classical galactosaemia in Ireland since the initiation of newborn screening in 1972, and the long-term clinical outcomes of all patients attending the National Centre for Inherited Metabolic Disorders (NCIMD). Since 1982, the average live birth incidence rate of classical galactosaemia in the total Irish population was approximately 1:16,476 births. This reflects a high incidence in the Irish 'Traveller' population, with an estimated birth incidence of 1:33,917 in the non-Traveller Irish population. Despite early initiation of treatment (dietary galactose restriction), the long-term outcomes of classical galactosaemia in the Irish patient population are poor: 30.6% of patients ≥6 yrs have IQs <70, 49.6% of patients ≥2.5 yrs have speech or language impairments, and 91.2% of females ≥13 yrs suffer from hypergonadotrophic hypogonadism (HH), possibly leading to decreased fertility. These findings are consistent with the international experience. This emphasizes the requirement for continued clinical research in this complex disorder.

  3. Massive metrology using fast e-beam technology improves OPC model accuracy by >2x at faster turnaround time

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei

    2018-03-01

    Classical SEM metrology, CD-SEM, uses a low data rate and extensive frame averaging to achieve the high-quality SEM imaging needed for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high-data-rate, large-probe-current, ultra-low-noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large-field-of-view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than that of classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new metrology capability will further improve data collection speed to support the need for the large volumes of metrology data required by OPC model calibration for next-generation technology. The shrinking EPE (edge placement error) budget places a more stringent requirement on OPC model accuracy, which is increasingly limited by metrology errors. In the current practice of the metrology data collection, data processing, and model calibration flow, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurement available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by cycle time requirements, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large data volume and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate large quantities of highly accurate EP (edge placement) gauges and significantly improve design pattern coverage, with up to a 5x gain in model prediction accuracy on complex 2D patterns. Overall, this work showed a >2x improvement in OPC model accuracy at a faster model turnaround time.

  4. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Learning, Realizability and Games in Classical Arithmetic

    NASA Astrophysics Data System (ADS)

    Aschieri, Federico

    2010-12-01

    In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of Predicative Arithmetic. First, we extend Kreisel's modified realizability to a classical fragment of first order Arithmetic, Heyting Arithmetic plus EM1 (the excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Second, we extend the class of learning-based realizers to a classical version PCFclass of PCF and then compare the resulting notion of realizability with Coquand's game semantics, proving a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-Backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning-based realizability for HA+EM1, Avigad's update procedures, and the epsilon substitution method for Peano Arithmetic PA. We present new constructive techniques to bound the length of learning processes and apply them to reprove, by means of our theory, the classic result of Gödel that the provably total functions of PA can be represented in Gödel's system T. Last, we give an axiomatization of the kind of learning that is needed to computationally interpret Predicative classical second order Arithmetic. Our work is an extension of Avigad's and generalizes the concept of an update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non-computable functions in order to extract witnesses from classical proofs.

  6. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
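
    A minimal sketch of the idea is given below, assuming scikit-learn's SVR and a plain particle swarm over (C, gamma) in log space; the paper's NAPSO additionally incorporates natural selection and simulated annealing, and the "error series" here is synthetic.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      t = np.linspace(0, 4 * np.pi, 200)[:, None]
      err = np.sin(t).ravel() + 0.1 * rng.standard_normal(200)   # toy error signal

      def fitness(log_c, log_g):
          # Cross-validated negative RMSE of an SVR with the candidate parameters.
          model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
          return cross_val_score(model, t, err, cv=3,
                                 scoring="neg_root_mean_squared_error").mean()

      n, iters = 12, 15
      pos = rng.uniform([-1, -3], [3, 1], size=(n, 2))   # particles in log10 space
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(*p) for p in pos])
      gbest = pbest[pbest_val.argmax()]

      for _ in range(iters):
          r1, r2 = rng.random((n, 1)), rng.random((n, 1))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, [-1, -3], [3, 1])
          vals = np.array([fitness(*p) for p in pos])
          improved = vals > pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmax()]

      print("best (log10 C, log10 gamma):", gbest)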

  7. Quantum steganography and quantum error-correction

    NASA Astrophysics Data System (ADS)

    Shaw, Bilal A.

    Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.

  8. The Effects of Classic and Web-Designed Conceptual Change Texts on the Subject of Water Chemistry

    ERIC Educational Resources Information Center

    Tas, Erol; Gülen, Salih; Öner, Zeynep; Özyürek, Cengiz

    2015-01-01

    The purpose of this study is to research the effects of traditional and web-assisted conceptual change texts for the subject of water chemistry on the success, conceptual errors and permanent learning of students. A total of 37 8th graders in a secondary school of Samsun participated in this study which had a random experimental design with…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.

    The classical Nadaraya-Watson estimator is shown to solve a generic sensor fusion problem where the underlying sensor error densities are not known but a sample is available. By employing Haar kernels this estimator is shown to yield finite sample guarantees and also to be efficiently computable. Two simulation examples, and a robotics example involving the detection of a door using arrays of ultrasonic and infrared sensors, are presented to illustrate the performance.
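
    A direct sketch of the estimator with a Haar (box) kernel follows: the fused estimate at each query point is a locally weighted average of the sampled sensor readings. The data here are synthetic stand-ins for sensor samples.

      import numpy as np

      def nadaraya_watson(x_train, y_train, x_query, h=0.25):
          """Nadaraya-Watson estimate with a Haar (box) kernel of half-width h:
          a locally weighted average of the training responses."""
          w = (np.abs(x_query[:, None] - x_train[None, :]) <= h).astype(float)
          w_sum = w.sum(axis=1)
          return np.divide((w * y_train).sum(axis=1), w_sum,
                           out=np.zeros_like(w_sum), where=w_sum > 0)

      rng = np.random.default_rng(5)
      x = rng.uniform(0, 1, 300)
      y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(300)
      xq = np.linspace(0, 1, 5)
      print(nadaraya_watson(x, y, xq))   # approximates sin(2*pi*xq)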

  10. Quantum cryptography protocols robust against photon number splitting attacks for weak laser pulse implementations.

    PubMed

    Scarani, Valerio; Acín, Antonio; Ribordy, Grégoire; Gisin, Nicolas

    2004-02-06

    We introduce a new class of quantum key distribution protocols, tailored to be robust against photon number splitting (PNS) attacks. We study one of these protocols, which differs from the original protocol by Bennett and Brassard (BB84) only in the classical sifting procedure. This protocol is provably better than BB84 against PNS attacks at zero error.

  11. An Examination in Turkey: Error Analysis of Mathematics Students on Group Theory

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Ozkan, Ayten; Ozkan, E. Mehmet

    2015-01-01

    The aim of this study is to analyze the mistakes that have been made in the group theory underlying the algebra mathematics. The 100 students taking algebra math 1 class and studying at the 2nd grade at a state university in Istanbul participated in this study. The related findings were prepared as a classical exam of 6 questions which have been…

  12. Publisher's Note: System of classical nonlinear oscillators as a coarse-grained quantum system [Phys. Rev. A 84, 022103 (2011)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radonjic, Milan; Prvanovic, Slobodan; Buric, Nikola

    2011-08-15

    This paper was published online on 2 August 2011 with a typographical error in an author name in the author list. The first author's name should be 'Milan Radonjić'. The name has been corrected as of 16 August 2011. The name is correct in the printed version of the journal.

  13. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
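
    The computation is short enough to state directly: for two repeats per subject, the within-subject standard deviation is sqrt(mean(d^2)/2) for the paired differences d, and the repeatability is 2.77 times that value. The measurements below are made up.

      import numpy as np

      measurements = np.array([          # rows = subjects, columns = repeated measurements
          [12.1, 12.4],
          [15.0, 14.6],
          [ 9.8, 10.3],
          [13.2, 13.1],
      ])
      d = measurements[:, 0] - measurements[:, 1]
      s_w = np.sqrt(np.mean(d ** 2) / 2.0)   # within-subject SD for paired repeats
      print("within-subject SD:", s_w)
      print("repeatability:", 2.77 * s_w)    # bound on |difference| of two repeats, ~95%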

  14. Precise signal amplitude retrieval for a non-homogeneous diagnostic beam using complex interferometry approach

    NASA Astrophysics Data System (ADS)

    Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.

    2017-08-01

    Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, allowing reconstruction of three sets of data from a single specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modification of the probing part of the diagnostic beam caused by the object under study (here called the signal amplitude) as well as the contrast of the interference fringes can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division), the probe and the reference parts of the diagnostic beam feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram alone. To deal with this error, additional separately recorded images of the intensity distribution of the probe and the reference parts of the diagnostic beam (with no signal present) are needed. For the best results, sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference parts of the diagnostic beam. These data sets are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.
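
    One standard route to retrieving both the phase shift and the signal amplitude from a carrier-frequency interferogram is Fourier-transform fringe analysis, sketched below in 1-D; the complex-interferometry procedure described above additionally uses a reference interferogram and separately recorded beam-intensity images, which this toy omits.

      import numpy as np

      N = 1024
      x = np.linspace(0, 1, N, endpoint=False)
      f0 = 64                                        # carrier fringes across the frame
      phase = 2.0 * np.sin(2 * np.pi * x)            # "object" phase shift
      amp = 0.8 + 0.2 * np.cos(2 * np.pi * 3 * x)    # signal amplitude to retrieve
      fringes = 1.0 + amp * np.cos(2 * np.pi * f0 * x + phase)

      F = np.fft.fft(fringes)
      mask = np.zeros(N)
      mask[f0 - 32:f0 + 32] = 1.0                    # keep only the +f0 sideband
      c = np.fft.ifft(F * mask)                      # complex analytic fringe signal

      amp_rec = 2.0 * np.abs(c)                                # recovered amplitude
      phase_rec = np.unwrap(np.angle(c)) - 2 * np.pi * f0 * x  # recovered phase
      print(np.max(np.abs(amp_rec[64:-64] - amp[64:-64])))     # small away from edges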

  15. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a kind of high-precision angle-position sensor. It has important applications in servo tables, precision machine tools, and other products. The precision of an inductosyn is calibrated by its error, so error measurement is an important problem in both the production and the application of the inductosyn. At present, the error of an inductosyn is mainly obtained by manual measurement, which has disadvantages that cannot be ignored: high labor intensity for the operator, errors that are easily introduced, poor repeatability, and so on. In order to solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal can be obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. Measurement and calculation errors caused by human factors are avoided by this method, and it makes the measuring process faster, more exact, and more reliable. Experiments show that the accuracy of the error-measuring system is 1.1 arc-seconds (peak-to-peak).

  16. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response, or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
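
    The core of such a simulation study fits in a few lines: add response measurement error to a two-group experiment, analyze it with a standard t-test, and watch power fall as the error grows and partially recover with repeat measurements. Effect sizes and error magnitudes below are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      def power(sigma_me, n=10, n_rep=1, effect=1.0, sims=2000):
          """Empirical power of a two-sample t-test with additive response
          measurement error of SD sigma_me, averaging n_rep repeats per unit."""
          hits = 0
          for _ in range(sims):
              a_true = rng.standard_normal(n)            # true responses, group A
              b_true = effect + rng.standard_normal(n)   # true responses, group B
              a_obs = a_true[:, None] + sigma_me * rng.standard_normal((n, n_rep))
              b_obs = b_true[:, None] + sigma_me * rng.standard_normal((n, n_rep))
              _, p = stats.ttest_ind(a_obs.mean(axis=1), b_obs.mean(axis=1))
              hits += p < 0.05
          return hits / sims

      print(power(0.0), power(2.0), power(2.0, n_rep=4))   # error hurts; repeats help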

  17. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response, or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Chinese Digital Zenith Telescope (DZT) used for Astro-geodetic Deflection of the Vertical Determination

    NASA Astrophysics Data System (ADS)

    Tian, L.; Wang, B.; Wang, Z.; Yin, Z.; Hu, H.; Wang, H.; Han, Y.

    2015-12-01

    Classical optical astrometry can be used to measure and study variations of the plumb line. For Earth-gravity-field-related research it cannot be replaced by technologies such as GNSS, VLBI, SLR, etc. However, classical astrometric instruments have some major drawbacks, such as low efficiency, low automation, the need for several operators, and individual error in some visual instruments. In 2011, the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) successfully developed a new digital zenith telescope prototype (DZT-1), which is capable of highly automatic observation and data processing and even allows unattended observation by remote control. By using a CCD camera as the imaging terminal and a high-accuracy tiltmeter in place of the mercury plate, the observation efficiency of the DZT is greatly improved. According to results from test observations, the single-observation accuracy of DZT-1 is 0.15-0.3″, and the accuracy of one night of observation reaches 0.07-0.08″, better than the observation accuracy of classical astrometric instruments. DZT observations can be used to obtain plumb-line variations and vertical deflections, which in turn support seismic, geodetic and other geo-scientific research. In particular, collocated observations with gravimeters, together with joint analysis of the observation data, will help to recognize the anomalous motion and variation of underground mass over time, and may provide significant information for estimating the scale of underground anomalous mass; such information is potentially valuable for determining the three key factors of an earthquake. Moreover, the project team is developing a new DZT with better performance and studying the key techniques for the new instrument, so that the DZT can play a more significant role in astronomy and the geosciences.
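
    For reference, the deflection-of-the-vertical components that such observations yield follow the standard relations below; the function interface is an illustrative sketch, not the DZT pipeline:

    ```python
    import numpy as np

    def vertical_deflection(phi_astro, lam_astro, phi_geo, lam_geo):
        """Deflection-of-the-vertical components from astronomical
        (plumb-line) and geodetic (e.g., GNSS) coordinates in degrees:
            xi  = Phi_astro - phi_geodetic                (north-south)
            eta = (Lambda_astro - lambda_geodetic)*cos(phi)  (east-west)
        Returned in arc-seconds."""
        xi = (phi_astro - phi_geo) * 3600.0
        eta = (lam_astro - lam_geo) * np.cos(np.radians(phi_geo)) * 3600.0
        return xi, eta
    ```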

  18. Non-orthogonal tool/flange and robot/world calibration.

    PubMed

    Ernst, Floris; Richter, Lars; Matthäus, Lars; Martens, Volker; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-12-01

    For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB, computed using a least-squares approach. Because real robots and localisation devices are all afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a new method in which full robot/world and partial tool/flange calibration is possible using localisation devices that provide fewer than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. Copyright © 2012 John Wiley & Sons, Ltd.
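
    The AX = YB formulation is linear in the entries of X and Y once orthogonality is not enforced, so a least-squares estimate can be sketched with a Kronecker-product construction; a simplified illustration of the idea, not the authors' code:

    ```python
    import numpy as np

    def calibrate_ax_yb(As, Bs):
        """Least-squares solution of A_i X = Y B_i for 4x4 homogeneous
        transforms without enforcing orthogonality. Each pair yields
        16 linear equations via
            vec(A_i X) = (I_4 kron A_i) vec(X)
            vec(Y B_i) = (B_i^T kron I_4) vec(Y)
        At least three pose pairs with varied rotations are needed."""
        rows = []
        for A, B in zip(As, Bs):
            rows.append(np.hstack([np.kron(np.eye(4), A),
                                   -np.kron(B.T, np.eye(4))]))
        M = np.vstack(rows)                    # (16*n, 32) system M v = 0
        _, _, Vt = np.linalg.svd(M)
        v = Vt[-1]                             # null vector (up to scale)
        X = v[:16].reshape(4, 4, order="F")    # column-major = vec
        Y = v[16:].reshape(4, 4, order="F")
        # fix the arbitrary overall scale via the homogeneous corner
        return X / X[3, 3], Y / Y[3, 3]
    ```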

  19. Error-compensation model for simultaneous measurement of five degrees of freedom motion errors of a rotary axis

    NASA Astrophysics Data System (ADS)

    Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin

    2018-07-01

    This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error εz and the tilt motion error around the Y axis, εy, are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δy and δz, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis, εx, is 3.8″.
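
    A generic sketch of the homogeneous-coordinate building block that such models compose, using the first-order small-angle form (the paper's exact error model is not reproduced here):

    ```python
    import numpy as np

    def error_transform(dx, dy, dz, ex, ey, ez):
        """Small-motion homogeneous transform: translations (dx, dy, dz)
        and small rotations (ex, ey, ez, rad) about X, Y, Z to first
        order. Composing such 4x4 matrices propagates installation and
        motion errors through the measurement chain."""
        return np.array([[1.0, -ez,  ey,  dx],
                         [ ez, 1.0, -ex,  dy],
                         [-ey,  ex, 1.0,  dz],
                         [0.0, 0.0, 0.0, 1.0]])
    ```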

  1. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  2. Complementarity and Correlations

    NASA Astrophysics Data System (ADS)

    Maccone, Lorenzo; Bruß, Dagmar; Macchiavello, Chiara

    2015-04-01

    We provide an interpretation of entanglement based on classical correlations between measurement outcomes of complementary properties: States that have correlations beyond a certain threshold are entangled. The reverse is not true, however. We also show that, surprisingly, all separable nonclassical states exhibit smaller correlations for complementary observables than some strictly classical states. We use mutual information as a measure of classical correlations, but we conjecture that the first result holds also for other measures (e.g., the Pearson correlation coefficient or the sum of conditional probabilities).
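
    The criterion can be illustrated numerically: for a Bell state, the mutual informations of the outcomes in two complementary bases sum to 2 bits, beyond what any separable two-qubit state can reach (the 1-bit threshold below is our reading of the criterion, stated as an assumption):

    ```python
    import numpy as np

    def joint_probs(state, basis_a, basis_b):
        # joint outcome probabilities measuring qubit A in basis_a and
        # qubit B in basis_b (basis vectors are the matrix columns)
        p = np.zeros((2, 2))
        for i in range(2):
            for j in range(2):
                proj = np.kron(basis_a[:, i], basis_b[:, j])
                p[i, j] = abs(proj.conj() @ state) ** 2
        return p

    def mutual_info(p):
        pa, pb = p.sum(axis=1), p.sum(axis=0)
        mask = p > 0
        return float(np.sum(p[mask] * np.log2(p[mask] /
                                              np.outer(pa, pb)[mask])))

    Z = np.eye(2)                                 # computational basis
    X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # complementary X basis

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # maximally entangled
    total = (mutual_info(joint_probs(bell, Z, Z)) +
             mutual_info(joint_probs(bell, X, X)))
    print(total)  # 2.0 bits; separable states stay at or below ~1 bit
    ```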

  3. Differential quantitative proteomics of Porphyromonas gingivalis by linear ion trap mass spectrometry: non-label methods comparison, q-values and LOWESS curve fitting

    PubMed Central

    Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray

    2009-01-01

    Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label free ways of calculating expression ratios in a classic “two-state” experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein encoding open reading frames from Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error. PMID:19337574
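
    A minimal sketch of the LOWESS bounding idea using statsmodels, with the selection of extreme ratios from the control replicate simplified for illustration:

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def error_bounds_from_replicate(log_abundance, log_ratio, frac=0.4):
        """Smooth the extreme log ratios of a control technical replicate
        against abundance to bound the region of random error; ratios
        from real comparisons that fall outside the band are candidates
        for genuine differential expression."""
        hi = lowess(np.maximum(log_ratio, 0.0), log_abundance,
                    frac=frac, return_sorted=True)
        lo = lowess(np.minimum(log_ratio, 0.0), log_abundance,
                    frac=frac, return_sorted=True)
        return lo, hi   # columns: sorted abundance, smoothed bound
    ```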

  4. Rapid parameterization of small molecules using the Force Field Toolkit.

    PubMed

    Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C

    2013-12-15

    The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, the General Amber Force Field and the CHARMM General Force Field, has provided guidelines for the parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, set up multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.

  5. Forecasting the Water Demand in Chongqing, China Using a Grey Prediction Model and Recommendations for the Sustainable Development of Urban Water Consumption.

    PubMed

    Wu, Hua'an; Zeng, Bo; Zhou, Meng

    2017-11-15

    High accuracy in water demand prediction is an important basis for the rational allocation of city water resources and for sustainable urban development. The shortage of water resources in Chongqing, the youngest central municipality in Southwest China, has increased significantly with population growth and rapid economic development. In this paper, a new grey water-forecasting model (GWFM) was built based on the data characteristics of water consumption. The parameter estimation and error-checking methods of the GWFM model were investigated. The GWFM model was then employed to simulate the water demand of Chongqing from 2009 to 2015 and to forecast it for 2016. The simulation and prediction errors of the GWFM model were checked, and the results show that the GWFM model exhibits better simulation and prediction precision than the classical grey model with one variable and a single-order equation, GM(1,1) for short, and the frequently used discrete grey model with one variable and a single-order equation, DGM(1,1) for short. Finally, the water demand in Chongqing from 2017 to 2022 was forecasted, and corresponding control measures and recommendations were provided based on the prediction results to ensure a viable water supply and promote the sustainable development of the Chongqing economy.
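
    For comparison, the classical GM(1,1) baseline mentioned in the abstract can be written compactly; this is the textbook construction, not the GWFM itself:

    ```python
    import numpy as np

    def gm11_forecast(x0, steps=1):
        """Classical GM(1,1): fit dx1/dt + a*x1 = b on the accumulated
        series x1 = cumsum(x0), then forecast via the discrete solution
        x1_hat(k+1) = (x0(1) - b/a) * exp(-a*k) + b/a."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                       # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(1, len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
        return x0_hat[-steps:]                   # forecasts beyond data
    ```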

  6. Investigations of an Accelerometer-based Disturbance Feedforward Control for Vibration Suppression in Adaptive Optics of Large Telescopes

    NASA Astrophysics Data System (ADS)

    Glück, Martin; Pott, Jörg-Uwe; Sawodny, Oliver

    2017-06-01

    Adaptive optics (AO) systems in large telescopes not only correct atmospheric phase disturbances but also compensate for telescope structure vibrations induced by wind or telescope motion. Often the additional wavefront error due to mirror vibrations can dominate the disturbance power and contribute significantly to the total tip-tilt Zernike mode error budget. Presently, these vibrations are compensated for by common feedback control laws. However, when observing faint natural guide stars (NGS) at reduced control bandwidth, high-frequency vibrations (>5 Hz) cannot be fully compensated for by feedback control. In this paper, we present an additional accelerometer-based disturbance feedforward control (DFF), which is independent of the NGS wavefront sensor exposure time and enlarges the “effective servo bandwidth”. The DFF is studied in a realistic AO end-to-end simulation and compared with commonly used suppression concepts. For observations in the faint (>13 mag) NGS regime, we obtain a Strehl ratio larger by a factor of two to four than with classical feedback control. The simulation realism is verified with real measurement data from the Large Binocular Telescope (LBT); application to on-sky testing at the LBT and an implementation at the E-ELT in the MICADO instrument are discussed.
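
    A minimal sketch of the accelerometer-based feedforward idea, assuming a high-pass filter plus double integration to turn acceleration into a position correction (filter order, cutoff, and gain are placeholder assumptions, not the paper's design):

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    def feedforward_correction(accel, fs, f_hp=5.0, gain=1.0):
        """High-pass the accelerometer signal (vibrations above f_hp
        survive, sensor drift does not), then double-integrate to an
        estimated mirror displacement; the negated estimate is added to
        the tip-tilt command independently of the WFS exposure time."""
        b, a = butter(2, f_hp / (fs / 2), btype="highpass")
        acc = lfilter(b, a, accel)
        dt = 1.0 / fs
        vel = np.cumsum(acc) * dt        # first integration
        pos = np.cumsum(vel) * dt        # second integration
        return -gain * pos               # feedforward correction
    ```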

  7. Extensions of the Ferry shear wave model for active linear and nonlinear microrheology

    PubMed Central

    Mitran, Sorin M.; Forest, M. Gregory; Yao, Lingxing; Lindley, Brandon; Hill, David B.

    2009-01-01

    The classical oscillatory shear wave model of Ferry et al. [J. Polym. Sci. 2:593-611, (1947)] is extended for active linear and nonlinear microrheology. In the Ferry protocol, oscillation and attenuation lengths of the shear wave measured from strobe photographs determine storage and loss moduli at each frequency of plate oscillation. The microliter volumes typical in biology require modifications of experimental method and theory. Microbead tracking replaces strobe photographs. Reflection from the top boundary yields counterpropagating modes which are modeled here for linear and nonlinear viscoelastic constitutive laws. Furthermore, bulk imposed strain is easily controlled, and we explore the onset of normal stress generation and shear thinning using nonlinear viscoelastic models. For this paper, we present the theory, exact linear and nonlinear solutions where possible, and simulation tools more generally. We then illustrate errors in inverse characterization by application of the Ferry formulas, due to both suppression of wave reflection and nonlinearity, even if there were no experimental error. This shear wave method presents an active and nonlinear analog of the two-point microrheology of Crocker et al. [Phys. Rev. Lett. 85: 888 - 891 (2000)]. Nonlocal (spatially extended) deformations and stresses are propagated through a small volume sample, on wavelengths long relative to bead size. The setup is ideal for exploration of nonlinear threshold behavior. PMID:20011614
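
    The linear building block of the extended model, including the reflected counterpropagating mode, can be written down exactly; a sketch for a bottom plate oscillating under a finite layer with a stress-free top surface (boundary conditions assumed for illustration):

    ```python
    import numpy as np

    def shear_displacement(y, t, omega, G_star, rho, H, U0=1.0):
        """Displacement in a linear viscoelastic layer of depth H driven
        by a plate at y = 0 oscillating at frequency omega, with zero
        stress (du/dy = 0) at y = H, so reflection is included:
            u(y, t) = Re[U0 * cos(k*(H - y)) / cos(k*H) * exp(i*omega*t)]
        with complex wavenumber k = omega * sqrt(rho / G*)."""
        k = omega * np.sqrt(rho / (G_star + 0j))
        return np.real(U0 * np.cos(k * (H - y)) / np.cos(k * H)
                       * np.exp(1j * omega * t))
    ```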

  8. Twin study on heritability of activity, attention, and impulsivity as assessed by objective measures.

    PubMed

    Heiser, Philip; Heinzel-Gutenbrunner, Monika; Frey, Joachim; Smidt, Judith; Grabarkiewicz, Justyna; Friedel, Susann; Kühnau, Wolfgang; Schmidtke, Jörg; Remschmidt, Helmut; Hebebrand, Johannes

    2006-05-01

    The purpose of this study was to assess the heritability of activity, attention, and impulsivity by comparing young monozygotic (MZ) twins with dizygotic (DZ) twins using objective measures. The OPTAx test uses infrared motion analysis to record the movement pattern during a continuous performance test. Seventeen MZ and 12 same-sex DZ twin pairs aged 6 to 12 years were tested. Zygosity was determined by DNA fingerprinting. The measures under investigation were activity (microevents and spatial scaling), impulsivity (errors of commission), and attention (accuracy and variability). For the statistical analyses, the classical model of Falconer and the ACE and ADE genetic models for twin data were applied in order to estimate the proportion of the variance in activity, impulsivity and attention that is due to genetic effects. The respective coefficients of intraclass correlation in MZ twins ranged between .35 and .65, whereas for DZ twins the correlations were between .12 and .88. The heritability estimates resulting from both models were about 30% for 4 of the 5 measures, but none of these was significantly different from 0. We found no significant influence of genetic factors on activity, attention, and impulsivity. The authors conclude that further investigation of the heritability of ADHD is necessary, using larger sample sizes and objective measures.
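
    The classical model of Falconer reduces to a simple contrast of the intraclass correlations; a textbook sketch:

    ```python
    def falconer_heritability(r_mz, r_dz):
        """Falconer's classical twin-design decomposition:
        h^2 = 2*(r_MZ - r_DZ) (additive genetic),
        c^2 = 2*r_DZ - r_MZ   (shared environment),
        e^2 = 1 - r_MZ        (unique environment)."""
        return 2 * (r_mz - r_dz), 2 * r_dz - r_mz, 1 - r_mz

    # e.g. intraclass correlations in the reported ranges:
    print(falconer_heritability(0.50, 0.35))   # h^2 = 0.30
    ```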

  9. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the inputs being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization through a constrained nonlinear optimization model that treats both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamics limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing time. Simulations are presented to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the velocity requirements and the acceleration and deceleration limitations. The model and algorithm can also be applied to other computer-controlled subaperture methods.
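
    One way to realize the combined objective is to stack the residual and gradient terms into a single nonnegative least-squares problem; a sketch under that reading of the model (the regularization weight and operators are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def dwell_time(R, e, lam=1e-2):
        """Minimize ||R d - e||^2 + lam*||D d||^2 subject to d >= 0,
        where R maps dwell time to material removal (influence-function
        matrix), e is the surface error, and D is a first-difference
        operator penalizing steep dwell-time gradients, a proxy for
        machine acceleration limits."""
        n = R.shape[1]
        D = np.diff(np.eye(n), axis=0)               # gradient operator
        A = np.vstack([R, np.sqrt(lam) * D])
        b = np.concatenate([e, np.zeros(n - 1)])
        return lsq_linear(A, b, bounds=(0, np.inf)).x  # nonnegative dwell
    ```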

  10. Control of noisy quantum systems: Field-theory approach to error mitigation

    NASA Astrophysics Data System (ADS)

    Hipolito, Rafael; Goldbart, Paul M.

    2016-04-01

    We consider the basic quantum-control task of obtaining a target unitary operation (i.e., a quantum gate) via control fields that couple to the quantum system and are chosen to best mitigate errors resulting from time-dependent noise, which frustrates this task. We allow for two sources of noise: fluctuations in the control fields and fluctuations arising from the environment. We address the issue of control-error mitigation by means of a formulation rooted in the Martin-Siggia-Rose (MSR) approach to noisy, classical statistical-mechanical systems. To do this, we express the noisy control problem in terms of a path integral, and integrate out the noise to arrive at an effective, noise-free description. We characterize the degree of success in error mitigation via a fidelity metric, which characterizes the proximity of the sought-after evolution to ones that are achievable in the presence of noise. Error mitigation is then best accomplished by applying the optimal control fields, i.e., those that maximize the fidelity subject to any constraints obeyed by the control fields. To make connection with MSR, we reformulate the fidelity in terms of a Schwinger-Keldysh (SK) path integral, with the added twist that the "forward" and "backward" branches of the time contour are inequivalent with respect to the noise. The present approach naturally and readily allows the incorporation of constraints on the control fields, a useful feature in practice, given that constraints feature in all real experiments. In addition to addressing the noise average of the fidelity, we consider its full probability distribution. The information content present in this distribution allows one to address more complex questions regarding error mitigation, including, in principle, questions of extreme value statistics, i.e., the likelihood and impact of rare instances of the fidelity and how to harness or cope with their influence. We illustrate this MSR-SK reformulation by considering a model system consisting of a single spin-s degree of freedom (with s arbitrary), focusing on the case of 1/f noise in the weak-noise limit. We discover that optimal error mitigation is accomplished via a universal control-field protocol that is valid for all s, from the qubit (i.e., s = 1/2) case to the classical (i.e., s → ∞) limit. In principle, this MSR-SK approach provides a transparent framework for addressing quantum control in the presence of noise for systems of arbitrary complexity.

  11. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem studied here, the truth model uses gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. A simple analog of a distributed space surveillance network is used: the sensors make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
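
    One plausible reading of the construction, for a batch weighted-least-squares estimator, is sketched below; the scaling by the average weighted residual variance follows the abstract's description, while the details are assumptions:

    ```python
    import numpy as np

    def wls_with_empirical_covariance(H, z, W):
        """Batch WLS: x_hat = (H^T W H)^-1 H^T W z. The theoretical
        covariance maps only the assumed observation noise into state
        space; scaling it by the average weighted residual variance
        lets unmodeled errors inflate the reported uncertainty."""
        N = H.T @ W @ H
        x_hat = np.linalg.solve(N, H.T @ W @ z)
        r = z - H @ x_hat                   # measurement residuals
        P_theory = np.linalg.inv(N)
        s2 = (r @ W @ r) / len(z)           # average form of the index
        return x_hat, P_theory, s2 * P_theory
    ```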

  12. Measurement error corrected sodium and potassium intake estimation using 24-hour urinary excretion.

    PubMed

    Huang, Ying; Van Horn, Linda; Tinker, Lesley F; Neuhouser, Marian L; Carbone, Laura; Mossavar-Rahmani, Yasmin; Thomas, Fridtjof; Prentice, Ross L

    2014-02-01

    Epidemiological studies of the association of sodium and potassium intake with cardiovascular disease risk have almost exclusively relied on self-reported dietary data. Here, 24-hour urinary excretion assessments are used to correct the dietary self-report data for measurement error under the assumption that 24-hour urine recovery provides a biomarker that differs from usual intake according to a classical measurement model. Under this assumption, dietary self-reports underestimate sodium by 0% to 15%, overestimate potassium by 8% to 15%, and underestimate sodium/potassium ratio by ≈20% using food frequency questionnaires, 4-day food records, or three 24-hour dietary recalls in Women's Health Initiative studies. Calibration equations are developed by linear regression of log-transformed 24-hour urine assessments on corresponding log-transformed self-report assessments and several study subject characteristics. For each self-report method, the calibration equations turned out to depend on race and age and strongly on body mass index. After adjustment for temporal variation, calibration equations using food records or recalls explained 45% to 50% of the variation in (log-transformed) 24-hour urine assessments for sodium, 60% to 70% of the variation for potassium, and 55% to 60% of the variation for sodium/potassium ratio. These equations may be suitable for use in epidemiological disease association studies among postmenopausal women. The corresponding signals from food frequency questionnaire data were weak, but calibration equations for the ratios of sodium and potassium/total energy explained ≈35%, 50%, and 45% of log-biomarker variation for sodium, potassium, and their ratio, respectively, after the adjustment for temporal biomarker variation and may be suitable for cautious use in epidemiological studies. Clinical Trial Registration- URL: www.clinicaltrials.gov. Unique identifier: NCT00000611.
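
    The calibration step is, at its core, a regression of the log biomarker on the log self-report plus subject characteristics; a sketch with illustrative variable names (not the study's code):

    ```python
    import numpy as np
    import statsmodels.api as sm

    def calibration_equation(urine_24h, self_report, bmi, age):
        """Regress log 24-h urine sodium (the biomarker, assumed to
        follow a classical measurement model) on log self-reported
        intake plus covariates; fitted values serve as measurement-
        error-corrected intake estimates."""
        X = sm.add_constant(np.column_stack(
            [np.log(self_report), bmi, age]))
        model = sm.OLS(np.log(urine_24h), X).fit()
        return model   # model.predict(X) gives calibrated log intake
    ```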

  13. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    InSAR is an important technique for large-area DEM extraction, and several factors have a significant influence on the accuracy of its height measurements. In this research, the effect of slant-range measurement error on InSAR height measurement was analysed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived, assuming no coupling among the different factors, which directly characterises the relationship between slant-range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then carried out to quantitatively evaluate the influence of slant-range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement are further discussed and evaluated.
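
    Under a simplified flat-earth geometry (an assumption for illustration, not the paper's full propagation model), the sensitivity of height to slant range is just the cosine of the look angle:

    ```python
    import numpy as np

    # Flat-earth geometry: target height h = H - R*cos(theta), so a
    # slant-range error dR propagates as dh = -cos(theta) * dR.
    theta = np.radians(35.0)   # look angle, TanDEM-X-like (assumed)
    sigma_R = 1.0              # slant-range error [m] (assumed)
    sigma_h = np.cos(theta) * sigma_R
    print(f"height error per metre of slant-range error: {sigma_h:.2f} m")
    ```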

  14. The Global Error Assessment (GEA) model for the selection of differentially expressed genes in microarray data.

    PubMed

    Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan

    2004-11-01

    Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining as an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Gene expression results of a similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates an increased stability, robustness and confidence in gene selection. A subset of the selected genes were validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. The GEA code for R software is freely available upon request to authors.
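
    A sketch of the binning-and-pooling idea at the heart of GEA, with an illustrative interface:

    ```python
    import numpy as np

    def gea_statistics(m1, m2, resid_var, k, n_bins=20):
        """Bin genes by average absolute expression, replace each gene's
        poorly estimated variance (low replicate count k) with its bin's
        pooled mean squared error, and form an ANOVA-like statistic per
        gene. Interface and bin count are illustrative assumptions."""
        mean_expr = 0.5 * (m1 + m2)
        edges = np.unique(np.quantile(mean_expr,
                                      np.linspace(0, 1, n_bins + 1)[1:-1]))
        bins = np.digitize(mean_expr, edges)
        pooled = np.array([resid_var[bins == b].mean()
                           for b in range(len(edges) + 1)])
        se = np.sqrt(pooled[bins] * (2.0 / k))  # SE of the mean difference
        return (m1 - m2) / se                   # z-like statistic per gene
    ```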

  15. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors that measure error disclosure culture: one focused on the general culture of error disclosure and the second focused on trust. Both error disclosure culture factors were distinct from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively), while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  16. Detection of stably bright squeezed light with the quantum noise reduction of 12.6  dB by mutually compensating the phase fluctuations.

    PubMed

    Yang, Wenhai; Shi, Shaoping; Wang, Yajun; Ma, Weiguang; Zheng, Yaohui; Peng, Kunchi

    2017-11-01

    We present a mutual compensation scheme for three phase fluctuations, originating from residual amplitude modulation (RAM) in the phase modulation process, in a bright-squeezed-light generation system. The influence of the RAM on each locking loop is harmonized by using one electro-optic modulator (EOM), and the direction of the phase fluctuation is manipulated by positioning the photodetector (PD) that extracts the error signal before or after the optical parametric amplifier (OPA). A bright squeezed light with a non-classical noise reduction of 12.6 dB is thereby obtained. By fitting the squeezing and antisqueezing measurement results, we confirm that the total phase fluctuation of the system is around 3.1 mrad. The fluctuation of the noise suppression is 0.2 dB over 3 h.
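
    The fitting step rests on the standard relation between residual phase fluctuation and the detected squeezing/antisqueezing levels; a sketch using that textbook model (the formula is assumed from the standard treatment, not taken from the paper):

    ```python
    import numpy as np

    def detected_variances(V_s, V_a, theta_rms, eta=1.0):
        """Detected squeezed/antisqueezed variances (relative to shot
        noise) when a small residual phase fluctuation theta_rms (rad)
        mixes the quadratures, with detection efficiency eta:
            V_meas = eta*(V*cos^2 + V_other*sin^2) + (1 - eta)
        Inverting this against measured dB levels is one way to infer
        the total phase fluctuation."""
        c2, s2 = np.cos(theta_rms) ** 2, np.sin(theta_rms) ** 2
        V_sq = eta * (V_s * c2 + V_a * s2) + (1 - eta)
        V_anti = eta * (V_a * c2 + V_s * s2) + (1 - eta)
        return 10 * np.log10(V_sq), 10 * np.log10(V_anti)
    ```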

  17. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  18. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
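
    For orientation, the population quantity being estimated can be stated compactly; a sketch of the plug-in (non-Bayesian) discrete CoD computed from a joint pmf:

    ```python
    import numpy as np

    def discrete_cod(p_xy):
        """Discrete CoD = (e0 - e) / e0, where e0 is the error of the
        best constant predictor of Y and e is the error of the optimal
        predictor of Y given X. Rows of p_xy index X, columns index Y."""
        p_y = p_xy.sum(axis=0)
        e0 = 1.0 - p_y.max()                     # best constant predictor
        e = float((p_xy.sum(axis=1) - p_xy.max(axis=1)).sum())
        return (e0 - e) / e0 if e0 > 0 else 0.0
    ```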

  19. Microwave quantum illumination.

    PubMed

    Barzanjeh, Shabir; Guha, Saikat; Weedbrook, Christian; Vitali, David; Shapiro, Jeffrey H; Pirandola, Stefano

    2015-02-27

    Quantum illumination is a quantum-optical sensing technique in which an entangled source is exploited to improve the detection of a low-reflectivity object that is immersed in a bright thermal background. Here, we describe and analyze a system for applying this technique at microwave frequencies, a more appropriate spectral region for target detection than the optical, due to the naturally occurring bright thermal background in the microwave regime. We use an electro-optomechanical converter to entangle microwave signal and optical idler fields, with the former being sent to probe the target region and the latter being retained at the source. The microwave radiation collected from the target region is then phase conjugated and upconverted into an optical field that is combined with the retained idler in a joint-detection quantum measurement. The error probability of this microwave quantum-illumination system, or quantum radar, is shown to be superior to that of any classical microwave radar of equal transmitted energy.
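
    The scale of the quantum advantage can be illustrated with the standard error-exponent bounds from the quantum-illumination literature (the factor-4 exponent gap and all parameter values below are assumptions for illustration, valid in the low-signal, bright-background regime):

    ```python
    import numpy as np

    # reflectivity, signal photons/mode, background photons/mode, modes
    kappa, N_S, N_B, M = 0.01, 0.01, 20.0, 1e6
    # error-probability bounds for an ideal coherent-state (classical)
    # radar vs quantum illumination of equal transmitted energy
    P_classical = 0.5 * np.exp(-M * kappa * N_S / (4 * N_B))
    P_quantum = 0.5 * np.exp(-M * kappa * N_S / N_B)
    print(P_classical, P_quantum)   # the 6 dB exponent advantage
    ```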

  20. Performance of patients with schizophrenia on the Wisconsin Card Sorting Test (WCST).

    PubMed

    Everett, J; Lavoie, K; Gagnon, J F; Gosselin, N

    2001-03-01

    To directly compare the performance of patients with schizophrenia and control subjects on the Wisconsin Card Sorting Test (WCST). Specifically, we sought to verify if there are significant differences on the "classical" WCST measurements (perseverative errors and number of categories), as well as on more rarely reported scores, and assess the extent to which patients with schizophrenia can improve their performance with card-by-card instructions and continuous verbal reinforcement. Prospective cross-sectional study. Psychiatry department in a university-affiliated hospital. 30 patients with schizophrenia, diagnosed according to DSM-IV criteria, and 30 control subjects, matched to patients according to age and education. The WCST was administered according to the criteria of Heaton, and a subgroup of the patients with schizophrenia was given a retest after an explanation of the WCST and verbal reinforcements. Patients with schizophrenia succeeded on fewer categories (t = 23.3, p < 0.001), committed more perseverative errors (t = 15.6, p < 0.001), made more perseverative responses (t = 14.6, p < 0.001), needed more trials to succeed at the first category (t = 9.2, p < 0.003) and gave significantly lower conceptual level responses (t = 14.1, p < 0.001) than the controls. However, on retest, patients with schizophrenia committed significantly fewer perseverative errors (t = 5.1, p < 0.001) and showed higher conceptual level responses (t = -3.45, p < 0.003). Consistent with a hypothesis of frontal dysfunction in schizophrenia, patients with schizophrenia tend to show a perseverative deficit; however, some are able to partially overcome this deficit when given verbal reinforcement.
