Sample records for improving zero-error classical

  1. Activation of zero-error classical capacity in low-dimensional quantum systems

    NASA Astrophysics Data System (ADS)

    Park, Jeonghoon; Heo, Jun

    2018-06-01

    Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.
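
    For orientation, the quantities involved can be written in standard notation (textbook material, not taken from the paper itself). The one-shot zero-error classical capacity counts the largest set of inputs whose outputs are perfectly distinguishable, the asymptotic capacity is its regularization, and activation means that a channel with zero capacity still helps when used alongside another one:

      % Illustrative standard definitions, not the paper's own notation.
      \[
        C_0^{(1)}(\mathcal{N}) = \log \max\bigl\{\, m : \exists\, \rho_1,\dots,\rho_m
        \text{ with } \operatorname{Tr}\bigl[\mathcal{N}(\rho_i)\,\mathcal{N}(\rho_j)\bigr] = 0 \ \forall\, i \neq j \,\bigr\},
        \qquad
        C_0(\mathcal{N}) = \lim_{n\to\infty} \tfrac{1}{n}\, C_0^{(1)}\bigl(\mathcal{N}^{\otimes n}\bigr).
      \]
      % Activation in the sense of the abstract: N_1 alone is useless, yet it helps N_2.
      \[
        C_0(\mathcal{N}_1) = 0
        \quad\text{and}\quad
        C_0(\mathcal{N}_1 \otimes \mathcal{N}_2) > C_0(\mathcal{N}_2).
      \]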

  2. Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels

    NASA Astrophysics Data System (ADS)

    Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis

    2013-01-01

    We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), who is allowed to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light is shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.

  3. Zero Thermal Noise in Resistors at Zero Temperature

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes as the temperature approaches zero, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
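
    For reference, the two competing spectral densities discussed above can be written explicitly (standard textbook forms, not reproduced from the paper): Nyquist's classical result and the Callen-Welton (fluctuation-dissipation) expression, whose zero-point part survives as the temperature goes to zero.

      \[
        S_V^{\mathrm{Nyquist}}(f) = 4 k_B T R,
        \qquad
        S_V^{\mathrm{CW}}(f) = 4 R h f \left[ \frac{1}{2} + \frac{1}{e^{h f / k_B T} - 1} \right]
                             = 2 R h f \coth\!\left( \frac{h f}{2 k_B T} \right).
      \]
      % As T -> 0 the bracket tends to 1/2, leaving S_V -> 2 R h f: linear in frequency and
      % independent of temperature. This is the zero-point term whose physical reality the paper disputes.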

  4. Robustification and Optimization in Repetitive Control For Minimum Phase and Non-Minimum Phase Systems

    NASA Astrophysics Data System (ADS)

    Prasitmeeboon, Pitcha

    Repetitive control (RC) is a control method that specifically aims to drive the tracking error of control systems that execute a periodic command, or are subject to periodic disturbances of known period, to zero. It uses the error from one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from a modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degrees of model phase error at all frequencies up to the Nyquist frequency. A zero-phase cutoff filter is normally used to robustify against high-frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods that use data to make real-time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and the use of a real-time projection algorithm from adaptive control for each frequency. The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. A Min-Max cost function over the learning rate is presented. The Min-Max problem can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design method that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although it was illustrated that the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to a frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning for frequencies above that interval.
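
    As a minimal illustration of the basic repetitive-control update discussed above (a generic first-order textbook form, not any of the dissertation's specific designs; the learning gain, period length, and toy plant are hypothetical), the command for the current period is corrected using the error recorded one period earlier:

      import numpy as np

      def simulate_repetitive_control(periods=30, N=50, phi=0.6):
          """First-order RC: u_{k+1}(t) = u_k(t) + phi * e_k(t), with period length N samples."""
          t = np.arange(N)
          y_des = np.sin(2 * np.pi * t / N)        # periodic command (one period)
          d = 0.2 * np.sin(4 * np.pi * t / N)      # periodic disturbance of the same period
          gain = 0.9                               # toy static plant: y = gain * u + d
          u = np.zeros(N)
          rms = []
          for _ in range(periods):
              y = gain * u + d
              e = y_des - y
              rms.append(np.sqrt(np.mean(e**2)))
              u = u + phi * e                      # learn from the error one period back
          return rms

      rms = simulate_repetitive_control()
      print("RMS tracking error, first vs last period:", rms[0], rms[-1])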

  5. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    PubMed

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  6. Quantum cryptography protocols robust against photon number splitting attacks for weak laser pulse implementations.

    PubMed

    Scarani, Valerio; Acín, Antonio; Ribordy, Grégoire; Gisin, Nicolas

    2004-02-06

    We introduce a new class of quantum key distribution protocols, tailored to be robust against photon number splitting (PNS) attacks. We study one of these protocols, which differs from the original protocol by Bennett and Brassard (BB84) only in the classical sifting procedure. This protocol is provably better than BB84 against PNS attacks at zero error.

  7. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
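
    A toy sketch of the boost-and-extrapolate idea (Richardson-style zero-noise extrapolation applied to a made-up observable; the exponential damping model, shot counts, and numbers are illustrative assumptions, not the authors' protocol):

      import numpy as np

      def noisy_expectation(noise_scale, exact=0.75, decay=0.30, shots=200000):
          """Toy model: an expectation value damped by noise, estimated from finite shots."""
          rng = np.random.default_rng(int(noise_scale * 100))
          mean = exact * np.exp(-decay * noise_scale)   # assumed damping with noise strength
          p = (1 + mean) / 2                            # P(+1) for a +/-1-valued observable
          samples = rng.choice([1.0, -1.0], size=shots, p=[p, 1 - p])
          return samples.mean()

      # Boost the noise artificially by factors c >= 1, then extrapolate back to c = 0.
      scales = np.array([1.0, 1.5, 2.0, 2.5])
      values = np.array([noisy_expectation(c) for c in scales])
      slope, intercept = np.polyfit(scales, values, 1)  # linear fit; intercept ~ zero-error estimate

      print("raw value at c = 1   :", values[0])
      print("extrapolated to c = 0:", intercept)
      print("ideal (noise-free)   :", 0.75)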

  8. The zero-multipole summation method for estimating electrostatic interactions in molecular dynamics: analysis of the accuracy and application to liquid systems.

    PubMed

    Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki

    2014-05-21

    In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noise and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating its accuracy, parameter dependencies, and stability in applications to liquid systems. To do this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis for the discussion of the numerical investigations. It also gave a new viewpoint on the relation between the excess energy error and the damping effect of the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chlorine ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for larger values of the multipole moment l, up to l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is fully reflected in the total accuracy only if the theoretical moment l is smaller than or equal to a system-intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed lower accuracy for l = 4. We demonstrated the origins of the parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. As the moment l was raised, smaller values of the damping parameter provided more accurate results and smoother behavior with respect to the cutoff length. These features can be explained on the basis of the theoretical error analyses: the excess-energy accuracy improves with increasing l, and the total accuracy improvement within l ⩽ L is facilitated by a small damping parameter. Although the accuracy was fundamentally similar to that of the ion system, the bulk water system exhibited distinguishable quantitative behaviors. A smaller damping parameter was effective at all practical cutoff distances, a fact that can be interpreted as a reduction of the excess subset. A lower moment was advantageous for the energy accuracy, with l = 1 slightly superior to l = 2 in this system. However, the method with l = 2 (viz., the zero-quadrupole sum) gave accurate results for the radial distribution function. We confirmed the stability of the numerical integration for MD simulations employing the ZM scheme. This result is supported by the sufficient smoothness of the energy function. Along with this smoothness, the pairwise form of the energy formula and its compatibility with an atom-based cutoff mode lead to an exactly zero total force, ensuring total-momentum conservation for typical MD equations of motion.

  9. The zero-multipole summation method for estimating electrostatic interactions in molecular dynamics: Analysis of the accuracy and application to liquid systems

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki

    2014-05-01

    In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noise and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating its accuracy, parameter dependencies, and stability in applications to liquid systems. To do this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis for the discussion of the numerical investigations. It also gave a new viewpoint on the relation between the excess energy error and the damping effect of the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chlorine ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for larger values of the multipole moment l, up to l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is fully reflected in the total accuracy only if the theoretical moment l is smaller than or equal to a system-intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed lower accuracy for l = 4. We demonstrated the origins of the parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. As the moment l was raised, smaller values of the damping parameter provided more accurate results and smoother behavior with respect to the cutoff length. These features can be explained on the basis of the theoretical error analyses: the excess-energy accuracy improves with increasing l, and the total accuracy improvement within l ⩽ L is facilitated by a small damping parameter. Although the accuracy was fundamentally similar to that of the ion system, the bulk water system exhibited distinguishable quantitative behaviors. A smaller damping parameter was effective at all practical cutoff distances, a fact that can be interpreted as a reduction of the excess subset. A lower moment was advantageous for the energy accuracy, with l = 1 slightly superior to l = 2 in this system. However, the method with l = 2 (viz., the zero-quadrupole sum) gave accurate results for the radial distribution function. We confirmed the stability of the numerical integration for MD simulations employing the ZM scheme. This result is supported by the sufficient smoothness of the energy function. Along with this smoothness, the pairwise form of the energy formula and its compatibility with an atom-based cutoff mode lead to an exactly zero total force, ensuring total-momentum conservation for typical MD equations of motion.

  10. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue in ensemble forecasting. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of the stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skill during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecasting biases and thus corrects the forecast through this nonlinear heating mechanism.
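
    A schematic of the perturbation step described above (a generic toy version; the state variables, error standard deviations, and AR(1) temporal correlation are placeholders rather than the paper's actual model-error model):

      import numpy as np

      def perturb_ensemble(states, error_std, rng, prev=None, alpha=0.9):
          """Add zero-mean stochastic model-error perturbations to every ensemble member.

          states   : (n_members, n_state) physical fields at this forecast step
          error_std: (n_state,) standard deviations, e.g. estimated from EnKF analyses
          prev     : previous perturbation, carried over to give AR(1) temporal correlation
          """
          white = rng.standard_normal(states.shape) * error_std
          noise = white if prev is None else alpha * prev + np.sqrt(1 - alpha**2) * white
          return states + noise, noise

      rng = np.random.default_rng(42)
      states = np.zeros((20, 5))              # 20 members, 5 toy state variables
      error_std = np.full(5, 0.1)
      noise = None
      for month in range(12):                 # 12-month forecasting process
          states, noise = perturb_ensemble(states, error_std, rng, noise)
      print("ensemble mean of accumulated perturbations (zero in expectation):", states.mean())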

  11. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene gas mixture analysis, a modification of the SWLS is presented to tackle the bias error from other components. The unmodified SWLS presents the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduces the bias and shows a lower RSS than CLS, especially for small components.
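
    A compact sketch of the selection logic (per-wavenumber choice between ordinary and inverse-variance-weighted least squares, switched on an absorbance threshold). The weighting scheme, variable names, and synthetic spectra are illustrative assumptions, not the authors' exact implementation:

      import numpy as np

      def weighted_lstsq(A, y, w):
          """Solve min_c || diag(sqrt(w)) (A c - y) ||_2."""
          sw = np.sqrt(w)
          c, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
          return c

      def swls_fit(A, y, noise_var, threshold):
          """Selective WLS: unit (CLS-style) weights where the bias error dominates,
          inverse-variance (WLS-style) weights where the noise dominates."""
          absorbance = A.sum(axis=1)              # proxy for total absorbance per wavenumber
          w = np.ones(len(y))
          high = absorbance >= threshold
          w[high] = 1.0 / noise_var[high]
          return weighted_lstsq(A, y, w)

      rng = np.random.default_rng(1)
      A = np.abs(rng.normal(size=(300, 2)))       # two pure-component spectra, 300 wavenumbers
      c_true = np.array([0.3, 0.7])
      noise_var = 1e-4 + 1e-3 * A.sum(axis=1)     # heteroscedastic noise variance
      y = A @ c_true + rng.normal(scale=np.sqrt(noise_var)) + 0.01   # 0.01 = baseline bias
      print("SWLS estimate:", swls_fit(A, y, noise_var, np.median(A.sum(axis=1))))
      print("true values  :", c_true)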

  12. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
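
    A small sketch of the interleaving mechanism argued for above: FEC codewords are written row-by-row into a block and transmitted column-by-column, so a contiguous burst on the channel is spread across several codewords after de-interleaving. The block size and burst position are arbitrary illustrative choices:

      import numpy as np

      def interleave(bits, rows, cols):
          """Write the stream row-by-row into a rows x cols block, read it out column-by-column."""
          return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

      def deinterleave(bits, rows, cols):
          return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

      rng = np.random.default_rng(0)
      data = rng.integers(0, 2, size=32)              # 4 codewords of 8 bits each
      sent = interleave(data, rows=4, cols=8)
      burst = np.zeros(32, dtype=int)
      burst[10:14] = 1                                # a burst of 4 adjacent channel errors
      received = (sent + burst) % 2
      recovered = deinterleave(received, rows=4, cols=8)
      errors_per_codeword = (recovered != data).reshape(4, 8).sum(axis=1)
      print("errors per codeword after de-interleaving:", errors_per_codeword)
      # The burst is spread out, so a code correcting 1-2 errors per codeword can cope.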

  13. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  14. On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping

    PubMed Central

    Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A.; Wang, Yi; Spincemaille, Pascal

    2016-01-01

    Purpose: Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. Materials and Methods: High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Results: Both the accuracy and the apparent spatial resolution of the pre-zero padded QSM were higher than those of the mid-zero padded QSM (p < 0.001; p < 0.001), which in turn were higher than those of the post-zero padded QSM (p < 0.001; p < 0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p = 0.004; p < 0.001). Conclusion: Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. PMID:27587225

  15. On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping.

    PubMed

    Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A; Wang, Yi; Spincemaille, Pascal

    2017-01-01

    Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Both the accuracy and the apparent spatial resolution of the pre-zero padded QSM were higher than those of the mid-zero padded QSM (p<0.001; p<0.001), which in turn were higher than those of the post-zero padded QSM (p<0.001; p<0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p=0.004; p<0.001). Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. Copyright © 2016 Elsevier Inc. All rights reserved.
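
    A minimal numpy illustration of the underlying interpolation step (zero padding complex k-space data before the inverse Fourier transform). This is generic Fourier interpolation on a synthetic complex image, not the authors' QSM pipeline:

      import numpy as np

      def zero_pad_kspace(image, factor=2):
          """Sinc-interpolate a complex image by zero padding its k-space representation."""
          k = np.fft.fftshift(np.fft.fft2(image))
          ny, nx = k.shape
          pad_y, pad_x = (factor - 1) * ny // 2, (factor - 1) * nx // 2
          k_padded = np.pad(k, ((pad_y, pad_y), (pad_x, pad_x)))
          return np.fft.ifft2(np.fft.ifftshift(k_padded)) * factor**2   # rescale intensities

      # Toy complex GRE-like image: a magnitude disk with a smooth phase ramp.
      y, x = np.mgrid[-32:32, -32:32]
      image = (x**2 + y**2 < 20**2).astype(complex) * np.exp(1j * 0.05 * x)
      upsampled = zero_pad_kspace(image, factor=2)
      print(image.shape, "->", upsampled.shape)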

  16. Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series

    NASA Astrophysics Data System (ADS)

    Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.

    2018-03-01

    Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
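
    A hedged sketch of what a zero-inflated error likelihood can look like: a point mass on exact zeros mixed with a Gaussian density for the residuals of the non-zero observations. This is only an illustration of the idea; the ZI-GL of the note is more general (heteroscedastic, with the zero inflation tied to the error model):

      import numpy as np
      from scipy.stats import norm

      def zero_inflated_loglik(obs, sim, p0, sigma):
          """Simplified zero-inflated log-likelihood: probability p0 for exact zeros,
          Gaussian residual density (scaled by 1 - p0) for the non-zero observations."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          zero = obs == 0
          ll_zeros = np.log(p0) * zero.sum()
          resid = obs[~zero] - sim[~zero]
          ll_nonzero = np.log1p(-p0) * (~zero).sum() + norm.logpdf(resid, scale=sigma).sum()
          return ll_zeros + ll_nonzero

      obs = np.array([0.0, 0.0, 0.3, 0.0, 1.2, 0.8, 0.0])   # e.g. an interception series
      sim = np.array([0.0, 0.1, 0.4, 0.0, 1.0, 0.9, 0.2])
      print(zero_inflated_loglik(obs, sim, p0=0.5, sigma=0.2))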

  17. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    NASA Astrophysics Data System (ADS)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which relies on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, in order to best reconstruct the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. With a 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation results show that the proposed methods simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods for low-bitrate transmission.
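
    A small numerical check of the zero-reconstruction-error claim for the linear case: when the (centered) patches of a cluster truly lie in a one-dimensional subspace, a linear autoencoder recovers them exactly. The closed-form SVD/PCA solution below is a stand-in for a trained DLA; the patch size and data are made up:

      import numpy as np

      def linear_autoencoder(X, k=1):
          """Closed-form linear autoencoder: encode with the top-k right singular vectors,
          decode with their transpose (equivalent to PCA on mean-centered data)."""
          mu = X.mean(axis=0)
          Xc = X - mu
          _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
          W = Vt[:k]                      # encoder: R^d -> R^k
          codes = Xc @ W.T                # 1-D codes when k = 1
          recon = codes @ W + mu          # decoder
          return codes, recon

      rng = np.random.default_rng(0)
      direction = rng.normal(size=64)                       # one 8x8 patch direction, flattened
      X = np.outer(rng.normal(size=200), direction) + 3.0   # cluster lying on a 1-D subspace
      codes, recon = linear_autoencoder(X, k=1)
      print("max reconstruction error:", np.abs(recon - X).max())   # zero up to rounding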

  18. The contrasting roles of Planck's constant in classical and quantum theories

    NASA Astrophysics Data System (ADS)

    Boyer, Timothy H.

    2018-04-01

    We trace the historical appearance of Planck's constant in physics, and we note that initially the constant did not appear in connection with quanta. Furthermore, we emphasize that Planck's constant can appear in both classical and quantum theories. In both theories, Planck's constant sets the scale of atomic phenomena. However, the roles played in the foundations of the theories are sharply different. In quantum theory, Planck's constant is crucial to the structure of the theory. On the other hand, in classical electrodynamics, Planck's constant is optional, since it appears only as the scale factor for the (homogeneous) source-free contribution to the general solution of Maxwell's equations. Since classical electrodynamics can be solved while taking the homogeneous source-free contribution in the solution as zero or non-zero, there are naturally two different theories of classical electrodynamics, one in which Planck's constant is taken as zero and one where it is taken as non-zero. The textbooks of classical electromagnetism present only the version in which Planck's constant is taken to vanish.

  19. On the multiple zeros of a real analytic function with applications to the averaging theory of differential equations

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Llibre, Jaume; Maza, Susanna

    2018-06-01

    In this work we consider real analytic functions that depend on parameters and on a small parameter ε, and are defined on a bounded open subset Ω and an interval containing the origin. We study the branching of the zero-set of such a function at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations, using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of the displacement function in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether or not the zeros belong to the analytic set defined by the real variety associated with the ideal generated by the averaged functions in the Noetherian ring of all real analytic functions at that point. We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results, and they are compared with the classical theory, branching theory, and also in the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.

  20. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of the inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. The actual experiment verifies that the method can identify all error parameters of the HINS and that it has accuracy equivalent to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible. PMID:29695041

  1. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of the inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. The actual experiment verifies that the method can identify all error parameters of the HINS and that it has accuracy equivalent to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible.

  2. A novel conditioning process for enhancing dewaterability of waste activated sludge by combination of zero-valent iron and persulfate.

    PubMed

    Zhou, Xu; Wang, Qilin; Jiang, Guangming; Liu, Peng; Yuan, Zhiguo

    2015-06-01

    Improvement of sludge dewaterability is crucial for reducing the costs of sludge disposal in wastewater treatment plants. This study presents a novel conditioning method for improving waste activated sludge (WAS) dewaterability by the combination of persulfate and zero-valent iron (ZVI). The combination of zero-valent iron (0-30 g/L) and persulfate (0-6 g/L) under neutral pH substantially enhanced the sludge dewaterability due to advanced oxidation reactions. The highest enhancement of sludge dewaterability was achieved at 4 g persulfate/L and 15 g zero-valent iron/L, with which the capillary suction time was reduced by over 50%. The release of soluble chemical oxygen demand during the conditioning process implied the decomposition of sludge structure and microorganisms, which facilitated the improvement of dewaterability due to the release of bound water held within the sludge structure and microorganisms. Economic analysis showed that the proposed conditioning process with persulfate and ZVI is more economically favorable for improving WAS dewaterability than the classical Fenton reagent. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Performance of convolutionally encoded noncoherent MFSK modem in fading channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.

    1976-01-01

    The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combating channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.

  4. New approach for identifying the zero-order fringe in variable wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek

    2016-12-01

    The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects when the optical path difference in the interference system exceeds a few wavelengths. The classical approach, which consists in measuring the deflection of interference fringes, fails because of strong edge effects. Broken continuity of the interference fringes prevents correct identification of the zero-order fringe, which leads to significant errors. This family of methods was originally proposed by Professor Pluta in the 1980s, but at that time image-processing facilities and computers were hardly available. Automated devices open up a completely new approach to the classical measurement procedures. The Institute team has taken this opportunity and transformed the technique into fully automated, industry-grade measurement devices ready for commercial use. The method itself has been modified, and new solutions and algorithms have simultaneously extended the field of application. This has concerned both the construction of the systems and the software development involved in creating computerized instruments. The VAWI collection of instruments now constitutes the core of the Institute's commercial offer. It is practically applicable in industrial environments for measuring textile and optical fibers and strips of thin films, and for testing wave plates and nonlinear effects in different materials. This paper describes new algorithms for identifying the zero-order fringe, which increase the performance of the system as a whole, and presents some examples of measurements of optical elements.

  5. Quantifying the statistical importance of utilizing regression over classic energy intensity calculations for tracking efficiency improvements in industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei

    In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e. the ratio of total energy use over total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (e.g. product type, feedstock type, weather, etc.). Furthermore, the energy intensity method assumes that the facilities’ base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulties understanding why regression models are statistically better than utilizing the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper will explain why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper will present quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper will also document scenarios where regression models do not have significant relevance over the energy intensity method.
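
    A small numerical illustration of the point being made: with a non-zero base load, the ratio-based intensity metric mis-attributes production swings to efficiency changes, whereas a regression with an intercept separates base load from marginal energy use. The numbers are synthetic:

      import numpy as np

      rng = np.random.default_rng(0)
      production = rng.uniform(50, 150, size=36)          # 36 months of production
      base_load = 400.0                                   # energy use at zero production
      marginal = 2.0                                      # energy per unit of production
      energy = base_load + marginal * production + rng.normal(0, 10, size=36)

      # Classic energy intensity: ratio of totals (implicitly assumes zero base load).
      intensity = energy / production
      print("mean intensity, low vs high production months:",
            intensity[production < 75].mean(), intensity[production > 125].mean())

      # A regression model with an intercept recovers both the base load and the marginal rate.
      slope, intercept = np.polyfit(production, energy, 1)
      print("regression estimates (base load, marginal energy):", intercept, slope)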

  6. Toward simulating complex systems with quantum effects

    NASA Astrophysics Data System (ADS)

    Kenion-Hanrath, Rachel Lynn

    Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.

  7. Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.

    1985-01-01

    Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.

  8. Improved Airborne Gravity Results Using New Relative Gravity Sensor Technology

    NASA Astrophysics Data System (ADS)

    Brady, N.

    2013-12-01

    Airborne gravity data has contributed greatly to our knowledge of subsurface geophysics particularly in rugged and otherwise inaccessible areas such as Antarctica. Reliable high quality GPS data has renewed interest in improving the accuracy of airborne gravity systems and recent improvements in the electronic control of the sensor have increased the accuracy and ability of the classic Lacoste and Romberg zero length spring gravity meters to operate in turbulent air conditions. Lacoste and Romberg type gravity meters provide increased sensitivity over other relative gravity meters by utilizing a mass attached to a horizontal beam which is balanced by a 'zero length spring'. This type of dynamic gravity sensor is capable of measuring gravity changes on the order of 0.05 milliGals in laboratory conditions but more commonly 0.7 to 1 milliGal in survey use. The sensor may have errors induced by the electronics used to read the beam position as well as noise induced by unwanted accelerations, commonly turbulence, which moves the beam away from its ideal balance position otherwise known as the reading line. The sensor relies on a measuring screw controlled by a computer which attempts to bring the beam back to the reading line position. The beam is also heavily damped so that it does not react to most unwanted high frequency accelerations. However this heavily damped system is slow to react, particularly in turns where there are very high Eotvos effects. New sensor technology utilizes magnetic damping of the beam coupled with an active feedback system which acts to effectively keep the beam locked at the reading line position. The feedback system operates over the entire range of the system so there is now no requirement for a measuring screw. The feedback system operates at very high speed so that even large turbulent events have minimal impact on data quality and very little, if any, survey line data is lost because of large beam displacement errors. Airborne testing along with results from ground based van testing and laboratory results have shown that the new sensor provides more consistent gravity data, as measured by repeated line surveys, as well as preserving the inherent sensitivity of the Lacoste and Romberg zero length spring design. The sensor also provides reliability during survey operation as there is no mechanical counter screw. Results will be presented which show the advantages of the new sensor system over the current technology in both data quality and survey productivity. Applications include high resolution geoid mapping, crustal structure investigations and resource mapping of minerals, oil and gas.

  9. Zero-point energy constraint in quasi-classical trajectory calculations.

    PubMed

    Xie, Zhen; Bowman, Joel M

    2006-04-27

    A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.
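
    A hedged sketch of the idea for the Henon-Heiles potential: the cubic coupling term is multiplied by a smooth switch that turns off when either mode's energy drops below a threshold. The switching function, threshold, and coupling constant below are placeholders, not the authors' exact prescription:

      import numpy as np

      LAM = 0.1118      # Henon-Heiles coupling strength (a common choice)
      E_MIN = 0.5       # per-mode energy threshold (placeholder for a ZPE-based value)
      WIDTH = 0.05      # smoothness of the switch

      def switch(e_mode):
          """Goes smoothly from 1 (mode well above threshold) to 0 (mode below threshold)."""
          return 1.0 / (1.0 + np.exp(-(e_mode - E_MIN) / WIDTH))

      def forces(x, y, px, py):
          ex = 0.5 * (px**2 + x**2)               # harmonic energy of each mode
          ey = 0.5 * (py**2 + y**2)
          s = switch(min(ex, ey))                 # damp the coupling if either mode dips too low
          fx = -x - s * LAM * 2.0 * x * y         # -dV/dx with the scaled coupling
          fy = -y - s * LAM * (x**2 - y**2)       # -dV/dy with the scaled coupling
          return fx, fy

      # One velocity-Verlet step as a usage example:
      x, y, px, py, dt = 0.3, -0.2, 0.0, 0.4, 0.01
      fx, fy = forces(x, y, px, py)
      px, py = px + 0.5 * dt * fx, py + 0.5 * dt * fy
      x, y = x + dt * px, y + dt * py
      fx, fy = forces(x, y, px, py)
      px, py = px + 0.5 * dt * fx, py + 0.5 * dt * fy
      print(x, y, px, py)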

  10. Characterizing the performance of XOR games and the Shannon capacity of graphs.

    PubMed

    Ramanathan, Ravishankar; Kay, Alastair; Murta, Gláucia; Horodecki, Paweł

    2014-12-12

    In this Letter we give a set of necessary and sufficient conditions such that quantum players of a two-party XOR game cannot perform any better than classical players. With any such game, we associate a graph and examine its zero-error communication capacity. This allows us to specify a broad new class of graphs for which the Shannon capacity can be calculated. The conditions also enable the parametrization of new families of games that have no quantum advantage for arbitrary input probability distributions, up to certain symmetries. In the future, these might be used in information-theoretic studies on reproducing the set of quantum nonlocal correlations.
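
    For context, the zero-error quantity referred to is Shannon's graph capacity; in standard notation (not specific to this Letter),

      \[
        \Theta(G) \;=\; \sup_{n \ge 1} \sqrt[n]{\alpha\bigl(G^{\boxtimes n}\bigr)}
                 \;=\; \lim_{n\to\infty} \sqrt[n]{\alpha\bigl(G^{\boxtimes n}\bigr)},
      \]
      % where alpha is the independence number and G^{\boxtimes n} the n-fold strong product;
      % alpha(G^{\boxtimes n}) counts the messages distinguishable with zero error in n channel uses.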

  11. Analytical minimization of synchronicity errors in stochastic identification

    NASA Astrophysics Data System (ADS)

    Bernal, D.

    2018-01-01

    An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time domain signals so the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral density estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. Selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, is shown to be the product of the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.

  12. [Discussion on six errors of formulas corresponding to syndromes in using the classic formulas].

    PubMed

    Bao, Yan-ju; Hua, Bao-jin

    2012-12-01

    The theory of formulas corresponding to syndromes is one of the characteristics of Treatise on Cold Damage and Miscellaneous Diseases (Shanghan Zabing Lun) and one of the main principles in applying classic prescriptions. It is important to take effect by following the principle of formulas corresponding to syndromes. However, some medical practitioners always feel that the actual clinical effect is far less than expected. Six errors in the use of classic prescriptions as well as the theory of formulas corresponding to syndromes are the most important causes to be considered, i.e. paying attention only to the local syndromes while neglecting the whole, paying attention only to formulas corresponding to syndromes while neglecting the pathogenesis, paying attention only to syndromes while neglecting the pulse diagnosis, paying attention only to unilateral prescription but neglecting the combined prescriptions, paying attention only to classic prescriptions while neglecting the modern formulas, and paying attention only to the formulas but neglecting the drug dosage. Therefore, not only the patients' clinical syndromes, but also the combination of main syndrome and pathogenesis simultaneously is necessary in the clinical applications of classic prescriptions and the theory of prescription corresponding to syndrome. In addition, comprehensive syndrome differentiation, modern formulas, current prescriptions, combined prescriptions, and drug dosage all contribute to avoid clinical errors and improve clinical effects.

  13. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
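
    For readers unfamiliar with the fast algorithms in question, here is a minimal recursive Strassen multiplication (the prototypical example, restricted to square power-of-two sizes) together with a crude empirical error check against the classical product; the leaf size and test dimensions are arbitrary:

      import numpy as np

      def strassen(A, B, leaf=64):
          """Strassen recursion for square power-of-two matrices; falls back to NumPy below `leaf`."""
          n = A.shape[0]
          if n <= leaf:
              return A @ B
          h = n // 2
          A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
          B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
          M1 = strassen(A11 + A22, B11 + B22, leaf)
          M2 = strassen(A21 + A22, B11, leaf)
          M3 = strassen(A11, B12 - B22, leaf)
          M4 = strassen(A22, B21 - B11, leaf)
          M5 = strassen(A11 + A12, B22, leaf)
          M6 = strassen(A21 - A11, B11 + B12, leaf)
          M7 = strassen(A12 - A22, B21 + B22, leaf)
          C = np.empty_like(A)
          C[:h, :h] = M1 + M4 - M5 + M7
          C[:h, h:] = M3 + M5
          C[h:, :h] = M2 + M4
          C[h:, h:] = M1 - M2 + M3 + M6
          return C

      rng = np.random.default_rng(0)
      A, B = rng.normal(size=(512, 512)), rng.normal(size=(512, 512))
      ref = A @ B
      err = np.abs(strassen(A, B) - ref).max() / np.abs(ref).max()
      print("relative max deviation of Strassen from the classical product:", err)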

  14. Dark Signal Characterization of 1.7 micron cutoff devices for SNAP

    NASA Astrophysics Data System (ADS)

    Smith, R. M.; SNAP Collaboration

    2004-12-01

    We report initial progress characterizing non-photometric sources of error -- dark current, noise, and zero point drift -- for 1.7 micron cutoff HgCdTe and InGaAs detectors under development by Raytheon, Rockwell, and Sensors Unlimited for SNAP. Dark current specifications can already be met with several detector types. Changes to the manufacturing process are being explored to improve the noise reduction available through multiple sampling. In some cases, a significant number of pixels suffer from popcorn noise, with a few percent of all pixels exhibiting a ten fold noise increase. A careful study of zero point drifts is also under way, since these errors can dominate dark current, and may contribute to the noise degradation seen in long exposures.

  15. Feedback attitude sliding mode regulation control of spacecraft using arm motion

    NASA Astrophysics Data System (ADS)

    Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu

    2013-09-01

    The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions of the manipulator’s motion tracking problem achieve only asymptotic stabilization, so these controllers cannot realize precise attitude regulation because of the existence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with a zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it is restricted to this plane permanently, even in the presence of external disturbances. Thus, precise attitude regulation can be achieved. Furthermore, taking non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used in place of sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of the free-floating space manipulator are established and simulations are conducted. The results show that the spacecraft’s attitude can be regulated to the desired position by the proposed algorithm, with a steady-state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion and improves the precision of spacecraft attitude regulation.

  16. Research on the Forward and Reverse Calculation Based on the Adaptive Zero-Velocity Interval Adjustment for the Foot-Mounted Inertial Pedestrian-Positioning System

    PubMed Central

    Wang, Qiuying; Guo, Zheng; Sun, Zhiguo; Cui, Xufei; Liu, Kaiyue

    2018-01-01

    Pedestrian-positioning technology based on a foot-mounted micro inertial measurement unit (MIMU) plays an important role in the field of indoor navigation and has received extensive attention in recent years. However, the positioning accuracy of the inertial-based pedestrian-positioning method degrades rapidly because of the relatively low measurement accuracy of the sensors. The zero-velocity update (ZUPT) is an error correction method proposed to address this cumulative error: it exploits the fact that the foot is regularly stationary during ordinary gait to limit the growth of the position error. However, the traditional ZUPT performs poorly when pedestrians move faster, because the time the foot spends on the ground becomes short, which decreases the positioning accuracy. Considering these problems, a forward and reverse calculation method based on adaptive zero-velocity interval adjustment is proposed in this paper for the foot-mounted MIMU location method. To address the inaccuracy of the zero-velocity interval detector during fast pedestrian movement, when the contact time of the foot on the ground is short, an adaptive zero-velocity interval detection algorithm based on fuzzy logic reasoning is presented. In addition, to improve the effectiveness of the ZUPT algorithm, forward and reverse multiple solutions are presented. Finally, after presenting the basic principles and derivation of the method, the MTi-G710 produced by the XSENS company is used to complete the test. The experimental results verify the correctness and applicability of the proposed method. PMID:29883399
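
    A much simpler fixed-threshold stance detector conveys the basic zero-velocity interval idea that the fuzzy-logic detector above refines; the thresholds, sampling rate and synthetic data below are assumptions for illustration only.

      import numpy as np

      def zero_velocity_intervals(acc, gyr, g=9.81, acc_tol=0.4, gyr_tol=0.5, min_len=10):
          """Flag samples where |acc| is near gravity and |gyro| is small, then keep
          only runs longer than min_len samples (illustrative thresholds)."""
          acc_mag = np.linalg.norm(acc, axis=1)
          gyr_mag = np.linalg.norm(gyr, axis=1)
          still = (np.abs(acc_mag - g) < acc_tol) & (gyr_mag < gyr_tol)
          out = np.zeros_like(still)
          run_start = None
          for i, flag in enumerate(still):
              if flag and run_start is None:
                  run_start = i
              if (not flag or i == len(still) - 1) and run_start is not None:
                  end = i + 1 if flag else i
                  if end - run_start >= min_len:       # suppress short spurious detections
                      out[run_start:end] = True
                  run_start = None
          return out

      # toy data: 2 s of standing followed by 2 s of swing, sampled at 100 Hz
      fs = 100
      acc = np.r_[np.tile([0.0, 0.0, 9.81], (2 * fs, 1)), np.tile([1.5, 0.0, 9.81], (2 * fs, 1))]
      gyr = np.r_[np.zeros((2 * fs, 3)), np.full((2 * fs, 3), 1.0)]
      zv = zero_velocity_intervals(acc, gyr)
      print("fraction of samples flagged stationary:", zv.mean())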

  17. 31 CFR 363.138 - Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money... Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in error? We are not liable for any deposits of...

  18. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    PubMed

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compare our results to those of a standard approach neglecting sampling variance. We show that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.
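
    The downward bias of the naive zero-lag correlation under independent sampling errors is easy to reproduce numerically; the sketch below is purely illustrative and is not the Bayesian state-space estimator described above (the series, noise levels and sample size are made up).

      import numpy as np

      rng = np.random.default_rng(1)
      n, true_rho = 200, 0.8

      # two log-abundance series sharing a common (Moran-like) driver
      common = rng.normal(size=n)
      x = np.sqrt(true_rho) * common + np.sqrt(1 - true_rho) * rng.normal(size=n)
      y = np.sqrt(true_rho) * common + np.sqrt(1 - true_rho) * rng.normal(size=n)

      for obs_sd in (0.0, 0.5, 1.0):            # sampling-error standard deviation
          xo = x + rng.normal(0, obs_sd, n)      # independent observation errors
          yo = y + rng.normal(0, obs_sd, n)
          print(f"obs sd {obs_sd:.1f}: estimated synchrony "
                f"{np.corrcoef(xo, yo)[0, 1]:+.2f} (true {true_rho:.2f})")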

  19. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compare our results to those of a standard approach neglecting sampling variance. We show that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839

  20. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted with teleoperation systems require high performance; for example, telesurgery needs sufficiently high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several performance goals, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors converging to zero as time goes to infinity, can be achieved with error constrained control. Finite time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite time control method is developed in this paper for teleoperation systems with constrained position errors. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove the stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.

  1. Static properties of ferromagnetic quantum chains: Numerical results and experimental data on two S=1/2 systems (invited)

    NASA Astrophysics Data System (ADS)

    Kopinga, K.; Delica, T.; Leschke, H.

    1990-05-01

    New results of a variant of the numerically exact quantum transfer matrix method have been compared with experimental data on the static properties of [C6H11NH3]CuBr3(CHAB), a ferromagnetic system with about 5% easy-plane anisotropy. Above T=3.5 K, the available data on the zero-field heat capacity, the excess heat capacity ΔC=C(B)-C(B=0), and the magnetization are described with an accuracy comparable to the experimental error. Calculations of the spin-spin correlation functions reveal that the good description of the experimental correlation length in CHAB by a classical spin model is largely accidental. The zero-field susceptibility, which can be deduced from these correlation functions, is in fair agreement with the reported experimental data between 4 and 100 K. The method also seems to yield accurate results for the chlorine isomorph, CHAC, a system with about 2% uniaxial anisotropy.

  2. Improving dewaterability of waste activated sludge by combined conditioning with zero-valent iron and hydrogen peroxide.

    PubMed

    Zhou, Xu; Wang, Qilin; Jiang, Guangming; Zhang, Xiwang; Yuan, Zhiguo

    2014-12-01

    Improvement of sludge dewaterability is crucial for reducing the costs of sludge disposal in wastewater treatment plants. This study presents a novel method based on combined conditioning with zero-valent iron (ZVI) and hydrogen peroxide (HP) at pH 2.0 to improve dewaterability of a full-scale waste activated sludge (WAS). The combination of ZVI (0-750 mg/L) and HP (0-750 mg/L) at pH 2.0 substantially improved the WAS dewaterability due to Fenton-like reactions. The highest improvement in WAS dewaterability was attained at 500 mg ZVI/L and 250 mg HP/L, when the capillary suction time of the WAS was reduced by approximately 50%. Particle size distribution indicated that the sludge flocs were decomposed after conditioning. Economic analysis showed that combined conditioning with ZVI and HP was a more economically favorable method for improving WAS dewaterability than the classical Fenton reaction based method initiated by ferrous salts and HP. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero velocity (ZV) detector algorithm to accurately calculate stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model, based on the measurements of inertial sensors and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude aspect. PMID:25831086

  4. Generation of a non-zero discord bipartite state with classical second-order interference.

    PubMed

    Choi, Yujun; Hong, Kang-Hee; Lim, Hyang-Tag; Yune, Jiwon; Kwon, Osung; Han, Sang-Wook; Oh, Kyunghwan; Kim, Yoon-Ho; Kim, Yong-Su; Moon, Sung

    2017-02-06

    We report an investigation of quantum discord in classical second-order interference. In particular, we theoretically show that a bipartite state with discord D = 0.311 can be generated via classical second-order interference. We also experimentally verify the theory by obtaining a non-zero discord state with D = 0.197 ± 0.060. Together with the fact that the nonclassicalities originating from physical constraints and from information theoretic perspectives are not equivalent, this result provides insight into the nature of quantum discord.

  5. Topological magnetoplasmon

    PubMed Central

    Jin, Dafei; Lu, Ling; Wang, Zhong; Fang, Chen; Joannopoulos, John D.; Soljačić, Marin; Fu, Liang; Fang, Nicholas X.

    2016-01-01

    Classical wave fields are real-valued, ensuring the wave states at opposite frequencies and momenta to be inherently identical. Such a particle–hole symmetry can open up new possibilities for topological phenomena in classical systems. Here we show that the historically studied two-dimensional (2D) magnetoplasmon, which bears gapped bulk states and gapless one-way edge states near zero frequency, is topologically analogous to the 2D topological p+ip superconductor with chiral Majorana edge states and zero modes. We further predict a new type of one-way edge magnetoplasmon at the interface of opposite magnetic domains, and demonstrate the existence of zero-frequency modes bounded at the peripheries of a hollow disk. These findings can be readily verified in experiment, and can greatly enrich the topological phases in bosonic and classical systems. PMID:27892453

  6. Topological magnetoplasmon

    DOE PAGES

    Jin, Dafei; Lu, Ling; Wang, Zhong; ...

    2016-11-28

    Classical wave fields are real-valued, ensuring the wave states at opposite frequencies and momenta to be inherently identical. Such a particle–hole symmetry can open up new possibilities for topological phenomena in classical systems. Here we show that the historically studied two-dimensional (2D) magnetoplasmon, which bears gapped bulk states and gapless one-way edge states near zero frequency, is topologically analogous to the 2D topological p+ip superconductor with chiral Majorana edge states and zero modes. We further predict a new type of one-way edge magnetoplasmon at the interface of opposite magnetic domains, and demonstrate the existence of zero-frequency modes bounded at the peripheries of a hollow disk. Finally, these findings can be readily verified in experiment, and can greatly enrich the topological phases in bosonic and classical systems.

  7. Reconstruction of finite-valued sparse signals

    NASA Astrophysics Data System (ADS)

    Keiper, Sandra; Kutyniok, Gitta; Lee, Dae Gwan; Pfander, Götz

    2017-08-01

    The need to reconstruct discrete-valued sparse signals from few measurements, that is, to solve an underdetermined system of linear equations, appears frequently in science and engineering. Such signals appear, for example, in error-correcting codes as well as in massive Multiple-Input Multiple-Output (MIMO) channels and wideband spectrum sensing. A particular example is given by wireless communications, where the transmitted signals are sequences of bits, i.e., with entries in {0, 1}. Whereas classical compressed sensing algorithms do not incorporate the additional knowledge of the discrete nature of the signal, classical lattice decoding approaches do not utilize sparsity constraints. In this talk, we present an approach that incorporates a discrete-value prior into basis pursuit. In particular, we address finite-valued sparse signals, i.e., sparse signals with entries in a finite alphabet. We will introduce an equivalent null space characterization and show that the phase transition takes place earlier than with the classical basis pursuit approach. We will further discuss robustness of the algorithm and show that the nonnegative case is very different from the bipolar one. One of our findings is that the positioning of the zero in the alphabet - i.e., whether it is a boundary element or not - is crucial.
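
    One simple way to exploit a {0, 1} alphabet in basis pursuit is to add box constraints to the usual l1 linear program; the sketch below (random Gaussian measurements, made-up problem sizes) is a plausible illustration of that idea, not the authors' null-space-based formulation.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(2)
      m, n, k = 45, 100, 10
      A = rng.normal(size=(m, n))
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = 1.0    # sparse binary signal
      b = A @ x_true

      # For x >= 0 the l1 norm is sum(x), so basis pursuit with the alphabet prior
      # becomes: minimize sum(x) subject to A x = b and 0 <= x <= 1.
      res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0.0, 1.0)] * n, method="highs")
      x_hat = np.round(res.x)                          # snap to the {0, 1} alphabet
      print("exact recovery:", bool(np.array_equal(x_hat, x_true)))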

  8. An alternative method to estimate zero flow temperature differences for Granier's thermal dissipation technique.

    PubMed

    Regalado, Carlos M; Ritter, Axel

    2007-08-01

    Calibration of the Granier thermal dissipation technique for measuring stem sap flow in trees requires determination of the temperature difference (ΔT) between a heated and an unheated probe when sap flow is zero (ΔTmax). Classically, ΔTmax has been estimated from the maximum predawn ΔT, assuming that sap flow is negligible at nighttime. However, because sap flow may continue during the night, the maximum predawn ΔT value may underestimate the true ΔTmax. No alternative method has yet been proposed to estimate ΔTmax when sap flow is non-zero at night. A sensitivity analysis is presented showing that errors in ΔTmax may amplify through sap flux density computations in Granier's approach, such that small amounts of undetected nighttime sap flow may lead to large diurnal sap flux density errors, hence the need for a correct estimate of ΔTmax. By rearranging Granier's original formula, an optimization method to compute ΔTmax from simultaneous measurements of diurnal ΔT and micrometeorological variables, without assuming that sap flow is negligible at night, is presented. Some illustrative examples are shown for sap flow measurements carried out on individuals of Erica arborea L., which has needle-like leaves, and Myrica faya Ait., a broadleaf species. We show that, although ΔTmax values obtained by the proposed method may be similar in some instances to the ΔTmax predicted at night, in general the values differ. The procedure presented has the potential of being applied not only to Granier's method, but to other heat-based sap flow systems that require a zero flow calibration, such as the Cermák et al. (1973) heat balance method and the T-max heat pulse system of Green et al. (2003).
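
    To see why an underestimated ΔTmax matters, the commonly cited Granier calibration Fd = 118.99e-6 K^1.231 with K = (ΔTmax - ΔT)/ΔT can be evaluated for a few assumed values; the coefficients and temperature differences below are treated as illustrative assumptions, not data from this study.

      import numpy as np

      def sap_flux_density(dT, dT_max):
          # Granier-type calibration (coefficients assumed, units m3 m-2 s-1)
          K = (dT_max - dT) / dT
          return 118.99e-6 * np.clip(K, 0.0, None) ** 1.231

      dT = 8.0                        # midday heated-probe temperature difference, K (illustrative)
      true_dT_max = 10.0              # true zero-flow value, K
      for under in (0.0, 0.2, 0.5):   # ΔTmax underestimated because of nighttime flow
          est = true_dT_max - under
          err = 100 * (sap_flux_density(dT, est) / sap_flux_density(dT, true_dT_max) - 1)
          print(f"ΔTmax underestimated by {under:.1f} K -> flux density error {err:+.1f}%")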

  9. The Classical Vacuum.

    ERIC Educational Resources Information Center

    Boyer, Timothy H.

    1985-01-01

    The classical vacuum of physics is not empty, but contains a distinctive pattern of electromagnetic fields. Discovery of the vacuum, thermal spectrum, classical electron theory, zero-point spectrum, and effects of acceleration are discussed. Connection between thermal radiation and the classical vacuum reveals unexpected unity in the laws of…

  10. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then the harmonic decomposition method is utilized to estimate the compensation coefficients. The proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of the magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT obtained after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, which is approximately 11% of the 0.62° remaining after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448

  11. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
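
    A minimal iterative learning control loop of this kind, sketched below for an assumed second-order servo with PD-type learning gains (all numbers are illustrative, not the paper's robot model), shows the cycle-to-cycle reduction of the repeated tracking error.

      import numpy as np

      dt, T = 0.01, 2.0
      t = np.arange(0.0, T, dt)
      ref = np.sin(np.pi * t)                       # the task repeated every cycle

      def run_cycle(u):
          """Assumed second-order servo x'' = -3x' - 20x + u; returns the tracking error."""
          x, v = 0.0, 0.0
          y = np.empty_like(u)
          for i, ui in enumerate(u):
              acc = -3.0 * v - 20.0 * x + ui
              v += acc * dt
              x += v * dt
              y[i] = x
          return ref - y

      u = np.zeros_like(t)
      kp, kd = 10.0, 2.0                            # learning gains on error and error rate
      for cycle in range(6):
          e = run_cycle(u)
          u = u + kp * e + kd * np.gradient(e, dt)  # command update used in the next cycle
          print(f"cycle {cycle}: max |error| = {np.abs(e).max():.4f}")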

  12. Signed reward prediction errors drive declarative learning

    PubMed Central

    Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; “better-than-expected” signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli. PMID:29293493

  13. Signed reward prediction errors drive declarative learning.

    PubMed

    De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning-a quintessentially human form of learning-remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.

  14. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    NASA Astrophysics Data System (ADS)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of a function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first order Taylor series will exhibit significant errors. In order to reduce these errors, an improved format of the interval extension with the first order Taylor series is developed here by considering the monotonicity of the function. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in a practical application, the only necessary input data being the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function from the method developed in this paper is more accurate than the one obtained by the classic method.
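
    A minimal sketch of the idea, under the assumption that the derivative keeps one sign over the interval (so the function is monotonic there): take the range from the endpoint values, and fall back to the classic first-order Taylor (mean-value form) extension otherwise. The test function and derivative bound are illustrative, not the paper's beam example.

      import numpy as np

      def taylor_interval(f, df_bound, lo, hi):
          """Classic first-order extension about the midpoint: f(c) +/- max|f'| * radius."""
          c, r = 0.5 * (lo + hi), 0.5 * (hi - lo)
          return f(c) - df_bound * r, f(c) + df_bound * r

      def improved_interval(f, df, df_bound, lo, hi):
          """Endpoint values give the exact range when f' keeps one sign on [lo, hi]
          (assumes no interior sign change); otherwise fall back to the Taylor extension."""
          if df(lo) * df(hi) > 0:
              a, b = f(lo), f(hi)
              return min(a, b), max(a, b)
          return taylor_interval(f, df_bound, lo, hi)

      f = lambda x: np.exp(-x)             # monotonically decreasing test function
      df = lambda x: -np.exp(-x)
      lo, hi = 1.0, 2.0
      df_bound = abs(df(lo))               # |f'| is largest at the left endpoint here
      print("classic Taylor extension :", taylor_interval(f, df_bound, lo, hi))
      print("improved (monotonic case):", improved_interval(f, df, df_bound, lo, hi))
      print("true range               :", (f(hi), f(lo)))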

  15. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. This was demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  16. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987

  17. Improved motor control method with measurements of fiber optics gyro (FOG) for dual-axis rotational inertial navigation system (RINS).

    PubMed

    Song, Tianxiao; Wang, Xueyun; Liang, Wenwei; Xing, Li

    2018-05-14

    Benefiting from its frame structure, a RINS can improve navigation accuracy by modulating the inertial sensor errors with a proper rotation scheme. In the traditional motor control method, the measurements of the photoelectric encoder are adopted to drive the inertial measurement unit (IMU) to rotate. However, when the carrier conducts heading motion, the inertial sensor errors may no longer be zero-mean in the navigation coordinate frame. Meanwhile, some high-speed carriers, such as aircraft, need to roll by a certain angle to balance the centrifugal force during the heading motion, which may result in non-negligible coupling errors caused by the FOG installation errors and scale factor errors. Moreover, the error parameters of the FOG are susceptible to temperature and magnetic field, and pre-calibration is a time-consuming process that cannot completely suppress the FOG-related errors. In this paper, an improved motor control method using the measurements of the FOG is proposed to address these problems, with which the outer frame can insulate the carrier's roll motion and the inner frame can simultaneously achieve rotary modulation while insulating the heading motion. The results of turntable experiments indicate that the navigation performance of the dual-axis RINS is significantly improved over the traditional method and is maintained even with large FOG installation errors and scale factor errors, proving that the proposed method can relax the requirements on the accuracy of the FOG-related error parameters.

  18. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  19. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analysing, improving, and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were reviewed in the monthly intradepartmental meetings, and control of the units with high error rates was established. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates mainly in the pre-analytic and analytic phases.
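
    The defects-per-million bookkeeping behind such a tally is simple; the sketch below uses made-up placeholder counts (not the laboratory's data) just to show the arithmetic.

      def dpmo(defects, units, opportunities_per_unit):
          """Defects per million opportunities, the quantity Six Sigma tracks."""
          return 1e6 * defects / (units * opportunities_per_unit)

      first_half = dpmo(defects=40, units=1_000_000, opportunities_per_unit=6)
      second_half = dpmo(defects=8, units=1_000_000, opportunities_per_unit=6)
      print(f"first half : {first_half:.1f} errors per million opportunities")
      print(f"second half: {second_half:.1f} errors per million opportunities")
      print(f"relative reduction: {100 * (1 - second_half / first_half):.1f}%")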

  20. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  1. Signals of Opportunity Navigation Using Wi-Fi Signals

    DTIC Science & Technology

    2011-03-24

    MVM: Mean Value Method; SDM: Scaled Differential Method. ... the mean value (MVM) and scaled differential (SDM) methods. An error was logged if the UI correlation algorithm identified a packet index that did... Notable from this graph is that a window of 50 packets appears to provide zero errors for MVM and near-zero errors for SDM. Also notable is that a...

  2. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
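
    The intuition behind such a test can be illustrated with a simple parametric-bootstrap check that compares the observed number of zeros with what an intercept-only Poisson fit would produce; this is only an illustration of the idea, not the test developed in the paper, and the simulated data are made up.

      import numpy as np

      rng = np.random.default_rng(3)

      def excess_zero_pvalue(y, n_sim=10_000):
          """How often does a Poisson(ybar) sample of the same size contain at least
          as many zeros as observed? (One-sided Monte Carlo p-value.)"""
          y = np.asarray(y)
          lam, n = y.mean(), len(y)
          observed_zeros = (y == 0).sum()
          sim_zeros = (rng.poisson(lam, size=(n_sim, n)) == 0).sum(axis=1)
          return (sim_zeros >= observed_zeros).mean()

      # zero-inflated sample: 30% structural zeros mixed with Poisson(2) counts
      n = 500
      y = np.where(rng.random(n) < 0.3, 0, rng.poisson(2.0, n))
      print("one-sided p-value for excess zeros:", excess_zero_pvalue(y))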

  3. Polarization errors associated with birefringent waveplates

    NASA Technical Reports Server (NTRS)

    West, Edward A.; Smith, Matthew H.

    1995-01-01

    Although zero-order quartz waveplates are widely used in instrumentation that needs good temperature and field-of-view characteristics, the residual errors associated with these devices can be very important in high-resolution polarimetry measurements. How the field-of-view characteristics are affected by retardation errors and the misalignment of optic axes in a double-crystal waveplate is discussed. The retardation measurements made on zero-order quartz and single-order 'achromatic' waveplates and how the misalignment errors affect those measurements are discussed.

  4. Trajectory-based understanding of the quantum-classical transition for barrier scattering

    NASA Astrophysics Data System (ADS)

    Chou, Chia-Chun

    2018-06-01

    The quantum-classical transition of wave packet barrier scattering is investigated using a hydrodynamic description in the framework of a nonlinear Schrödinger equation. The nonlinear equation provides a continuous description for the quantum-classical transition of physical systems by introducing a degree of quantumness. Based on the transition equation, the transition trajectory formalism is developed to establish the connection between classical and quantum trajectories. The quantum-classical transition is then analyzed for the scattering of a Gaussian wave packet from an Eckart barrier and the decay of a metastable state. Computational results for the evolution of the wave packet and the transmission probabilities indicate that classical results are recovered when the degree of quantumness tends to zero. Classical trajectories are in excellent agreement with the transition trajectories in the classical limit, except in some regions where transition trajectories cannot cross because of the single-valuedness of the transition wave function. As the computational results demonstrate, the process that the Planck constant tends to zero is equivalent to the gradual removal of quantum effects originating from the quantum potential. This study provides an insightful trajectory interpretation for the quantum-classical transition of wave packet barrier scattering.

  5. 33 CFR 154.2181 - Alternative testing program-Test requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CE test must check the calibrated range of each analyzer using a lower (zero) and upper (span) ... instrument, R = reference value of zero or high-level calibration gas introduced into the monitoring system ... [The section includes a calibration error worksheet recording the zero and span differences for three test runs, their mean difference, and the resulting calibration error in percent.] ...

  6. Legitimate Techniques for Improving the R-Square and Related Statistics of a Multiple Regression Model

    DTIC Science & Technology

    1981-01-01

    explanatory variable has been omitted. Ramsey (1974) has developed a rather interesting test for detecting specification errors using estimates of the... Peter. (1979), A Guide to Econometrics, Cambridge, MA: The MIT Press. Ramsey, J.B. (1974), "Classical Model Selection Through Specification Error Tests," in P. Zarembka, Ed., Frontiers in Econometrics, New York: Academic Press. Theil, Henri. (1971), Principles of Econometrics, New York: John Wiley.

  7. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or an error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We use such correlation, as well as a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Second, within each description, a single-bitstream wavelet zero tree coder is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error resilience but also demonstrates graceful degradation with increasing packet loss rate.

  8. Fractional order implementation of Integral Resonant Control - A nanopositioning application.

    PubMed

    San-Millan, Andres; Feliu-Batlle, Vicente; Aphale, Sumeet S

    2017-10-04

    By exploiting the co-located sensor-actuator arrangement in typical flexure-based piezoelectric stack actuated nanopositioners, the pole-zero interlacing exhibited by their axial frequency response can be transformed into a zero-pole interlacing by adding a constant feed-through term. Integral Resonant Control (IRC) utilizes this unique property to add substantial damping to the dominant resonant mode by means of a simple integrator implemented in closed loop. IRC used in conjunction with an integral tracking scheme effectively reduces positioning errors introduced by modelling inaccuracies or parameter uncertainties. Over the past few years, successful application of the IRC control technique to nanopositioning systems has demonstrated performance robustness, easy tunability and versatility. The main drawback has been the relatively small positioning bandwidth achievable. This paper proposes a fractional order implementation of the classical integral tracking scheme employed in tandem with the IRC scheme to deliver damping and tracking. The fractional order integrator introduces an additional design parameter which allows the desired pole placement, resulting in superior closed loop bandwidth. Simulations and experimental results are presented to validate the theory. A 250% improvement in the achievable positioning bandwidth is observed with the proposed fractional order scheme. Copyright © 2017. Published by Elsevier Ltd.

  9. Errata report on Herbert Goldstein's Classical Mechanics: Second edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.; Hoffman, F.M.

    This report describes errors in Herbert Goldstein's textbook Classical Mechanics, Second Edition (Copyright 1980, ISBN 0-201-02918-9). Some of the errors in current printings of the text were corrected in the second printing; however, after communicating with Addison Wesley, the publisher for Classical Mechanics, it was discovered that the corrected galley proofs had been lost by the printer and that no one had complained of any errors in the eleven years since the second printing. The errata sheet corrects errors from all printings of the second edition.

  10. A Rasch Perspective

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Smith, Everett V., Jr.

    2007-01-01

    Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…

  11. The Zero Product Principle Error.

    ERIC Educational Resources Information Center

    Padula, Janice

    1996-01-01

    Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)

  12. Analysis and improvement of the quantum image matching

    NASA Astrophysics Data System (ADS)

    Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin

    2017-11-01

    We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.

  13. Anonymous broadcasting of classical information with a continuous-variable topological quantum code

    NASA Astrophysics Data System (ADS)

    Menicucci, Nicolas C.; Baragiola, Ben Q.; Demarie, Tommaso F.; Brennen, Gavin K.

    2018-03-01

    Broadcasting information anonymously becomes more difficult as surveillance technology improves, but remarkably, quantum protocols exist that enable provably traceless broadcasting. The difficulty is making scalable entangled resource states that are robust to errors. We propose an anonymous broadcasting protocol that uses a continuous-variable surface-code state that can be produced using current technology. High squeezing enables large transmission bandwidth and strong anonymity, and the topological nature of the state enables local error mitigation.

  14. Impact of nonzero boresight pointing errors on the performance of a relay-assisted free-space optical communication system over exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin

    2016-09-20

    The impact of nonzero boresight pointing errors on the performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by a recently proposed statistical model including both boresight and jitter. Analytical average bit error rate (ABER) and outage probability expressions based on binary phase-shift keying subcarrier intensity modulation are derived for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed for different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement whether boresight is included or not, regardless of the values of P and Q. The performance enhancement owing to an increase in the number of cooperative paths (P) is more evident with nonzero boresight than with zero boresight (jitter only), whereas the performance deterioration due to an increasing number of hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulations are offered to verify the validity of the ABER and outage probability expressions.

  15. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy of the bias estimation. PMID:28934157

  16. Public classical communication in quantum cryptography: Error correction, integrity, and authentication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeev, A. V.; Pomozov, D. I.; Makkaveev, A. P.

    2007-05-15

    Quantum cryptography systems combine two communication channels: a quantum and a classical one. (They can be physically implemented in the same fiber-optic link, which is employed as a quantum channel when one-photon states are transmitted and as a classical one when it carries classical data traffic.) Both channels are supposed to be insecure and accessible to an eavesdropper. Error correction in raw keys, interferometer balancing, and other procedures are performed by using the public classical channel. A discussion of the requirements to be met by the classical channel is presented.

  17. Effect of Slice Error of Glass on Zero Offset of Capacitive Accelerometer

    NASA Astrophysics Data System (ADS)

    Hao, R.; Yu, H. J.; Zhou, W.; Peng, B.; Guo, J.

    2018-03-01

    The packaging process of a capacitive accelerometer has been studied. Silicon-glass bonding was adopted to join the sensor chip and the glass, and the sensor chip and glass were adhered to a ceramic substrate. The three-layer structure curved due to thermal mismatch, and the slice error of the glass led to an asymmetrical curvature of the sensor chip. Thus, the sensitive mass of the accelerometer deviated along the sensitive direction, which resulted in zero offset drift. It was therefore meaningful to confirm the influence of the slice error of the glass; the simulation results showed that the zero output drift was 12.3 × 10⁻³ m/s² when the deviation was 40 μm.

  18. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
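
    The qualitative contrast between the two error types is easy to reproduce with a toy simple linear regression (not the mixed-model setting above): classical error in the exposure attenuates the slope by roughly var(x)/(var(x)+var(u)), while pure Berkson error leaves it essentially unbiased. All numbers below are made up.

      import numpy as np

      rng = np.random.default_rng(4)
      n, beta = 50_000, 0.5

      def slope(w, y):
          return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

      # classical error: observed exposure = truth + noise
      x = rng.normal(0, 1, n)
      y = beta * x + rng.normal(0, 1, n)
      w_classical = x + rng.normal(0, 1, n)

      # Berkson error: true exposure = assigned (observed) value + noise
      w_berkson = rng.normal(0, 1, n)
      x_berkson = w_berkson + rng.normal(0, 1, n)
      y_berkson = beta * x_berkson + rng.normal(0, 1, n)

      print(f"true slope                 : {beta:.2f}")
      print(f"naive slope, classical err : {slope(w_classical, y):.2f} (theory ~ {beta * 0.5:.2f})")
      print(f"naive slope, Berkson error : {slope(w_berkson, y_berkson):.2f}")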

  19. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    NASA Astrophysics Data System (ADS)

    Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.

    2017-11-01

    In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method by Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to the finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.

  20. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
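
    The "many correction knobs" step can be pictured as a small linear-algebra problem: if a response matrix maps knob settings to island drives at the chosen rational surfaces, the SVD (here through the pseudoinverse) yields the minimum-norm knob settings that cancel a given error drive. The matrix and error vector below are random placeholders, not stellarator data.

```python
# Toy minimum-norm correction via the SVD/pseudoinverse: A maps correction-knob
# settings to island drives at a few rational surfaces, e is the error-field drive.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 8))            # 3 resonant surfaces, 8 correction knobs
e = rng.normal(size=3)                 # island drives produced by the field errors

c = -np.linalg.pinv(A) @ e             # SVD-based minimum-norm knob settings
print("residual island drive:", A @ c + e)          # ~ zero
print("norm of the correction:", np.linalg.norm(c))
```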

  1. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    NASA Astrophysics Data System (ADS)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
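
    The instrumental-variable idea can be illustrated with a deliberately simplified linear toy: a dose surrogate with additive classical error gives an attenuated naive slope, while the slope formed from covariances with an error-independent instrument recovers the true coefficient. Everything below (distributions, error sizes, the linear dose-response) is invented for illustration and omits the Berkson component and the Monte Carlo EM likelihood that the work actually develops.

```python
# Deliberately simplified instrumental-variable toy (all numbers invented):
# w = physical-dosimetry dose with additive classical error, z = biodosimeter
# reading used as the instrument, y = linear dose response.
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 0.5
d = rng.gamma(2.0, 1.0, n)                     # true dose (never observed)
w = d + rng.normal(0.0, 1.0, n)                # classical measurement error
z = d + rng.normal(0.0, 0.5, n)                # instrument: error independent of w and y
y = beta * d + rng.normal(0.0, 1.0, n)

def cov(a, b):
    return np.cov(a, b)[0, 1]

print("naive slope:", cov(w, y) / np.var(w, ddof=1))   # attenuated below beta
print("IV slope   :", cov(z, y) / cov(z, w))           # ~ beta
```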

  2. Quantum and classical ripples in graphene

    NASA Astrophysics Data System (ADS)

    Hašík, Juraj; Tosatti, Erio; Martoňák, Roman

    2018-04-01

    Thermal ripples of graphene are well understood at room temperature, but their quantum counterparts at low temperatures are in need of a realistic quantitative description. Here we present atomistic path-integral Monte Carlo simulations of freestanding graphene, which show upon cooling a striking classical-quantum evolution of height and angular fluctuations. The crossover takes place at ever-decreasing temperatures for ever-increasing wavelengths so that a completely quantum regime is never attained. Zero-temperature quantum graphene is flatter and smoother than classical graphene at large scales yet rougher at short scales. The angular fluctuation distribution of the normals can be quantitatively described by the coexistence of two Gaussians, one classical, strongly T-dependent, and one quantum, about 2° wide, of zero-point character. The quantum evolution of ripple-induced height and angular spread should be observable in electron diffraction in graphene and other two-dimensional materials, such as MoS2, bilayer graphene, boron nitride, etc.

  3. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.

  4. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  5. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  6. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  7. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  8. The Witness-Voting System

    NASA Astrophysics Data System (ADS)

    Gerck, Ed

    We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.

  9. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods. Performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. The equations are proposed with universal empirical coefficients. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.

  10. Zero in the brain: A voxel-based lesion-symptom mapping study in right hemisphere damaged patients.

    PubMed

    Benavides-Varela, Silvia; Passarini, Laura; Butterworth, Brian; Rolma, Giuseppe; Burgio, Francesca; Pitteri, Marco; Meneghello, Francesca; Shallice, Tim; Semenza, Carlo

    2016-04-01

    Transcoding numerals containing zero is more problematic than transcoding numbers formed by non-zero digits. However, it is currently unknown whether this is due to zeros requiring brain areas other than those traditionally associated with number representation. Here we hypothesize that transcoding zeros entails visuo-spatial and integrative processes typically associated with the right hemisphere. The investigation involved 22 right-brain-damaged patients and 20 healthy controls who completed tests of reading and writing Arabic numbers. As expected, the most significant deficit among patients involved a failure to cope with zeros. Moreover, a voxel-based lesion-symptom mapping (VLSM) analysis showed that the most common zero-errors were maximally associated with the right insula, which was previously related to sensorimotor integration, attention, and response selection, yet is here for the first time linked to transcoding processes. Error categories involving other digits corresponded to the so-called Neglect errors, which, however, constituted only about 10% of the total reading and 3% of the writing mistakes made by the patients. We argue that damage to the right hemisphere impairs the mechanism of parsing and the ability to set up empty-slot structures required for processing zeros in complex numbers; moreover, we suggest that the brain areas located in proximity to the right insula play a role in the integration of the information resulting from the temporary application of transcoding procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Probabilistic model of nonlinear penalties due to collision-induced timing jitter for calculation of the bit error ratio in wavelength-division-multiplexed return-to-zero systems

    NASA Astrophysics Data System (ADS)

    Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.

    2006-12-01

    We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.

  12. The Charles F. Prentice Award Lecture 2005: optics of the human eye: progress and problems.

    PubMed

    Charman, W Neil

    2006-06-01

    The history of measurements of ocular aberration is briefly reviewed and recent work using much-improved aberrometers and large samples of eyes is summarized. When on-axis, higher-order, monochromatic aberrations are averaged, undercorrected, positive, fourth-order spherical aberration dominates; other Zernike wavefront aberration coefficients have average values near zero. Individually, however, many eyes show substantial amounts of third-order and other fourth-order aberrations; the value of these varies idiosyncratically about zero. Most normal eyes show only small amounts of axial monochromatic aberration for photopic pupils up to around 3 mm; the limits to retinal image quality are then usually set by diffraction, uncorrected or imperfectly corrected spherocylindrical refractive error, accommodation error, and chromatic aberration. Longitudinal chromatic aberration varies very little across the population. With larger mesopic and scotopic pupils, monochromatic aberration plays a more important optical role, but overall visual performance is increasingly dominated by neural factors. Some remaining problems in measuring and modeling the eye's optical performance are discussed.

  13. Classical many-particle systems with unique disordered ground states

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2017-10-01

    Classical ground states (global energy-minimizing configurations) of many-particle systems are typically unique crystalline structures, implying zero enumeration entropy of distinct patterns (aside from trivial symmetry operations). By contrast, the few previously known disordered classical ground states of many-particle systems are all high-entropy (highly degenerate) states. Here we show computationally that our recently proposed "perfect-glass" many-particle model [Sci. Rep. 6, 36963 (2016), 10.1038/srep36963] possesses disordered classical ground states with zero entropy: a highly counterintuitive situation. For all of the system sizes, parameters, and space dimensions that we have numerically investigated, the disordered ground states are unique such that they can always be superposed onto each other or their mirror image. At low energies, the density of states obtained from simulations matches that calculated from the harmonic approximation near a single ground state, further confirming ground-state uniqueness. Our discovery provides singular examples in which entropy and disorder are at odds with one another. The zero-entropy ground states provide a unique perspective on the celebrated Kauzmann entropy crisis in which the extrapolated entropy of a supercooled liquid drops below that of the crystal. We expect our disordered unique patterns to be of value in fields beyond glass physics, including applications in cryptography as pseudorandom functions with tunable computational complexity.

  14. Power of one nonclean qubit

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi

    2017-04-01

    The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in a classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.

  15. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
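
    The hybrid loop itself is easy to picture with a toy example: a classical optimizer repeatedly asks for the energy of a parametrized state and updates the parameter. The sketch below evaluates the expectation value exactly with numpy for a made-up one-qubit Hamiltonian and a single-parameter ansatz, so it only illustrates the loop structure, not the ansatze, sampling noise or error-suppression ideas discussed in the paper.

```python
# Toy hybrid loop: exact "energy measurement" with numpy plus a classical optimizer.
# H and the one-parameter ansatz Ry(theta)|0> are made up for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X                                        # toy one-qubit Hamiltonian

def energy(theta):
    psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])   # Ry(theta)|0>
    return psi @ H @ psi                               # exact expectation value

res = minimize_scalar(energy, bounds=(-np.pi, np.pi), method="bounded")
print("variational minimum :", res.fun)
print("exact ground energy :", np.linalg.eigvalsh(H)[0])
```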

  16. Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212

  17. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  18. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters is ceased inside the dead-zone region, which results in a tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes have been illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Exact, E = 0, classical and quantum solutions for general power-law oscillators

    NASA Technical Reports Server (NTRS)

    Nieto, Michael Martin; Daboul, Jamil

    1995-01-01

    For zero energy, E = 0, we derive exact, classical and quantum solutions for all power-law oscillators with potentials V(r) = -γ/r^ν, γ > 0 and -∞ < ν < ∞. When the angular momentum is non-zero, these solutions lead to the classical orbits ρ(t) = [cos μ(φ(t) - φ₀)]^(1/μ) with μ = ν/2 - 1 ≠ 0. For ν > 2, the orbits are bound and go through the origin. We calculate the periods and precessions of these bound orbits, and graph a number of specific examples. The unbound orbits are also discussed in detail. Quantum mechanically, this system is also exactly solvable. We find that when ν > 2 the solutions are normalizable (bound), as in the classical case. Further, there are normalizable discrete, yet unbound, states. They correspond to unbound classical particles which reach infinity in a finite time. Finally, the number of space dimensions of the system can determine whether or not an E = 0 state is bound. These and other interesting comparisons to the classical system will be discussed.
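
    As a quick numerical illustration of the orbit formula (in scaled, dimensionless units), the snippet below evaluates ρ(φ) = [cos μ(φ - φ₀)]^(1/μ) for ν = 4, where μ = 1 and the zero-energy orbit reduces to a circle through the origin; the choice of ν and the units are illustrative only.

```python
# Zero-energy orbit rho(phi) = [cos(mu*(phi - phi0))]^(1/mu), mu = nu/2 - 1,
# evaluated for nu = 4 (mu = 1: a circle through the origin); scaled units.
import numpy as np

nu = 4.0
mu = nu / 2.0 - 1.0
phi = np.linspace(-np.pi / (2.0 * mu) + 1e-6, np.pi / (2.0 * mu) - 1e-6, 7)
rho = np.cos(mu * phi) ** (1.0 / mu)
for p, r in zip(phi, rho):
    print(f"phi = {p:+.3f} rad, rho = {r:.3f}")
```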

  20. A Strategy for Replacing Sum Scoring

    ERIC Educational Resources Information Center

    Ramsay, James O.; Wiberg, Marie

    2017-01-01

    This article promotes the use of modern test theory in testing situations where sum scores for binary responses are now used. It directly compares the efficiencies and biases of classical and modern test analyses and finds an improvement in the root mean squared error of ability estimates of about 5% for two designed multiple-choice tests and…

  1. A device adaptive inflow boundary condition for Wigner equations of quantum transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Haiyan; Lu, Tiao; Cai, Wei, E-mail: wcai@uncc.edu

    2014-02-01

    In this paper, an improved inflow boundary condition is proposed for Wigner equations in simulating a resonant tunneling diode (RTD), which takes into consideration the band structure of the device. The original Frensley inflow boundary condition prescribes the Wigner distribution function at the device boundary to be the semi-classical Fermi–Dirac distribution for free electrons in the device contacts without considering the effect of the quantum interaction inside the quantum device. The proposed device adaptive inflow boundary condition includes this effect by assigning the Wigner distribution to the value obtained from the Wigner transform of wave functions inside the device at zero external bias voltage, thus including the dominant effect on the electron distribution in the contacts due to the device internal band energy profile. Numerical results on computing the electron density inside the RTD under various incident waves and non-zero bias conditions show much improvement by the new boundary condition over the traditional Frensley inflow boundary condition.

  2. Error analysis and experiments of attitude measurement using laser gyroscope

    NASA Astrophysics Data System (ADS)

    Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao

    2018-03-01

    The precision of photoelectric tracking and measuring equipment on vehicles and vessels is deteriorated by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thereby produces image blur. In order to improve the precision of photoelectric equipment, the attitude of the photoelectric equipment fixed to the platform must be measured. Currently, the laser gyroscope is widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor, installation error and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. The accumulated error of the gyroscope over one hour decreased from 7000 arc sec to 5 arc sec after error compensation. The method used in this paper is suitable for decreasing laser gyro errors in inertial measurement applications.
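
    The deterministic part of such an error model (zero bias, scale factor and small-angle installation errors) and its inversion can be written compactly as below; the numerical values are placeholders standing in for calibration results, and the random-error term, which must be handled statistically, is omitted.

```python
# Schematic deterministic gyro error model and its inversion; the bias, scale and
# misalignment values are placeholders standing in for calibration results.
import numpy as np

bias = np.array([3.0e-6, -1.5e-6, 2.2e-6])                  # zero bias [rad/s]
scale = np.diag([1.0 + 2e-4, 1.0 - 1e-4, 1.0 + 5e-5])       # scale-factor errors
M = np.array([[1.0, 1e-4, -2e-4],                           # small-angle installation
              [-1e-4, 1.0, 3e-4],                           # (misalignment) matrix
              [2e-4, -3e-4, 1.0]])

def compensate(omega_meas):
    """Invert omega_meas = M @ scale @ omega_true + bias for omega_true."""
    return np.linalg.solve(M @ scale, omega_meas - bias)

omega_true = np.array([0.01, 0.0, 0.002])                   # some body rate [rad/s]
omega_meas = M @ scale @ omega_true + bias
print(compensate(omega_meas))                               # recovers omega_true
```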

  3. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    NASA Astrophysics Data System (ADS)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  4. Demand Controlled Economizer Cycles: A Direct Digital Control Scheme for Heating, Ventilating, and Air Conditioning Systems,

    DTIC Science & Technology

    1984-05-01

    Control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero if the absolute value of...signs of two errors jeq tdiff if equal, jump clr @preint else zero integral sum tdiff mov @diff,rl fetch absolute value of OAT-RAT ci rl,25 is...includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F

  5. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of the convergence speed and the steady-state error via the combination of a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
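
    The "zero attractor" mechanism can be illustrated with a stripped-down adaptive filter. The sketch below uses an LMS-type update (not the affine-projection form of the paper) plus the gradient of the smooth l0 approximation sum(1 - exp(-w²/2σ²)), which pulls small taps toward zero while leaving large taps essentially untouched; the channel, step sizes and σ are arbitrary illustration values.

```python
# Stripped-down "zero attractor": LMS-type update plus the gradient of the smooth
# l0 approximation sum(1 - exp(-w^2/(2 sigma^2))). Not the APA form of the paper;
# the channel, step sizes and sigma are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(4)
N, taps = 20_000, 64
h = np.zeros(taps); h[[3, 17, 40]] = [1.0, -0.5, 0.3]       # sparse channel
x = rng.normal(size=N)
d = np.convolve(x, h)[:N] + 0.01 * rng.normal(size=N)       # noisy channel output

mu, rho, sigma = 0.01, 5e-4, 0.05
w = np.zeros(taps)
for n in range(taps, N):
    xv = x[n - taps + 1:n + 1][::-1]                        # most recent sample first
    e = d[n] - w @ xv                                       # a-priori error
    attractor = (w / sigma**2) * np.exp(-w**2 / (2.0 * sigma**2))
    w += mu * e * xv - rho * attractor                      # LMS step + zero attraction

print("largest estimated taps:", np.round(np.sort(np.abs(w))[-4:], 3))
```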

  6. Making classical ground-state spin computing fault-tolerant.

    PubMed

    Crosson, I J; Bacon, D; Brown, K R

    2010-09-01

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.

  7. Modeling number of claims and prediction of total claim amount

    NASA Astrophysics Data System (ADS)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson model and the negative binomial model, the zero-inflated Poisson model and the zero-inflated negative binomial model are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of the total claim amount, the predictive performances of the candidate models are compared by using the root mean square error (RMSE) and mean absolute error (MAE) criteria.
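
    As a minimal illustration of how a zero-inflated Poisson fit differs from a plain Poisson fit, the sketch below maximizes an intercept-only zero-inflated Poisson likelihood on synthetic counts with excess zeros and compares log-likelihoods; it does not use the insurance data, covariates or the RMSE/MAE comparison of total claim amounts performed in the study.

```python
# Intercept-only zero-inflated Poisson fit by maximum likelihood on synthetic
# counts with excess zeros, compared with a plain Poisson fit (illustration only).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)
n, pi0, lam0 = 5000, 0.4, 1.2
y = np.where(rng.random(n) < pi0, 0, rng.poisson(lam0, n))   # zero-inflated sample

def zip_negloglik(params):
    pi = 1.0 / (1.0 + np.exp(-params[0]))                    # inflation probability
    lam = np.exp(params[1])                                  # Poisson mean
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def pois_negloglik(params):
    lam = np.exp(params[0])
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

zip_fit = minimize(zip_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
poi_fit = minimize(pois_negloglik, x0=[0.0], method="Nelder-Mead")
print("ZIP: pi ~", round(1 / (1 + np.exp(-zip_fit.x[0])), 2),
      " lambda ~", round(np.exp(zip_fit.x[1]), 2))
print("log-likelihoods (ZIP vs Poisson):", -zip_fit.fun, -poi_fit.fun)
```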

  8. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space charge force is still the dominant time cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
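
    The core of such solvers is an open-boundary convolution of the charge density with the free-space Green's function, evaluated as a circular convolution on a zero-padded (doubled) grid. The 2-D sketch below shows only this padding-and-FFT step with a plainly sampled Green's function (not the integrated or reduced variants, nor the cosine-transform optimization); grid size, spacing and the crude r = 0 patch are arbitrary.

```python
# Zero-padded FFT convolution of a charge density with a sampled 2-D free-space
# Green's function (Hockney-style); grid size, spacing and the r = 0 patch are
# arbitrary, and the integrated/reduced Green's function variants are not shown.
import numpy as np

N, hx = 64, 0.1
rho = np.zeros((N, N))
rho[N // 2, N // 2] = 1.0 / hx**2                     # unit point charge

# Green's function of -Laplacian in 2-D on the doubled grid (mirror distances)
idx = np.arange(2 * N)
dist = np.minimum(idx, 2 * N - idx) * hx
X, Y = np.meshgrid(dist, dist, indexing="ij")
r = np.hypot(X, Y)
r[0, 0] = 0.5 * hx                                    # crude regularization at r = 0
G = -np.log(r) / (2.0 * np.pi)

rho_pad = np.zeros((2 * N, 2 * N))
rho_pad[:N, :N] = rho                                 # zero padding for open boundaries
phi = np.fft.irfft2(np.fft.rfft2(G) * np.fft.rfft2(rho_pad), s=(2 * N, 2 * N))
phi = phi[:N, :N] * hx**2                             # keep the physical block

# A few cells away from the charge the potential matches -ln(r)/(2*pi)
print(phi[N // 2 + 8, N // 2], -np.log(8 * hx) / (2.0 * np.pi))
```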

  9. Trajectory description of the quantum–classical transition for wave packet interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    2016-08-15

    The quantum–classical transition for wave packet interference is investigated using a hydrodynamic description. A nonlinear quantum–classical transition equation is obtained by introducing a degree of quantumness ranging from zero to one into the classical time-dependent Schrödinger equation. This equation provides a continuous description for the transition process of physical systems from purely quantum to purely classical regimes. In this study, the transition trajectory formalism is developed to provide a hydrodynamic description for the quantum–classical transition. The flow momentum of transition trajectories is defined by the gradient of the action function in the transition wave function and these trajectories follow the main features of the evolving probability density. Then, the transition trajectory formalism is employed to analyze the quantum–classical transition of wave packet interference. For the collision-like wave packet interference where the propagation velocity is faster than the spreading speed of the wave packet, the interference process remains collision-like for all degrees of quantumness. However, the interference features demonstrated by transition trajectories gradually disappear when the degree of quantumness approaches zero. For the diffraction-like wave packet interference, the interference process changes continuously from a diffraction-like to collision-like case when the degree of quantumness gradually decreases. This study provides an insightful trajectory interpretation for the quantum–classical transition of wave packet interference.

  10. Cola soft drinks for evaluating the bioaccessibility of uranium in contaminated mine soils.

    PubMed

    Lottermoser, Bernd G; Schnug, Ewald; Haneklaus, Silvia

    2011-08-15

    There is a rising need for scientifically sound and quantitative as well as simple, rapid, cheap and readily available soil testing procedures. The purpose of this study was to explore selected soft drinks (Coca-Cola Classic®, Diet Coke®, Coke Zero®) as indicators of bioaccessible uranium and other trace elements (As, Ce, Cu, La, Mn, Ni, Pb, Th, Y, Zn) in contaminated soils of the Mary Kathleen uranium mine site, Australia. Data of single extraction tests using Coca-Cola Classic®, Diet Coke® and Coke Zero® demonstrate that extractable arsenic, copper, lanthanum, manganese, nickel, yttrium and zinc concentrations correlate significantly with DTPA- and CaCl₂-extractable metals. Moreover, the correlation between DTPA-extractable uranium and that extracted using Coca-Cola Classic® is close to unity (+0.98), with reduced correlations for Diet Coke® (+0.66) and Coke Zero® (+0.55). Also, Coca-Cola Classic® extracts uranium concentrations near identical to DTPA, whereas distinctly higher uranium fractions were extracted using Diet Coke® and Coke Zero®. Results of this study demonstrate that the use of Coca-Cola Classic® in single extraction tests provided an excellent indication of bioaccessible uranium in the analysed soils and of uranium uptake into leaves and stems of the Sodom apple (Calotropis procera). Moreover, the unconventional reagent is superior in terms of availability, costs, preparation and disposal compared to traditional chemicals. Contaminated site assessments and rehabilitation of uranium mine sites require a solid understanding of the chemical speciation of environmentally significant elements for estimating their translocation in soils and plant uptake. Therefore, Cola soft drinks have potential applications in single extraction tests of uranium contaminated soils and may be used for environmental impact assessments of uranium mine sites, nuclear fuel processing plants and waste storage and disposal facilities. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. On a method for generating inequalities for the zeros of certain functions

    NASA Astrophysics Data System (ADS)

    Gatteschi, Luigi; Giordano, Carla

    2007-10-01

    In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3)(1952), Indag. Math. 14(1952) 224-229] and more recently too [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform Special Functions, 10(2000) 41-56], to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to get inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.

  13. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469

  14. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions.

  15. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that constantly adjusts the frequency in an effort to drive the error to zero. When the laser frequency deviates from the midpeak value but remains within the locking range, the magnitude and sign of the error signal indicate the amount of detuning and the control circuitry adjusts the frequency by what it estimates to be the negative of this amount in an effort to bring the error to zero.
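
    A toy version of the lock loop described above can be written in a few lines: find the absorption maximum from a coarse sweep, then servo the frequency so that the derivative of the absorption (the error signal) is driven to zero with a proportional-integral law. The Lorentzian line shape, gains and step counts below are invented for illustration and ignore noise, the modulation scheme and the derivative-generating hardware of the real system.

```python
# Toy lock: coarse sweep to the absorption maximum, then a PI loop driving the
# derivative of the absorption (the error signal) to zero. Line shape and gains
# are invented; noise and the modulation/demodulation hardware are ignored.
import numpy as np

f_peak, width = 0.0, 1.0
absorption = lambda f: 1.0 / (1.0 + ((f - f_peak) / width) ** 2)      # Lorentzian
error_signal = lambda f: (absorption(f + 0.01) - absorption(f - 0.01)) / 0.02

# 1) Coarse sweep: land near (but not exactly on) the peak
sweep = np.linspace(-10.0, 10.0, 2001)
f = sweep[np.argmax(absorption(sweep))] + 0.3          # start inside the locking range

# 2) Lock: error signal is positive below the peak, negative above, zero on top
kp, ki, integ = 0.2, 0.02, 0.0
for _ in range(200):
    err = error_signal(f)
    integ += err
    f += kp * err + ki * integ                         # push toward the zero crossing
print("final offset from the peak:", f - f_peak)
```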

  16. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
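
    To give a flavour of the nonlinear-fit step (step two above), the sketch below fits a gain and one conjugate pole pair of a velocity-type response to synthetic frequency-response data with scipy's least-squares routine; the coarse grid search, the random-calibration processing and the per-band error estimation of the actual procedure are omitted, and all numbers are placeholders.

```python
# Nonlinear fit (step two only) of a gain and one conjugate pole pair to synthetic
# frequency-response "calibration" data; all parameter values are placeholders.
import numpy as np
from scipy.optimize import least_squares

w = 2.0 * np.pi * np.logspace(-3, 1, 200)              # angular frequencies [rad/s]

def response(params, w):
    gain, re, im = params                              # poles at re +/- i*im
    s = 1j * w
    return gain * s**2 / ((s - complex(re, im)) * (s - complex(re, -im)))

true = (1500.0, -0.037, 0.037)
rng = np.random.default_rng(6)
data = response(true, w) * (1.0 + 0.01 * rng.normal(size=w.size))   # noisy data

def residuals(params):
    diff = response(params, w) - data
    return np.concatenate([diff.real, diff.imag])      # fit real and imaginary parts

fit = least_squares(residuals, x0=(1000.0, -0.05, 0.05))
print("fitted gain and pole:", fit.x)                  # ~ (1500, -0.037, 0.037)
```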

  17. A biomimetic algorithm for the improved detection of microarray features

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.

    2007-02-01

    One of the major difficulties of microarray technology relates to the processing of large and - importantly - error-loaded images of the dots on the chip surface. Whatever the source of these errors, those obtained in the first stage of data acquisition - segmentation - are passed down to the subsequent processes, with deleterious results. As it has been demonstrated recently that biological systems have evolved algorithms that are mathematically efficient, this contribution attempts to test an algorithm that mimics a bacterial "patented" algorithm for the search of available space and nutrients to find, "zero in" on and eventually delimit the features present on the microarray surface.

  18. 31 CFR 363.138 - Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...

  19. 31 CFR 363.138 - Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...

  20. 31 CFR 363.138 - Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...

  1. Construction of the Second Quito Astrolabe Catalogue

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.

    1994-03-01

    A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, additional parameters being introduced in the step-wise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those due to the rigorous classical method. Some improvement both in systematic and random errors is outlined.

  2. XZP + 1d and XZP + 1d-DKH basis sets for second-row elements: application to CCSD(T) zero-point vibrational energy and atomization energy calculations.

    PubMed

    Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A

    2012-09-01

    Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5%. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol⁻¹) was found for SiH₄.

  3. Using CAS to Solve Classical Mathematics Problems

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Burroughs, Elizabeth A.

    2009-01-01

    Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…

  4. The high accuracy data processing system of laser interferometry signals based on MSP430

    NASA Astrophysics Data System (ADS)

    Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong

    2009-07-01

    Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually contain three errors: zero offset error, unequal amplitude error and quadrature phase-shift error. These three errors seriously degrade the subdivision precision. Compensation of the three errors is proposed on the basis of the Heydemann error-compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is advanced to decrease the computing time effectively, exploiting the special characteristic that only one data item changes in each fitting operation. A real-time, dynamic compensation circuit is then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals containing the three errors are digitized by the AD7862. After data processing with the improved algorithm, two ideal error-free signals are output by the AD7225. At the same time, the two original signals are converted into corresponding square waves and fed to the direction-discrimination circuit. The pulses from the direction-discrimination circuit are counted by the timer of the microcontroller. Based on the pulse count and the software subdivision, the final result is displayed on an LED. The algorithm and circuit were used to test a laser interferometer with 8-fold optical path difference, and a measuring accuracy of 12-14 nm was achieved.
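
    The correction itself can be sketched compactly: estimate the two offsets, the two amplitudes and the quadrature phase error from the measured Lissajous ellipse, then rebuild ideal cosine/sine signals and take the phase. The version below uses a nonlinear least-squares ellipse fit for brevity rather than the linearized fit of the original Heydemann scheme, and the quadrature signals are synthetic.

```python
# Heydemann-style correction of two quadrature signals: estimate offsets (p, q),
# amplitudes (a, b) and quadrature phase error beta, then rebuild ideal cosine/sine
# and take the phase. A nonlinear ellipse fit is used here instead of the original
# linearized fit; the signals are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
theta = np.linspace(0.0, 12.0 * np.pi, 4000)                 # true interferometric phase
x = 0.10 + 1.00 * np.cos(theta) + 0.002 * rng.normal(size=theta.size)
y = -0.05 + 0.85 * np.sin(theta + 0.08) + 0.002 * rng.normal(size=theta.size)

def residuals(params):
    p, q, a, b, beta = params
    c = (x - p) / a                                          # ideally cos(theta)
    s = ((y - q) / b - c * np.sin(beta)) / np.cos(beta)      # ideally sin(theta)
    return c**2 + s**2 - 1.0                                 # unit-circle condition

p0 = (x.mean(), y.mean(), np.std(x) * np.sqrt(2.0), np.std(y) * np.sqrt(2.0), 0.0)
p, q, a, b, beta = least_squares(residuals, p0).x

c = (x - p) / a
s = ((y - q) / b - c * np.sin(beta)) / np.cos(beta)
phase = np.unwrap(np.arctan2(s, c))
print("max phase error after correction [rad]:", np.max(np.abs(phase - theta)))
```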

  5. Classical simulation of quantum error correction in a Fibonacci anyon code

    NASA Astrophysics Data System (ADS)

    Burton, Simon; Brell, Courtney G.; Flammia, Steven T.

    2017-02-01

    Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 × 128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.

  6. Deviations from Vegard's law in semiconductor thin films measured with X-ray diffraction and Rutherford backscattering: The Ge1-ySny and Ge1-xSix cases

    NASA Astrophysics Data System (ADS)

    Xu, Chi; Senaratne, Charutha L.; Culbertson, Robert J.; Kouvetakis, John; Menéndez, José

    2017-09-01

    The compositional dependence of the lattice parameter in Ge1-ySny alloys has been determined from combined X-ray diffraction and Rutherford Backscattering (RBS) measurements of a large set of epitaxial films with compositions in the 0 < y < 0.14 range. In view of contradictory prior results, a critical analysis of this method has been carried out, with emphasis on nonlinear elasticity corrections and systematic errors in popular RBS simulation codes. The approach followed is validated by showing that measurements of Ge1-xSix films yield a bowing parameter θGeSi =-0.0253(30) Å, in excellent agreement with the classic work by Dismukes. When the same methodology is applied to Ge1-ySny alloy films, it is found that the bowing parameter θGeSn is zero within experimental error, so that the system follows Vegard's law. This is in qualitative agreement with ab initio theory, but the value of the experimental bowing parameter is significantly smaller than the theoretical prediction. Possible reasons for this discrepancy are discussed in detail.

  7. Continuous fractional-order Zero Phase Error Tracking Control.

    PubMed

    Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan

    2018-04-01

    A continuous-time fractional-order feedforward control algorithm for tracking desired time-varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called a Fractional-Order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e., with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary, and each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. Besides, a modified quasi-perfect tracking scheme is presented for systems for which future tracking-trajectory information is not available or which have problems with high-frequency disturbance rejection when the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal Peltier platform are used to validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Nonperturbative interpretation of the Bloch vector's path beyond the rotating-wave approximation

    NASA Astrophysics Data System (ADS)

    Benenti, Giuliano; Siccardi, Stefano; Strini, Giuliano

    2013-09-01

    The Bloch vector's path of a two-level system exposed to a monochromatic field exhibits, in the regime of strong coupling, complex corkscrew trajectories. By considering the infinitesimal evolution of the two-level system when the field is treated as a classical object, we show that the Bloch vector's rotation speed oscillates between zero and twice the rotation speed predicted by the rotating-wave approximation. Cusps appear when the rotation speed vanishes. We prove analytically that at the cusps the curvature of the Bloch vector's path diverges. On the other hand, numerical data show that the curvature is very large even for a quantum field in the deep quantum regime with mean photon number n̄ ≲ 1. We finally compute numerically the typical error size in a quantum gate when the terms beyond the rotating-wave approximation are neglected.
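    A minimal numerical sketch of the kind of calculation described above, assuming a classical monochromatic drive and no rotating-wave approximation: the Bloch equation dr/dt = b(t) × r is integrated and the instantaneous rotation speed |b × r| is tracked. All parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-level system driven by a classical monochromatic field, no rotating-wave approximation.
# H(t) = (w0/2) sigma_z + Omega * cos(w t) sigma_x, hbar = 1.
# For a pure state the Bloch vector obeys dr/dt = b(t) x r with b = (2*Omega*cos(w t), 0, w0).
w0 = 1.0          # qubit transition frequency (illustrative units)
w = 1.0           # drive frequency, on resonance
Omega = 0.4       # strong coupling: corkscrew trajectories, cusps in the path

def bloch_rhs(t, r):
    b = np.array([2.0 * Omega * np.cos(w * t), 0.0, w0])
    return np.cross(b, r)

t_eval = np.linspace(0.0, 40.0, 8000)
sol = solve_ivp(bloch_rhs, (0.0, 40.0), [0.0, 0.0, 1.0], t_eval=t_eval,
                rtol=1e-9, atol=1e-12)

# Instantaneous rotation speed |dr/dt| = |b x r| along the computed trajectory.
b_t = np.stack([2.0 * Omega * np.cos(w * t_eval),
                np.zeros_like(t_eval),
                np.full_like(t_eval, w0)], axis=1)
speed = np.linalg.norm(np.cross(b_t, sol.y.T), axis=1)

# The abstract's point: without the RWA this speed is strongly modulated rather than constant,
# and cusps in the path occur where it approaches zero.
print("min / max rotation speed:", speed.min(), speed.max())
```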

  9. Normal forms for Hopf-Zero singularities with nonconservative nonlinear part

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh; Sanders, Jan A.

    In this paper we are concerned with the simplest normal form computation of the systems ẋ = 2x f(x, y² + z²), ẏ = z + y f(x, y² + z²), ż = -y + z f(x, y² + z²), where f is a formal function with real coefficients and without any constant term. These are the classical normal forms of a larger family of systems with Hopf-Zero singularity. Indeed, these are defined such that this family would be a Lie subalgebra for the space of all classical normal form vector fields with Hopf-Zero singularity. The simplest normal forms and simplest orbital normal forms of this family with nonzero quadratic part are computed. We also obtain the simplest parametric normal form of any non-degenerate perturbation of this family within the Lie subalgebra. The symmetry group of the simplest normal forms is also discussed. This is a part of our results in decomposing the normal forms of Hopf-Zero singular systems into systems with a first integral and nonconservative systems.

  10. Solving the patient zero inverse problem by using generalized simulated annealing

    NASA Astrophysics Data System (ADS)

    Menin, Olavo H.; Bauch, Chris T.

    2018-01-01

    Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing algorithm (GSA) to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) on a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm's performance with respect to the generalization parameter q̃v and the fraction ρ of infected nodes for whom infection was ascertained is assessed. Numerical experiments show the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence to support that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.
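    A schematic sketch of the estimation loop described above, using the Tsallis-type acceptance rule of generalized simulated annealing over candidate source nodes of a toy SI outbreak on a small lattice. The lattice model, cost function (mismatch with re-simulated outbreaks), cooling schedule and parameter values are simplified stand-ins, and the infection time is treated as known here, unlike in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, steps = 16, 0.3, 12          # lattice size, infection probability, outbreak duration (illustrative)

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def simulate_si(source, n_steps):
    """Discrete-time stochastic SI outbreak on an L x L lattice starting from `source`."""
    infected = np.zeros((L, L), dtype=bool)
    infected[source] = True
    for _ in range(n_steps):
        new = infected.copy()
        for i, j in zip(*np.nonzero(infected)):
            for ni, nj in neighbors(i, j):
                if not infected[ni, nj] and rng.random() < beta:
                    new[ni, nj] = True
        infected = new
    return infected

# "Observed" outbreak produced by a hidden patient zero.
true_source = (5, 7)
observed = simulate_si(true_source, steps)

def cost(candidate, n_rep=3):
    """Average mismatch between the observed pattern and outbreaks re-simulated from `candidate`."""
    return np.mean([np.sum(simulate_si(candidate, steps) ^ observed) for _ in range(n_rep)])

# Generalized simulated annealing loop: Tsallis-type acceptance rule for worse moves.
q_a = 1.5                                  # acceptance-distribution parameter (illustrative)
current = (int(rng.integers(L)), int(rng.integers(L)))
current_cost = cost(current)
best, best_cost = current, current_cost
for k in range(1, 201):
    T = 10.0 / k                           # simple cooling schedule (stand-in for the GSA visiting schedule)
    cand = (int(rng.integers(L)), int(rng.integers(L)))
    c = cost(cand)
    d = c - current_cost
    if d <= 0:
        accept = True
    else:
        arg = 1.0 + (q_a - 1.0) * d / T
        accept = arg > 0 and rng.random() < arg ** (1.0 / (1.0 - q_a))
    if accept:
        current, current_cost = cand, c
        if c < best_cost:
            best, best_cost = cand, c

# The estimate should land on or near the hidden source.
print("true source:", true_source, " estimated source:", best, " cost:", best_cost)
```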

  11. Self-spectral calibration for spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Xianling; Gao, Wanrong; Bian, Haiyi; Chen, Chaoliang; Liao, Jiuling

    2013-06-01

    A different real-time self-wavelength calibration method for spectral domain optical coherence tomography is presented in which interference spectra measured from two arbitrary points on the tissue surface are used for calibration. The method takes advantage of two favorable conditions of the optical coherence tomography (OCT) signal. First, the signal back-scattered from the tissue surface is generally much stronger than that from positions in the tissue interior, so the spectral component of the surface interference can be extracted from the measured spectrum. Second, the tissue surface is not a plane, and a phase difference exists between the light reflected from two different points on the surface. Compared with the zero-crossing automatic method, the introduced method has the advantage of removing the error due to dispersion mismatch or the common phase error. The method is tested experimentally to demonstrate the improved signal-to-noise ratio, higher axial resolution, and slower sensitivity degradation with depth when compared to the use of the zero-crossing method, and it is applied to two-dimensional cross-sectional images of human finger skin.

  12. ULTRA-STABILIZED D. C. AMPLIFIER

    DOEpatents

    Hartwig, E.C.; Kuenning, R.W.; Acker, R.C.

    1959-02-17

    An improved circuit is described for stabilizing the drift and minimizing the noise and hum level of d-c amplifiers so that the output voltage will be zero when the input is zero. In its detailed aspects, the disclosed circuit incorporates a d-c amplifier having a signal input, a second input, and an output circuit coupled back to the first input of the amplifier through inverse feedback means. An electronically driven chopper having a pair of fixed contacts and a moveable contact alternately connects the two inputs of a difference amplifier to the signal input. The A. E. error signal produced in the difference amplifier is amplified, rectified, and applied to the second input of the amplifier as the d-c stabilizing voltage.

  13. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto-regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero-mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with the Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving the denoising effect by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
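    The sketch below illustrates the general idea on synthetic data rather than real FOG records: an AR(3) model is fitted to a noisy drift-like signal by least squares, recast in state-space form, and filtered with a Kalman filter whose measurement-noise variance is updated with a Sage-Husa-style fading-memory rule. It is a simplified stand-in for the modified SHAKF of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Synthetic stand-in for a FOG drift record: an AR(3) process observed in white noise. ---
n = 5000
a_true = np.array([0.6, 0.2, 0.1])             # stable AR(3) coefficients (illustrative)
x = np.zeros(n)
for k in range(3, n):
    x[k] = a_true @ x[k-3:k][::-1] + 0.01 * rng.standard_normal()
z = x + 0.05 * rng.standard_normal(n)          # measured signal = drift + sensor noise

# --- Fit an AR(3) model to the measured signal by simple least squares. ---
Y = z[3:]
X = np.column_stack([z[2:-1], z[1:-2], z[0:-3]])
a = np.linalg.lstsq(X, Y, rcond=None)[0]

# --- State-space form and Kalman filter with a Sage-Husa-style adaptive measurement noise. ---
F = np.array([[a[0], a[1], a[2]], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
H = np.array([[1.0, 0.0, 0.0]])
Q = 1e-4 * np.eye(3)                           # process noise covariance (assumed)
r_hat = 1.0                                    # initial guess of the measurement noise variance
b = 0.98                                       # forgetting factor of the adaptive estimator
s = np.zeros(3)
P = np.eye(3)
filtered = np.zeros(n)
for k in range(n):
    # Predict.
    s = F @ s
    P = F @ P @ F.T + Q
    # Innovation and adaptive update of the measurement noise variance.
    innov = z[k] - (H @ s)[0]
    S_pred = (H @ P @ H.T)[0, 0]
    d = (1.0 - b) / (1.0 - b ** (k + 1))
    r_hat = max((1.0 - d) * r_hat + d * (innov**2 - S_pred), 1e-8)
    # Update.
    K = P @ H.T / (S_pred + r_hat)
    s = s + K[:, 0] * innov
    P = (np.eye(3) - K @ H) @ P
    filtered[k] = s[0]

print("fitted AR(3) coefficients:", np.round(a, 3))
print("raw noise std:     ", np.std(z - x))
print("filtered noise std:", np.std(filtered - x))
```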

  14. A comparison of zero-order, first-order, and monod biotransformation models

    USGS Publications Warehouse

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
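    The error bounds quoted above are easy to reproduce numerically. The short sketch below compares the Monod rate with its first-order and zero-order approximations and prints the relative errors, which equal S/Ks and Ks/S respectively; the kinetic constants are arbitrary illustrative values.

```python
import numpy as np

# Monod (Michaelis-Menten type) degradation rate and its two limiting approximations.
k_max = 1.0      # maximum degradation rate (illustrative units)
Ks = 2.0         # half-saturation constant, e.g. mg/L

def monod(S):
    return k_max * S / (Ks + S)

def first_order(S):      # valid when S << Ks
    return (k_max / Ks) * S

def zero_order(S):       # valid when S >> Ks
    return np.full_like(S, k_max, dtype=float)

S = np.array([0.02, 0.2, 2.0, 20.0, 200.0])
rel_err_first = (first_order(S) - monod(S)) / monod(S)   # equals S/Ks
rel_err_zero = (zero_order(S) - monod(S)) / monod(S)     # equals Ks/S

for s_val, e1, e0 in zip(S, rel_err_first, rel_err_zero):
    print(f"S = {s_val:7.2f}  S/Ks = {s_val/Ks:7.2f}  "
          f"first-order rel. error = {e1:7.2f}  zero-order rel. error = {e0:7.2f}")
```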

  15. Planckian charged black holes in ultraviolet self-complete quantum gravity

    NASA Astrophysics Data System (ADS)

    Nicolini, Piero

    2018-03-01

    We present an analysis of the role of the charge within the self-complete quantum gravity paradigm. By studying the classicalization of generic ultraviolet-improved charged black hole solutions around the Planck scale, we show that the charge introduces important differences with respect to the neutral case. First, there exists a family of black hole parameters fulfilling the particle-black hole condition. Second, there is no extremal particle-black hole solution but, at best, quasi-extremal charged particle-black holes. We show that the Hawking emission disrupts the particle-black hole condition. An analysis of the Schwinger pair-production mechanism shows that the charge is quickly shed and that the particle-black hole condition can ultimately be restored in a cooling-down phase towards a zero-temperature configuration, provided non-classical effects are taken into account.

  16. Input Shaping to Reduce Solar Array Structural Vibrations

    NASA Technical Reports Server (NTRS)

    Doherty, Michael J.; Tolson, Robert J.

    1998-01-01

    Structural vibrations induced by actuators can be minimized using input shaping. Input shaping is a feedforward method in which actuator commands are convolved with shaping functions to yield a shaped set of commands. These commands are designed to perform the maneuver while minimizing the residual structural vibration. In this report, input shaping is extended to stepper motor actuators. As a demonstration, an input-shaping technique based on pole-zero cancellation was used to modify the Solar Array Drive Assembly (SADA) actuator commands for the Lewis satellite. A series of impulses were calculated as the ideal SADA output for vibration control. These impulses were then discretized for use by the SADA stepper motor actuator and simulated actuator outputs were used to calculate the structural response. The effectiveness of input shaping is limited by the accuracy of the knowledge of the modal frequencies. Assuming perfect knowledge resulted in significant vibration reduction. Errors of 10% in the modal frequencies caused notably higher levels of vibration. Controller robustness was improved by incorporating additional zeros in the shaping function. The additional zeros did not require increased performance from the actuator. Despite the identification errors, the resulting feedforward controller reduced residual vibrations to the level of the exactly modeled input shaper and well below the baseline cases. These results could be easily applied to many other vibration-sensitive applications involving stepper motor actuators.
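    The sketch below illustrates the underlying idea with a generic two-impulse zero-vibration (ZV) shaper, not the SADA/Lewis implementation: the shaper is convolved with a step command and applied to a lightly damped second-order mode, and the residual vibration is compared with the unshaped case. Frequency and damping values are invented.

```python
import numpy as np
from scipy.signal import lti, lsim

# A generic two-impulse zero-vibration (ZV) input shaper for one lightly damped mode.
wn = 2.0 * np.pi * 0.5        # modal natural frequency, rad/s (illustrative)
zeta = 0.02                   # modal damping ratio (illustrative)
wd = wn * np.sqrt(1.0 - zeta**2)

K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
A1, A2 = 1.0 / (1.0 + K), K / (1.0 + K)       # impulse amplitudes
t2 = np.pi / wd                               # second impulse at half the damped period

dt = 0.001
t = np.arange(0.0, 20.0, dt)
step = np.ones_like(t)                        # unshaped command
shaper = np.zeros_like(t)
shaper[0] = A1 / dt
shaper[int(round(t2 / dt))] = A2 / dt         # discrete approximation of the two impulses
shaped = np.convolve(step, shaper)[: len(t)] * dt

# Lightly damped second-order mode driven by the command.
mode = lti([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
_, y_unshaped, _ = lsim(mode, step, t)
_, y_shaped, _ = lsim(mode, shaped, t)

# Residual vibration after the command has settled (last quarter of the run).
tail = t > 15.0
print("residual oscillation, unshaped:", np.ptp(y_unshaped[tail]))
print("residual oscillation, shaped:  ", np.ptp(y_shaped[tail]))
```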

  17. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model was almost the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
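    As a generic illustration of the zero-inflated Poisson machinery (without the random effects of the mixed model used in the study, which require specialized software), the sketch below fits a ZIP regression to simulated data by direct maximum likelihood and reads approximate standard errors off the inverse Hessian.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(42)

# --- Simulate zero-inflated Poisson data with one covariate. ---
n = 2000
x = rng.standard_normal(n)
pi_true = expit(-1.0 + 0.8 * x)                # probability of a structural zero
lam_true = np.exp(0.5 + 0.6 * x)               # Poisson mean of the count component
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true))

X = np.column_stack([np.ones(n), x])

def zip_negloglik(params):
    """Negative log-likelihood of a ZIP regression with logit and log links."""
    b_infl, b_count = params[:2], params[2:]
    pi = expit(X @ b_infl)
    lam = np.exp(X @ b_count)
    loglik_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    loglik_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    return -np.sum(np.where(y == 0, loglik_zero, loglik_pos))

res = minimize(zip_negloglik, np.zeros(4), method="BFGS")
se = np.sqrt(np.diag(res.hess_inv))            # approximate standard errors from the inverse Hessian
print("zero-inflation coefficients:", np.round(res.x[:2], 3), "SE:", np.round(se[:2], 3))
print("count-model coefficients:   ", np.round(res.x[2:], 3), "SE:", np.round(se[2:], 3))
```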

  18. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  19. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  20. Expeditious reconciliation for practical quantum key distribution

    NASA Astrophysics Data System (ADS)

    Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.

    2004-08-01

    The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
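    The core primitive of Cascade-style reconciliation is block-parity comparison followed by binary search inside mismatching blocks. The toy single-pass sketch below illustrates only that primitive, assuming an error-free public channel; real Cascade runs several shuffled passes with backtracking, plus the estimation, FEC and segmentation refinements described above, and is followed by privacy amplification.

```python
import numpy as np

rng = np.random.default_rng(7)

def binary_locate(alice_block, bob_block, offset):
    """Binary search for one error position inside a block whose parities differ."""
    lo, hi = 0, len(alice_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Compare parities of the left half; Alice would announce hers over the classical channel.
        if alice_block[lo:mid].sum() % 2 != bob_block[lo:mid].sum() % 2:
            hi = mid
        else:
            lo = mid
    return offset + lo

# Sifted keys: Bob's copy differs from Alice's by independent bit flips (QBER ~ 3%).
n, qber = 4096, 0.03
alice = rng.integers(0, 2, n)
bob = alice ^ (rng.random(n) < qber).astype(int)

block = 32                      # block size would normally be tuned to the estimated error rate
corrected = bob.copy()
for start in range(0, n, block):
    a_blk = alice[start:start + block]
    b_blk = corrected[start:start + block]
    if a_blk.sum() % 2 != b_blk.sum() % 2:       # parities differ -> odd number of errors inside
        pos = binary_locate(a_blk, b_blk, start)
        corrected[pos] ^= 1                      # flip the located bit

print("errors before:", int(np.sum(alice != bob)))
print("errors after one parity pass:", int(np.sum(alice != corrected)))
```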

  1. Quantification of immobilized Candida antarctica lipase B (CALB) using ICP-AES combined with Bradford method.

    PubMed

    Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L

    2017-02-01

    The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). CALB was then adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from the protein in the CALB solution) by means of atomic emission with inductively coupled plasma (AE-ICP). Four different protocols were applied combining AE-ICP and classical Bradford assays, besides carbon, hydrogen and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in the CALB solution was quantified. These errors were calculated by taking as the "true protein content" the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z^[I] → ∞, and retain high stability efficiency in the absence of stiffness, z^[I] → 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
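    The additive (implicit-explicit) splitting idea can be illustrated with its simplest member, first-order IMEX Euler, applied to a linear convection-diffusion test problem: the stiff diffusion operator is treated implicitly and the nonstiff convection operator explicitly. This is only a sketch of the splitting concept, not the third- to fifth-order ARK2 schemes of the paper; grid and parameter values are illustrative.

```python
import numpy as np

# 1-D convection-diffusion test problem u_t + c u_x = nu u_xx on a periodic grid,
# integrated with first-order IMEX (additive) Euler: diffusion implicit, convection explicit.
N, c, nu = 128, 1.0, 0.05
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)

# Periodic finite-difference operators.
main = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
main[0, -1] = main[-1, 0] = 1.0
D2 = nu * main / dx**2                                   # stiff diffusion operator
D1 = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2.0 * dx)
D1[0, -1], D1[-1, 0] = -1.0 / (2.0 * dx), 1.0 / (2.0 * dx)
A_explicit = -c * D1                                     # nonstiff convection operator

dt, T = 0.02, 2.0
I = np.eye(N)
solve = np.linalg.inv(I - dt * D2)                       # (I - dt*L_stiff)^-1, factored once
for _ in range(int(T / dt)):
    u = solve @ (u + dt * (A_explicit @ u))              # implicit in diffusion, explicit in convection

# Exact solution of the linear problem for comparison.
u_exact = np.exp(-nu * T) * np.sin(x - c * T)
print("max error vs exact solution:", np.max(np.abs(u - u_exact)))
```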

  3. Evaluation of parameters for particles acceleration by the zero-point field of quantum electrodynamics

    NASA Technical Reports Server (NTRS)

    Rueda, A.

    1985-01-01

    That particles may be accelerated by vacuum effects in quantum field theory has been repeatedly proposed in the last few years. A natural upshot of this is a mechanism for the acceleration of cosmic-ray (CR) primaries. A mechanism for acceleration by the zero-point field (ZPE), when the ZPE is taken in a realistic sense (as opposed to a virtual field), was considered. Originally the idea was developed within a semiclassical context. The classical Einstein-Hopf model (EHM) was used to show that free, isolated, electromagnetically interacting particles perform a random walk in phase space, and more importantly in momentum space, when subjected to the perennial action of the so-called classical electromagnetic ZPE.

  4. Impact of including or excluding both-armed zero-event studies on using standard meta-analysis methods for rare event outcome: a simulation study

    PubMed Central

    Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana

    2016-01-01

    Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handle them differently depending on the choice of effect measures and the authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of the true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies had smaller bias than those including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725

  5. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  6. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a kind of high-precision angle-position sensor. It has important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterized by its error, and measuring this error during production and application of the inductosyn is an important problem. At present, the error of an inductosyn is mainly obtained by manual measurement, which has unavoidable disadvantages such as high labour intensity for the operator, easily introduced measurement mistakes, and poor repeatability. In order to solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. The measuring and calculating errors caused by human factors are overcome by this method, and the measuring process becomes quicker, more exact and more reliable. Experiment proves that the accuracy of the error-measuring system is 1.1 arc-seconds (peak-to-peak value).

  7. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  8. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, namely NSE, the Generalized Error Distribution with BC (BC-GED) and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow badly, owing to the assumed Gaussian error distribution, in which large errors have low probability while small errors around zero are almost equally probable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, which is confirmed by the groundwater-level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
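    A sketch of the kind of likelihood the BC-GED approach is built on, under simplifying assumptions: both observed and simulated series are Box-Cox transformed, the residuals are scored with a zero-mean generalized error distribution, and the Jacobian of the transformation is included. The synthetic flow data and the moment-based scale estimate are illustrative stand-ins, not the SWAT-WB-VSA setup.

```python
import numpy as np
from scipy.special import gammaln

def box_cox(y, lam):
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def bc_ged_loglik(obs, sim, lam, beta, alpha):
    """Log-likelihood of observations under a zero-mean GED for Box-Cox-transformed residuals.

    lam   : Box-Cox transformation parameter
    beta  : GED shape (kurtosis) parameter; beta = 2 -> Gaussian, beta = 1 -> Laplace
    alpha : GED scale parameter
    """
    e = box_cox(obs, lam) - box_cox(sim, lam)
    ged = (np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta)
           - (np.abs(e) / alpha) ** beta)
    jacobian = (lam - 1.0) * np.log(obs)          # from transforming the observed series
    return np.sum(ged + jacobian)

# Tiny illustrative example with synthetic "observed" and "simulated" discharges.
rng = np.random.default_rng(3)
sim = np.exp(rng.normal(1.0, 0.8, 500))                    # simulated flows (arbitrary units)
obs = sim * np.exp(rng.normal(0.0, 0.2, 500))              # heteroscedastic multiplicative errors

for lam, beta in [(1.0, 2.0), (0.2, 2.0), (0.2, 1.0)]:
    # Moment-based scale estimate, used here only for a rough comparison across (lam, beta).
    alpha = np.std(box_cox(obs, lam) - box_cox(sim, lam)) + 1e-12
    ll = bc_ged_loglik(obs, sim, lam, beta, alpha)
    print(f"lambda={lam:.1f}  beta={beta:.1f}  log-likelihood={ll:10.2f}")
```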

  9. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  10. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data

    PubMed Central

    Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172

  11. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data.

    PubMed

    Xu, Lizhen; Paterson, Andrew D; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects.

  12. Quantum Clique Gossiping.

    PubMed

    Li, Bo; Li, Shuang; Wu, Junfeng; Qi, Hongsheng

    2018-02-09

    This paper establishes a framework of quantum clique gossiping by introducing local clique operations to networks of interconnected qubits. Cliques are local structures in complex networks being complete subgraphs, which can be used to accelerate classical gossip algorithms. Based on cyclic permutations, clique gossiping leads to collective multi-party qubit interactions. We show that at reduced states, these cliques have the same acceleration effects as their roles in accelerating classical gossip algorithms. For randomized selection of cliques, such improved rate of convergence is precisely characterized. On the other hand, the rate of convergence at the coherent states of the overall quantum network is proven to be decided by the spectrum of a mean-square error evolution matrix. Remarkably, the use of larger quantum cliques does not necessarily increase the speed of the network density aggregation, suggesting quantum network dynamics is not entirely decided by its classical topology.

  13. White matter changes in an untreated, newly diagnosed case of classical homocystinuria.

    PubMed

    Brenton, J Nicholas; Matsumoto, Julie A; Rust, Robert S; Wilson, William G

    2014-01-01

    The authors report the case of a 4-year-old boy who developed progressive unilateral weakness and developmental delays prior to his diagnosis of classical homocystinuria. Magnetic resonance imaging (MRI) of the brain demonstrated diffuse white matter changes, raising the concern for a secondary diagnosis causing leukoencephalopathy, since classical homocystinuria is not typically associated with these changes. Other inborn errors of the transsulfuration pathway have been reported as causing these changes. Once begun on therapy for his homocystinuria, his neurologic deficits resolved and his delays rapidly improved. Repeat MRI performed one year after instating therapy showed resolution of his white matter abnormalities. This case illustrates the need to consider homocystinuria and other amino acidopathies in the differential diagnosis of childhood white matter diseases and lends weight to the hypothesis that hypermethioninemia may induce white matter changes.

  14. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10⁻¹² is reasonable.
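    The report's two remedies (rounding tiny values to exact zero with a tolerance factor, and improving an approximate inverse) can be illustrated with a standard Newton-Schulz (Hotelling-Bodewig) refinement step, X <- X(2I - AX); the matrix, perturbation and tolerance below are illustrative, not the report's basis matrices.

```python
import numpy as np

rng = np.random.default_rng(5)

# A well-conditioned test matrix and a deliberately perturbed approximate inverse.
n = 30
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)
X = np.linalg.inv(A) + 1e-6 * rng.standard_normal((n, n))   # "accumulated round-off" in the inverse

def residual(A, X):
    return np.linalg.norm(np.eye(len(A)) - A @ X)

# Remedy 1: round values below a tolerance to exact zero (mimics the report's tolerance factor;
# here few, if any, entries are small enough to be affected).
tol = 1e-12
X[np.abs(X) < tol] = 0.0

# Remedy 2: one Newton-Schulz (Hotelling-Bodewig) refinement step, X <- X (2I - A X).
print("residual before refinement:", residual(A, X))
X = X @ (2.0 * np.eye(n) - A @ X)
print("residual after one step:   ", residual(A, X))
```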

  15. A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Meldi, M.; Poux, A.

    2017-10-01

    A Kalman filter based sequential estimator is presented in this work. The estimator is integrated into the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman filter application, two model-reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which corresponds to an increase of 10%-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these data assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.

  16. Refraction test

    MedlinePlus

    ... purpose is to determine whether you have a refractive error (a need for glasses or contact lenses). For ... glasses or contact lenses) is normal, then the refractive error is zero (plano) and your vision should be ...

  17. A Bayesian zero-truncated approach for analysing capture-recapture count data from classical scrapie surveillance in France.

    PubMed

    Vergne, Timothée; Calavas, Didier; Cazeau, Géraldine; Durand, Benoît; Dufour, Barbara; Grosbois, Vladimir

    2012-06-01

    Capture-recapture (CR) methods are used to study populations that are monitored with imperfect observation processes. They have recently been applied to the monitoring of animal diseases to evaluate the number of infected units that remain undetected by the surveillance system. This paper proposes three Bayesian models to estimate the total number of scrapie-infected holdings in France from CR count data obtained from the French classical scrapie surveillance programme. We fitted two zero-truncated Poisson (ZTP) models (with and without holding size as a covariate) and a zero-truncated negative binomial (ZTNB) model to the 2006 national surveillance count dataset. We detected a large amount of heterogeneity in the count data, making the use of the simple ZTP model inappropriate. However, including holding size as a covariate did not bring any significant improvement over the simple ZTP model. The ZTNB model proved to be the best model, giving an estimate of 535 (95% CI 401-796) infected and detectable sheep holdings in 2006, although only 141 were effectively detected, resulting in a holding-level prevalence of 4.4‰ (95% CI 3.2-6.3) and a sensitivity of holding-level surveillance of 26% (95% CI 18-35). The main limitation of the present study was the small amount of data collected during the surveillance programme. It was therefore not possible to build complex models that would allow depicting more accurately the epidemiological and detection processes that generate the surveillance data. We discuss the perspectives of capture-recapture count models in the context of animal disease surveillance. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  19. "Simulated molecular evolution" or computer-generated artifacts?

    PubMed

    Darius, F; Rojas, R

    1994-11-01

    1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. For this kind of problem with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which, in the way they are structured, provide a model with an error margin as large as the numbers being computed. 7. And finally, even if someone provided us with a function which perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would not be better than random selection. Since a perfect fit would only produce exactly ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any kind of directional information for new iterations. We would just skip from one point to the other in a typical random-walk manner.

  20. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research.

    PubMed

    Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne

    2017-02-28

    This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Finite-time control for nonlinear spacecraft attitude based on terminal sliding mode technique.

    PubMed

    Song, Zhankui; Li, Hongxing; Sun, Kaibiao

    2014-01-01

    In this paper, a fast terminal sliding mode control (FTSMC) scheme with double closed loops is proposed for spacecraft attitude control. The FTSMC laws are included both in an inner control loop and an outer control loop. First, a fast terminal sliding surface (FTSS) is constructed, which can drive the inner-loop and outer-loop tracking errors on the FTSS to converge to zero in finite time. Second, the FTSMC strategy is designed using Lyapunov's method to ensure the occurrence of the sliding motion in finite time, which retains the fast transient response and improves the tracking accuracy. It is proved that FTSMC can guarantee the convergence of the tracking error in both the reaching and sliding phases. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
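    The finite-time property that motivates the fast terminal sliding surface can already be seen in its scalar error dynamics de/dt = -a*e - b*|e|^(q/p)*sign(e). The sketch below integrates these dynamics, compares them with a purely linear surface, and checks the reaching time against the standard closed-form expression; the gains are illustrative, not the spacecraft-attitude design of the paper.

```python
import numpy as np

# Scalar error dynamics on a fast terminal sliding surface:
#   de/dt = -alpha*e - beta*|e|^(q/p)*sign(e),  0 < q/p < 1, q and p odd.
# The error reaches (numerically) zero in finite time; a purely linear surface (beta = 0)
# only converges asymptotically.
alpha, beta, q, p = 1.0, 1.0, 3, 5        # illustrative gains
dt, T = 1e-4, 8.0
steps = int(T / dt)

def simulate(use_terminal_term):
    e = 1.0
    for k in range(steps):
        de = -alpha * e - (beta * np.sign(e) * abs(e) ** (q / p) if use_terminal_term else 0.0)
        e += dt * de
        if abs(e) < 1e-6:                 # treat as having reached zero (small tolerance)
            return (k + 1) * dt, e
    return T, e

t_reach, _ = simulate(True)
_, e_linear = simulate(False)

# Analytical reaching time for e(0) = e0 > 0: t = p/(alpha*(p-q)) * ln((alpha*e0^((p-q)/p) + beta)/beta).
e0 = 1.0
t_theory = p / (alpha * (p - q)) * np.log((alpha * e0 ** ((p - q) / p) + beta) / beta)
print(f"terminal surface reaches zero at t ~ {t_reach:.3f} s (theory {t_theory:.3f} s)")
print(f"linear surface after {T:.0f} s: e = {e_linear:.2e} (still nonzero)")
```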

  2. Implementation of a MFAC based position sensorless drive for high speed BLDC motors with nonideal back EMF.

    PubMed

    Li, Haitao; Ning, Xin; Li, Wenzhuo

    2017-03-01

    In order to improve the reliability and reduce the power consumption of a high-speed BLDC motor system, this paper presents a model-free adaptive control (MFAC) based position-sensorless drive with only a dc-link current sensor. The initial commutation points are obtained by detecting the phase of the back-EMF zero-crossing points and then delaying by 30 electrical degrees. For the commutation error caused by the low-pass filter (LPF) and other factors, the relationship between the commutation error angle and the dc-link current is analyzed, a corresponding MFAC based control method is proposed, and the commutation error can be corrected by the controller in real time. Both the simulation and experimental results show that the proposed correction method can achieve the ideal commutation effect within the entire operating speed range. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    2002-03-01

    This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes with a short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and better tolerant of bit-flip errors.

  4. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake

    PubMed Central

    Agogo, George O.

    2017-01-01

    Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as a 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge for regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method. PMID:27704599

  5. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of the transformed errors. The BC-GED model can unify all recent distance-based goodness-of-fit indicators and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g., the sensitivity to high flow of indicators with a large power of model errors results from the low probability assigned to large model errors by the assumed distribution. In order to assess the effect of the parameters of the BC-GED model (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis coefficient β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the class β > 1 captures high flow very well but reproduces baseflow poorly, whereas calibration with the class β ≤ 1 reproduces baseflow very well. This is because, first, the larger the value of β, the greater the emphasis placed on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, for which maximum likelihood estimation drives the model errors toward zero as closely as possible. The BC-GED variant that estimates its parameters (λ and β) together with the hydrologic model parameters is the best distance-based goodness-of-fit indicator, because the model validation using groundwater levels is very good and the model errors best fulfill the statistical assumption. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating its parameters, because the model validation of the MAE is best in those cases.
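
    The core of the BC-GED indicator described above is a likelihood built from Box-Cox-transformed model errors scored under a zero-mean generalized error distribution, with β = 2 recovering an MSE-like (Gaussian) assumption and β = 1 an MAE-like (Laplace) one. The sketch below is a minimal illustration of that objective under stated assumptions (the Box-Cox Jacobian term and the synthetic flows are simplifications for illustration, not taken from the paper).

```python
# Minimal sketch (assumptions, not the authors' code) of a BC-GED objective:
# Box-Cox-transform observed and simulated flows, then score the model errors
# with a zero-mean generalized error distribution (GED). beta = 2 recovers a
# Gaussian error assumption (MSE-like), beta = 1 a Laplace one (MAE-like).
import numpy as np
from scipy.special import gammaln

def boxcox(y, lam):
    """Box-Cox transform; y must be positive."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def bc_ged_neg_loglik(obs, sim, lam, beta, alpha):
    """Negative log-likelihood of the model errors under a zero-mean GED
    with scale alpha and kurtosis (power) coefficient beta, after a Box-Cox
    transform with parameter lam of both observed and simulated series."""
    e = boxcox(obs, lam) - boxcox(sim, lam)
    # log GED density: log(beta) - log(2*alpha) - lgamma(1/beta) - (|e|/alpha)**beta
    logf = np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta) \
           - (np.abs(e) / alpha) ** beta
    # Jacobian of the Box-Cox transform of the observations (standard for
    # Box-Cox likelihoods; included here as an assumption about the model).
    jac = (lam - 1.0) * np.sum(np.log(obs))
    return -(np.sum(logf) + jac)

# Illustrative call with synthetic flows (not real data):
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=365) + 0.1
sim = obs * np.exp(rng.normal(0.0, 0.1, size=365))
print(bc_ged_neg_loglik(obs, sim, lam=0.3, beta=1.0, alpha=0.5))
```

    In a calibration setting, this negative log-likelihood would be minimized over the hydrologic model parameters (and, for the full BC-GED case, over λ and β as well).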

  6. Research on the impact factors of GRACE precise orbit determination by dynamic method

    NASA Astrophysics Data System (ADS)

    Guo, Nan-nan; Zhou, Xu-hua; Li, Kai; Wu, Bin

    2018-07-01

    With the successful use of GPS-only-based POD (precise orbit determination), more and more satellites carry onboard GPS receivers to support their orbit accuracy requirements. Such receivers provide continuous, high-precision GPS observations and have become an indispensable way to obtain the orbits of LEO satellites. Precise orbit determination of LEO satellites plays an important role in their applications. Numerous factors should be considered in POD processing. In this paper, several factors that impact precise orbit determination are analyzed, namely the satellite altitude, the time-variable Earth gravity field, the GPS satellite clock error, and accelerometer observations. The GRACE satellites provide an ideal platform for studying how these factors affect precise orbit determination using zero-difference GPS data. The effects of these factors on the accuracy of the dynamic orbit are quantitatively analyzed using GRACE observations from 2005 to 2011 with the SHORDE software. The study indicates that: (1) as the altitude of the GRACE satellites is lowered from 480 km to 460 km over seven years, the 3D (three-dimensional) position accuracy of the GRACE orbit is about 3-4 cm based on long data spans; (2) the accelerometer data improve the 3D position accuracy of GRACE by about 1 cm; (3) the accuracy of the zero-difference dynamic orbit is about 6 cm with GPS satellite clock error products at a 5 min sampling interval and can be raised to 4 cm if GPS satellite clock error products with a 30 s sampling interval are adopted; and (4) the time-variable part of the Earth gravity field model improves the 3D position accuracy of GRACE by about 0.5-1.5 cm. Based on this study, we quantitatively analyze the factors that affect precise orbit determination of LEO satellites, which helps to improve the accuracy of LEO satellite orbit determination.

  7. Multiple indicators, multiple causes measurement error models

    DOE PAGES

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...

    2014-06-25

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.

  8. Multiple Indicators, Multiple Causes Measurement Error Models

    PubMed Central

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

    Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535

  9. Multiple indicators, multiple causes measurement error models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.

  10. A Method for Estimating Zero-Flow Pressure and Intracranial Pressure

    PubMed Central

    Caren, Marzban; Paul, Raymond Illian; David, Morison; Anne, Moore; Michel, Kliot; Marek, Czosnyka; Pierre, Mourad

    2012-01-01

    Background It has been hypothesized that critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods Two revisions are considered: 1) The linear model employed for extrapolation is extended to a nonlinear equation, and 2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP, from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923
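
    The baseline (linear) ZFP idea referenced in the record above can be sketched as a straight-line regression of flow velocity on arterial pressure over a beat, with the zero-flow pressure read off as the pressure-axis intercept. The snippet below illustrates only this baseline extrapolation on synthetic waveforms; the paper's nonlinear model and alternative fitting criterion are not reproduced.

```python
# Minimal sketch of the baseline linear zero-flow-pressure (ZFP) extrapolation:
# regress flow velocity on arterial pressure over a cardiac cycle and read off
# the pressure at which the fitted line predicts zero flow. Data are synthetic.
import numpy as np

def zfp_linear(abp, fv):
    """Estimate zero-flow pressure by a least-squares line fit fv = a*abp + b,
    returning the pressure where the fitted velocity crosses zero (-b/a)."""
    a, b = np.polyfit(abp, fv, 1)
    return -b / a

# Synthetic one-beat waveforms: velocity roughly proportional to (ABP - ZFP).
rng = np.random.default_rng(2)
true_zfp = 12.0                                   # mm Hg (illustrative value)
abp = np.linspace(70.0, 120.0, 100)               # arterial blood pressure
fv = 0.9 * (abp - true_zfp) + rng.normal(0, 2, abp.size)   # TCD flow velocity
print(f"estimated ZFP ~ {zfp_linear(abp, fv):.1f} mm Hg")
```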

  11. Efficient Z gates for quantum computing

    NASA Astrophysics Data System (ADS)

    McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.

    2017-08-01

    For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates, which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3) × 10⁻⁴] and low leakage [3.1(6) × 10⁻⁶]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
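
    The virtual-Z principle used in the record above rests on a simple operator identity: a Z rotation followed by a microwave pulse equals a phase-shifted pulse followed by the same Z rotation, so the Z never needs to be played as a physical pulse. The sketch below is an illustrative numerical check of that identity (not the authors' code).

```python
# Minimal numerical illustration of the virtual-Z idea: a Z rotation preceding
# a microwave pulse is equivalent to shifting the drive phase of that pulse and
# deferring the Z rotation, so Z gates can be done in software with zero duration.
import numpy as np

def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def rx(alpha):
    return np.array([[np.cos(alpha / 2), -1j * np.sin(alpha / 2)],
                     [-1j * np.sin(alpha / 2), np.cos(alpha / 2)]])

def r_phi(alpha, phi):
    """Rotation by alpha about the equatorial axis at angle phi, i.e. an X-type
    pulse whose drive phase has been shifted by phi."""
    return rz(phi) @ rx(alpha) @ rz(-phi)

theta, alpha = 0.7, np.pi / 2
lhs = rx(alpha) @ rz(theta)              # Z gate first, then an X pulse (right to left)
rhs = rz(theta) @ r_phi(alpha, -theta)   # phase-shifted pulse instead, Z deferred
print(np.allclose(lhs, rhs))             # True: the Z never needs a real pulse
```

    The deferred Z rotations accumulate as a running phase offset on later pulses and can be dropped entirely before a measurement in the Z basis, which is why the virtual gates cost no time and introduce no additional error.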

  12. A toolkit for measurement error correction, with a focus on nutritional epidemiology

    PubMed Central

    Keogh, Ruth H; White, Ian R

    2014-01-01

    Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
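
    For the classical-error case with replicate measurements, regression calibration amounts to shrinking the replicate mean toward its overall mean by an estimated reliability ratio and using the result in place of the true exposure. The sketch below illustrates this on simulated data under stated assumptions (normal errors, two replicates); it is a toy version of the approach, not the toolkit itself.

```python
# Minimal sketch of classical regression calibration with two replicate
# exposure measurements W1, W2: the error-prone replicate mean is shrunk toward
# the overall mean by the estimated reliability ratio, and the calibrated
# exposure replaces the true one in the outcome model.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(10.0, 2.0, n)          # true (unobserved) exposure
w1 = x + rng.normal(0.0, 2.0, n)      # replicate 1 with classical error
w2 = x + rng.normal(0.0, 2.0, n)      # replicate 2 with classical error
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)   # outcome; true slope = 0.5

wbar = (w1 + w2) / 2.0
var_u = 0.5 * np.var(w1 - w2, ddof=1)         # measurement-error variance
var_x = np.var(wbar, ddof=1) - var_u / 2.0    # true-exposure variance
lam = var_x / (var_x + var_u / 2.0)           # reliability of the replicate mean
x_cal = wbar.mean() + lam * (wbar - wbar.mean())   # E[X | W1, W2] (calibration)

naive = np.polyfit(wbar, y, 1)[0]      # attenuated slope from error-prone mean
corrected = np.polyfit(x_cal, y, 1)[0]
print(f"naive slope ~ {naive:.2f}, calibrated slope ~ {corrected:.2f} (true 0.5)")
```

    The naive fit is attenuated toward zero by the reliability ratio, while the calibrated fit recovers the true slope on average; departures from classical error (systematic, heteroscedastic or differential) require the other methods covered in the toolkit.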

  13. Multi-frequency bioelectrical impedance: a comparison between the Cole-Cole modelling and Hanai equations with the classical impedance index approach.

    PubMed

    Deurenberg, P; Andreoli, A; de Lorenzo, A

    1996-01-01

    Total body water and extracellular water were measured by deuterium oxide and bromide dilution respectively in 23 healthy males and 25 healthy females. In addition, total body impedance was measured at 17 frequencies, ranging from 1 kHz to 1350 kHz. Modelling programs were used to extrapolate impedance values to frequency zero (extracellular resistance) and frequency infinity (total body water resistance). Impedance indexes (height²/Z_f) were computed at all 17 frequencies. The estimation errors of extracellular resistance and total body water resistance were 1% and 3%, respectively. Impedance and impedance index at low frequency were correlated with extracellular water, independent of the amount of total body water. Total body water showed the greatest correlation with impedance and impedance index at high frequencies. Extrapolated impedance values did not show a higher correlation compared to measured values. Prediction formulas from the literature applied to fixed frequencies showed the best mean and individual predictions for both extracellular water and total body water. It is concluded that, at least in healthy individuals with normal body water distribution, modelling impedance data has no advantage over impedance values measured at fixed frequencies, probably due to estimation errors in the modelled data.

  14. Observable signatures of a classical transition

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew C.; Lin, Wei

    2016-03-01

    Eternal inflation arising from a potential landscape predicts that our universe is one realization of many possible cosmological histories. One way to access different cosmological histories is via the nucleation of bubble universes from a metastable false vacuum. Another way to sample different cosmological histories is via classical transitions, the creation of pocket universes through the collision between bubbles. Using relativistic numerical simulations, we examine the possibility of observationally determining if our observable universe resulted from a classical transition. We find that classical transitions produce spatially infinite, approximately open Friedmann-Robertson-Walker universes. The leading set of observables in the aftermath of a classical transition are negative spatial curvature and a contribution to the Cosmic Microwave Background temperature quadrupole. The level of curvature and magnitude of the quadrupole are dependent on the position of the observer, and we determine the possible range of observables for two classes of single-scalar field models. For the first class, where the inflationary phase has a lower energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generally falls to zero with distance from the collision while the spatial curvature grows to a constant. For the second class, where the inflationary phase has a higher energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generically falls to zero with distance from the collision while the spatial curvature grows without bound. We find that the magnitude of the quadrupole and curvature grow with increasing centre-of-mass energy of the collision, and explore variations of the parameters in the scalar field Lagrangian.

  15. Observable signatures of a classical transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Matthew C.; Lin, Wei, E-mail: mjohnson@perimeterinstitute.ca, E-mail: lewisweilin@gmail.com

    2016-03-01

    Eternal inflation arising from a potential landscape predicts that our universe is one realization of many possible cosmological histories. One way to access different cosmological histories is via the nucleation of bubble universes from a metastable false vacuum. Another way to sample different cosmological histories is via classical transitions, the creation of pocket universes through the collision between bubbles. Using relativistic numerical simulations, we examine the possibility of observationally determining if our observable universe resulted from a classical transition. We find that classical transitions produce spatially infinite, approximately open Friedmann-Robertson-Walker universes. The leading set of observables in the aftermath of a classical transition are negative spatial curvature and a contribution to the Cosmic Microwave Background temperature quadrupole. The level of curvature and magnitude of the quadrupole are dependent on the position of the observer, and we determine the possible range of observables for two classes of single-scalar field models. For the first class, where the inflationary phase has a lower energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generally falls to zero with distance from the collision while the spatial curvature grows to a constant. For the second class, where the inflationary phase has a higher energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generically falls to zero with distance from the collision while the spatial curvature grows without bound. We find that the magnitude of the quadrupole and curvature grow with increasing centre-of-mass energy of the collision, and explore variations of the parameters in the scalar field Lagrangian.

  16. Quantum illumination for enhanced detection of Rayleigh-fading targets

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

    Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver—which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase—fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801]—whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation—outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(Mκ̄N_S/N_B), when Mκ̄N_S/N_B ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, N_S ≪ 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kent, Paul R.; Krogel, Jaron T.

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  18. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

    Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms have been placed as leaders among the candidates for the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states as well as reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.

  19. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., to estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimates for the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using a Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to represent these processes analytically. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily a second-order differential equation. That is, differentiation is of fractional order α, where 1 ≤ α < 2.
For α = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods based on the traditional, Fickian theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis when compared to traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without scatter. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these issues and evaluate their influence on the final resulting diffusivity values.
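
    The classical Boltzmann-Matano evaluation discussed in the record above estimates D(C) from a single annealed profile by locating the Matano plane and combining a numerical derivative with a cumulative integral of the profile; it is exactly the derivative step that the proposed regularization targets. The sketch below is a minimal, unregularized implementation with a synthetic constant-D profile as a self-check (the units and parameter values are illustrative).

```python
# Minimal numerical sketch of the classical Boltzmann-Matano evaluation:
# from a measured concentration profile C(x) at annealing time t, locate the
# Matano plane x_M and estimate the concentration-dependent diffusivity
#   D(C*) = -1/(2t) * (dx/dC)|_{C*} * integral_{C_L}^{C*} (x - x_M) dC.
# No smoothing/regularization is applied, which is exactly the weakness the
# record above addresses.
import numpy as np
from scipy.special import erfc

def boltzmann_matano(x, c, t):
    """x: positions (monotone), c: concentrations C(x), t: diffusion time."""
    dc_dx = np.gradient(c, x)
    # Matano plane: the plane about which the integral of (x - x_M) dC vanishes.
    x_m = np.sum(0.5 * (x[1:] + x[:-1]) * np.diff(c)) / (c[-1] - c[0])
    xs = x - x_m
    # Cumulative integral of (x - x_M) dC from the left terminal concentration.
    integral = np.concatenate(([0.0],
                               np.cumsum(0.5 * (xs[1:] + xs[:-1]) * np.diff(c))))
    with np.errstate(divide="ignore", invalid="ignore"):
        d = -integral / (2.0 * t * dc_dx)
    return d   # D evaluated at each measured concentration

# Synthetic erf-type profile (constant D = 1e-14 m^2/s, t = 3600 s) as a check.
t, d_true = 3600.0, 1e-14
x = np.linspace(-2e-4, 2e-4, 401)
c = 0.5 * erfc(x / (2.0 * np.sqrt(d_true * t)))
d_est = boltzmann_matano(x, c, t)
mid = slice(150, 250)                        # central region where dC/dx is well resolved
print(np.nanmedian(d_est[mid]) / d_true)     # should be close to 1
```

    In the flat tails of the profile dC/dx underflows and the estimate blows up, which is the numerical sensitivity that motivates regularized differentiation of noisy experimental profiles.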

  20. The Photon Shell Game and the Quantum von Neumann Architecture with Superconducting Circuits

    NASA Astrophysics Data System (ADS)

    Mariantoni, Matteo

    2012-02-01

    Superconducting quantum circuits have made significant advances over the past decade, allowing more complex and integrated circuits that perform with good fidelity. We have recently implemented a machine comprising seven quantum channels, with three superconducting resonators, two phase qubits, and two zeroing registers. I will explain the design and operation of this machine, first showing how a single microwave photon |1> can be prepared in one resonator and coherently transferred between the three resonators. I will also show how more exotic states such as the double photon state |2> and superposition states |0> + |1> can be shuffled among the resonators as well [1]. I will then demonstrate how this machine can be used as the quantum-mechanical analog of the von Neumann computer architecture, which for a classical computer comprises a central processing unit and a memory holding both instructions and data. The quantum version comprises a quantum central processing unit (quCPU) that exchanges data with a quantum random-access memory (quRAM) integrated on one chip, with instructions stored on a classical computer. I will also present a proof-of-concept demonstration of a code that involves all seven quantum elements: (1) preparing an entangled state in the quCPU, (2) writing it to the quRAM, (3) preparing a second state in the quCPU, (4) zeroing it, and (5) reading out the first state stored in the quRAM [2]. Finally, I will demonstrate that the quantum von Neumann machine provides one unit cell of a two-dimensional qubit-resonator array that can be used for surface code quantum computing. This will allow the realization of a scalable, fault-tolerant quantum processor with the most forgiving error rates to date. [1] M. Mariantoni et al., Nature Physics 7, 287-293 (2011). [2] M. Mariantoni et al., Science 334, 61-65 (2011).

  1. Error correcting code with chip kill capability and power saving enhancement

    DOEpatents

    Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY

    2011-08-30

    A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
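
    The patented code operates on multi-bit symbols spread across memory chips, but the syndrome logic it describes (all-zero syndromes mean clean data; a non-zero syndrome is decoded to locate the fault) can be illustrated with a much smaller single-error-correcting code. The sketch below uses a Hamming(7,4) code purely to demonstrate that principle; it is not the chip-kill code of the patent.

```python
# Illustration only: the patent's chip-kill code operates on multi-bit symbols
# spread across memory chips; this sketch shows the same syndrome principle
# (all-zero syndrome -> no error, non-zero syndrome -> locate and fix) with a
# tiny single-error-correcting Hamming(7,4) code over individual bits.
import numpy as np

# Parity-check matrix: column j (1-indexed) is the 3-bit binary code of j, so a
# single flipped bit at position j produces the syndrome value j.
H = np.array([[(j >> k) & 1 for j in range(1, 8)] for k in (2, 1, 0)])

def encode(data4):
    """Place 4 data bits at positions 3,5,6,7 and solve for parity bits at
    positions 1,2,4 so that H @ codeword = 0 (mod 2)."""
    c = np.zeros(7, dtype=int)
    c[[2, 4, 5, 6]] = data4
    for p in (1, 2, 4):                      # parity positions
        covered = [j for j in range(1, 8) if (j & p) and j != p]
        c[p - 1] = np.sum(c[[j - 1 for j in covered]]) % 2
    return c

def decode(received):
    """Compute the syndrome; zero means no detected error, otherwise the
    syndrome value is the 1-indexed position of the single-bit error."""
    s = (H @ received) % 2
    pos = int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])
    if pos:
        received = received.copy()
        received[pos - 1] ^= 1               # correct the located bit
    return received, pos

code = encode(np.array([1, 0, 1, 1]))
corrupted = code.copy()
corrupted[5] ^= 1                            # flip bit 6 (1-indexed)
fixed, pos = decode(corrupted)
print(pos, np.array_equal(fixed, code))      # 6 True
```

    The chip-kill code replaces single bits with whole symbols drawn from the bits of one memory chip, so that a non-zero syndrome can identify, and the discriminator expressions distinguish, single- versus double-symbol failures.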

  2. Hadron spectrum in quenched lattice QCD and distribution of zero modes

    NASA Astrophysics Data System (ADS)

    Iwasaki, Yoichi

    1989-06-01

    I report the results of the calculation of the hadron spectrum with the standard one-plaquette gauge action on a 16³ × 48 lattice at β = 5.85 in quenched lattice QCD. The result remarkably agrees with that of quark potential models for the case where the quark mass is equal to or larger than the strange quark mass, even when one uses the standard one-plaquette gauge action. This is contrary to what is stated in the literature. We clarify the reason for the discrepancy, paying close attention to systematic errors in the numerical calculations. Further, I show the distribution of zero modes of the quark matrix, both in the case of an RG-improved gauge action and of the standard action, and discuss the difference between the two cases.

  3. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless filter such as the unscented Kalman filter or the divided difference filter, the filtering process of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.
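
    The first-order Gauss-Markov process mentioned above as a classical stochastic model for inertial sensor errors is an exponentially correlated random process that is easy to simulate and serves as the baseline against which the wavelet-network model is compared. The sketch below simulates such a process under illustrative parameter values (the wavelet-network model itself is not reproduced).

```python
# Minimal sketch of a first-order Gauss-Markov (GM) process, the classical
# baseline stochastic model for MEMS inertial sensor errors mentioned above.
# Correlation time and noise strength are illustrative, not from the paper.
import numpy as np

def simulate_gm(n, dt, tau, sigma, rng):
    """First-order GM process: x_{k+1} = exp(-dt/tau) * x_k + w_k, with w_k
    white noise whose variance keeps the process stationary at variance sigma^2."""
    phi = np.exp(-dt / tau)
    w_std = sigma * np.sqrt(1.0 - phi**2)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0.0, w_std)
    return x

rng = np.random.default_rng(4)
# e.g. a slowly varying gyro bias drift in rad/s (hypothetical values)
bias_drift = simulate_gm(n=6000, dt=0.01, tau=100.0, sigma=0.02, rng=rng)
print(bias_drift.std())   # should be near the stationary sigma of 0.02
```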

  4. Algebraic Riccati equations in zero-sum differential games

    NASA Technical Reports Server (NTRS)

    Johnson, T. L.; Chao, A.

    1974-01-01

    The procedure for finding the closed-loop Nash equilibrium solution of two-player zero-sum linear time-invariant differential games with quadratic performance criteria and classical information pattern may be reduced in most cases to the solution of an algebraic Riccati equation. Based on the results obtained by Willems, necessary and sufficient conditions for existence of solutions to these equations are derived, and explicit conditions for a scalar example are given.
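
    As an illustration of how a scalar zero-sum linear-quadratic game reduces to an algebraic Riccati equation, consider dx/dt = a·x + b1·u + b2·w with cost ∫(q·x² + r1·u² − r2·w²)dt; a quadratic value function V = p·x² turns the Hamilton-Jacobi-Isaacs condition into a quadratic equation in p. The sketch below solves that scalar equation for hypothetical parameter values; it is a generic textbook-style example, not the paper's formulation.

```python
# Illustrative scalar example (assumptions, not the paper's derivation): for
# dx/dt = a x + b1 u + b2 w with cost integral of (q x^2 + r1 u^2 - r2 w^2),
# the saddle-point value V = p x^2 leads to the scalar algebraic Riccati equation
#   2 a p - (b1^2/r1 - b2^2/r2) p^2 + q = 0,
# a quadratic in p whose stabilizing root gives the closed-loop game solution.
import numpy as np

a, b1, b2, q, r1, r2 = -1.0, 1.0, 0.5, 2.0, 1.0, 4.0   # hypothetical values
s = b1**2 / r1 - b2**2 / r2                            # sign-indefinite term

# Quadratic -s p^2 + 2 a p + q = 0
roots = np.roots([-s, 2.0 * a, q])
# Keep the root that makes the closed loop a - s*p stable (negative).
p = next(r.real for r in roots if np.isreal(r) and a - s * r.real < 0)

u_gain = -b1 * p / r1          # minimizer's feedback u = u_gain * x
w_gain = b2 * p / r2           # maximizer's feedback w = w_gain * x
print(p, a - s * p)            # Riccati solution and (negative) closed-loop pole
```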

  5. Do semiclassical zero temperature black holes exist?

    PubMed

    Anderson, P R; Hiscock, W A; Taylor, B E

    2000-09-18

    The semiclassical Einstein equations are solved to first order in ε = ℏ/M² for the case of a Reissner-Nordström black hole perturbed by the vacuum stress energy of quantized free fields. Massless and massive fields of spin 0, 1/2, and 1 are considered. We show that in all physically realistic cases, macroscopic zero temperature black hole solutions do not exist. Any static zero temperature semiclassical black hole solutions must then be microscopic and isolated in the space of solutions; they do not join smoothly onto the classical extreme Reissner-Nordström solution as ε → 0.

  6. Cooling in reduced period optical lattices: Non-zero Raman detuning

    NASA Astrophysics Data System (ADS)

    Malinovsky, V. S.; Berman, P. R.

    2006-08-01

    In a previous paper [Phys. Rev. A 72 (2005) 033415], it was shown that sub-Doppler cooling occurs in a standing-wave Raman scheme (SWRS) that can lead to reduced period optical lattices. These calculations are extended to allow for non-zero detuning of the Raman transitions. New physical phenomena are encountered, including cooling to non-zero velocities, combinations of Sisyphus and "corkscrew" polarization cooling, and somewhat unusual origins of the friction force. The calculations are carried out in a semi-classical approximation and a dressed state picture is introduced to aid in the interpretation of the results.

  7. Adaptive control: Myths and realities

    NASA Technical Reports Server (NTRS)

    Athans, M.; Valavani, L.

    1984-01-01

    It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law, and the need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeroes) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as the absence of nonminimum phase plant zeros, on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption, which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.

  8. Indirect learning control for nonlinear dynamical systems

    NASA Technical Reports Server (NTRS)

    Ryu, Yeong Soon; Longman, Richard W.

    1993-01-01

    In a previous paper, learning control algorithms were developed based on adaptive control ideas for linear time variant systems. The learning control methods were shown to have certain advantages over their adaptive control counterparts, such as the ability to produce zero tracking error in time varying systems, and the ability to eliminate repetitive disturbances. In recent years, certain adaptive control algorithms have been developed for multi-body dynamic systems such as robots, with global guaranteed convergence to zero tracking error for the nonlinear system equations. In this paper we study the relationship between such adaptive control methods designed for this specific class of nonlinear systems, and the learning control problem for such systems, seeking to converge to zero tracking error in following a specific command repeatedly, starting from the same initial conditions each time. The extension of these methods from the adaptive control problem to the learning control problem is seen to be trivial. The advantages and disadvantages of using learning control based on such adaptive control concepts for nonlinear systems, and the use of other currently available learning control algorithms are discussed.

  9. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  10. Prediction of final error level in learning and repetitive control

    NASA Astrophysics Data System (ADS)

    Levoci, Peter A.

    Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, to isolate fine pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC) which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real world plant and measurement noise and quantization noise (from analog to digital and digital to analog converters) in these control methods are acted on as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first order ILC, of higher order ILC including current cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering that is very important in creating practical designs. Also, time domain methods are developed for higher order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.

  11. Common errors of drug administration in infants: causes and avoidance.

    PubMed

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  12. Metric for evaluation of filter efficiency in spectral cameras.

    PubMed

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one for a perfect set of filters and zero for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
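
    Vora's Measure of Goodness, which the record above generalizes, can be read as the normalized overlap between the subspace spanned by the filter set and a reference (visual) subspace, equal to one when the filters span the reference space. The sketch below computes that quantity for random stand-in sensitivity matrices under the standard Vora-Trussell form; the paper's MMOG refinement is not reproduced.

```python
# Minimal sketch of the Vora-Trussell Measure of Goodness (MOG): the normalized
# overlap between the subspace spanned by a filter set and a reference subspace
# (here a hypothetical 3-column target sensitivity matrix). Matrices below are
# random stand-ins for measured sensitivities; the paper's MMOG is not shown.
import numpy as np

def projector(A):
    """Orthogonal projector onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def mog(target, filters):
    """trace(P_target P_filters) / rank(target); equals 1 when the filter set
    spans the target space and is smaller when it does not."""
    return np.trace(projector(target) @ projector(filters)) / np.linalg.matrix_rank(target)

rng = np.random.default_rng(5)
wavelengths = 31                        # e.g. 400-700 nm in 10 nm steps
target = rng.random((wavelengths, 3))   # stand-in for CIE-like sensitivities
filters = rng.random((wavelengths, 6))  # stand-in for a 6-filter camera
print(mog(target, filters))             # between 0 and 1
```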

  13. A computer program to calculate zeroes, extrema, and interval integrals for the associated Legendre functions. [for estimation of bounds of truncation error in spherical harmonic expansion of geopotential

    NASA Technical Reports Server (NTRS)

    Payne, M. H.

    1973-01-01

    A computer program is described for the calculation of the zeroes of the associated Legendre functions, Pnm, and their derivatives, for the calculation of the extrema of Pnm and also the integral between pairs of successive zeroes. The program has been run for all n,m from (0,0) to (20,20) and selected cases beyond that for n up to 40. Up to (20,20), the program (written in double precision) retains nearly full accuracy, and indications are that up to (40,40) there is still sufficient precision (4-5 decimal digits for a 54-bit mantissa) for estimation of various bounds and errors involved in geopotential modelling, the purpose for which the program was written.

  14. Sampling command generator corrects for noise and dropouts in recorded data

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.

    1973-01-01

    Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.

  15. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.

  16. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.

  17. Merging bottom-up and top-down precipitation products using a stochastic error model

    NASA Astrophysics Data System (ADS)

    Maggioni, Viviana; Massari, Christian; Brocca, Luca; Ciabatta, Luca

    2017-04-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). Recently, Brocca et al. (2014) have proposed an alternative approach (i.e., SM2RAIN) that allows rainfall to be estimated from space by using satellite soil moisture observations. In contrast with classical satellite precipitation products, which sense the cloud properties to retrieve the instantaneous precipitation, this new bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation that fell within the interval between two satellite passes. As a result, the nature of the measurement is different and complementary to that of classical precipitation products and could provide a valid perspective to improve current satellite rainfall estimates via appropriate integration between the products (i.e., SM2RAIN plus a classical satellite rainfall product). However, whether or not SM2RAIN can improve the performance of a state-of-the-art satellite rainfall product depends strongly on an adequate quantification and characterization of the relative errors of the products. In this study, the stochastic rainfall error model SREM2D (Hossain et al. 2006) is used for characterizing the retrieval error of both SM2RAIN and a state-of-the-art satellite precipitation product (i.e., 3B42RT). The error characterization serves for an optimal integration between SM2RAIN and 3B42RT, enhancing the capability of the resulting integrated product (i.e., SM2RAIN+3B42RT) in operational hydrology. The study, conducted in Italy for a 5-yr period (2010-2014) using a dense network of raingauges (about 3000) as a benchmark, demonstrates that the integration improves the correlation and the root-mean-squared error of SM2RAIN+3B42RT with respect to the parent products. This suggests a potential benefit of merging SM2RAIN-derived rainfall with state-of-the-art satellite precipitation estimates to create a product characterized by higher accuracy and better performance when used in the context of operational hydrology. REFERENCES 1. Brocca, L.; Ciabatta, L.; Massari, C.; Moramarco, T.; Hahn, S.; Hasenauer, S.; Kidd, R.; Dorigo, W.; Wagner, W.; Levizzani, V. Soil as a natural rain gauge: Estimating global rainfall from satellite soil moisture data. J. Geophys. Res. Atmos. 2014, 119, 5128-5141. 2. Hossain, F.; Anagnostou, E. N. A two-dimensional satellite rainfall error model. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1511-1522.

  18. Improving reflectance reconstruction from tristimulus values by adaptively combining colorimetric and reflectance similarities

    NASA Astrophysics Data System (ADS)

    Cao, Bin; Liao, Ningfang; Li, Yasheng; Cheng, Haobo

    2017-05-01

    The use of spectral reflectance as fundamental color information finds application in diverse fields related to imaging. Many approaches use training sets to train the algorithm used for color classification. In this context, we note that the modification of training sets obviously impacts the accuracy of reflectance reconstruction based on classical reflectance reconstruction methods. Different modifying criteria are not always consistent with each other, since they have different emphases; spectral reflectance similarity focuses on the deviation of reconstructed reflectance, whereas colorimetric similarity emphasizes human perception. We present a method to improve the accuracy of the reconstructed spectral reflectance by adaptively combining colorimetric and spectral reflectance similarities. The different exponential factors of the weighting coefficients were investigated. The spectral reflectance reconstructed by the proposed method exhibits considerable improvements in terms of the root-mean-square error and goodness-of-fit coefficient of the spectral reflectance errors as well as color differences under different illuminants. Our method is applicable to diverse areas such as textiles, printing, art, and other industries.

  19. Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.

    PubMed

    Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu

    2015-01-01

    This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piece-wise linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning the vehicle safety and mechanical constraints. An augmented neural network is developed to online compensate for the unknown nonlinearities, and a novel adaptive law is developed to estimate both NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability for the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Planetary Transmission Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.

    2004-01-01

    This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.

  1. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.

  2. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672

  3. Quantum friction on monoatomic layers and its classical analog

    NASA Astrophysics Data System (ADS)

    Maslovski, Stanislav I.; Silveirinha, Mário G.

    2013-07-01

    We consider the effect of quantum friction at zero absolute temperature resulting from polaritonic interactions in closely positioned two-dimensional arrays of polarizable atoms (e.g., graphene sheets) or thin dielectric sheets modeled as such arrays. The arrays move one with respect to another with a nonrelativistic velocity v≪c. We confirm that quantum friction is inevitably related to material dispersion, and that such friction vanishes in nondispersive media. In addition, we consider a classical analog of the quantum friction which allows us to establish a link between the phenomena of quantum friction and classical parametric generation. In particular, we demonstrate how the quasiparticle generation rate typically obtained from the quantum Fermi golden rule can be calculated classically.

  4. Rota-Baxter operators on sl (2,C) and solutions of the classical Yang-Baxter equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Jun, E-mail: peitsun@163.com; Bai, Chengming, E-mail: baicm@nankai.edu.cn; Guo, Li, E-mail: liguo@rutgers.edu

    2014-02-15

    We explicitly determine all Rota-Baxter operators (of weight zero) on sl(2,C) under the Cartan-Weyl basis. For the skew-symmetric operators, we give the corresponding skew-symmetric solutions of the classical Yang-Baxter equation in sl(2,C), confirming the related study by Semenov-Tian-Shansky. In general, these Rota-Baxter operators give a family of solutions of the classical Yang-Baxter equation in the six-dimensional Lie algebra sl(2,C) ⋉_{ad*} sl(2,C)*. They also give rise to three-dimensional pre-Lie algebras which in turn yield solutions of the classical Yang-Baxter equation in other six-dimensional Lie algebras.

  5. Improving xylem hydraulic conductivity measurements by correcting the error caused by passive water uptake.

    PubMed

    Torres-Ruiz, José M; Sperry, John S; Fernández, José E

    2012-10-01

    Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F(0)) of zero. While linearity is typically the case, there is often a non-zero F(0) that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F(0) for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K(ref) as the slope of a four-point F vs P/L relationship. The K(ref) was compared with three more rapid ways of estimating K. When F(0) was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K(ref) was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F(0) was taken into account, whether it was measured or estimated, more accurate K values (relative to K(ref)) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F(0) for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
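    A minimal numerical sketch of the slope-based estimate described above: K is taken as the slope of a four-point F versus P/L regression, so a non-zero intercept F(0) (passive uptake) is estimated rather than forced to zero. The numbers are illustrative only, not the olive-root data.

```python
import numpy as np

# Illustrative measurements: flow F at four applied pressure gradients P/L.
# A non-zero intercept F0 represents passive water uptake by the sample.
grad = np.array([0.02, 0.04, 0.06, 0.08])             # P/L (arbitrary units)
flow = np.array([-0.5e-6, 0.5e-6, 1.5e-6, 2.5e-6])    # F, includes the offset F0

# Reference K: slope of the F vs P/L regression (the intercept F0 is estimated too)
K_ref, F0 = np.polyfit(grad, flow, 1)

# Naive single-point estimate that assumes F0 = 0, as warned against in the paper
K_naive = flow[1] / grad[1]

print(f"K_ref = {K_ref:.3e}, F0 = {F0:.3e}, K_naive = {K_naive:.3e}")
# With these toy numbers the naive estimate is far below K_ref, i.e. K is under-estimated.
```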

  6. Assessing the Library Homepages of COPLAC Institutions for Section 508 Accessibility Errors: Who's Accessible, Who's Not, and How the Online WebXACT Assessment Tool Can Help

    ERIC Educational Resources Information Center

    Huprich, Julia; Green, Ravonne

    2007-01-01

    The Council on Public Liberal Arts Colleges (COPLAC) libraries websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…

  7. Oxygen transport properties estimation by DSMC-CT simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro

    Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve predictive capabilities of computational rarefied gas dynamics. The considerable increase of computational effort outlined in the early application of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time (Ferrigni, 2012; Norman et al., 2013) of DSMC-CT simulations. The aim of the present work is to study rarefied oxygen flows by modeling binary collisions through an accurate potential energy surface obtained by molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular oxygen shear viscosity and heat conductivity following three different DSMC-CT simulation methods. In the first one, transport properties are obtained from DSMC-CT simulations of spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids, 23, 093104, 2011). In the second method, the collision trajectory calculation is incorporated in a Monte Carlo integration procedure to evaluate Taxman's expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamic simulations are adopted and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, their estimated statistical error not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.

  8. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for the diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of the four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. Zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that Poisson model inflated type I error, while negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors, which were close to the nominal level and reasonable power. Reasonable control of type I error was associated with ANCOVA model. Rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with ZIP and ZINB models.
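    A hedged sketch of how an inflated type I error can be checked (using simulated data rather than the resampled trial data used in the paper): overdispersed counts with no true treatment effect are generated repeatedly and the rejection rate of the treatment term in a Poisson GLM is recorded.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def poisson_rejection_rate(n_sim=500, n_per_arm=100, mean=2.0, disp=1.5):
    """Simulate overdispersed hypoglycemia-like counts with NO treatment effect
    and record how often a Poisson GLM rejects the treatment term at the 5% level.
    Parameter values are illustrative, not taken from the trial."""
    rejections = 0
    trt = np.r_[np.zeros(n_per_arm), np.ones(n_per_arm)]
    X = sm.add_constant(trt)
    for _ in range(n_sim):
        # Negative-binomial counts via a gamma-Poisson mixture (overdispersed)
        lam = rng.gamma(shape=1.0 / disp, scale=mean * disp, size=2 * n_per_arm)
        y = rng.poisson(lam)
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        if fit.pvalues[1] < 0.05:
            rejections += 1
    return rejections / n_sim

print("Poisson type I error under overdispersion:", poisson_rejection_rate())
```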

  9. Stochastic Evolution Equations Driven by Fractional Noises

    DTIC Science & Technology

    2016-11-28

    rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions, and variations of these schemes, for rough differential equations and for stochastic differential equations driven by fractional Brownian motion.

  10. The Birth and Death of Redundancy in Decoherence and Quantum Darwinism

    NASA Astrophysics Data System (ADS)

    Riedel, Charles; Zurek, Wojciech; Zwolak, Michael

    2012-02-01

    Understanding the quantum-classical transition and the identification of a preferred classical domain through quantum Darwinism is based on recognizing high-redundancy states as both ubiquitous and exceptional. They are produced ubiquitously during decoherence, as has been demonstrated by the recent identification of very general conditions under which high-redundancy states develop. They are exceptional in that high-redundancy states occupy a very narrow corner of the global Hilbert space; states selected at random are overwhelmingly likely to exhibit zero redundancy. In this letter, we examine the conditions and time scales for the transition from high-redundancy states to zero-redundancy states in many-body dynamics. We identify sufficient conditions for the development of redundancy from product states and show that the destruction of redundancy can be accomplished even with highly constrained interactions.

  11. [Establishment of model of traditional Chinese medicine injections post-marketing safety monitoring].

    PubMed

    Guo, Xin-E; Zhao, Yu-Bin; Xie, Yan-Ming; Zhao, Li-Cai; Li, Yan-Feng; Hao, Zhe

    2013-09-01

    To establish a nurse-based post-marketing safety surveillance model for traditional Chinese medicine injections (TCMIs). A TCMIs safety monitoring team and a research hospital team engaged in the research, monitoring processes, and quality control processes were established, in order to achieve comprehensive, timely, accurate and real-time access to research data and to eliminate errors in data collection. A triage system involving a study nurse, as the first point of contact, clinicians and clinical pharmacists was set up in a TCM hospital. Following the specified workflow involving labeling of TCM injections and using improved monitoring forms, it was found that there were no missing reports and the error rate was zero. A research nurse acting as the first and main point of contact in post-marketing safety monitoring of TCM, as part of a triage model, ensures that the research data collected have the characteristics of authenticity, accuracy, timeliness and integrity, and eliminates errors during the process of data collection. Hospital-based monitoring is a robust and operable process.

  12. Design of fluidic self-assembly bonds for precise component positioning

    NASA Astrophysics Data System (ADS)

    Ramadoss, Vivek; Crane, Nathan B.

    2008-02-01

    Self Assembly is a promising alternative to conventional pick and place robotic assembly of micro components. Its benefits include parallel integration of parts with low equipment costs. Various approaches to self assembly have been demonstrated, yet demanding applications like assembly of micro-optical devices require increased positioning accuracy. This paper proposes a new method for design of self assembly bonds that addresses this need. Current methods have zero force at the desired assembly position and low stiffness. This allows small disturbance forces to create significant positioning errors. The proposed method uses a substrate assembly feature to provide a high accuracy alignment guide to the part. The capillary bond region of the part and substrate are then modified to create a non-zero positioning force to maintain the part in the desired assembly position. Capillary force models show that this force aligns the part to the substrate assembly feature and reduces sensitivity of part position to process variation. Thus, the new configuration can substantially improve positioning accuracy of capillary self-assembly. This will result in a dramatic decrease in positioning errors in the micro parts. Various binding site designs are analyzed and guidelines are proposed for the design of an effective assembly bond using this new approach.

  13. Modified locally weighted--partial least squares regression improving clinical predictions from infrared spectra of human serum samples.

    PubMed

    Perez-Guaita, David; Kuligowski, Julia; Quintás, Guillermo; Garrigues, Salvador; Guardia, Miguel de la

    2013-03-30

    Locally weighted partial least squares regression (LW-PLSR) has been applied to the determination of four clinical parameters in human serum samples (total protein, triglyceride, glucose and urea contents) by Fourier transform infrared (FTIR) spectroscopy. Classical LW-PLSR models were constructed using different spectral regions. For the selection of parameters by LW-PLSR modeling, a multi-parametric study was carried out employing the minimum root-mean square error of cross validation (RMSCV) as the objective function. In order to overcome the effect of strong matrix interferences on the predictive accuracy of LW-PLSR models, this work focuses on sample selection. Accordingly, a novel strategy for the development of local models is proposed. It was based on the use of: (i) principal component analysis (PCA) performed on an analyte-specific spectral region for identifying the most similar sample spectra and (ii) partial least squares regression (PLSR) constructed using the whole spectrum. Results found by using this strategy were compared to those provided by PLSR using the same spectral intervals as for LW-PLSR. Prediction errors found by both classical and modified LW-PLSR improved on those obtained by PLSR. Hence, both proposed approaches were useful for the determination of analytes present in a complex matrix, as in the case of human serum samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Structural zeros in high-dimensional data with applications to microbiome studies.

    PubMed

    Kaul, Abhishek; Davidov, Ori; Peddada, Shyamal D

    2017-07-01

    This paper is motivated by the recent interest in the analysis of high-dimensional microbiome data. A key feature of these data is the presence of "structural zeros" which are microbes missing from an observation vector due to an underlying biological process and not due to error in measurement. Typical notions of missingness are unable to model these structural zeros. We define a general framework which allows for structural zeros in the model and propose methods of estimating sparse high-dimensional covariance and precision matrices under this setup. We establish error bounds in the spectral and Frobenius norms for the proposed estimators and empirically verify them with a simulation study. The proposed methodology is illustrated by applying it to the global gut microbiome data of Yatsunenko and others (2012. Human gut microbiome viewed across age and geography. Nature 486, 222-227). Using our methodology we classify subjects according to the geographical location on the basis of their gut microbiome. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Aspects of Geodesical Motion with Fisher-Rao Metric: Classical and Quantum

    NASA Astrophysics Data System (ADS)

    Ciaglia, Florio M.; Cosmo, Fabio Di; Felice, Domenico; Mancini, Stefano; Marmo, Giuseppe; Pérez-Pardo, Juan M.

    The purpose of this paper is to exploit the geometric structure of quantum mechanics and of statistical manifolds to study the qualitative effect that the quantum properties have in the statistical description of a system. We show that the end points of geodesics in the classical setting coincide with the probability distributions that minimise Shannon’s entropy, i.e. with distributions of zero dispersion. In the quantum setting this happens only for particular initial conditions, which in turn correspond to classical submanifolds. This result can be interpreted as a geometric manifestation of the uncertainty principle.

  16. Quantum versus classical hyperfine-induced dynamics in a quantum dot

    NASA Astrophysics Data System (ADS)

    Coish, W. A.; Loss, Daniel; Yuzbashyan, E. A.; Altshuler, B. L.

    2007-04-01

    In this article we analyze spin dynamics for electrons confined to semiconductor quantum dots due to the contact hyperfine interaction. We compare mean-field (classical) evolution of an electron spin in the presence of a nuclear field with the exact quantum evolution for the special case of uniform hyperfine coupling constants. We find that (in this special case) the zero-magnetic-field dynamics due to the mean-field approximation and quantum evolution are similar. However, in a finite magnetic field, the quantum and classical solutions agree only up to a certain time scale t <τc, after which they differ markedly.

  17. On Calculating the Zero-Gravity Surface Figure of a Mirror

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.

    2010-01-01

    An analysis of the classical method of calculating the zero-gravity surface figure of a mirror from surface-figure measurements in the presence of gravity has led to improved understanding of conditions under which the calculations are valid. In this method, one measures the surface figure in two or more gravity-reversed configurations, then calculates the zero-gravity surface figure as the average of the surface figures determined from these measurements. It is now understood that gravity reversal is not, by itself, sufficient to ensure validity of the calculations: It is also necessary to reverse mounting forces, for which purpose one must ensure that mounting-fixture/mirror contacts are located either at the same places or else sufficiently close to the same places in both gravity-reversed configurations. It is usually not practical to locate the contacts at the same places, raising the question of how close is sufficiently close. The criterion for sufficient closeness is embodied in the St. Venant principle, which, in the present context, translates to a requirement that the distance between corresponding gravity-reversed mounting positions be small in comparison to their distances to the optical surface of the mirror. The necessity of reversing mount forces is apparent in the behavior of the equations familiar from finite element analysis (FEA) that govern deformation of the mirror.

  18. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.

  19. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  20. Study on elevated-temperature flow behavior of Ni-Cr-Mo-B ultra-heavy-plate steel via experiment and modelling

    NASA Astrophysics Data System (ADS)

    Gao, Zhi-yu; Kang, Yu; Li, Yan-shuai; Meng, Chao; Pan, Tao

    2018-04-01

    Elevated-temperature flow behavior of a novel Ni-Cr-Mo-B ultra-heavy-plate steel was investigated by conducting hot compressive deformation tests on a Gleeble-3800 thermo-mechanical simulator over a temperature range of 1123 K to 1423 K, with strain rates from 0.01 s-1 to 10 s-1 and a height reduction of 70%. Based on the experimental results, a classic strain-compensated Arrhenius-type model, a new revised strain-compensated Arrhenius-type model and a classic modified Johnson-Cook constitutive model were developed for predicting the high-temperature deformation behavior of the steel. The predictability of these models was comparatively evaluated in terms of statistical parameters including the correlation coefficient (R), average absolute relative error (AARE), average root mean square error (RMSE), normalized mean bias error (NMBE) and relative error. The statistical results indicate that the new revised strain-compensated Arrhenius-type model could accurately predict the elevated-temperature flow stress of the steel over the entire range of process conditions. However, the values predicted by the classic modified Johnson-Cook model did not agree well with the experimental values; the classic strain-compensated Arrhenius-type model could track the deformation behavior more accurately than the modified Johnson-Cook model, but less accurately than the new revised strain-compensated Arrhenius-type model. In addition, reasons for the differences in predictability of these models were discussed in detail.
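    The comparison metrics are standard; a short sketch of how they are typically computed is given below (the NMBE convention may differ slightly from the paper's, and the stress values are made up for illustration).

```python
import numpy as np

def model_fit_statistics(exp, pred):
    """Common goodness-of-fit statistics used to compare constitutive models."""
    exp, pred = np.asarray(exp, float), np.asarray(pred, float)
    r = np.corrcoef(exp, pred)[0, 1]                                # correlation coefficient R
    aare = np.mean(np.abs((exp - pred) / exp)) * 100.0              # average absolute relative error, %
    rmse = np.sqrt(np.mean((exp - pred) ** 2))                      # root mean square error
    nmbe = np.sum(pred - exp) / (len(exp) * np.mean(exp)) * 100.0   # normalized mean bias error, %
    return {"R": r, "AARE_%": aare, "RMSE": rmse, "NMBE_%": nmbe}

# Illustrative flow-stress values (MPa), not data from the paper
exp_stress  = [120.0, 150.0, 175.0, 190.0]
pred_stress = [118.0, 155.0, 170.0, 196.0]
print(model_fit_statistics(exp_stress, pred_stress))
```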

  1. The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.

    DTIC Science & Technology

    1985-06-10

    The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem. The inverse problem is not thought to present much of a challenge at zero cavitation number; in this case, the classical method of Levi-Civita [7] can be applied.

  2. Absolute metrology for space interferometers

    NASA Astrophysics Data System (ADS)

    Salvadé, Yves; Courteville, Alain; Dändliker, René

    2017-11-01

    A crucial issue for space-based interferometers is the laser metrology system used to monitor optical path differences with very high accuracy. Although classical high-resolution laser interferometers using a single wavelength are well developed, this type of incremental interferometer has a severe drawback: any interruption of the interferometer signal results in the loss of the zero reference, which requires a new calibration starting at zero optical path difference. In this paper we propose an absolute metrology system based on multiple-wavelength interferometry.
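    A standard relation used in multiple-wavelength interferometry (general background, not a formula quoted from this record) is the synthetic wavelength of a wavelength pair, which sets the non-ambiguity range of the absolute measurement:

```latex
% Synthetic (beat) wavelength of a two-wavelength measurement:
\Lambda_{12} = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert},
\qquad
\Delta\phi_{12} = \phi_1 - \phi_2 = \frac{2\pi}{\Lambda_{12}}\,\Delta L .
```

    For instance, λ1 = 1550 nm and λ2 = 1551 nm give Λ12 ≈ 2.4 mm, so the optical path difference ΔL can be determined without ambiguity over a range far larger than either optical wavelength alone.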

  3. Investigating the significance of zero-point motion in small molecular clusters of sulphuric acid and water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinson, Jake L.; Kathmann, Shawn M.; Ford, Ian J.

    2014-01-14

    The nucleation of particles from trace gases in the atmosphere is an important source of cloud condensation nuclei (CCN), and these are vital for the formation of clouds in view of the high supersaturations required for homogeneous water droplet nucleation. The methods of quantum chemistry have increasingly been employed to model nucleation due to their high accuracy and efficiency in calculating configurational energies, and nucleation rates can be obtained from the associated free energies of particle formation. However, even in such advanced approaches, it is typically assumed that the nuclei have a classical nature, which is questionable for some systems. The importance of zero-point motion (also known as quantum nuclear dynamics) in modelling small clusters of sulphuric acid and water is tested here using the path integral molecular dynamics (PIMD) method at the density functional theory (DFT) level of theory. We observe a small zero-point effect on the equilibrium structures of certain clusters. One configuration is found to display a bimodal behaviour at 300 K, in contrast to the stable ionised state suggested by a zero-temperature classical geometry optimisation. The general effect of zero-point motion is to promote the extent of proton transfer with respect to classical behaviour. We thank Prof. Angelos Michaelides and his group in University College London (UCL) for practical advice and helpful discussions. This work benefited from interactions with the Thomas Young Centre through seminars and discussions involving the PIMD method. SMK was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. JLS and IJF were supported by the IMPACT scheme at UCL and by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. We are grateful for use of the UCL Legion High Performance Computing Facility and the resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy, Office of Science under Contract No. DE-AC02-05CH11231.

  4. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.

  5. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.

  6. Digital Filters for Digital Phase-locked Loops

    NASA Technical Reports Server (NTRS)

    Simon, M.; Mileant, A.

    1985-01-01

    An s/z hybrid model for a general phase locked loop is proposed. The impact of the loop filter on the stability, gain margin, noise equivalent bandwidth, steady state error and time response is investigated. A specific digital filter is selected which maximizes the overall gain margin of the loop. This filter can have any desired number of integrators. Three integrators are sufficient in order to track a phase jerk with zero steady state error at loop update instants. This filter has one zero near z = 1.0 for each integrator. The total number of poles of the filter is equal to the number of integrators plus two.
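    As a rough sketch of the kind of filter described (three integrators, one zero near z = 1 per integrator), the following builds illustrative coefficients and applies the filter to a toy phase-error sequence; the zero locations and gain are placeholders, not the values selected in the report.

```python
import numpy as np
from scipy.signal import lfilter

def loop_filter_coeffs(zeros=(0.95, 0.97, 0.99), gain=0.05):
    """Illustrative third-order loop filter: three poles at z = 1 (integrators)
    and one zero near z = 1 per integrator."""
    b = np.array([gain])
    a = np.array([1.0])
    for z0 in zeros:
        b = np.polymul(b, [1.0, -z0])   # numerator factor (1 - z0 * z^-1)
        a = np.polymul(a, [1.0, -1.0])  # denominator factor (1 - z^-1), an integrator
    return b, a

b, a = loop_filter_coeffs()
phase_error = np.r_[np.ones(50), np.zeros(50)]   # toy phase-detector output
control = lfilter(b, a, phase_error)             # filter output that would drive the NCO in a loop
print(control[:5])
```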

  7. The Preliminary Development of a Robotic Laser System Used for Ophthalmic Surgery

    DTIC Science & Technology

    1988-01-01

    proposed design, there is not sufficient computer time to ensure a zero probability of error. But, what's more important than a zero probability of...even zero proved to shorten the computation time. 4.3.6 The User Interface. To put things in perspective, the step-by-step procedure for using the routine...was measured from the identified slice. The sectional area was measured using a Summagraphic digitizing pad and the Sigma-scan program from Jandel

  8. Noise management to achieve superiority in quantum information systems

    NASA Astrophysics Data System (ADS)

    Nemoto, Kae; Devitt, Simon; Munro, William J.

    2017-06-01

    Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'.

  9. Noise management to achieve superiority in quantum information systems.

    PubMed

    Nemoto, Kae; Devitt, Simon; Munro, William J

    2017-08-06

    Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'. © 2017 The Author(s).

  10. Validation of accelerometer wear and nonwear time classification algorithm.

    PubMed

    Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S

    2011-02-01

    The use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. The purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. We conducted a validation study of a wear or nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. The recommended elements in the new algorithm are as follows: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero or nonzero counts, and 3) allowance of a 2-min interval of nonzero counts with the upstream or downstream 30-min consecutive zero-count window for detection of artifactual movements. Compared with the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P values < 0.001). The accelerometer wear or nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
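    A simplified sketch of the three recommended elements (zero-count threshold, 90-min window, 2-min spike allowance flanked by 30-min zero-count runs) is given below; a production implementation should follow the validated algorithm exactly, and all thresholds are taken from the summary above.

```python
import numpy as np

def classify_nonwear(counts, window=90, spike_tol=2, flank=30):
    """Simplified, illustrative wear/non-wear classification of minute-level counts."""
    counts = np.asarray(counts)
    n = len(counts)
    zero = counts == 0

    # Find runs of consecutive zero-count minutes as half-open intervals [start, end)
    runs, i = [], 0
    while i < n:
        if zero[i]:
            j = i
            while j < n and zero[j]:
                j += 1
            runs.append([i, j])
            i = j
        else:
            i += 1

    # Merge zero runs separated by short "artifactual" nonzero spikes
    merged = []
    for run in runs:
        if merged:
            prev = merged[-1]
            gap = run[0] - prev[1]
            long_flanks = (prev[1] - prev[0]) >= flank and (run[1] - run[0]) >= flank
            if gap <= spike_tol and long_flanks:
                prev[1] = run[1]          # absorb the spike into one interval
                continue
        merged.append(run)

    # Only intervals of at least `window` minutes count as non-wear
    nonwear = np.zeros(n, dtype=bool)
    for start, end in merged:
        if end - start >= window:
            nonwear[start:end] = True
    return nonwear

# Toy example: 100 min of zeros, a 2-min spike, 80 min of zeros, then 1 h of activity
counts = np.r_[np.zeros(100), [50, 80], np.zeros(80), np.random.randint(100, 500, 60)]
print(classify_nonwear(counts).sum(), "non-wear minutes detected")
```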

  11. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…

  12. [Improvement of medical processes with Six Sigma - practicable zero-defect quality in preparation for surgery].

    PubMed

    Sobottka, Stephan B; Töpfer, Armin; Eberlein-Gonska, Maria; Schackert, Gabriele; Albrecht, D Michael

    2010-01-01

    Six Sigma is an innovative management approach to reach practicable zero-defect quality in medical service processes. The Six Sigma principle utilizes strategies which are based on quantitative measurements and which seek to optimize processes and limit deviations or dispersion from the target process. Hence, Six Sigma aims to eliminate errors or quality problems of all kinds. A pilot project to optimize the preparation for neurosurgery could now show that the Six Sigma method enhanced patient safety in medical care, while at the same time disturbances in the hospital processes and failure costs could be avoided. All six defined safety-relevant quality indicators were significantly improved by changes in the workflow using a standardized process- and patient-oriented approach. Certain defined quality standards, such as a 100% complete surgical preparation at the start of surgery and the required initial contact of the surgeon with the patient/surgical record on the eve of surgery, could be fulfilled within the range of practical zero-defect quality. Likewise, the degree of completion of the surgical record by 4 p.m. on the eve of surgery and its quality could be improved by a factor of 170 and 16, respectively, at sigma values of 4.43 and 4.38. The other two safety quality indicators, "non-communicated changes in the OR schedule" and "completeness of the OR schedule by 12:30 a.m. on the day before surgery", also show an impressive improvement by a factor of 2.8 and 7.7, respectively, corresponding with sigma values of 3.34 and 3.51. The results of this pilot project demonstrate that the Six Sigma method is eminently suitable for improving the quality of medical processes. In our experience this methodology is suitable even for complex clinical processes with a variety of stakeholders. In particular, in processes in which patient safety plays a key role, the objective of achieving zero-defect quality is reasonable and should definitely be aspired to. Copyright © 2010. Published by Elsevier GmbH.

  13. Decoherence can relax cosmic acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markkanen, Tommi

    In this work we investigate the semi-classical backreaction for a quantised conformal scalar field and classical vacuum energy. In contrast to the usual approximation of a closed system, our analysis includes an environmental sector such that a quantum-to-classical transition can take place. We show that when the system decoheres into a mixed state with particle number as the classical observable, de Sitter space is destabilized, which is observable as a gradually decreasing Hubble rate. In particular we show that at late times this mechanism can drive the curvature of the Universe to zero and has an interpretation as the decay of the vacuum energy, demonstrating that quantum effects can be relevant for the fate of the Universe.

  14. Treatment effects on insular and anterior cingulate cortex activation during classic and emotional Stroop interference in child abuse-related complex post-traumatic stress disorder.

    PubMed

    Thomaes, K; Dorrepaal, E; Draijer, N; de Ruiter, M B; Elzinga, B M; van Balkom, A J; Smit, J H; Veltman, D J

    2012-11-01

    Functional neuroimaging studies have shown increased Stroop interference coupled with altered anterior cingulate cortex (ACC) and insula activation in post-traumatic stress disorder (PTSD). These brain areas are associated with error detection and emotional arousal. There is some evidence that treatment can normalize these activation patterns. At baseline, we compared classic and emotional Stroop performance and blood oxygenation level-dependent responses (functional magnetic resonance imaging) of 29 child abuse-related complex PTSD patients with 22 non-trauma-exposed healthy controls. In 16 of these patients, we studied treatment effects of psycho-educational and cognitive behavioural stabilizing group treatment (experimental treatment; EXP) added to treatment as usual (TAU) versus TAU only, and correlations with clinical improvement. At baseline, complex PTSD patients showed a trend for increased left anterior insula and dorsal ACC activation in the classic Stroop task. Only EXP patients showed decreased dorsal ACC and left anterior insula activation after treatment. In the emotional Stroop contrasts, clinical improvement was associated with decreased dorsal ACC activation and decreased left anterior insula activation. We found further evidence that successful treatment in child abuse-related complex PTSD is associated with functional changes in the ACC and insula, which may be due to improved selective attention and lower emotional arousal, indicating greater cognitive control over PTSD symptoms.

  15. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    PubMed

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).

  16. Zero-point energy effects in anion solvation shells.

    PubMed

    Habershon, Scott

    2014-05-21

    By comparing classical and quantum-mechanical (path-integral-based) molecular simulations of solvated halide anions X(-) [X = F, Cl, Br and I], we identify an ion-specific quantum contribution to anion-water hydrogen-bond dynamics; this effect has not been identified in previous simulation studies. For anions such as fluoride, which strongly bind water molecules in the first solvation shell, quantum simulations exhibit hydrogen-bond dynamics nearly 40% faster than the corresponding classical results, whereas those anions which form a weakly bound solvation shell, such as iodide, exhibit a quantum effect of around 10%. This observation can be rationalized by considering the different zero-point energy (ZPE) of the water vibrational modes in the first solvation shell; for strongly binding anions, the ZPE of bound water molecules is larger, giving rise to faster dynamics in quantum simulations. These results are consistent with experimental investigations of anion-bound water vibrational and reorientational motion.

  17. The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory.

    DTIC Science & Technology

    1983-01-25

    The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem in which one... In this case, the classical method of Levi-Civita [7] can be applied to an isolated...

  18. Luigi Gatteschi's work on asymptotics of special functions and their zeros

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter; Giordano, Carla

    2008-12-01

    A good portion of Gatteschi's research publications (about 65%) is devoted to asymptotics of special functions and their zeros. Most prominently among the special functions studied figure classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and Hermite polynomials by implication. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's form. This work is reviewed here, and organized along methodological lines.

  19. Quantum key distribution with passive decoy state selection

    NASA Astrophysics Data System (ADS)

    Mauerer, Wolfgang; Silberhorn, Christine

    2007-05-01

    We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.

  20. Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Kratochvil, Byron

    1980-01-01

    Presents an undergraduate experiment demonstrating sampling error. The sampling system selected is a mixture of potassium hydrogen phthalate and sucrose; a self-zeroing, automatically refillable buret minimizes the titration time for multiple samples, and a dilute back-titrant provides high end-point precision. (CS)

  1. On axiomatizations of the Shapley value for bi-cooperative games

    NASA Astrophysics Data System (ADS)

    Meirong, Wu; Shaochen, Cao; Huazhen, Zhu

    2016-06-01

    In bi-cooperative games, each participant has three decisions available, which allows real-life situations to be described more accurately. This paper studies the Shapley value of bi-cooperative games and establishes its unique characterization. Axioms similar to those of classical cooperative games can be used to characterize the Shapley value of bi-cooperative games as well. In addition, a structural axiom and a zero-excluded axiom are introduced in place of the efficiency axiom of classical cooperative games.

  2. Zero-Point Energy Constraint for Unimolecular Dissociation Reactions. Giving Trajectories Multiple Chances To Dissociate Correctly.

    PubMed

    Paul, Amit K; Hase, William L

    2016-01-28

    A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
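    Schematically, the "multiple chances" constraint can be expressed as the loop below; the trajectory hooks (propagate, is_dissociating, product_vibrational_energy, return_to_complex_region) are hypothetical placeholders rather than an actual simulation API.

```python
def dissociate_with_zpe_constraint(traj, zpe_product, t_max, dt):
    """Schematic sketch of the ZPE-constrained lifetime of one trajectory.

    `traj` is assumed to expose the hypothetical hooks named in the text above;
    this is not a real molecular-dynamics interface.
    """
    t = 0.0
    while t < t_max:
        traj.propagate(dt)
        t += dt
        if traj.is_dissociating():
            if traj.product_vibrational_energy() >= zpe_product:
                return t          # accepted: the product carries at least its ZPE
            # Otherwise the dissociation attempt is rejected and the trajectory is
            # returned to the complex region of phase space, where it gets further
            # chances to dissociate correctly.
            traj.return_to_complex_region()
    return None                   # did not dissociate with ZPE within t_max
```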

  3. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

    It is common in public health and epidemiology that the outcome of interest is counts of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminth (S. haematobium) infection, particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised controlled trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that the zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
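
    A minimal, hedged sketch of the kind of AIC comparison described above, fitting a plain Poisson and a zero-inflated Poisson (ZIP) model by maximum likelihood on simulated counts with excess zeros. The simulated data and parameter values are illustrative, not the Malawi or Zambia datasets.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)
# Simulate counts with excess zeros: 60% structural zeros, otherwise Poisson(3)
y = np.where(rng.random(500) < 0.6, 0, rng.poisson(3.0, 500))

def nll_poisson(params):
    lam = np.exp(params[0])                                 # log link keeps lambda positive
    return -poisson.logpmf(y, lam).sum()

def nll_zip(params):
    lam = np.exp(params[0])
    pi = 1.0 / (1.0 + np.exp(-params[1]))                   # logit link for the zero-inflation weight
    logp_zero = np.log(pi + (1 - pi) * np.exp(-lam))        # P(Y = 0)
    logp_pos = np.log(1 - pi) + poisson.logpmf(y, lam)      # P(Y = y) for y > 0
    return -np.where(y == 0, logp_zero, logp_pos).sum()

fit_p = minimize(nll_poisson, x0=[0.0])
fit_zip = minimize(nll_zip, x0=[0.0, 0.0])
aic = lambda nll, k: 2 * k + 2 * nll
print("AIC Poisson:", aic(fit_p.fun, 1), " AIC ZIP:", aic(fit_zip.fun, 2))  # ZIP should win here
```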

  4. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.

  5. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter

    PubMed Central

    Angrisani, Leopoldo; Simone, Domenico De

    2018-01-01

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured by exploiting Micro Electro-Mechanical System (MEMS) gyros. It is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which is realized by cascading bias thermal correction by calibration and traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology. These systems have significant limitations in terms of sensitivity to environmental conditions. They are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that just requires raw uncalibrated gyro data as input. It unifies the two steps of the traditional approach in a single algorithm. Therefore, it saves time and economic resources, simplifying the management of the thermal correction process. In the paper, traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input. PMID:29735956

  6. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter.

    PubMed

    Fontanella, Rita; Accardo, Domenico; Moriello, Rosario Schiano Lo; Angrisani, Leopoldo; Simone, Domenico De

    2018-05-07

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured by exploiting Micro Electro-Mechanical System (MEMS) gyros. It is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which is realized by cascading bias thermal correction by calibration and traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology. These systems have significant limitations in terms of sensitivity to environmental conditions. They are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that just requires raw uncalibrated gyro data as input. It unifies the two steps of the traditional approach in a single algorithm. Therefore, it saves time and economic resources, simplifying the management of the thermal correction process. In the paper, traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input.
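
    A minimal, hedged sketch of the general idea of folding a linear bias-versus-temperature model into an augmented Kalman filter state, in the spirit of the filter described above (not the paper's actual formulation). The state is x = [b0, kT], so the gyro bias at time i is b0 + kT * T_i; during zero-velocity intervals the raw gyro output observes the bias directly. All noise levels and the linear bias model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
T = 20.0 + 0.02 * np.arange(n)                       # slowly rising temperature [deg C]
true_b0, true_kT = 0.5, 0.03                         # deg/s and deg/s per deg C (assumed)
z = true_b0 + true_kT * T + rng.normal(0, 0.05, n)   # raw gyro output at standstill

x = np.zeros(2)                                      # initial estimate [b0, kT]
P = np.diag([1.0, 1.0])                              # initial covariance
Q = np.diag([1e-8, 1e-8])                            # tiny process noise: nearly constant states
R = 0.05 ** 2                                        # measurement noise variance

for i in range(n):
    P = P + Q                                        # predict: states follow a slow random walk
    H = np.array([1.0, T[i]])                        # z_i = b0 + kT * T_i + noise
    S = H @ P @ H + R
    K = P @ H / S                                    # Kalman gain
    x = x + K * (z[i] - H @ x)                       # update with the raw, uncalibrated sample
    P = (np.eye(2) - np.outer(K, H)) @ P

print("estimated b0, kT:", x, " true:", true_b0, true_kT)
```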

  7. The Charge Transfer Efficiency and Calibration of WFPC2

    NASA Technical Reports Server (NTRS)

    Dolphin, Andrew E.

    2000-01-01

    A new determination of WFPC2 photometric corrections is presented, using HSTphot reduction of the WFPC2 Omega Centauri and NGC 2419 observations from January 1994 through March 2000 and a comparison with ground-based photometry. No evidence is seen for any position-independent photometric offsets (the "long-short anomaly"); all systematic errors appear to be corrected with the CTE and zero point solution. The CTE loss time dependence is determined to be very significant in the Y direction, causing time-independent CTE solutions to be valid only for a small range of times. On average, the present solution produces corrections similar to Whitmore, Heyer, & Casertano, although with an improved functional form that produces less scatter in the residuals and determined with roughly a year of additional data. In addition to the CTE loss characterization, zero point corrections are also determined as functions of chip, gain, filter, and temperature. Of interest, there are chip-to-chip differences of order 0.01 - 0.02 magnitudes relative to the Holtzman et al. calibrations, and the present study provides empirical zero point determinations for the non-standard filters such as the frequently-used F450W, F606W, and F702W.

  8. Contextual Advantage for State Discrimination

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.

    2018-02-01

    Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
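
    For reference, a hedged sketch of the quantum benchmark such noncontextuality inequalities are compared against: the Helstrom bound on the minimum-error discrimination success probability, P_success = 1/2 (1 + ||p0*rho0 - p1*rho1||_1). The states and prior below are illustrative, not those used in the paper.

```python
import numpy as np

def helstrom_success(rho0, rho1, p0=0.5):
    """Minimum-error success probability for discriminating rho0 and rho1 with prior p0."""
    delta = p0 * rho0 - (1 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()   # Hermitian, so sum of |eigenvalues|
    return 0.5 * (1.0 + trace_norm)

# Two pure qubit states with overlap cos(theta)
theta = np.pi / 8
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
rho0, rho1 = np.outer(psi0, psi0), np.outer(psi1, psi1)
print(helstrom_success(rho0, rho1))   # equals 1/2 (1 + sqrt(1 - cos(theta)**2)) for pure states
```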

  9. Reexamination of Induction Heating of Primitive Bodies in Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Menzel, Raymond L.; Roberge, Wayne G.

    2013-10-01

    We reexamine the unipolar induction mechanism for heating asteroids originally proposed in a classic series of papers by Sonett and collaborators. As originally conceived, induction heating is caused by the "motional electric field" that appears in the frame of an asteroid immersed in a fully ionized, magnetized solar wind and drives currents through its interior. However, we point out that classical induction heating contains a subtle conceptual error, in consequence of which the electric field inside the asteroid was calculated incorrectly. The problem is that the motional electric field used by Sonett et al. is the electric field in the freely streaming plasma far from the asteroid; in fact, the motional field vanishes at the asteroid surface for realistic assumptions about the plasma density. In this paper we revisit and improve the induction heating scenario by (1) correcting the conceptual error by self-consistently calculating the electric field in and around the boundary layer at the asteroid-plasma interface; (2) considering weakly ionized plasmas consistent with current ideas about protoplanetary disks; and (3) considering more realistic scenarios that do not require a fully ionized, powerful T Tauri wind in the disk midplane. We present exemplary solutions for two highly idealized flows that show that the interior electric field can either vanish or be comparable to the fields predicted by classical induction depending on the flow geometry. We term the heating driven by these flows "electrodynamic heating," calculate its upper limits, and compare them to heating produced by short-lived radionuclides.

  10. The dissociation and recombination rates of CH4 through the Ni(111) surface: The effect of lattice motion

    NASA Astrophysics Data System (ADS)

    Wang, Wenji; Zhao, Yi

    2017-07-01

    Methane dissociation is a prototypical system for the study of surface reaction dynamics. The dissociation and recombination rates of CH4 through the Ni(111) surface are calculated by using the quantum instanton method with an analytical potential energy surface. The Ni(111) lattice is treated rigidly, classically, and quantum mechanically so as to reveal the effect of lattice motion. The results demonstrate that it is the lateral displacements rather than the upward and downward movements of the surface nickel atoms that affect the rates a lot. Compared with the rigid lattice, the classical relaxation of the lattice can increase the rates by lowering the free energy barriers. For instance, at 300 K, the dissociation and recombination rates with the classical lattice exceed the ones with the rigid lattice by 6 and 10 orders of magnitude, respectively. Compared with the classical lattice, the quantum delocalization rather than the zero-point energy of the Ni atoms further enhances the rates by widening the reaction path. For instance, the dissociation rate with the quantum lattice is about 10 times larger than that with the classical lattice at 300 K. On the rigid lattice, due to the zero-point energy difference between CH4 and CD4, the kinetic isotope effects are larger than 1 for the dissociation process, while they are smaller than 1 for the recombination process. The increasing kinetic isotope effect with decreasing temperature demonstrates that the quantum tunneling effect is remarkable for the dissociation process.

  11. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
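
    A minimal, hedged illustration of the "XOR leading-zero" quantity mentioned above: when two consecutive values are close (after a suitable offset shift), their IEEE-754 bit patterns share a long common prefix, so the XOR has many leading zero bits that need not be stored. This is a toy demonstration, not the compressor's actual encoder.

```python
import struct

def leading_zero_bits_xor(a: float, b: float) -> int:
    ai = struct.unpack("<Q", struct.pack("<d", a))[0]   # reinterpret float64 bits as uint64
    bi = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ai ^ bi
    return 64 if x == 0 else 64 - x.bit_length()        # leading zero bits of the XOR

print(leading_zero_bits_xor(1.23456789, 1.23456790))    # close values -> many leading zeros
print(leading_zero_bits_xor(1.23456789, -97.5))         # distant values -> few leading zeros
```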

  12. Nucleation theory - Is replacement free energy needed?. [error analysis of capillary approximation

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1982-01-01

    It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
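
    For orientation, the textbook capillary-approximation quantities that the abstract refers to (standard classical nucleation theory, not a result derived in the paper) are the steady-state rate and the reversible work to form the critical nucleus:

```latex
\[
J = A \exp\!\left(-\frac{\Delta G^{*}}{kT}\right),
\qquad
\Delta G^{*} = \frac{16\pi\,\sigma^{3} v_{l}^{2}}{3\,\bigl(kT \ln S\bigr)^{2}},
\]
```

    where sigma is the surface tension, v_l the molecular volume of the condensed phase, and S the supersaturation ratio.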

  13. Generation of spiral bevel gears with conjugate tooth surfaces and tooth contact analysis

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsung, Wei-Jiung; Lee, Hong-Tao

    1987-01-01

    A new method for generation of spiral bevel gears is proposed. The main features of this method are as follows: (1) the gear tooth surfaces are conjugated and can transform rotation with zero transmission errors; (2) the tooth bearing contact is localized; (3) the center of the instantaneous contact ellipse moves in a plane that has a fixed orientation; (4) the contact normal performs in the process of meshing a parallel motion; (5) the motion of the contact ellipse provides improved conditions of lubrication; and (6) the gears can be manufactured by use of Gleason's equipment.

  14. 78 FR 12117 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... designed to protect investors and the public interest. Granting Market Makers more time to request a review... addresses errors in series with zero or no bid. Specifically, the Exchange proposes replacing reference to ``series quoted no bid on the Exchange'' with ``series where the NBBO bid is zero.'' This is being done to...

  15. 78 FR 12123 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... addresses errors in series with zero or no bid. Specifically, the Exchange proposes replacing reference to ``series quoted no bid on the Exchange'' with ``series where the NBBO bid is zero.'' This is being done to... Exchange proposes to amend the times in which certain ATP Holders are required to notify the Exchange in...

  16. A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables

    DTIC Science & Technology

    2006-03-01

    Excerpts: ... included zero, so there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant-variance assumption. Other excerpts cover multicollinearity, testing of OLS assumptions, programming styles used by developers (Stamelos and others, 2003:733), and how Kemerer tested models utilizing SLOC as an independent variable.
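
    A minimal, hedged sketch of the two diagnostics named in the excerpt: checking whether a confidence interval for the OLS residual mean includes zero, and the Breusch-Pagan test of the constant-variance assumption. The data are simulated and purely illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(7)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([2.0, 1.5, -0.8]) + rng.normal(size=200)

res = sm.OLS(y, X).fit()
half_width = 1.96 * res.resid.std() / np.sqrt(len(y))
ci = (res.resid.mean() - half_width, res.resid.mean() + half_width)
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, X)
print("CI for the residual mean:", ci)          # should include zero
print("Breusch-Pagan p-value:", lm_pvalue)      # large p-value: no evidence of heteroskedasticity
```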

  17. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    NASA Astrophysics Data System (ADS)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

    The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for the existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c

  18. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
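
    A minimal, hedged sketch of what an Evans function is in the simplest setting, built by shooting rather than by the boundary-value algorithm proposed above: for a scalar eigenvalue problem u'' = (lambda + V(x)) u on a truncated line, integrate the decaying solution in from each end and take their Wronskian at x = 0; zeros of that Wronskian are eigenvalues. The potential and truncation length are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

L = 15.0
V = lambda x: -2.0 / np.cosh(x) ** 2                 # classical solvable potential well

def evans(lam):                                      # valid for lam > 0 (decay at infinity)
    mu = np.sqrt(lam)                                # spatial decay rate at +/- infinity
    rhs = lambda x, u: [u[1], (lam + V(x)) * u[0]]
    left = solve_ivp(rhs, [-L, 0.0], [1.0, mu], rtol=1e-9, atol=1e-12).y[:, -1]
    right = solve_ivp(rhs, [L, 0.0], [1.0, -mu], rtol=1e-9, atol=1e-12).y[:, -1]
    w = left[0] * right[1] - left[1] * right[0]      # Wronskian at x = 0
    return w / (np.linalg.norm(left) * np.linalg.norm(right))   # normalized for readability

# The operator -d^2/dx^2 - 2 sech^2(x) has one bound state at energy -1, i.e. lambda = 1
# in this parameterization, so the (normalized) Evans function changes sign near lambda = 1.
for lam in (0.8, 1.0, 1.2):
    print(lam, evans(lam))
```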

  19. Zero end-digit preference in recorded blood pressure and its impact on classification of patients for pharmacologic management in primary care — PREDICT-CVD–6

    PubMed Central

    Broad, Joanna; Wells, Sue; Marshall, Roger; Jackson, Rod

    2007-01-01

    Background: Most blood pressure recordings end with a zero end-digit despite guidelines recommending measurement to the nearest 2 mmHg. The impact of rounding on management of cardiovascular disease (CVD) risk is unknown. Aim: To document the use of rounding to zero end-digit and assess its potential impact on eligibility for pharmacologic management of CVD risk. Design of study: Cross-sectional study. Setting: A total of 23 676 patients having opportunistic CVD risk assessment in primary care practices in New Zealand. Method: To simulate rounding in practice, for patients with systolic blood pressures recorded without a zero end-digit, a second blood pressure measure was generated by arithmetically rounding to the nearest zero end-digit. A 10-year Framingham CVD risk score was estimated using actual and rounded blood pressures. Eligibility for pharmacologic treatment was then determined using the Joint British Societies' JBS2 and the British Hypertension Society BHS–IV guidelines based on actual and rounded blood pressure values. Results: Zero end-digits were recorded in 64% of systolic and 62% of diastolic blood pressures. When eligibility for drug treatment was based only on a Framingham 10-year CVD risk threshold of 20% or more, rounding misclassified one in 41 of all those patients subject to this error. Under the two guidelines which use different combinations of CVD risk and blood pressure thresholds, one in 19 would be misclassified under JBS2 and one in 12 under the BHS–IV guidelines mostly towards increased treatment. Conclusion: Zero end-digit preference significantly increases a patient's likelihood of being classified as eligible for drug treatment. Guidelines that base treatment decisions primarily on absolute CVD risk are less susceptible to these errors. PMID:17976291

  20. Zero end-digit preference in recorded blood pressure and its impact on classification of patients for pharmacologic management in primary care - PREDICT-CVD-6.

    PubMed

    Broad, Joanna; Wells, Sue; Marshall, Roger; Jackson, Rod

    2007-11-01

    Most blood pressure recordings end with a zero end-digit despite guidelines recommending measurement to the nearest 2 mmHg. The impact of rounding on management of cardiovascular disease (CVD) risk is unknown. To document the use of rounding to zero end-digit and assess its potential impact on eligibility for pharmacologic management of CVD risk. Cross-sectional study. A total of 23,676 patients having opportunistic CVD risk assessment in primary care practices in New Zealand. To simulate rounding in practice, for patients with systolic blood pressures recorded without a zero end-digit, a second blood pressure measure was generated by arithmetically rounding to the nearest zero end-digit. A 10-year Framingham CVD risk score was estimated using actual and rounded blood pressures. Eligibility for pharmacologic treatment was then determined using the Joint British Societies' JBS2 and the British Hypertension Society BHS-IV guidelines based on actual and rounded blood pressure values. Zero end-digits were recorded in 64% of systolic and 62% of diastolic blood pressures. When eligibility for drug treatment was based only on a Framingham 10-year CVD risk threshold of 20% or more, rounding misclassified one in 41 of all those patients subject to this error. Under the two guidelines which use different combinations of CVD risk and blood pressure thresholds, one in 19 would be misclassified under JBS2 and one in 12 under the BHS-IV guidelines mostly towards increased treatment. Zero end-digit preference significantly increases a patient's likelihood of being classified as eligible for drug treatment. Guidelines that base treatment decisions primarily on absolute CVD risk are less susceptible to these errors.
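
    A minimal, hedged sketch of the rounding effect described above: systolic readings recorded to the nearest 2 mmHg versus rounded to a zero end-digit, and how often rounding flips a patient across a fixed treatment threshold. The distribution and the 140 mmHg threshold are illustrative and do not reproduce the Framingham risk equations.

```python
import numpy as np

rng = np.random.default_rng(3)
sbp = np.round(rng.normal(135, 18, 100_000) / 2) * 2    # "true" readings to the nearest 2 mmHg
sbp_rounded = np.round(sbp / 10) * 10                   # zero end-digit preference
threshold = 140.0
flipped = (sbp >= threshold) != (sbp_rounded >= threshold)
print(f"misclassified by rounding: 1 in {1 / flipped.mean():.0f}")
```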

  1. Normal forms of Hopf-zero singularity

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to a conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied on the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  2. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
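
    A minimal, hedged sketch of a p-order Gauss-Markov (AR(p)) predictor fitted with the Yule-Walker equations, in the spirit of the model described above. The position-error trace is simulated, and the order and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, p):
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
    return solve_toeplitz(r[:p], r[1:p + 1])     # AR coefficients from the autocovariances

rng = np.random.default_rng(4)
e = np.zeros(2000)                               # simulated 1-D position error with AR(2) structure
for t in range(2, 2000):
    e[t] = 1.2 * e[t - 1] - 0.4 * e[t - 2] + rng.normal(0, 0.1)

p = 2
phi = yule_walker(e[:-1], p)                     # fit on all but the last sample
m = e[:-1].mean()
pred = m + np.dot(phi, (e[-1 - p:-1] - m)[::-1]) # one-step-ahead prediction of the last sample
print("AR coefficients:", phi, " prediction error:", pred - e[-1])
```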

  3. Impact of Measurement Error on Statistical Power: Review of an Old Paradox.

    ERIC Educational Resources Information Center

    Williams, Richard H.; And Others

    1995-01-01

    The paradox that a Student t-test based on pretest-posttest differences can attain its greatest power when the difference score reliability is zero was explained by demonstrating that power is not a mathematical function of reliability unless either true score variance or error score variance is constant. (SLD)

  4. Biases and Standard Errors of Standardized Regression Coefficients

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2011-01-01

    The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…

  5. Two-Way Communication with a Single Quantum Particle.

    PubMed

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-09

    In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and we show that the probability of success is asymptotically decreasing to zero as the number of parties grows, for all classical strategies. In contrast, quantum strategy allows players to win the game with certainty.

  6. Two-Way Communication with a Single Quantum Particle

    NASA Astrophysics Data System (ADS)

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-01

    In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and we show that the probability of success is asymptotically decreasing to zero as the number of parties grows, for all classical strategies. In contrast, quantum strategy allows players to win the game with certainty.

  7. An intervention to decrease patient identification band errors in a children's hospital.

    PubMed

    Hain, Paul D; Joers, B; Rush, M; Slayton, J; Throop, P; Hoagg, S; Allen, L; Grantham, J; Deshpande, J K

    2010-06-01

    Patient misidentification continues to be a quality and safety issue. There is a paucity of US data describing interventions to reduce identification band error rates. The setting was the Monroe Carell Jr Children's Hospital at Vanderbilt, and the outcome measure was the percentage of patients with defective identification bands. Web-based surveys were sent, asking hospital personnel to anonymously identify perceived barriers to reaching zero defects with identification bands. Corrective action plans were created and implemented with ideas from leadership, front-line staff and the online survey. Data from unannounced audits of patient identification bands were plotted on statistical process control charts and shared monthly with staff. All hospital personnel were expected to "stop the line" if there were any patient identification questions. The first audit showed a defect rate of 20.4%. The original mean defect rate was 6.5%. After interventions and education, the new mean defect rate was 2.6%. (a) The initial rate of patient identification band errors in the hospital was higher than expected. (b) The action resulting in the most significant improvement was staff awareness of the problem, with clear expectations to immediately stop the line if a patient identification error was present. (c) Staff surveys are an excellent source of suggestions for combating patient identification issues. (d) Continued audit and data collection is necessary for sustainable staff focus and continued improvement. (e) Statistical process control charts are both an effective method to track results and an easily understood tool for sharing data with staff.

  8. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small-satellite sensors, high-performance estimation theory remains a highly active research topic in attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF in the attitude estimation problem. Therefore, a novel robust double-gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small-satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to increase the robustness and enhance the performance of the novel Kalman filter, in order to reduce the influence of existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small-satellite attitude estimation, compared with the classical unscented Kalman filter (UKF).

  9. Rasch-family models are more valuable than score-based approaches for analysing longitudinal patient-reported outcomes with missing data.

    PubMed

    de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique

    2016-10-01

    The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and Rasch model in terms of bias, control of the type I error and power of the test of time effect. The type I error was controlled for classical test theory and Rasch model whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, Rasch model remained unbiased and displayed higher power than classical test theory. Rasch model performed better than the classical test theory approach regarding the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items mainly for power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.

  10. Comparison of different methods to compute a preliminary orbit of Space Debris using radar observations

    NASA Astrophysics Data System (ADS)

    Ma, Hélène; Gronchi, Giovanni F.

    2014-07-01

    We advertise a new method of preliminary orbit determination for space debris using radar observations, which we call Infang. We can perform a linkage of two sets of four observations collected at close times. The context is characterized by the accuracy of the range ρ, whereas the right ascension α and the declination δ are much more inaccurate due to observational errors. This method can correct α, δ, assuming the exact knowledge of the range ρ. Considering no perturbations from the J2 effect, but including errors in the observations, we can compare the new method, the classical method of Gibbs, and the more recent Keplerian integrals method. The development of Infang is still on-going and will be further improved and tested.

  11. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

    Relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
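
    A hedged numerical check of the quoted threshold, assuming the BB84 case in which the right-hand side C̄(ρ)/2 equals 1/2 bit and the binary symmetric channel capacity is H(Q) = 1 - h(Q) with h the binary entropy; solving 1 - h(Q) = 1/2 reproduces Q_c ≈ 11%.

```python
import numpy as np
from scipy.optimize import brentq

h = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)      # binary entropy
q_c = brentq(lambda q: (1 - h(q)) - 0.5, 1e-6, 0.5 - 1e-6)    # solve 1 - h(Q) = 1/2
print(f"Q_c ~ {q_c:.4f}")                                     # about 0.110
```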

  12. Nonadiabatic Molecular Dynamics and Orthogonality Constrained Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Shushkov, Philip Georgiev

    The exact quantum dynamics of realistic, multidimensional systems remains a formidable computational challenge. In many chemical processes, however, quantum effects such as tunneling, zero-point energy quantization, and nonadiabatic transitions play an important role. Therefore, approximate approaches that improve on the classical mechanical framework are of special practical interest. We propose a novel ring polymer surface hopping method for the calculation of chemical rate constants. The method blends two approaches, namely ring polymer molecular dynamics that accounts for tunneling and zero-point energy quantization, and surface hopping that incorporates nonadiabatic transitions. We test the method against exact quantum mechanical calculations for a one-dimensional, two-state model system. The method reproduces quite accurately the tunneling contribution to the rate and the distribution of reactants between the electronic states for this model system. Semiclassical instanton theory, an approach related to ring polymer molecular dynamics, accounts for tunneling by the use of periodic classical trajectories on the inverted potential energy surface. We study a model of electron transfer in solution, a chemical process where nonadiabatic events are prominent. By representing the tunneling electron with a ring polymer, we derive Marcus theory of electron transfer from semiclassical instanton theory after a careful analysis of the tunneling mode. We demonstrate that semiclassical instanton theory can recover the limit of Fermi's Golden Rule rate in a low-temperature, deep-tunneling regime. Mixed quantum-classical dynamics treats a few important degrees of freedom quantum mechanically, while classical mechanics describes affordably the rest of the system. But the interface of quantum and classical description is a challenging theoretical problem, especially for low-energy chemical processes. We therefore focus on the semiclassical limit of the coupled nuclear-electronic dynamics. We show that the time-dependent Schrodinger equation for the electrons employed in the widely used fewest switches surface hopping method is applicable only in the limit of nearly identical classical trajectories on the different potential energy surfaces. We propose a short-time decoupling algorithm that restricts the use of the Schrodinger equation only to the interaction regions. We test the short-time approximation on three model systems against exact quantum-mechanical calculations. The approximation improves the performance of the surface hopping approach. Nonadiabatic molecular dynamics simulations require the efficient and accurate computation of ground and excited state potential energy surfaces. Unlike the ground state calculations where standard methods exist, the computation of excited state properties is a challenging task. We employ time-independent density functional theory, in which the excited state energy is represented as a functional of the total density. We suggest an adiabatic-like approximation that simplifies the excited state exchange-correlation functional. We also derive a set of minimal conditions to impose exactly the orthogonality of the excited state Kohn-Sham determinant to the ground state determinant. This leads to an efficient, variational algorithm for the self-consistent optimization of the excited state energy. Finally, we assess the quality of the excitation energies obtained by the new method on a set of 28 organic molecules. 
The new approach provides results of similar accuracy to time-dependent density functional theory.

  13. Improvement of a picking algorithm real-time P-wave detection by kurtosis

    NASA Astrophysics Data System (ADS)

    Ishida, H.; Yamada, M.

    2016-12-01

    Earthquake early warning (EEW) requires fast and accurate P-wave detection. The current EEW system in Japan uses the STA/LTA algorithm (Allen, 1978) to detect the P-wave arrival. However, some stations did not trigger during the 2011 Great Tohoku Earthquake due to the emergent onset. In addition, accuracy of the P-wave detection is very important: on August 1, 2016, the EEW issued a false alarm with M9 in the Tokyo region due to thunder noise. To solve these problems, we use a P-wave detection method based on kurtosis statistics. It detects the change in the statistical distribution of the waveform amplitude. This method was recently developed (Saragiotis et al., 2002) and used for off-line analysis such as making seismic catalogs. To apply this method to EEW, we need to remove an acausal calculation and enable real-time processing. Here, we propose a real-time P-wave detection method using kurtosis statistics with a noise filter. To avoid false triggering by noise, we incorporated a simple filter to classify seismic signal and noise. Following Kong et al. (2016), we used the interquartile range and the zero-cross rate for the classification. The interquartile range is an amplitude measure equal to the middle 50% of amplitudes in a certain time window. The zero-cross rate is a simple frequency measure that counts the number of times the signal crosses the zero baseline. A discriminant function including these measures was constructed by linear discriminant analysis. To test this kurtosis method, we used strong-motion records for 62 earthquakes between April 2005 and July 2015 that recorded a seismic intensity of 6 lower or greater on the JMA intensity scale. Records with hypocentral distance < 200 km were used for the analysis. An attached figure shows the error in P-wave detection time for the STA/LTA and kurtosis methods against manual picks. The median error is 0.13 s for the STA/LTA method and 0.035 s for the kurtosis method, which tends to be more sensitive to small changes in amplitude. Our approach will contribute to improving the accuracy of earthquake source location determination and the shaking intensity estimation for earthquake early warning.
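
    A minimal, hedged sketch of the two ingredients described above (not the authors' real-time recursion): a record-level noise filter built from the zero-crossing rate and the interquartile range, followed by a causal sliding-window kurtosis picker that flags the window in which the amplitude distribution becomes impulsive. All window lengths and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, iqr

def looks_like_noise(x, zcr_max=0.45, iqr_min=1e-3):
    zcr = np.mean(np.diff(np.signbit(x).astype(int)) != 0)    # zero-crossing rate
    return zcr > zcr_max or iqr(x) < iqr_min                  # e.g. spiky electronic/thunder noise

def pick_p_arrival(x, fs, win_s=1.0, k_thresh=3.0):
    if looks_like_noise(x):
        return None
    w = int(win_s * fs)
    k = np.array([kurtosis(x[i - w:i]) for i in range(w, len(x) + 1)])
    hits = np.flatnonzero(k > k_thresh)
    return None if hits.size == 0 else (hits[0] + w) / fs     # causal: the window ends at the pick

# Synthetic record: 6 s of low-level noise, then an emergent 5 Hz arrival.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
x = rng.normal(0, 0.01, t.size)
sig = t >= 6.0
x[sig] += 0.3 * np.sin(2 * np.pi * 5 * t[sig]) * np.exp(-(t[sig] - 6.0) / 3.0)
print("picked at", pick_p_arrival(x, fs), "s (true onset 6.0 s)")
```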

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently, the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' law, and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. When the probability of the second event is smaller than that of the first, the maximum dependence is less than 1, as defined by Bayes' law. As such, alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades, and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
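
    A minimal, hedged sketch contrasting the standard THERP conditional-probability levels with the range of conditional probabilities that probability theory itself allows (the Frechet bounds), which is the discrepancy discussed above. The event probabilities are illustrative.

```python
def therp_conditional(p, level):
    """THERP conditional probability of the second error given the first, for nominal HEP p."""
    return {"ZD": p,                      # zero dependence
            "LD": (1 + 19 * p) / 20,      # low
            "MD": (1 + 6 * p) / 7,        # moderate
            "HD": (1 + p) / 2,            # high
            "CD": 1.0}[level]             # complete dependence

def frechet_conditional_bounds(pa, pb):
    """Range of P(B|A) permitted by probability theory given the marginals P(A), P(B)."""
    lower = max(0.0, pa + pb - 1.0) / pa  # maximum negative dependence
    upper = min(pa, pb) / pa              # maximum (positive) dependence
    return lower, upper

pa, pb = 1e-2, 1e-3                       # two rare human failure events (illustrative)
print({lvl: therp_conditional(pb, lvl) for lvl in ("ZD", "LD", "MD", "HD", "CD")})
print("Bayes-law bounds on P(B|A):", frechet_conditional_bounds(pa, pb))
# With pb < pa the true maximum of P(B|A) is pb/pa = 0.1, below THERP's complete-dependence
# value of 1.0, which illustrates the disagreement described in the abstract.
```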

  15. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data pieces are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of the maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method is tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.

  16. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.

  17. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
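
    A minimal, hedged illustration of the comparison discussed above: empirical error distributions of plain Monte Carlo versus a scrambled Sobol quasi-random rule on a smooth test integrand, with scrambling playing the role of an ensemble of quasi-random point sets. The integrand and sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(1 + 0.5 * (x - 0.5), axis=1)   # exact integral over [0,1]^d equals 1
d, n, trials = 4, 1024, 200
rng = np.random.default_rng(6)

mc_err = [f(rng.random((n, d))).mean() - 1.0 for _ in range(trials)]
qmc_err = [f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean() - 1.0
           for s in range(trials)]
print("MC  error std:", np.std(mc_err))
print("QMC error std:", np.std(qmc_err))              # typically much smaller
```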

  18. Double peak-induced distance error in short-time-Fourier-transform-Brillouin optical time domain reflectometers event detection and the recovery method.

    PubMed

    Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi

    2015-10-01

    The measured distance error caused by double peaks in the BOTDRs (Brillouin optical time domain reflectometers) system is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in the paper, to the best of the authors' knowledge. Double peak, as a kind of Brillouin spectrum deformation, is important in the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variances of the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted and results in distance errors. Zero-padded short-time-Fourier-transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovering method based on the double-peak detection and corresponding BSS deformation can be applied to calculate the real starting point, which can improve the distance accuracy of the STFT-based BOTDR system.

  19. Small massless excitations against a nontrivial background

    NASA Astrophysics Data System (ADS)

    Khariton, N. G.; Svetovoy, V. B.

    1994-03-01

    We propose a systematic approach for finding bosonic zero modes of nontrivial classical solutions in a gauge theory. The method allows us to find all the modes connected with the broken space-time and gauge symmetries. The ground state is supposed to be dependent on some space coordinates y_α and independent of the rest of the coordinates x_i. The main problem which is solved is how to construct the zero modes corresponding to the broken x_i-y_α rotations in vacuum and which boundary conditions specify them. It is found that the rotational modes are typically singular at the origin or at infinity, but their energy remains finite. They behave as massless vector fields in x space. We analyze local and global symmetries affecting the zero modes. An algorithm for constructing the zero mode excitations is formulated. The main results are illustrated in the Abelian Higgs model with the string background.

  20. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign in industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the start of the zero accident campaign, using quantitative time-series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program is developed to estimate the accident rate, the zero accident time, and the achievement probability for an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
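
    A minimal, hedged sketch of one of the listed techniques, the double exponential smoothing (Holt) method, used to extrapolate a declining accident rate and read off an estimated zero accident time. The rate series and smoothing constants are illustrative, not Korean industrial statistics.

```python
import numpy as np

rate = np.array([1.8, 1.7, 1.55, 1.5, 1.38, 1.3, 1.21, 1.15, 1.05, 0.98])  # accidents per year
alpha, beta = 0.5, 0.3
level, trend = rate[0], rate[1] - rate[0]
for y in rate[1:]:
    prev_level = level
    level = alpha * y + (1 - alpha) * (level + trend)     # smooth the level
    trend = beta * (level - prev_level) + (1 - beta) * trend

h = np.arange(1, 50)                                      # forecast horizon in years
forecast = level + h * trend                              # h-step-ahead Holt forecast
zero_time = h[forecast <= 0][0] if (forecast <= 0).any() else None
print("level:", round(level, 3), "trend:", round(trend, 3),
      "years until the estimated zero accident time:", zero_time)
```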

  1. Accurate Singular Values and Differential QD Algorithms

    DTIC Science & Technology

    1992-07-01

    Excerpt (table-of-contents and text fragments): ... of the Cholesky Algorithm; The Quotient Difference Algorithm; Incorporation of Shifts (Shifted qd Algorithms ...); Effects of Finite Precision (Error Analysis - Overview; High Relative Accuracy in the Presence of ...); and a text fragment showing that it was preferable to replace the DK zero-shift QR transform by two steps of zero-shift LR implemented in a qd (quotient-difference) format.
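
    As a rough illustration of the quotient-difference machinery named in the report, the sketch below implements one zero-shift differential qd (dqd) step in the Fernando-Parlett style, acting on the squared diagonal (q) and off-diagonal (e) entries of a bidiagonal factor. The variable names, test values, and iteration count are assumptions, not the report's implementation.

```python
import numpy as np

def dqd_step(q, e):
    """One zero-shift differential qd (dqd) transform.

    q (length n) and e (length n-1) hold the squared diagonal and
    off-diagonal entries of an upper-bidiagonal factor; the transform
    maps them to the next iterate while preserving the singular values
    of the underlying bidiagonal matrix.
    """
    n = len(q)
    q_new, e_new = np.empty(n), np.empty(n - 1)
    d = q[0]
    for k in range(n - 1):
        q_new[k] = d + e[k]
        ratio = q[k + 1] / q_new[k]
        e_new[k] = e[k] * ratio
        d = d * ratio
    q_new[n - 1] = d
    return q_new, e_new

# Iterating the step drives e -> 0; sqrt(q) then approaches the singular values.
q, e = np.array([4.0, 3.0, 1.0]), np.array([0.5, 0.2])
for _ in range(50):
    q, e = dqd_step(q, e)
print(np.sqrt(np.sort(q)[::-1]))
```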

  2. The Rising Frequency of IT Blackouts Indicates the Increasing Relevance of IT Emergency Concepts to Ensure Patient Safety

    PubMed Central

    Sax, U.; Lipprandt, M.

    2016-01-01

    Summary. Introduction: As many medical workflows depend vastly on IT support, great demands are placed on the availability and accuracy of the applications involved. The cases of IT failure through ransomware at the beginning of 2016 are impressive examples of the dependence of clinical processes on IT. Although IT risk management attempts to reduce the risk of IT blackouts, the probability of partial/total data loss, or even worse, data falsification, is not zero. The objective of this paper is to present the state of the art with respect to strategies, processes, and governance to deal with the failure of IT systems. Methods: This article is conducted as a narrative review. Results: Worst-case scenarios are needed, dealing with methods for surviving the downtime of clinical systems, for example through alternative workflows. These workflows have to be trained regularly. We categorize the most important types of IT system failure, assess the usefulness of classic counter measures, and state that most risk management approaches fall short on exactly this matter. Conclusion: To ensure that continuous, evidence-based improvements to the recommendations for IT emergency concepts are made, it is essential that IT blackouts and IT disasters are reported, analyzed, and critically discussed. This requires changing from a culture of shame and blame to one of error and safety in healthcare IT. This change is finding its way into other disciplines in medicine. In addition, systematically planned and analyzed simulations of IT disaster may assist in IT emergency concept development. PMID:27830241

  3. The Rising Frequency of IT Blackouts Indicates the Increasing Relevance of IT Emergency Concepts to Ensure Patient Safety.

    PubMed

    Sax, Ulrich; Lipprandt, M; Röhrig, R

    2016-11-10

    As many medical workflows depend vastly on IT support, great demands are placed on the availability and accuracy of the applications involved. The cases of IT failure through ransomware at the beginning of 2016 are impressive examples of the dependence of clinical processes on IT. Although IT risk management attempts to reduce the risk of IT blackouts, the probability of partial/total data loss, or even worse, data falsification, is not zero. The objective of this paper is to present the state of the art with respect to strategies, processes, and governance to deal with the failure of IT systems. This article is conducted as a narrative review. Worst-case scenarios are needed, dealing with methods for surviving the downtime of clinical systems, for example through alternative workflows. These workflows have to be trained regularly. We categorize the most important types of IT system failure, assess the usefulness of classic counter measures, and state that most risk management approaches fall short on exactly this matter. To ensure that continuous, evidence-based improvements to the recommendations for IT emergency concepts are made, it is essential that IT blackouts and IT disasters are reported, analyzed, and critically discussed. This requires changing from a culture of shame and blame to one of error and safety in healthcare IT. This change is finding its way into other disciplines in medicine. In addition, systematically planned and analyzed simulations of IT disaster may assist in IT emergency concept development.

  4. High resolution study of magnetic ordering at absolute zero.

    PubMed

    Lee, M; Husmann, A; Rosenbaum, T F; Aeppli, G

    2004-05-07

    High resolution pressure measurements in the zero-temperature limit provide a unique opportunity to study the behavior of strongly interacting, itinerant electrons with coupled spin and charge degrees of freedom. Approaching the precision that has become the hallmark of experiments on classical critical phenomena, we characterize the quantum critical behavior of the model, elemental antiferromagnet chromium, lightly doped with vanadium. We resolve the sharp doubling of the Hall coefficient at the quantum critical point and trace the dominating effects of quantum fluctuations up to surprisingly high temperatures.

  5. Containment control of networked autonomous underwater vehicles: A predictor-based neural DSC design.

    PubMed

    Peng, Zhouhua; Wang, Dan; Wang, Wei; Liu, Lu

    2015-11-01

    This paper investigates the containment control problem of networked autonomous underwater vehicles in the presence of model uncertainty and unknown ocean disturbances. A predictor-based neural dynamic surface control design method is presented to develop the distributed adaptive containment controllers, under which the trajectories of follower vehicles nearly converge to the dynamic convex hull spanned by multiple reference trajectories over a directed network. Prediction errors, rather than tracking errors, are used to update the neural adaptation laws, which are independent of the tracking error dynamics, resulting in two time-scales to govern the entire system. The stability property of the closed-loop network is established via Lyapunov analysis, and transient property is quantified in terms of L2 norms of the derivatives of neural weights, which are shown to be smaller than the classical neural dynamic surface control approach. Comparative studies are given to show the substantial improvements of the proposed new method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
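
    The alternating minimization described in the abstract can be sketched compactly: fix the rotation and threshold to get binary codes, then fix the codes and solve an orthogonal Procrustes problem for the rotation. The toy data, code length, and iteration count below are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative quantization: learn a rotation R that maps the
    zero-centered, PCA-projected data V (n x c) close to the vertices
    of the binary hypercube {-1, +1}^c."""
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((V.shape[1], V.shape[1])))
    for _ in range(n_iter):
        B = np.sign(V @ R)                    # fix R, update binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)     # fix B, orthogonal Procrustes
        R = U @ Wt
    return np.sign(V @ R), R

# Toy usage: zero-center, project onto the top-8 PCA directions, then rotate.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 64))
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
codes, R = itq(X @ Vt[:8].T)
print(codes.shape)                            # (1000, 8) binary codes
```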

  7. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  8. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in training neural networks, fuzzy system control, and related fields. When the traditional particle swarm algorithm is used to train feed-forward neural networks, however, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. Particles are ranked by fitness so that the optimization problem is considered as a whole, and the error back-propagation gradient used to train the BP neural network is combined with the PSO updates: each particle updates its velocity and position according to its individual best and the global best, with learning weighted more toward the social (global) optimum and less toward its own optimum, which helps particles avoid local optima, while the gradient information accelerates the local search and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and stays close to it thereafter; for the same running time it offers faster convergence and better search performance, particularly in the later stages of the search.
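
    A minimal sketch of this kind of hybrid is given below: a standard PSO velocity/position update augmented with a small gradient-descent step per particle, run on a simple quadratic objective standing in for the network training error. The parameter values and objective are assumptions; the paper's method additionally couples the scheme to BP neural-network training.

```python
import numpy as np

def hybrid_pso(f, grad_f, dim=2, n_particles=20, n_iter=100,
               w=0.7, c1=1.5, c2=1.5, lr=0.05, seed=0):
    """PSO in which each particle also takes a small gradient-descent step,
    combining global (social) search with local gradient information."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v - lr * np.array([grad_f(p) for p in x])  # gradient refinement
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, f(gbest)

# Toy objective standing in for the network training error surface.
f = lambda p: np.sum((p - 3.0) ** 2)
grad_f = lambda p: 2.0 * (p - 3.0)
print(hybrid_pso(f, grad_f))
```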

  9. Damage Initiation in Two-Dimensional, Woven, Carbon-Carbon Composites

    DTIC Science & Technology

    1988-12-01

    Excerpt (fragments): ... biaxial stress interaction were themselves a function of the applied biaxial stress ratio, and thus the error in measuring F12 depended on F12. To find the ... the supported directions. Discretizing the model will tend to induce error in the computed nodal displacements when compared to an exact continuum solution; however, for an increasing number of elements in the structural model, the net error should converge to zero (3:94). The inherent flexibility in ...

  10. Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.

    PubMed

    Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C

    2015-03-01

    The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and smooth parts of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to the classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in the case of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified in particular in abdominal data sets, where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.
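
    A one-dimensional sketch of the core idea, replacing Gaussian smoothing of the displacement field with bilateral filtering, is given below; the spatial and range kernel widths and the toy field/image are assumptions. Because the range weight collapses across intensity edges, the filter preserves the displacement discontinuity at the edge instead of blurring it.

```python
import numpy as np

def bilateral_filter_1d(field, image, sigma_s=3.0, sigma_r=0.1, radius=8):
    """Smooth a displacement field with weights that combine spatial
    closeness and image-intensity similarity, so smoothing does not
    bleed across intensity edges (e.g. sliding-organ boundaries)."""
    out = np.empty_like(field)
    for i in range(len(field)):
        lo, hi = max(0, i - radius), min(len(field), i + radius + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)
        w_range = np.exp(-0.5 * ((image[idx] - image[i]) / sigma_r) ** 2)
        w = w_spatial * w_range
        out[i] = np.sum(w * field[idx]) / np.sum(w)
    return out

# Toy example: a discontinuous displacement field at an intensity edge.
x = np.arange(200)
image = (x > 100).astype(float)                 # sharp intensity edge
field = np.where(x > 100, 2.0, 0.0) \
        + 0.1 * np.random.default_rng(0).standard_normal(200)
smoothed = bilateral_filter_1d(field, image)    # edge in the field is preserved
```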

  11. Quantum protocols within Spekkens' toy model

    NASA Astrophysics Data System (ADS)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k-threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  12. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
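
    The 'absolute mean plus 3 sigma' quiet-time metric can be illustrated with a toy Monte Carlo like the one below. The individual error magnitudes (zero offset, scale factor, residual spacecraft field) are invented for illustration and are not GOES-R budget values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_field = 100.0                               # quiet-time field, nT

# Assumed (illustrative) 1-sigma error sources, in nT or fractional units.
zero_offset = rng.normal(0.0, 0.4, n)            # nT
scale_factor = rng.normal(0.0, 0.002, n)         # fractional scale-factor error
spacecraft_field = rng.normal(0.0, 0.3, n)       # nT, residual spacecraft field

measured = true_field * (1 + scale_factor) + zero_offset + spacecraft_field
error = measured - true_field
accuracy = abs(error.mean()) + 3 * error.std()   # quiet-time metric from the abstract
print(f"mean+3sigma accuracy: {accuracy:.2f} nT (requirement 1.7 nT)")
```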

  13. How to Use Equipment Specifications to Predict Measurement Uncertainty An Example Using Tunnels A, B and C Data System

    DTIC Science & Technology

    2012-06-01

    Excerpt (fragments): ... µV RTI (relative to input) or ±1.0 mV RTO (relative to output). On a gain of 1,000, this is ±2.0 mV or ±0.02% FS and is a systematic error. Zero Stability, Time: ±5 µV RTI, ±1.0 mV RTO. On a gain of 1,000, this is ±6 mV or ±0.06% FS. However, since the manufacturer's specification is for 1 year ... This is a random error assumed to be normally distributed. Zero Stability, Temperature: ±1 µV RTI, ±0.2 mV RTO/°C. On a gain of 1,000 and for a ...

  14. A zero-error operational video data compression system

    NASA Technical Reports Server (NTRS)

    Kutz, R. L.

    1973-01-01

    A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.

  15. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  16. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    NASA Astrophysics Data System (ADS)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

    A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on if, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined with four, on average, giving values within 4 kJ/mol—chemical accuracy—for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff and classical simulations of kinetic energy release should therefore be viewed with caution.

  17. On-orbit assembly of a team of flexible spacecraft using potential field based method

    NASA Astrophysics Data System (ADS)

    Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping

    2017-04-01

    In this paper, a novel control strategy is developed based on artificial potential field for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven to a pre-assembly configuration first and then to the assembly configuration. In order to design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a circular virtual leader is introduced. The potential field mainly depends on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that there are no local minima for the potential function and the global minimum is zero. If the function is equal to zero, the solution is not a single state but a set, and all the states in the set correspond to the desired configurations. The Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, another potential field is included to avoid inter-member collision. In the control design of the second step, only a small modification is made to the controller from the first step. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.

  18. Optical truss and retroreflector modeling for picometer laser metrology

    NASA Astrophysics Data System (ADS)

    Hines, Braden E.

    1993-09-01

    Space-based astrometric interferometer concepts typically have a requirement for the measurement of the internal dimensions of the instrument to accuracies in the picometer range. While this level of resolution has already been achieved for certain special types of laser gauges, techniques for picometer-level accuracy need to be developed to enable all the various kinds of laser gauges needed for space-based interferometers. Systematic errors due to retroreflector imperfections become important as soon as the retroreflector is allowed to either translate in position or articulate in angle away from its nominal zero-point. Also, when combining several laser interferometers to form a three-dimensional laser gauge (a laser optical truss), systematic errors due to imperfect knowledge of the truss geometry are important as the retroreflector translates away from its nominal zero-point. In order to assess the astrometric performance of a proposed instrument, it is necessary to determine how the effects of an imperfect laser metrology system impact the astrometric accuracy. This paper shows the development of an error propagation model from errors in the 1-D metrology measurements through to their impact on the overall astrometric accuracy for OSI. Simulations based on this development are then presented, which were used to define a multiplier that determines the 1-D metrology accuracy required to produce a given amount of fringe position error.

  19. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...

  20. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
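
    For context, the classical LS-SVM regression baseline that the robust variant modifies can be sketched as a single linear KKT solve with an RBF kernel, as below. The kernel choice, hyperparameters, and outlier-contaminated toy data are assumptions; the paper's robust objective (minimizing both the mean and the variance of the modeling error) is not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Classical least-squares SVM regression: solve the linear KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda Xq: np.exp(-np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
                             / (2 * sigma ** 2)) @ alpha + b

# Toy regression with outliers; a robust variant would down-weight them.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
y[::15] += 2.0                                   # inject outliers
predict = lssvm_fit(X, y)
print(predict(np.array([[0.0], [1.5]])))
```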

  1. On improvement to the Shock Propagation Model (SPM) applied to interplanetary shock transit time forecasting

    NASA Astrophysics Data System (ADS)

    Li, H. J.; Wei, F. S.; Feng, X. S.; Xie, Y. Q.

    2008-09-01

    This paper investigates methods to improve the predictions of Shock Arrival Time (SAT) of the original Shock Propagation Model (SPM). According to the classical blast wave theory adopted in the SPM, the shock propagation speed is determined by the total energy of the original explosion together with the background solar wind speed. Noting that there exists an intrinsic limit to the transit times computed by the SPM for a specified ambient solar wind, we present a statistical analysis of the forecasting capability of the SPM using this intrinsic property. Two facts about the SPM are found: (1) the error in shock energy estimation is not the only cause of the prediction errors, so we should not expect the accuracy of the SPM to improve drastically given an exact shock energy input; and (2) there are systematic differences in the prediction results both for strong shocks propagating into a slow ambient solar wind and for weak shocks propagating into a fast medium. The statistical analyses indicate the physical details of shock propagation and thus clearly point out directions for future improvement of the SPM. A simple modification is presented here, which shows that there is room for improvement and thus that the original SPM is worthy of further development.

  2. REEXAMINATION OF INDUCTION HEATING OF PRIMITIVE BODIES IN PROTOPLANETARY DISKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menzel, Raymond L.; Roberge, Wayne G., E-mail: menzer@rpi.edu, E-mail: roberw@rpi.edu

    2013-10-20

    We reexamine the unipolar induction mechanism for heating asteroids originally proposed in a classic series of papers by Sonett and collaborators. As originally conceived, induction heating is caused by the 'motional electric field' that appears in the frame of an asteroid immersed in a fully ionized, magnetized solar wind and drives currents through its interior. However, we point out that classical induction heating contains a subtle conceptual error, in consequence of which the electric field inside the asteroid was calculated incorrectly. The problem is that the motional electric field used by Sonett et al. is the electric field in the freely streaming plasma far from the asteroid; in fact, the motional field vanishes at the asteroid surface for realistic assumptions about the plasma density. In this paper we revisit and improve the induction heating scenario by (1) correcting the conceptual error by self-consistently calculating the electric field in and around the boundary layer at the asteroid-plasma interface; (2) considering weakly ionized plasmas consistent with current ideas about protoplanetary disks; and (3) considering more realistic scenarios that do not require a fully ionized, powerful T Tauri wind in the disk midplane. We present exemplary solutions for two highly idealized flows that show that the interior electric field can either vanish or be comparable to the fields predicted by classical induction depending on the flow geometry. We term the heating driven by these flows 'electrodynamic heating', calculate its upper limits, and compare them to heating produced by short-lived radionuclides.

  3. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
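
    A minimal numerical sketch of decoding a four-state left-right HMM with univariate Gaussian emissions on a single gyro channel is given below; the transition matrix, emission parameters, and synthetic signal are placeholders, and a real implementation would train them with Baum-Welch as described in the abstract.

```python
import numpy as np
from scipy.stats import norm

def viterbi_gaussian(obs, A, means, stds, pi):
    """Viterbi decoding with univariate Gaussian emissions (log domain)."""
    n_states, T = len(pi), len(obs)
    log_b = norm.logpdf(obs[None, :], means[:, None], stds[:, None])  # (n_states, T)
    delta = np.log(pi) + log_b[:, 0]
    psi = np.zeros((T, n_states), dtype=int)
    logA = np.log(A + 1e-12)
    for t in range(1, T):
        scores = delta[:, None] + logA           # scores[i, j]: from state i to j
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(n_states)] + log_b[:, t]
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta)
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path

# Four-state left-right gait model (state 0: stance, near-zero angular rate).
A = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.9, 0.1, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.1, 0.0, 0.0, 0.9]])
means = np.array([0.0, 1.5, 3.0, 1.0])           # rad/s, placeholder per-state means
stds = np.array([0.1, 0.5, 0.8, 0.5])
pi = np.array([0.7, 0.1, 0.1, 0.1])
gyro = np.concatenate([np.random.default_rng(0).normal(m, s, 50)
                       for m, s in zip(means, stds)])
states = viterbi_gaussian(gyro, A, means, stds, pi)
zero_velocity = (states == 0)                    # stance (zero-velocity) interval mask
```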

  4. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.

  5. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences which arise from the ignorance of zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, exact fit, and lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field so that our findings can be easily generalized to the real settings. Applications of the methodology are demonstrated by empirical analyses on the data from a well-known alcohol study. PMID:21563207
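
    The zero-inflated Poisson likelihood underlying these methods can be fitted by direct optimization, as sketched below on simulated counts with structural zeros; the covariates, the constant inflation probability, and the optimizer choice are assumptions, and the SCAD-type penalties studied in the paper are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_negloglik(params, X, y):
    """Negative log-likelihood of a zero-inflated Poisson model with a
    logit-scale inflation probability pi and a log-linear Poisson mean."""
    k = X.shape[1]
    beta, gamma0 = params[:k], params[k]
    lam = np.exp(X @ beta)
    pi = expit(gamma0)                           # constant inflation probability
    logp_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),
                  np.log(1 - pi) + logp_pois)
    return -ll.sum()

# Simulated data: 30% structural zeros on top of Poisson counts.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
lam = np.exp(X @ np.array([0.5, 0.8]))
y = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))

res = minimize(zip_negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
print(res.x)    # roughly [0.5, 0.8, logit(0.3)]
```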

  6. Integrating classical and molecular approaches to evaluate the impact of nanosized zero-valent iron (nZVI) on soil organisms.

    PubMed

    Saccà, Maria Ludovica; Fajardo, Carmen; Costa, Gonzalo; Lobo, Carmen; Nande, Mar; Martin, Margarita

    2014-06-01

    Nanosized zero-valent iron (nZVI) is a new option for the remediation of contaminated soil and groundwater, but the effect of nZVI on soil biota is mostly unknown. In this work, nanotoxicological studies were performed in vitro and in two different standard soils to assess the effect of nZVI on autochthonous soil organisms by integrating classical and molecular analysis. Standardised ecotoxicity testing methods using Caenorhabditis elegans were applied in vitro and in soil experiments and changes in microbial biodiversity and biomarker gene expression were used to assess the responses of the microbial community to nZVI. The classical tests conducted in soil ruled out a toxic impact of nZVI on the soil nematode C. elegans in the test soils. The molecular analysis applied to soil microorganisms, however, revealed significant changes in the expression of the proposed biomarkers of exposure. These changes were related not only to the nZVI treatment but also to the soil characteristics, highlighting the importance of considering the soil matrix on a case by case basis. Furthermore, due to the temporal shift between transcriptional responses and the development of the corresponding phenotype, the molecular approach could anticipate adverse effects on environmental biota. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of decode-and-forward dual-hop mixed radio frequency / free-space optical system in urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in RF link is derived with the help of hypergeometric function, and that in FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The result shows that with ZBPE and NBPE considered, FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban area. However, aperture averaging can bring significant ABER improvement of this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  8. Mixed quantum/classical investigation of the photodissociation of NH3(Ã) and a practical method for maintaining zero-point energy in classical trajectories

    NASA Astrophysics Data System (ADS)

    Bonhommeau, David; Truhlar, Donald G.

    2008-07-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the à electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU /SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU /SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  9. Mixed quantum/classical investigation of the photodissociation of NH3(A) and a practical method for maintaining zero-point energy in classical trajectories.

    PubMed

    Bonhommeau, David; Truhlar, Donald G

    2008-07-07

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2 = 0,…,6 quanta of vibration) in the A electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2 = 0 and n2 > 1, as observed in experiments. Distributions obtained for n2 = 1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2 = 0 and n2 = 6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  10. The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest

    ERIC Educational Resources Information Center

    Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher

    2009-01-01

    Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…

  11. Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.

    PubMed

    Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J

    2001-09-01

    In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.

  12. Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors.

    PubMed

    Patton, James L; Stoykov, Mary Ellen; Kovic, Mark; Mussa-Ivaldi, Ferdinando A

    2006-01-01

    This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate "adaptive training." Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable "after-effect." A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion--either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.

  13. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    PubMed Central

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406

  14. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
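
    A numpy sketch of the first statistic is given below: the identifiability of each parameter is the length of the projection of its unit axis vector onto the calibration solution space spanned by the leading right singular vectors of the weighted Jacobian. The toy Jacobian, weights, and truncation level are illustrative assumptions.

```python
import numpy as np

def parameter_identifiability(J, weights, n_solution):
    """Identifiability of each parameter: length of the projection of its
    unit axis vector onto the calibration solution space
    (0 = not identifiable, 1 = fully identifiable)."""
    Q = np.sqrt(np.diag(weights))                # square root of observation weights
    _, s, Vt = np.linalg.svd(Q @ J, full_matrices=False)
    V_sol = Vt[:n_solution]                      # right singular vectors spanning the solution space
    return np.sqrt(np.sum(V_sol ** 2, axis=0))

# Toy model: 20 observations, 5 parameters, two of which are nearly redundant.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))
J[:, 4] = J[:, 3] + 1e-6 * rng.standard_normal(20)   # near-redundant pair
w = np.ones(20)
print(parameter_identifiability(J, w, n_solution=4))
```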

  15. Semi-classical Reissner-Nordstrom model for the structure of charged leptons

    NASA Technical Reports Server (NTRS)

    Rosen, G.

    1980-01-01

    The lepton self-mass problem is examined within the framework of the quantum theory of electromagnetism and gravity. Consideration is given to the Reissner-Nordstrom solution to the Einstein-Maxwell classical field equations for an electrically charged mass point, and the WKB theory for a semiclassical system with total energy zero is used to obtain an expression for the Einstein-Maxwell action factor. The condition obtained is found to account for the observed mass values of the three charged leptons, and to be in agreement with the correspondence principle.

  16. Managing the spatial properties and photon correlations in squeezed non-classical twisted light

    NASA Astrophysics Data System (ADS)

    Zakharov, R. V.; Tikhonova, O. V.

    2018-05-01

    Spatial photon correlations and mode content of the squeezed vacuum light generated in a system of two separated nonlinear crystals is investigated. The contribution of both the polar and azimuthal modes with non-zero orbital angular momentum is analyzed. The control and engineering of the spatial properties and degree of entanglement of the non-classical squeezed light by changing the distance between crystals and pump parameters is demonstrated. Methods for amplification of certain spatial modes and managing the output mode content and intensity profile of quantum twisted light are suggested.

  17. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.

  18. Error Characterization of Flight Trajectories Reconstructed Using Structure from Motion

    DTIC Science & Technology

    2015-03-27

    Excerpt (fragments): ... adjustment using IMU rotation information, the accuracy of the yaw, pitch and roll is limited and numerical errors can be as high as 1e-4 depending on ... due to either zero-mean Gaussian noise and/or bias in the IMU-measured yaw, pitch and roll angles. It is possible that when errors in these ... requires both the information on how the camera is mounted to the IMU/aircraft and the measured yaw, pitch and roll at the time of the first image.

  19. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    PubMed Central

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and to identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the simple or generalized ZIP model for analysis. On the other hand, when working with data containing zeros but not zero-inflated, the Poisson model or a data-transformation approach, such as a square-root or Box-Cox transformation, is applicable. PMID:22215960

  20. Word-Synchronous Optical Sampling of Periodically Repeated OTDM Data Words for True Waveform Visualization

    NASA Astrophysics Data System (ADS)

    Benkler, Erik; Telle, Harald R.

    2007-06-01

    An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details, which cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaption of virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye-diagrams. Phase comparison of the PLL is performed at 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.

  1. FIR digital filter-based ZCDPLL for carrier recovery

    NASA Astrophysics Data System (ADS)

    Nasir, Qassim

    2016-04-01

    The objective of this work is to analyse the performance of the newly proposed two-tap FIR digital filter-based first-order zero-crossing digital phase-locked loop (ZCDPLL) in the absence or presence of additive white Gaussian noise (AWGN). The introduction of the two-tap FIR digital filter widens the lock range of a ZCDPLL and improves the loop's operation in the presence of AWGN. The FIR digital filter tap coefficients affect the loop's convergence behaviour, so appropriate selection of those gains should be taken into consideration. The newly proposed loop has a wider locking range and faster acquisition time, and it reduces the phase error variations in the presence of noise.

  2. Evolutionary dynamics of a smoothed war of attrition game.

    PubMed

    Iyer, Swami; Killingback, Timothy

    2016-05-07

    In evolutionary game theory the War of Attrition game is intended to model animal contests which are decided by non-aggressive behavior, such as the length of time that a participant will persist in the contest. The classical War of Attrition game assumes that no errors are made in the implementation of an animal's strategy. However, it is inevitable in reality that such errors must sometimes occur. Here we introduce an extension of the classical War of Attrition game which includes the effect of errors in the implementation of an individual's strategy. This extension of the classical game has the important feature that the payoff is continuous, and as a consequence admits evolutionary behavior that is fundamentally different from that possible in the original game. We study the evolutionary dynamics of this new game in well-mixed populations both analytically using adaptive dynamics and through individual-based simulations, and show that there are a variety of possible outcomes, including simple monomorphic or dimorphic configurations which are evolutionarily stable and cannot occur in the classical War of Attrition game. In addition, we study the evolutionary dynamics of this extended game in a variety of spatially and socially structured populations, as represented by different complex network topologies, and show that similar outcomes can also occur in these situations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Position Tracking During Human Walking Using an Integrated Wearable Sensing System.

    PubMed

    Zizzo, Giulio; Ren, Lei

    2017-12-10

    Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
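
    The zero-velocity update (ZUPT) idea can be shown in a few lines: integrate acceleration to velocity, and clamp the velocity to zero whenever a simple stance-phase detector decides the sensor is stationary. This is a generic sketch with made-up thresholds, not the EKF/HDR implementation described above.

```python
import numpy as np

def integrate_with_zupt(acc, gyro, dt, acc_tol=0.3, gyro_tol=0.2, g=9.81):
    """Integrate 3-axis accelerometer data to velocity with zero-velocity updates.

    acc, gyro: arrays of shape (N, 3) in a roughly level body frame (illustrative).
    """
    vel = np.zeros(3)
    out = []
    for a, w in zip(acc, gyro):
        # Crude stance detector: specific force close to gravity and low angular rate.
        stationary = abs(np.linalg.norm(a) - g) < acc_tol and np.linalg.norm(w) < gyro_tol
        if stationary:
            vel[:] = 0.0                                   # ZUPT: reset drifting velocity
        else:
            vel += (a - np.array([0.0, 0.0, g])) * dt      # remove gravity, then integrate
        out.append(vel.copy())
    return np.array(out)

# Tiny synthetic example: 1 s of standing still with slightly biased sensors.
acc = np.tile([0.02, -0.01, 9.82], (100, 1))
gyro = np.full((100, 3), 0.01)
vel = integrate_with_zupt(acc, gyro, dt=0.01)
print(f"final speed with ZUPT: {np.linalg.norm(vel[-1]):.4f} m/s")  # stays at zero
```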

  4. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    To address the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which degrades the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices (weight 0), circulant permutation matrices (CPMs, weight 1), and circulant matrices of weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes contain no length-4 cycles, so the codes have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
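
    The two building blocks named above are easy to write down: a circulant permutation matrix (CPM) is an identity matrix cyclically shifted, and a weight-2 circulant (W2CM) is the mod-2 superposition of two distinct CPMs. The lifting size and the perfect cyclic difference set used below are small illustrative choices; the paper's actual base-matrix layout is not reproduced.

```python
import numpy as np

def cpm(m, shift):
    """m x m circulant permutation matrix: identity cyclically shifted by `shift`."""
    return np.roll(np.eye(m, dtype=int), shift, axis=1)

def w2cm(m, s1, s2):
    """Weight-2 circulant matrix: mod-2 superposition of two distinct CPMs."""
    return (cpm(m, s1) + cpm(m, s2)) % 2

m = 7
pcds = [1, 2, 4]                       # a (7, 3, 1) perfect cyclic difference set
blocks = [cpm(m, s) for s in pcds] + [w2cm(m, pcds[0], pcds[2])]
H_row = np.hstack(blocks)              # one block-row of a candidate parity-check matrix
print("block-row shape:", H_row.shape)
print("column weights:", H_row.sum(axis=0))   # 1 for CPM columns, 2 for W2CM columns
```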

  5. One-dimensional stitching interferometry assisted by a triple-beam interferometer

    DOE PAGES

    Xue, Junpeng; Huang, Lei; Gao, Bo; ...

    2017-04-13

    In this work, we propose a stitching interferometry approach that uses a triple-beam interferometer to measure both the distance and the tilt of all sub-apertures before the stitching process. The relative piston between two neighboring sub-apertures is then calculated from the data in the overlapping area. Comparisons are made between our method and the classical least-squares stitching method. Our method can improve the accuracy and repeatability of the classical stitching method when a large number of sub-aperture topographies are taken into account. Our simulations and experiments on flat and spherical mirrors indicate that the proposed method can reduce the influence of interferometer error on the stitched result. The comparison of the stitching system with Fizeau interferometry data agrees to about 2 nm root mean square, and the repeatability is within ±2.5 nm peak to valley.
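
    The piston step mentioned above reduces to averaging the difference between two sub-apertures over their overlap and shifting one of them before joining. The one-dimensional sketch below uses synthetic profiles and ignores tilt (which the triple-beam interferometer supplies separately in the paper).

```python
import numpy as np

def stitch_pair(profile_a, profile_b, overlap):
    """Append profile_b to profile_a after removing their relative piston."""
    piston = np.mean(profile_a[-overlap:] - profile_b[:overlap])
    return np.concatenate([profile_a, profile_b[overlap:] + piston])

x = np.linspace(0.0, 2.0, 200)
surface = 50e-9 * np.sin(2 * np.pi * x)              # "true" 1-D surface profile (m)
sub_a = surface[:120]                                # first sub-aperture
sub_b = surface[80:] + 7e-9                          # second sub-aperture with a piston error
stitched = stitch_pair(sub_a, sub_b, overlap=40)
print(f"max residual after stitching: {np.max(np.abs(stitched - surface)) * 1e9:.3f} nm")
```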

  6. EXAMINING THE ACCURACY OF ASTROPHYSICAL DISK SIMULATIONS WITH A GENERALIZED HYDRODYNAMICAL TEST PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskin, Cody; Owen, J. Michael, E-mail: raskin1@llnl.gov, E-mail: mikeowen@llnl.gov

    2016-11-01

    We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  7. Identification method of laser gyro error model under changing physical field

    NASA Astrophysics Data System (ADS)

    Wang, Qingqing; Niu, Zhenzhong

    2018-04-01

    In this paper, the influence mechanism of temperature, temperature change rate, and temperature gradient on inertial devices is studied. A second-order model of the zero bias and a third-order model of the calibration factor of the laser gyro under temperature variation are derived. A calibration scheme for the temperature error is designed, and the experiment is carried out. Two methods, stepwise regression analysis and a BP neural network, are used to identify the parameters of the temperature error model, and the effectiveness of the two methods is demonstrated by temperature error compensation.
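
    In practice the second- and third-order temperature models amount to polynomial fits of the calibration data. The sketch below fits such polynomials to synthetic bias and calibration-factor data and shows the residual after compensation; all numbers are placeholders, not measured gyro data.

```python
import numpy as np

rng = np.random.default_rng(1)
temp = np.linspace(-20.0, 60.0, 40)                  # calibration temperatures (deg C)
bias = 0.02 + 1.5e-3 * temp + 4.0e-5 * temp**2 + rng.normal(0.0, 5e-4, temp.size)
cal_factor = 1.0 + 2e-6 * temp - 3e-8 * temp**2 + 5e-10 * temp**3

bias_model = np.polynomial.Polynomial.fit(temp, bias, deg=2)        # second-order model
cf_model = np.polynomial.Polynomial.fit(temp, cal_factor, deg=3)    # third-order model

residual = bias - bias_model(temp)                   # zero bias after compensation
print(f"bias spread before: {np.ptp(bias):.3e}, after compensation: {np.ptp(residual):.3e}")
```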

  8. Further insights on the French WISC-IV factor structure through Bayesian structural equation modeling.

    PubMed

    Golay, Philippe; Reverte, Isabelle; Rossier, Jérôme; Favez, Nicolas; Lecerf, Thierry

    2013-06-01

    The interpretation of the Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most important, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), among a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor better represented the structure of the WISC-IV than did the 4-factor structure and the higher order models. Because a direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered as a breadth rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM allowed improvement of the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. Precise Determination of the Zero-Gravity Surface Figure of a Mirror without Gravity-Sag Modeling

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.; Lam, Jonathan C.; Feria, V. Alfonso; Chang, Zensheu

    2007-01-01

    The zero-gravity surface figure of optics used in spaceborne astronomical instruments must be known to high accuracy, but earthbound metrology is typically corrupted by gravity sag. Generally, inference of the zero-gravity surface figure from a measurement made under normal gravity requires finite-element analysis (FEA), and for accurate results the mount forces must be well characterized. We describe how to infer the zero-gravity surface figure very precisely using the alternative classical technique of averaging pairs of measurements made with the direction of gravity reversed. We show that mount forces as well as gravity must be reversed between the two measurements and discuss how the St. Venant principle determines when a reversed mount force may be considered to be applied at the same place in the two orientations. Our approach requires no finite-element modeling and no detailed knowledge of mount forces other than the fact that they reverse and are applied at the same point in each orientation. If mount schemes are suitably chosen, zero-gravity optical surfaces may be inferred much more simply and more accurately than with FEA.
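
    The averaging argument is a one-line cancellation: if gravity and the (reversed) mount forces add a sag term +s to the face-up map and -s to the face-down map, their mean recovers the zero-gravity figure exactly. A toy numeric check, with placeholder surface maps, is shown below.

```python
import numpy as np

rng = np.random.default_rng(2)
zero_g_truth = rng.normal(0.0, 10e-9, (64, 64))                # "true" zero-gravity figure (m)
sag = 30e-9 * np.outer(np.hanning(64), np.hanning(64))         # gravity + mount sag pattern

face_up = zero_g_truth + sag          # measured with gravity (and mount forces) one way
face_down = zero_g_truth - sag        # measured with gravity and mount forces reversed
recovered = 0.5 * (face_up + face_down)

print(np.allclose(recovered, zero_g_truth))   # True: the sag term cancels exactly
```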

  10. The motion near L{sub 4} equilibrium point under non-point mass primaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huda, I. N., E-mail: ibnu.nurul@students.itb.ac.id; Utama, J. A.; Madley, D.

    2015-09-30

    The Circular Restricted Three-Body Problem (CRTBP) possesses five equilibrium points, comprising three collinear points (L{sub 1}, L{sub 2}, and L{sub 3}) and two triangular points (L{sub 4} and L{sub 5}). The classical study (with point-mass primaries) suggests that the equilibrium points may cause the relative velocity of the infinitesimal object to become zero, revealing the zero velocity curve. We study the motion of an infinitesimal object near the triangular equilibrium point (L{sub 4}) and determine its zero velocity curve. We extend the study by taking into account the effects of radiation of the bigger primary (q{sub 1} ≠ 1, q{sub 2} = 1) and oblateness of the smaller primary (A{sub 1} = 0, A{sub 2} ≠ 0). The location of L{sub 4} is analytically derived, then the stability of L{sub 4} and its zero velocity curves are studied numerically. Our study suggests that the oblateness and the radiation of the primaries may affect the stability and zero velocity curve around L{sub 4}.

  11. Testing Pattern Hypotheses for Correlation Matrices

    ERIC Educational Resources Information Center

    McDonald, Roderick P.

    1975-01-01

    The treatment of covariance matrices given by McDonald (1974) can be readily modified to cover hypotheses prescribing zeros and equalities in the correlation matrix rather than the covariance matrix, still with the convenience of the closed-form Least Squares solution and the classical Newton method. (Author/RC)

  12. The canonical equation of adaptive dynamics for life histories: from fitness-returns to selection gradients and Pontryagin's maximum principle.

    PubMed

    Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob

    2016-03-01

    This paper should be read as addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013) are chosen such that they avoid a number of the problems that one gets in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well known equivalence that however in the literature is given either ex cathedra or is proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.

  13. Context-sensitivity of the feedback-related negativity for zero-value feedback outcomes.

    PubMed

    Pfabigan, Daniela M; Seidel, Eva-Maria; Paul, Katharina; Grahl, Arvina; Sailer, Uta; Lanzenberger, Rupert; Windischberger, Christian; Lamm, Claus

    2015-01-01

    The present study investigated whether the same visual stimulus indicating zero-value feedback (€0) elicits feedback-related negativity (FRN) variation, depending on whether the outcomes correspond with expectations or not. Thirty-one volunteers performed a monetary incentive delay (MID) task while EEG was recorded. FRN amplitudes were comparable and more negative when zero-value outcome deviated from expectations than with expected gain or loss, supporting theories emphasising the impact of unexpectedness and salience on FRN amplitudes. Surprisingly, expected zero-value outcomes elicited the most negative FRNs. However, source localisation showed that such outcomes evoked less activation in cingulate areas than unexpected zero-value outcomes. Our study illustrates the context dependency of identical zero-value feedback stimuli. Moreover, the results indicate that the incentive cues in the MID task evoke different reward prediction error signals. These prediction signals differ in FRN amplitude and neuronal sources, and have to be considered in the design and interpretation of future studies. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large scale aircraft have been well known and understood for decades, which usually involve a complex array of expensive high accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs due to small UAVs employing limited sensor suites due to cost, and the fact that small UAVs are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings level ascent. It is shown that in zero wind, the first method produces significant steady state attitude errors in both a coordinated turn and in a wings level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to not exhibit any steady state error in the tested scenarios that is inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.

  15. Estimating model errors from historical data using the classic Lorenz (1963) equation as a prediction model

    NASA Astrophysics Data System (ADS)

    Wan, S.; He, W.

    2016-12-01

    The inverse problem of using the information in historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach to estimating model errors based on EM is therefore proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in effect, it combines statistics and dynamics to a certain extent.
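
    To make the setup concrete, the sketch below integrates the classic Lorenz (1963) system as the prediction model and a version with a small periodic term added to one equation as the stand-in for "reality", so their difference plays the role of the model error to be estimated. The forcing amplitude and frequency are illustrative choices, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz_model(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def lorenz_reality(t, s, eps=2.0, omega=0.5):
    dx, dy, dz = lorenz_model(t, s)
    return [dx, dy + eps * np.sin(omega * t), dz]     # periodic evolutionary term

s0, t_span = [1.0, 1.0, 1.0], (0.0, 10.0)
t_eval = np.linspace(*t_span, 1001)
obs = solve_ivp(lorenz_reality, t_span, s0, t_eval=t_eval)   # "observational data"
pred = solve_ivp(lorenz_model, t_span, s0, t_eval=t_eval)    # imperfect prediction model

model_error = obs.y - pred.y                                  # the target of the inverse problem
print(f"RMS model error over the window: {np.sqrt(np.mean(model_error**2)):.2f}")
```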

  16. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors have made it impossible to reach the mGal level using IMUs of this type, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods tailored to aerogravity may help to bridge this gap and improve their performance. Based on simulations, a quantitative analysis is presented of how much IMU sensor errors, such as biases, scale factors, cross couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. Latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  17. Model reference adaptive control of flexible robots in the presence of sudden load changes

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory

    1991-01-01

    Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive-real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.

  18. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors are needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets; one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with reported speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.

  19. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are more prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value of all Hall sensors is regarded as the final measurement. The calculation model is established in the presence of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on the off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742
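
    The averaging argument behind the sensor array follows from Ampère's law: the mean tangential field over a circle enclosing the conductor is proportional to the enclosed current, and the contribution of an external interfering wire averages out increasingly well as the number of sensors grows. The idealized sketch below reproduces that trend with placeholder currents and geometry.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def estimated_current(i_primary, i_interfering, d, r=0.05, n=8):
    """Average of n tangential Hall-sensor readings, inverted via Ampere's law."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    sensors = r * np.column_stack([np.cos(angles), np.sin(angles)])
    tangents = np.column_stack([-np.sin(angles), np.cos(angles)])
    b_t = np.zeros(n)
    for pos, current in [((0.0, 0.0), i_primary), ((d, 0.0), i_interfering)]:
        rel = sensors - np.asarray(pos)
        dist2 = np.sum(rel**2, axis=1)
        # Field of an infinite straight wire, as a 2-D vector at each sensor position.
        b_vec = MU0 * current / (2.0 * np.pi * dist2[:, None]) * np.column_stack([-rel[:, 1], rel[:, 0]])
        b_t += np.sum(b_vec * tangents, axis=1)
    return np.mean(b_t) * 2.0 * np.pi * r / MU0      # invert Ampere's law

for n in (4, 8, 16):
    est = estimated_current(1000.0, 500.0, d=2.5 * 0.05, n=n)   # interfering wire at 2.5 r
    print(f"n = {n:2d}: relative error = {abs(est - 1000.0) / 1000.0:.3%}")
```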

  20. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are more prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value of all Hall sensors is regarded as the final measurement. The calculation model is established in the presence of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on the off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.

  1. Determination of rheological parameters of liquid crystals with zero anisotropy of diamagnetic susceptibility

    NASA Astrophysics Data System (ADS)

    Korotey, E. V.; Sinyavskii, N. Ya.

    2007-07-01

    A new method for determination of rheological parameters of liquid crystals with zero anisotropy of diamagnetic susceptibility is proposed, which is based on the measurement of the quadrupole splitting line of the NMR 2H spectrum. The method provides higher information content of the experiments, with the shear flow discarded from consideration, compared to that obtained by the classical Leslie-Ericksen theory. A comparison with the experiment is performed, the coefficients of anisotropic viscosity of lecithin/D2O/cyclohexane are determined, and a conclusion is drawn as concerns the domain shapes.

  2. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    PubMed

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  3. Detecting determinism with improved sensitivity in time series: Rank-based nonlinear predictability score

    NASA Astrophysics Data System (ADS)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G.

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  4. Color-motion feature-binding errors are mediated by a higher-order chromatic representation

    PubMed Central

    Shevell, Steven K.; Wang, Wei

    2017-01-01

    Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004)]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014)]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism. PMID:26974945

  5. Classical and quantum filaments in the ground state of trapped dipolar Bose gases

    NASA Astrophysics Data System (ADS)

    Cinti, Fabio; Boninsegni, Massimo

    2017-07-01

    We study, by quantum Monte Carlo simulations, the ground state of a harmonically confined dipolar Bose gas with aligned dipole moments and with the inclusion of a repulsive two-body potential of varying range. Two different limits can clearly be identified, namely, a classical one in which the attractive part of the dipolar interaction dominates and the system forms an ordered array of parallel filaments and a quantum-mechanical one, wherein filaments are destabilized by zero-point motion, and eventually the ground state becomes a uniform cloud. The physical character of the system smoothly evolves from classical to quantum mechanical as the range of the repulsive two-body potential increases. An intermediate regime is observed in which ordered filaments are still present, albeit forming different structures from the ones predicted classically; quantum-mechanical exchanges of indistinguishable particles across different filaments allow phase coherence to be established, underlying a global superfluid response.

  6. Massive metrology using fast e-beam technology improves OPC model accuracy by >2x at faster turnaround time

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei

    2018-03-01

    Classical SEM metrology, CD-SEM, uses low data rate and extensive frame-averaging technique to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper will introduce a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high speed e-beam metrology system could significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hr. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area coverage by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high quality metrology data could be extracted from full field. This new capability on metrology will further improve metrology data collection speed to support the need for large volume of metrology data from OPC model calibration of next generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirement on OPC model accuracy, which is increasingly limited by metrology errors. In the current practice of metrology data collection and data processing to model calibration flow, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy especially for 2D pattern prediction. To address the trade-off in metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high speed e-beam metrology system and a new computational software solution to take full advantage of the large volume data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage with up to 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turn-around time.

  7. Large-Eddy Simulations of Atmospheric Flows Over Complex Terrain Using the Immersed-Boundary Method in the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Ma, Yulong; Liu, Heping

    2017-12-01

    Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.

  8. Exploring the importance of quantum effects in nucleation: The archetypical Nen case

    NASA Astrophysics Data System (ADS)

    Unn-Toc, Wesley; Halberstadt, Nadine; Meier, Christoph; Mella, Massimo

    2012-07-01

    The effect of quantum mechanics (QM) on the details of the nucleation process is explored employing Ne clusters as test cases due to their semi-quantal nature. In particular, we investigate the impact of quantum mechanics on both condensation and dissociation rates in the framework of the microcanonical ensemble. Using both classical trajectories and two semi-quantal approaches (zero point averaged dynamics, ZPAD, and Gaussian-based time dependent Hartree, G-TDH) to model cluster and collision dynamics, we simulate the dissociation and monomer capture for Ne8 as a function of the cluster internal energy, impact parameter and collision speed. The results for the capture probability Ps(b) as a function of the impact parameter suggest that classical trajectories always underestimate capture probabilities with respect to ZPAD, albeit at most by 15%-20% in the cases we studied. They also do so in some important situations when using G-TDH. More interestingly, dissociation rates kdiss are grossly overestimated by classical mechanics, at least by one order of magnitude. We interpret both behaviours as mainly due to the reduced amount of kinetic energy available to a quantum cluster for a chosen total internal energy. We also find that the decrease in monomer dissociation energy due to zero point energy effects plays a key role in defining dissociation rates. In fact, semi-quantal and classical results for kdiss seem to follow a common "corresponding states" behaviour when the proper definition of internal and dissociation energies are used in a transition state model estimation of the evaporation rate constants.

  9. Systematic Error Study for ALICE charged-jet v2 Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinz, M.; Soltz, R.

    We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
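
    The equivalence being checked can be sketched in a few lines: a fit quality computed from statistical errors plus a fully correlated systematic term is the same as a chi-square built with a single covariance matrix whose off-diagonal entries carry the correlated part. The numbers below are placeholders, not the ALICE measurements.

```python
import numpy as np

v2 = np.array([0.040, 0.050, 0.045, 0.030])     # measured points (placeholders)
stat = np.array([0.010, 0.012, 0.011, 0.009])   # statistical errors
corr = np.array([0.008, 0.009, 0.008, 0.007])   # fully correlated systematic errors

# Covariance matrix: statistical variances on the diagonal plus sigma_i * sigma_j
# everywhere for the correlated systematic contribution.
cov = np.diag(stat**2) + np.outer(corr, corr)

residual = v2 - 0.0                              # compare against the null (zero) hypothesis
chi2 = residual @ np.linalg.solve(cov, residual)
print(f"chi^2 = {chi2:.2f} for {v2.size} points")
```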

  10. Determination of time zero from a charged particle detector

    DOEpatents

    Green, Jesse Andrew [Los Alamos, NM

    2011-03-15

    A method, system, and computer program are used to determine a linear track that fits the most likely or expected path of a charged particle passing through a charged particle detector having a plurality of drift cells. Hit signals from the charged particle detector are associated with a particular charged particle track. An initial estimate of time zero is made from these hit signals, and linear tracks are then fit to drift radii for each particular time-zero estimate. The linear track having the best fit is then selected, and errors in the fit and tracking parameters are computed. By adopting this method and system, the large and expensive fast detectors otherwise needed to determine time zero in charged particle detectors can be avoided.

  11. End-diastolic fractional flow reserve: comparison with conventional full-cardiac cycle fractional flow reserve.

    PubMed

    Chalyan, David A; Zhang, Zhang; Takarada, Shigeho; Molloi, Sabee

    2014-02-01

    Diastolic fractional flow reserve (dFFR) has been shown to be highly sensitive for detection of inducible myocardial ischemia. However, its reliance on measurement of left-ventricular pressure for zero-flow pressure correction, as well as manual extraction of the diastolic interval, has been its major limitation. Given previous reports of minimal zero-flow pressure at end-diastole, we compared instantaneous ECG-gated end-diastolic FFR with conventional full-cardiac cycle FFR and other diastolic indices in the porcine model. Measurements of FFR in the left anterior descending and left circumflex arteries were performed in an open-chest swine model with an external occluder device on the coronary artery used to produce varying degrees of epicardial stenosis. An ultrasound flow-probe that was placed proximal to the occluder measured absolute blood flow in ml/min, and it was used as a gold standard for FFR measurement. A total of 17 measurements at maximal hyperemia were acquired in 5 animals. Correlation coefficient between conventional mean hyperemic FFR with pressure-wire and directly measured FFR with flow-probe was 0.876 (standard error estimate=0.069; P<0.0001). The hyperemic end-diastolic FFR with pressure-wire correlated better with FFR measured directly with flow-probe (r=0.941, standard error estimate=0.050; P<0.0001). Instantaneous hyperemic ECG-gated FFR acquired at end-diastole, as compared with conventional full-cardiac cycle FFR, has an improved correlation with FFR measured directly with ultrasound flow-probe.

  12. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
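
    Applying such a scale factor is straightforward: multiply the computed harmonic frequencies by the optimized factor and sum half of h c nu_i over the modes to obtain the scaled zero-point energy. The frequencies and the factor below are illustrative placeholders, not values from the databases described in the paper.

```python
import numpy as np

H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e10         # speed of light, cm/s (frequencies given in cm^-1)
NA = 6.02214076e23        # Avogadro constant, 1/mol

harmonic_cm1 = np.array([1650.0, 3800.0, 3900.0])   # water-like harmonic frequencies (cm^-1)
scale_zpe = 0.986                                    # illustrative ZPE scale factor

zpe_per_molecule = 0.5 * H * C * np.sum(scale_zpe * harmonic_cm1)   # J per molecule
print(f"scaled ZPE = {zpe_per_molecule * NA / 1000.0:.1f} kJ/mol")
```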

  13. Some Properties of Estimated Scale Invariant Covariance Structures.

    ERIC Educational Resources Information Center

    Dijkstra, T. K.

    1990-01-01

    An example of scale invariance is provided via the LISREL model that is subject only to classical normalizations and zero constraints on the parameters. Scale invariance implies that the estimated covariance matrix must satisfy certain equations, and the nature of these equations depends on the fitting function used. (TJH)

  14. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be organized to perform efficiently on current and future-generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is also investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors N to be reasonably small (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.
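
    For the linear system arising at a single implicit time step, a restarted GMRES solve with a small Krylov subspace looks roughly as follows; the banded matrix is a stand-in for a linearized flow operator, not an actual Navier-Stokes Jacobian.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 200
# Simple banded stand-in for the linearized operator at one time step.
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)                                  # residual of the nonlinear system (placeholder)

x, info = gmres(A, b, restart=20, maxiter=200)  # small Krylov dimension, as discussed above
residual_norm = np.linalg.norm(A @ x - b)
print("converged" if info == 0 else f"gmres info = {info}", f"| residual norm = {residual_norm:.2e}")
```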

  15. Quantum error-correcting code for ternary logic

    NASA Astrophysics Data System (ADS)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2 ×2 ) -dimensional and (3 ×3 ) -dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 ×2 ) -dimensional as well as (3 ×3 ) -dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.

  16. Asymptotics of quantum weighted Hurwitz numbers

    NASA Astrophysics Data System (ADS)

    Harnad, J.; Ortmann, Janosch

    2018-06-01

    This work concerns both the semiclassical and zero temperature asymptotics of quantum weighted double Hurwitz numbers. The partition function for quantum weighted double Hurwitz numbers can be interpreted in terms of the energy distribution of a quantum Bose gas with vanishing fugacity. We compute the leading semiclassical term of the partition function for three versions of the quantum weighted Hurwitz numbers, as well as lower order semiclassical corrections. The classical limit is shown to reproduce the simple single and double Hurwitz numbers studied by Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74). The KP-Toda τ-function that serves as generating function for the quantum Hurwitz numbers is shown to have the τ-function of Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74) as its leading term in the classical limit, and, with suitable scaling, the same holds for the partition function, the weights and expectations of Hurwitz numbers. We also compute the zero temperature limit of the partition function and quantum weighted Hurwitz numbers. The KP or Toda τ-function serving as generating function for the quantum Hurwitz numbers are shown to give the one for Belyi curves in the zero temperature limit and, with suitable scaling, the same holds true for the partition function, the weights and the expectations of Hurwitz numbers.

  17. Determining a Method of Enabling and Disabling the Integral Torque in the SDO Science and Inertial Mode Controllers

    NASA Technical Reports Server (NTRS)

    Vess, Melissa F.; Starin, Scott R.

    2007-01-01

    During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared complexity of the control logic, risk of not reenabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculated a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and looked to see if that torque would cause actuator saturation. If so, only the PD torque is used. If not, the integral torque is added. Finally, the third scheme compares the attitude and rate errors to limits and disables the integral torque if either of the errors is greater than the limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable the integral torque and whether or not to reset the integrator once the integral torque was reenabled. Three ways to disable the integral torque were investigated: zero the input into the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis looked at complexity of the control logic, slew time plus settling time between each calibration maneuver step, and ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input into the integrator without resetting it. Throughout the analysis, a high fidelity simulation was used to test the various implementation methods.
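
    The selected combination can be written as a small controller sketch: the integrator input is zeroed (holding the integral torque constant, without resetting the accumulated state) whenever the attitude or rate error exceeds its limit. Gains, limits, and names are illustrative, not the SDO flight values.

```python
class ConditionalPID:
    """PID controller with conditional integration (integrator input zeroed, never reset)."""

    def __init__(self, kp, ki, kd, att_limit, rate_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.att_limit, self.rate_limit = att_limit, rate_limit
        self.integrator = 0.0

    def torque(self, att_err, rate_err, dt):
        # Scheme 3: enable integral action only while both errors are within limits.
        if abs(att_err) <= self.att_limit and abs(rate_err) <= self.rate_limit:
            self.integrator += att_err * dt
        # Otherwise the integrator input is zero: integral torque held constant, no reset.
        return self.kp * att_err + self.ki * self.integrator + self.kd * rate_err

pid = ConditionalPID(kp=2.0, ki=0.1, kd=1.5, att_limit=0.01, rate_limit=0.005)
print(pid.torque(att_err=0.020, rate_err=0.001, dt=0.1))   # large error: integral frozen
print(pid.torque(att_err=0.004, rate_err=0.001, dt=0.1))   # small error: integration resumes
```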

  18. On the maximum principle for complete second-order elliptic operators in general domains

    NASA Astrophysics Data System (ADS)

    Vitolo, Antonio

    This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains, such as infinite strips and open connected cones with closure different from the whole space. Now we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, as for instance the cut plane. The methods developed here allow us to deal with complete second-order equations, where the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.

  19. Mean deviation coupling synchronous control for multiple motors via second-order adaptive sliding mode control.

    PubMed

    Li, Lebao; Sun, Lingling; Zhang, Shengzhou

    2016-05-01

    A new mean deviation coupling synchronization control strategy is developed for multiple motor control systems, which can guarantee the synchronization performance of multiple motor control systems and reduce the complexity of the control structure as the number of motors increases. The mean deviation coupling synchronization control architecture combined with a second-order adaptive sliding mode control (SOASMC) approach is proposed, which can improve the synchronization control precision of multiple motor control systems and make the speed tracking errors, mean speed errors of each motor, and speed synchronization errors converge to zero rapidly. The proposed control scheme is robust to parameter variations and random external disturbances and can alleviate the chattering phenomena. Moreover, an adaptive law is employed to estimate the unknown bound of uncertainty, which is obtained in the sense of the Lyapunov stability theorem to minimize the control effort. Performance comparisons with master-slave control, relative coupling control, ring coupling control, conventional PI control, and SMC are investigated on a four-motor synchronization control system. Extensive comparative results are given to show the good performance of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
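
    The coupling structure itself is simple to state: each motor's compensated error combines its own speed tracking error with the deviation of its speed from the mean speed of all motors, so the controller structure grows only linearly with the number of motors. The sketch below computes these coupled errors for placeholder speeds; the paper's second-order adaptive sliding-mode law is not reproduced.

```python
import numpy as np

def mean_deviation_coupled_errors(reference, speeds, coupling_gain=0.5):
    """Combine each motor's tracking error with its deviation from the mean speed."""
    speeds = np.asarray(speeds, dtype=float)
    tracking_error = reference - speeds          # individual speed tracking errors
    mean_deviation = speeds - speeds.mean()      # synchronization (mean speed) errors
    return tracking_error - coupling_gain * mean_deviation

print(mean_deviation_coupled_errors(reference=100.0, speeds=[98.0, 101.0, 99.5, 100.5]))
```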

  20. Integrating Six Sigma with total quality management: a case example for measuring medication errors.

    PubMed

    Revere, Lee; Black, Ken

    2003-01-01

    Six Sigma is a new management philosophy that seeks a nonexistent error rate. It is ripe for healthcare because many healthcare processes require a near-zero tolerance for mistakes. For most organizations, establishing a Six Sigma program requires significant resources and produces considerable stress. However, in healthcare, management can piggyback Six Sigma onto current total quality management (TQM) efforts so that minimal disruption occurs in the organization. Six Sigma is an extension of the Failure Mode and Effects Analysis that is required by JCAHO; it can easily be integrated into existing quality management efforts. Integrating Six Sigma into the existing TQM program facilitates process improvement through detailed data analysis. A drilled-down approach to root-cause analysis greatly enhances the existing TQM approach. Using the Six Sigma metrics, internal project comparisons facilitate resource allocation while external project comparisons allow for benchmarking. Thus, the application of Six Sigma makes TQM efforts more successful. This article presents a framework for including Six Sigma in an organization's TQM plan while providing a concrete example using medication errors. Using the process defined in this article, healthcare executives can integrate Six Sigma into all of their TQM projects.

  1. Driven topological systems in the classical limit

    NASA Astrophysics Data System (ADS)

    Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel

    2017-03-01

    Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this, we simulate the dynamics of the classical system, which shows that the interactions play the role of Markovian, or time-dependent, disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.

  2. Wireless Authentication Protocol Implementation: Descriptions of a Zero-Knowledge Proof (ZKP) Protocol Implementation for Testing on Ground and Airborne Mobile Networks

    DTIC Science & Technology

    2015-01-01

    Describes a zero-knowledge proof (ZKP) authentication protocol implementation tested on AFRL's small unmanned aerial vehicle (UAV) test bed. Subject terms: Zero-Knowledge Proof, Protocol Testing. Version 1.1.3 notes successful ZK authentication between two networked machines, a fix for a bug that caused intermittent bignum errors, and a fix for a network hang bug that now allows continual authentication at the Verifier.
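
    The report excerpt does not reproduce the protocol mathematics, but a generic Schnorr-style zero-knowledge identification round captures the idea being tested: the prover convinces the verifier it knows a discrete logarithm without revealing it. The sketch below is an illustration with toy parameters (far too small for real security) and is not the AFRL implementation.

```python
import secrets

# Toy Schnorr-style zero-knowledge identification (illustration only).
p, q, g = 23, 11, 2                     # g generates the order-q subgroup of Z_p*

x = secrets.randbelow(q - 1) + 1        # prover's secret key, 1 <= x < q
y = pow(g, x, p)                        # public key known to the verifier

def prover_commit():
    k = secrets.randbelow(q - 1) + 1
    return k, pow(g, k, p)              # keep k, send commitment t = g^k

def prover_respond(k, c):
    return (k + c * x) % q              # response s = k + c*x mod q

def verifier_check(t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # accept iff g^s == t * y^c

k, t = prover_commit()
c = secrets.randbelow(q)                # verifier's random challenge
s = prover_respond(k, c)
print("authentication accepted:", verifier_check(t, c, s))
```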

  3. Solar Tracking Error Analysis of Fresnel Reflector

    PubMed Central

    Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie

    2014-01-01

    Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was derived under the eccentric condition. By analyzing how the main factors contribute to the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is, under certain conditions, largest at noon and decreases gradually toward morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664

  4. Forward scattering in two-beam laser interferometry

    NASA Astrophysics Data System (ADS)

    Mana, G.; Massa, E.; Sasso, C. P.

    2018-04-01

    A fractional error as large as 25 pm/mm at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.

  5. Analysis of Medication Error Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Young, Jonathan; Santell, John

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  6. An optimized BP neural network based on genetic algorithm for static decoupling of a six-axis force/torque sensor

    NASA Astrophysics Data System (ADS)

    Fu, Liyue; Song, Aiguo

    2018-02-01

    In order to improve the measurement precision of a six-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a genetic algorithm (the GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple a six-axis force/torque sensor. Compared with traditional decoupling algorithms, namely calculating the pseudo-inverse of the calibration matrix and the classical BP algorithm, the decoupling results validate the good decoupling performance of the GA-BP algorithm, and the coupling errors are reduced.
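
    As a rough illustration of the idea (the coupling matrix, GA settings, and data below are assumptions, not the paper's calibration data), the sketch below encodes the weights and biases of a 6-10-6 network as a single chromosome and lets a simple genetic algorithm search for weights that decouple synthetic cross-coupled sensor readings; in the paper the GA-selected weights would additionally be refined by BP training.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 6, 10, 6
n_w = n_in * n_hid + n_hid + n_hid * n_out + n_out   # chromosome length

def forward(w, X):
    """6-10-6 network with tanh hidden layer, weights unpacked from vector w."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:]
    return np.tanh(X @ W1 + b1) @ W2 + b2

# synthetic "calibration" data: true loads F and cross-coupled raw readings V
F = rng.uniform(-1, 1, size=(200, n_in))
C = np.eye(6) + 0.1 * rng.normal(size=(6, 6))        # assumed coupling matrix
V = F @ C

def mse(w):
    return np.mean((forward(w, V) - F) ** 2)

# simple generational GA: elitism, tournament selection, crossover, mutation
pop = rng.normal(scale=0.5, size=(60, n_w))
for _ in range(300):
    err = np.array([mse(w) for w in pop])
    new = [pop[err.argmin()].copy()]                 # keep the best (elitism)
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        p1 = pop[i] if err[i] < err[j] else pop[j]   # tournament parent 1
        i, j = rng.integers(len(pop), size=2)
        p2 = pop[i] if err[i] < err[j] else pop[j]   # tournament parent 2
        child = np.where(rng.random(n_w) < 0.5, p1, p2)                  # crossover
        child = child + 0.05 * rng.normal(size=n_w) * (rng.random(n_w) < 0.1)  # mutation
        new.append(child)
    pop = np.array(new)

best = min(pop, key=mse)
print("residual coupling error (MSE):", mse(best))
```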

  7. Possible ergodic-nonergodic regions in the quantum Sherrington-Kirkpatrick spin glass model and quantum annealing

    NASA Astrophysics Data System (ADS)

    Mukherjee, Sudip; Rajak, Atanu; Chakrabarti, Bikas K.

    2018-02-01

    We explore the behavior of the order parameter distribution of the quantum Sherrington-Kirkpatrick model in the spin glass phase using Monte Carlo technique for the effective Suzuki-Trotter Hamiltonian at finite temperatures and that at zero temperature obtained using the exact diagonalization method. Our numerical results indicate the existence of a low- but finite-temperature quantum-fluctuation-dominated ergodic region along with the classical fluctuation-dominated high-temperature nonergodic region in the spin glass phase of the model. In the ergodic region, the order parameter distribution gets narrower around the most probable value of the order parameter as the system size increases. In the other region, the Parisi order distribution function has nonvanishing value everywhere in the thermodynamic limit, indicating nonergodicity. We also show that the average annealing time for convergence (to a low-energy level of the model, within a small error range) becomes system size independent for annealing down through the (quantum-fluctuation-dominated) ergodic region. It becomes strongly system size dependent for annealing through the nonergodic region. Possible finite-size scaling-type behavior for the extent of the ergodic region is also addressed.

  8. Risk management and measuring productivity with POAS--point of act system.

    PubMed

    Akiyama, Masanori; Kondo, Tatsuya

    2007-01-01

    The concept of our system is not only to manage material flows, but also to provide an integrated management resource, a means of correcting errors in medical treatment, and applications to EBM through the data mining of medical records. Prior to the development of this system, electronic processing systems in hospitals did a poor job of accurately grasping medical practice and medical material flows. With POAS (Point of Act System), hospital managers can solve the so-called "man, money, material, and information" issues inherent in the costs of healthcare. The POAS system synchronizes with each department system, from finance and accounting, to pharmacy, to imaging, and allows information exchange. The system thus provides complete management of man, material, money, and information. Our analysis has shown that this system has a remarkable investment effect - saving over four million dollars per year - through cost savings in logistics and business process efficiencies. In addition, the quality of care has been improved dramatically while error rates have been reduced - nearly to zero in some cases.

  9. Investigation of homodyne demodulation of RZ-BPSK signal based on an optical Costas loop

    NASA Astrophysics Data System (ADS)

    Zhou, Haijun; Zhu, Zunzhen; Xie, Weilin; Dong, Yi

    2018-01-01

    We demonstrate the coherent detection of a 10 Gb/s return-to-zero (RZ) binary phase-shift keying (BPSK) signal based on a homodyne Costas optical phase-locked loop (OPLL). The receiver tolerates a time misalignment between the pulse carver and the phase modulator of +/- 10% of the symbol period, i.e., -20 to +20 ps for the 5 Gb/s RZ-BPSK signal and -10 to +10 ps for the 10 Gb/s RZ-BPSK signal. Besides, the Costas coherent receiver shows a 2.5 dB sensitivity improvement over conventional 5 Gb/s NRZ-BPSK and a 1.4 dB improvement over 10 Gb/s NRZ-BPSK, at the cost of only a slightly higher residual phase error. These merits of sufficient tolerance to misalignment, higher receiver sensitivity, and low residual phase error make RZ-BPSK modulation beneficial for free space optical (FSO) communication, where it can achieve a higher link budget and longer transmission distance.

  10. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
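
    A minimal, self-contained sketch of the SVD-based realization step (a generic Kung/ERA-style construction; the HRIR data, model orders, and binaural outputs of the study are not reproduced here): Hankel matrices are built from an impulse response, the SVD is truncated at the desired order, and a discrete-time state space model (A, B, C, D) is read off.

```python
import numpy as np

def kung_realization(h, order, L=None):
    """Reduced-order SISO state-space model from impulse response samples h."""
    h = np.asarray(h, float)
    if L is None:
        L = (len(h) - 2) // 2
    H0 = np.array([[h[i + j + 1] for j in range(L)] for i in range(L)])
    H1 = np.array([[h[i + j + 2] for j in range(L)] for i in range(L)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, sr, Vr = U[:, :order], s[:order], Vt[:order, :].T
    Sh, Shi = np.diag(np.sqrt(sr)), np.diag(1.0 / np.sqrt(sr))
    A = Shi @ Ur.T @ H1 @ Vr @ Shi
    B = (Sh @ Vr.T)[:, :1]          # single input
    C = (Ur @ Sh)[:1, :]            # single output
    D = np.array([[h[0]]])
    return A, B, C, D

# toy decaying "impulse response" standing in for a measured HRIR
n = np.arange(128)
h = 0.8 ** n * np.cos(0.4 * n) + 0.5 * 0.9 ** n * np.sin(0.15 * n)
A, B, C, D = kung_realization(h, order=6)

# impulse response of the reduced model, for comparison with the original
h_hat, x = [D[0, 0]], B.copy()
for _ in range(len(h) - 1):
    h_hat.append((C @ x).item())
    x = A @ x
print("max impulse-response error:", np.max(np.abs(h - np.array(h_hat))))
```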

  11. Screening actuator locations for static shape control

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1990-01-01

    Correction of shape distortion due to zero-mean normally distributed errors in structural sizes which are random variables is examined. A bound on the maximum improvement in the expected value of the root-mean-square shape error is obtained. The shape correction associated with the optimal actuators is also characterized. An actuator effectiveness index is developed and shown to be helpful in screening actuator locations in the structure. The results are specialized to a simple form for truss structures composed of nominally identical members. The bound and effectiveness index are tested on a 55-m radiometer antenna truss structure. It is found that previously obtained results for optimum actuators had a performance close to the bound obtained here. Furthermore, the actuators associated with the optimum design are shown to have high effectiveness indices. Since only a small fraction of truss elements tend to have high effectiveness indices, the proposed screening procedure can greatly reduce the number of truss members that need to be considered as actuator sites.

  12. Multispectral optical telescope alignment testing for a cryogenic space environment

    NASA Astrophysics Data System (ADS)

    Newswander, Trent; Hooser, Preston; Champagne, James

    2016-09-01

    Multispectral space telescopes with visible to long wave infrared spectral bands pose difficult alignment challenges. The visible channels require precise alignment and stability to provide good image quality at short wavelengths. This is most often accomplished by choosing materials with near-zero thermal expansion: glass or ceramic mirrors metered with a carbon fiber reinforced polymer (CFRP) structure designed to have a matching thermal expansion. The IR channels are less sensitive to alignment, but they often require cryogenic cooling for improved sensitivity with the reduced radiometric background. Efficient solutions to the difficult problem of maintaining good visible image quality at cryogenic temperatures have been explored by building and testing a telescope simulator. The telescope simulator is an on-axis set of optics with a ZERODUR® mirror and CFRP metering. Testing has been completed to accurately measure telescope optical element alignment and mirror figure changes in a simulated cryogenic space environment. Measured alignment error and mirror figure error test results are reported with a discussion of their impact on system optical performance.

  13. Striving for a zero-error patient surgical journey through adoption of aviation-style challenge and response flow checklists: a quality improvement project.

    PubMed

    Low, Daniel K; Reed, Mark A; Geiduschek, Jeremy M; Martin, Lynn D

    2013-07-01

    We describe our aim to create a zero-error system in our pediatric ambulatory surgery center by employing effective teamwork and aviation-style challenge and response 'flow checklists' at key stages of the patient surgical journey. These are used in addition to the existing World Health Organization Surgical Safety Checklists (Ann Surg, 2012;255:44). Bellevue Surgery Center is a freestanding ambulatory surgery center affiliated with Seattle Children's Hospital, WA, USA. Approximately three thousand ambulatory surgeries are performed each year across a variety of surgical disciplines. Key points in the patient surgical journey were identified as high risk (different time points from the WHO safer surgery checklists). These were moments when the team, patient, and equipment have to be reconfigured to maximize patient safety. These points were departure from induction room, arrival in the operating room, departure from operating room, and arrival in the postanesthesia care unit. Traditionally, the anesthesiologist has memorized a list of 'do-not-forget items' for each of these stages. We recognized the potential for error to occur if the process was solely the responsibility of one individual and their memory. So we created 'flow checklists' executed by the team at every one of these high-risk points. We adopted a challenge and response system for these flow checklists as this is a tried and tested system widely used in aviation for critical tasks such as configuring an aircraft pretakeoff and prelanding. A staff survey with a 72% response rate (n = 29) showed that the team valued the checklists and thought they contributed to patient safety. To date, we have had zero incidence of omitting any of the 24 items listed on the four flow checklists. We have created a reproducible model of care involving multiple checklists at high-risk points in the patient surgical journey. The model is reliable and has a high degree of staff engagement. It promotes patient safety by ensuring the patient, team and equipment are correctly configured at every key transition stage in the surgical journey. We have been able to achieve this with no measurable increase in turnover times or reduction in operating room efficiency. © 2013 John Wiley & Sons Ltd.

  14. A fast hybrid algorithm combining regularized motion tracking and predictive search for reducing the occurrence of large displacement errors.

    PubMed

    Jiang, Jingfeng; Hall, Timothy J

    2011-04-01

    A hybrid approach that inherits both the robustness of the regularized motion tracking approach and the efficiency of the predictive search approach is reported. The basic idea is to use regularized speckle tracking to obtain high-quality seeds in an explorative search that can be used in the subsequent intelligent predictive search. The performance of the hybrid speckle-tracking algorithm was compared with three published speckle-tracking methods using in vivo breast lesion data. We found that the hybrid algorithm provided higher displacement quality metric values, lower root mean squared errors compared with a locally smoothed displacement field, and higher improvement ratios compared with the classic block-matching algorithm. On the basis of these comparisons, we concluded that the hybrid method can further enhance the accuracy of speckle tracking compared with its real-time counterparts, at the expense of slightly higher computational demands. © 2011 IEEE

  15. De Sitter space and perpetuum mobile

    NASA Astrophysics Data System (ADS)

    Akhmedov, Emil T.; Buividovich, P. V.; Singleton, Douglas A.

    2012-04-01

    General arguments are given that any interacting nonconformal classical field theory in de Sitter space leads to the possibility of constructing a perpetuum mobile. The arguments are based on the observation that massive free falling particles can radiate other massive particles on the classical level as seen by the free falling observer. The intensity of the radiation process is not zero even for particles with any finite mass, i.e., with a wavelength which is within the causal domain. Hence, we conclude that either de Sitter space cannot exist eternally or that one can build a perpetuum mobile.

  16. De Sitter space and perpetuum mobile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhmedov, Emil T.; Buividovich, P. V.; Singleton, Douglas A.

    2012-04-15

    General arguments are given that any interacting nonconformal classical field theory in de Sitter space leads to the possibility of constructing a perpetuum mobile. The arguments are based on the observation that massive free falling particles can radiate other massive particles on the classical level as seen by the free falling observer. The intensity of the radiation process is not zero even for particles with any finite mass, i.e., with a wavelength which is within the causal domain. Hence, we conclude that either de Sitter space cannot exist eternally or that one can build a perpetuum mobile.

  17. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t⁻²) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  18. Linear quadratic stochastic control of atomic hydrogen masers.

    PubMed

    Koppang, P; Leland, R

    1999-01-01

    Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. The LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
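
    A compact sketch of the two ingredients described above, under assumed clock dynamics, noise levels, and penalty weights (not the USNO/maser values): a Kalman filter estimates the time and frequency errors from noisy time-difference measurements, and a discrete LQR gain, obtained from the quadratic cost on state and control effort, computes the frequency steer that drives both errors toward zero.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

tau = 1.0                                   # steering interval (arbitrary units)
A = np.array([[1.0, tau], [0.0, 1.0]])      # time error integrates frequency error
B = np.array([[0.0], [1.0]])                # a steer adjusts the frequency state
H = np.array([[1.0, 0.0]])                  # only the time difference is measured

Q = np.diag([1.0, 10.0])                    # penalties on time/frequency errors
R = np.array([[100.0]])                     # penalty on control effort
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # LQR feedback gain

W = np.diag([1e-4, 1e-6])                   # assumed process noise covariance
V = np.array([[1e-3]])                      # assumed measurement noise covariance

rng = np.random.default_rng(1)
x = np.array([[5.0], [0.5]])                # true initial time/frequency offsets
x_hat, P_est = np.zeros((2, 1)), np.eye(2)
for _ in range(50):
    u = -K @ x_hat                          # LQG steer from the estimated state
    x = A @ x + B @ u + rng.multivariate_normal([0, 0], W).reshape(2, 1)
    z = H @ x + rng.normal(0.0, np.sqrt(V[0, 0]))
    x_hat = A @ x_hat + B @ u               # Kalman predict
    P_est = A @ P_est @ A.T + W
    G = P_est @ H.T @ np.linalg.inv(H @ P_est @ H.T + V)
    x_hat = x_hat + G @ (z - H @ x_hat)     # Kalman update
    P_est = (np.eye(2) - G @ H) @ P_est
print("remaining time/frequency error:", x.ravel())
```

    Increasing R makes the steer less aggressive, which is exactly the designer-chosen trade-off between tracking error and control effort described in the abstract.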

  19. Learning to classify in large committee machines

    NASA Astrophysics Data System (ADS)

    O'kane, Dominic; Winther, Ole

    1994-10-01

    The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T → ∞ where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.

  20. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
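
    The abstract names several reusable ingredients; the toy sketch below (with invented documents and trigger terms, since the i2b2 data cannot be redistributed) shows how hot-spot windowing, zero-vector filtering, and error-correcting output codes can be combined in a standard scikit-learn pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

docs = ["patient quit smoking two years ago", "denies tobacco use",
        "smokes one pack per day", "former smoker, quit 1990",
        "no mention of habits", "current smoker, counseled on cessation"]
labels = ["past", "non", "current", "past", "unknown", "current"]

hot_spots = ["smok", "tobacco", "cigarette", "pack"]   # assumed trigger terms

def hot_spot_window(text, width=5):
    """Keep only the words near a smoking-related trigger term (hot spot)."""
    words = text.split()
    keep = set()
    for i, w in enumerate(words):
        if any(h in w.lower() for h in hot_spots):
            keep.update(range(max(0, i - width), min(len(words), i + width + 1)))
    return " ".join(words[i] for i in sorted(keep))

snippets = [hot_spot_window(d) for d in docs]
X = TfidfVectorizer().fit_transform(snippets).toarray()
y = np.array(labels)

# zero-vector filtering: records with no hot-spot text default to "unknown"
nonzero = X.sum(axis=1) > 0
clf = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                           code_size=2.0, random_state=0)   # ECOC wrapper
clf.fit(X[nonzero], y[nonzero])

pred = np.full(len(docs), "unknown", dtype=object)
pred[nonzero] = clf.predict(X[nonzero])
print(list(pred))
```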

  1. Quantum effects in amplitude death of coupled anharmonic self-oscillators

    NASA Astrophysics Data System (ADS)

    Amitai, Ehud; Koppenhöfer, Martin; Lörch, Niels; Bruder, Christoph

    2018-05-01

    Coupling two or more self-oscillating systems may stabilize their zero-amplitude rest state, therefore quenching their oscillation. This phenomenon is termed "amplitude death." Well known and studied in classical self-oscillators, amplitude death was only recently investigated in quantum self-oscillators [Ishibashi and Kanamoto, Phys. Rev. E 96, 052210 (2017), 10.1103/PhysRevE.96.052210]. Quantitative differences between the classical and quantum descriptions were found. Here, we demonstrate that for quantum self-oscillators with anharmonicity in their energy spectrum, multiple resonances in the mean phonon number can be observed. This is a result of the discrete energy spectrum of these oscillators, and is not present in the corresponding classical model. Experiments can be realized with current technology and would demonstrate these genuine quantum effects in the amplitude death phenomenon.

  2. Thermodynamics of finite systems: a key issues review

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2018-07-01

    A little over ten years ago, Campisi, and Dunkel and Hilbert, published papers claiming that the Gibbs (volume) entropy of a classical system was correct, and that the Boltzmann (surface) entropy was not. They claimed further that the quantum version of the Gibbs entropy was also correct, and that the phenomenon of negative temperatures was thermodynamically inconsistent. Their work began a vigorous debate of exactly how the entropy, both classical and quantum, should be defined. The debate has called into question the basis of thermodynamics, along with fundamental ideas such as whether heat always flows from hot to cold. The purpose of this paper is to sum up the present status—admittedly from my point of view. I will show that standard thermodynamics, with some minor generalizations, is correct, and the alternative thermodynamics suggested by Hilbert, Hänggi, and Dunkel is not. Heat does not flow from cold to hot. Negative temperatures are thermodynamically consistent. The small ‘errors’ in the Boltzmann entropy that started the whole debate are shown to be a consequence of the micro-canonical assumption of an energy distribution of zero width. Improved expressions for the entropy are found when this assumption is abandoned.

  3. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
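
    The core claim is easy to reproduce numerically. The sketch below (with assumed smoothness and sample sizes, not the public datasets analyzed in the paper) generates two groups of smooth 1D Gaussian trajectories with no true difference, applies a pointwise two-sample t test, and declares a (false) positive whenever any node crosses the 0D threshold of alpha = 0.05; the resulting rate is far above 0.05.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_nodes, smooth_sd = 10, 101, 10.0
n_experiments, alpha = 2000, 0.05

false_positives = 0
for _ in range(n_experiments):
    # two groups of smooth Gaussian trajectories with no true difference
    a = gaussian_filter1d(rng.standard_normal((n_per_group, n_nodes)), smooth_sd, axis=1)
    b = gaussian_filter1d(rng.standard_normal((n_per_group, n_nodes)), smooth_sd, axis=1)
    t, p = stats.ttest_ind(a, b, axis=0)
    if np.any(p < alpha):                  # "0D" decision applied node by node
        false_positives += 1

print("empirical false positive rate:", false_positives / n_experiments)
```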

  4. Identifying microturbulence regimes in a TCV discharge making use of physical constraints on particle and heat fluxes

    DOE PAGES

    Mariani, Alberto; Brunner, S.; Dominski, J.; ...

    2018-01-17

    Reducing the uncertainty on physical input parameters derived from experimental measurements is essential towards improving the reliability of gyrokinetic turbulence simulations. This can be achieved by introducing physical constraints. Amongst them, the zero particle flux condition is considered here. A first attempt is also made to match as well the experimental ion/electron heat flux ratio. This procedure is applied to the analysis of a particular Tokamak à Configuration Variable discharge. A detailed reconstruction of the zero particle flux hyper-surface in the multi-dimensional physical parameter space at fixed time of the discharge is presented, including the effect of carbon as the main impurity. Both collisionless and collisional regimes are considered. Hyper-surface points within the experimental error bars are found. In conclusion, the analysis is done performing gyrokinetic simulations with the local version of the GENE code, computing the fluxes with a Quasi-Linear (QL) model and validating the QL results with non-linear simulations in a subset of cases.

  5. Identifying microturbulence regimes in a TCV discharge making use of physical constraints on particle and heat fluxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariani, Alberto; Brunner, S.; Dominski, J.

    Reducing the uncertainty on physical input parameters derived from experimental measurements is essential towards improving the reliability of gyrokinetic turbulence simulations. This can be achieved by introducing physical constraints. Amongst them, the zero particle flux condition is considered here. A first attempt is also made to match as well the experimental ion/electron heat flux ratio. This procedure is applied to the analysis of a particular Tokamak à Configuration Variable discharge. A detailed reconstruction of the zero particle flux hyper-surface in the multi-dimensional physical parameter space at fixed time of the discharge is presented, including the effect of carbon as the main impurity. Both collisionless and collisional regimes are considered. Hyper-surface points within the experimental error bars are found. In conclusion, the analysis is done performing gyrokinetic simulations with the local version of the GENE code, computing the fluxes with a Quasi-Linear (QL) model and validating the QL results with non-linear simulations in a subset of cases.

  6. Zero echo time MRI-only treatment planning for radiation therapy of brain tumors after resection.

    PubMed

    Boydev, C; Demol, B; Pasquier, D; Saint-Jalmes, H; Delpon, G; Reynaert, N

    2017-10-01

    Using magnetic resonance imaging (MRI) as the sole imaging modality for patient modeling in radiation therapy (RT) is a challenging task due to the need to derive electron density information from MRI and construct a so-called pseudo-computed tomography (pCT) image. We have previously published a new method to derive pCT images from head T1-weighted (T1-w) MR images using a single-atlas propagation scheme followed by a post hoc correction of the mapped CT numbers using local intensity information. The purpose of this study was to investigate the performance of our method with head zero echo time (ZTE) MR images. To evaluate results, the mean absolute error in bins of 20 HU was calculated with respect to the true planning CT scan of the patient. We demonstrated that applying our method using ZTE MR images instead of T1-w improved the correctness of the pCT in case of bone resection surgery prior to RT (that is, an example of large anatomical difference between the atlas and the patient). Copyright © 2017. Published by Elsevier Ltd.

  7. Central Procurement Workload Projection Model

    DTIC Science & Technology

    1981-02-01

    Fragments of the report describe projecting workload indicators generated by the P&P Directorates, such as procurement actions (PAs), using Box-Jenkins Autoregressive Integrated Moving Average (ARIMA) models, with PAs broken out into actions over and under $10,000. The excerpted findings note that the model will predict the actual values and hence the error will be zero; therefore, after forecasting 3 quarters into the future, no error ...

  8. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance to hydrological researchers which enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.

  9. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
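
    For concreteness, the fragment below sketches the transformation at the heart of the Pareto optimal schemes identified here: residuals are computed after a Box-Cox transformation of observed and simulated streamflow (λ = 0.2 or 0.5, with λ = 0 reducing to the log scheme), so that they are approximately homoscedastic in the transformed space. The streamflow values and offset are placeholders, not the catchment data used in the study.

```python
import numpy as np

def box_cox(q, lam, offset=0.0):
    """Box-Cox transform; lam = 0 gives the log transform."""
    q = np.asarray(q, float) + offset     # small offset can accommodate zero flows
    if lam == 0.0:
        return np.log(q)
    return (q ** lam - 1.0) / lam

def transformed_residuals(q_obs, q_sim, lam=0.2, offset=0.0):
    """Residual errors computed in Box-Cox-transformed flow space."""
    return box_cox(q_obs, lam, offset) - box_cox(q_sim, lam, offset)

# toy example: observed vs simulated daily streamflow
q_obs = np.array([0.5, 2.0, 10.0, 55.0, 120.0])
q_sim = np.array([0.7, 1.6, 12.0, 48.0, 140.0])
print(transformed_residuals(q_obs, q_sim, lam=0.2))
```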

  10. Iodine-filter-based mobile Doppler lidar to make continuous and full-azimuth-scanned wind measurements: data acquisition and analysis system, data retrieval methods, and error analysis.

    PubMed

    Wang, Zhangjun; Liu, Zhishen; Liu, Liping; Wu, Songhua; Liu, Bingyi; Li, Zhigang; Chu, Xinzhao

    2010-12-20

    An incoherent Doppler wind lidar based on iodine edge filters has been developed at the Ocean University of China for remote measurements of atmospheric wind fields. The lidar is compact enough to fit in a minivan for mobile deployment. With its sophisticated and user-friendly data acquisition and analysis system (DAAS), this lidar has made a variety of line-of-sight (LOS) wind measurements in different operational modes. Through carefully developed data retrieval procedures, various wind products are provided by the lidar, including wind profile, LOS wind velocities in plan position indicator (PPI) and range height indicator (RHI) modes, and sea surface wind. Data are processed and displayed in real time, and continuous wind measurements have been demonstrated for as many as 16 days. Full-azimuth-scanned wind measurements in PPI mode and full-elevation-scanned wind measurements in RHI mode have been achieved with this lidar. The detection range of LOS wind velocity PPI and RHI reaches 8-10 km at night and 6-8 km during daytime with range resolution of 10 m and temporal resolution of 3 min. In this paper, we introduce the DAAS architecture and describe the data retrieval methods for various operation modes. We present the measurement procedures and results of LOS wind velocities in PPI and RHI scans along with wind profiles obtained by Doppler beam swing. The sea surface wind measured for the sailing competition during the 2008 Beijing Olympics is also presented. The precision and accuracy of wind measurements are estimated through analysis of the random errors associated with photon noise and the systematic errors introduced by the assumptions made in data retrieval. The three assumptions of horizontal homogeneity of atmosphere, close-to-zero vertical wind, and uniform sensitivity are made in order to experimentally determine the zero wind ratio and the measurement sensitivity, which are important factors in LOS wind retrieval. Deviations may occur under certain meteorological conditions, leading to bias in these situations. Based on the error analyses and measurement results, we point out the application ranges of this Doppler lidar and propose several paths for future improvement.

  11. A History of the Improvement of Internet Protocols Over Satellites Using ACTS

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Kruse, Hans; Ostermann, Shawn

    2000-01-01

    This paper outlines the main results of a number of ACTS experiments on the efficacy of using standard Internet protocols over long-delay satellite channels. These experiments have been jointly conducted by NASA's Glenn Research Center and Ohio University over the last six years. The focus of our investigations has been the impact of long-delay networks with non-zero bit-error rates on the performance of the suite of Internet protocols. In particular, we have focused on the most widely used transport protocol, the Transmission Control Protocol (TCP), as well as several application layer protocols. This paper presents our main results, as well as references to more verbose discussions of our experiments.

  12. Digital scrambling for shuttle communication links: Do drawbacks outweigh advantages?

    NASA Technical Reports Server (NTRS)

    Dessouky, K.

    1985-01-01

    Digital data scrambling has been considered for communication systems using NRZ (non-return to zero) symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System links. Conclusions regarding the usefulness of scrambling are also given.
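
    The trade-off described above (more symbol transitions at the cost of a higher data bit error rate) can be seen directly with a self-synchronizing (multiplicative) scrambler, in which every channel symbol error is multiplied into several descrambled data errors. The tap polynomial and error rate below are arbitrary illustrative choices, not the scrambler analyzed for the shuttle links.

```python
import numpy as np

TAPS = (18, 23)   # assumed feedback taps, i.e. a 1 + x^18 + x^23 style scrambler

def scramble(bits):
    out = np.zeros(len(bits), dtype=int)
    for n, b in enumerate(bits):
        fb = sum(out[n - t] for t in TAPS if n - t >= 0) % 2   # feedback from past outputs
        out[n] = b ^ fb
    return out

def descramble(bits):
    out = np.zeros(len(bits), dtype=int)
    for n, b in enumerate(bits):
        fb = sum(bits[n - t] for t in TAPS if n - t >= 0) % 2  # feedforward from received bits
        out[n] = b ^ fb
    return out

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 100_000)
tx = scramble(data)

rx = tx.copy()
errors = rng.random(len(rx)) < 1e-3        # channel symbol errors
rx[errors] ^= 1

decoded = descramble(rx)
print("channel symbol error rate:", errors.mean())
print("decoded data bit error rate:", (decoded != data).mean())  # roughly 3x higher
```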

  13. Quantum groups, Yang-Baxter maps and quasi-determinants

    NASA Astrophysics Data System (ADS)

    Tsuboi, Zengo

    2018-01-01

    For any quasi-triangular Hopf algebra, there exists the universal R-matrix, which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra Uq(gl(n)). Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.

  14. Indeterminism in Classical Dynamics of Particle Motion

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory; Vishniac, Ethan; Lalescu, Cristian; Aluie, Hussein; Kanov, Kalin; Burns, Randal; Meneveau, Charles; Szalay, Alex

    2013-03-01

    We show that ``God plays dice'' not only in quantum mechanics but also in the classical dynamics of particles advected by turbulent fluids. With a fixed deterministic flow velocity and an exactly known initial position, the particle motion is nevertheless completely unpredictable! In analogy with spontaneous magnetization in ferromagnets which persists as external field is taken to zero, the particle trajectories in turbulent flow remain random as external noise vanishes. The necessary ingredient is a rough advecting field with a power-law energy spectrum extending to smaller scales as noise is taken to zero. The physical mechanism of ``spontaneous stochasticity'' is the explosive dispersion of particle pairs proposed by L. F. Richardson in 1926, so the phenomenon should be observable in laboratory and natural turbulent flows. We present here the first empirical corroboration of these effects in high Reynolds-number numerical simulations of hydrodynamic and magnetohydrodynamic fluid turbulence. Since power-law spectra are seen in many other systems in condensed matter, geophysics and astrophysics, the phenomenon should occur rather widely. Fast reconnection in solar flares and other astrophysical systems can be explained by spontaneous stochasticity of magnetic field-line motion

  15. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

  16. Color-motion feature-binding errors are mediated by a higher-order chromatic representation.

    PubMed

    Shevell, Steven K; Wang, Wei

    2016-03-01

    Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004), 10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014), 10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.

  17. High-Threshold Low-Overhead Fault-Tolerant Classical Computation and the Replacement of Measurements with Unitary Quantum Gates.

    PubMed

    Cruikshank, Benjamin; Jacobs, Kurt

    2017-07-21

    von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.

  18. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick

    2014-09-15

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.

  19. A Matched Filter Technique for Slow Radio Transient Detection and First Demonstration with the Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.

    2017-03-01

    Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies (<300 MHz) remains relatively unexplored. Blind surveys with new wide-field radio instruments are setting increasingly stringent limits on the transient surface density on various timescales. Although many of these instruments are limited by classical confusion noise from an ensemble of faint, unresolved sources, one can in principle detect transients below the classical confusion limit to the extent that the classical confusion noise is independent of time. We develop a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ˜20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
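
    A minimal sketch of the idea of filtering image time series directly rather than source-finding in individual frames (cadence, template shape, and the injected source are assumptions, not the MWA pipeline): each pixel's light curve is projected onto a zero-mean, unit-norm transient template, and the resulting matched-filter statistic is compared across pixels. A real search would additionally slide the template over start times and durations and calibrate the statistic's noise-only distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, ny, nx = 200, 32, 32
images = rng.normal(0.0, 1.0, size=(n_t, ny, nx))   # noise, incl. confusion-like terms

# inject a faint top-hat transient lasting 20 epochs into one pixel
t0, dur, amp, (py, px) = 80, 20, 0.8, (10, 21)
images[t0:t0 + dur, py, px] += amp

template = np.zeros(n_t)
template[t0:t0 + dur] = 1.0
template -= template.mean()                          # remove the constant (DC) mode
template /= np.linalg.norm(template)                 # unit-norm template

# matched-filter statistic per pixel: projection of its (mean-subtracted)
# time series onto the template
cube = images - images.mean(axis=0)
stat = np.tensordot(template, cube, axes=(0, 0))     # shape (ny, nx)

peak = np.unravel_index(np.abs(stat).argmax(), stat.shape)
print("brightest matched-filter pixel:", peak, "statistic:", stat[peak])
```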

  20. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
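
    A rough numerical illustration of the estimation step (sample rate, tone, and reference settings are assumptions, and the single batch least-squares fit below is a simplification of the patented adaptive least-squares update): the input is mixed with the reference and its quadrature copy, and the amplitude, residual frequency, and phase of the error signal are estimated by least squares from the resulting I/Q samples, after which the reference would be adjusted to drive the error toward zero.

```python
import numpy as np

fs = 1.0e4
t = np.arange(2000) / fs
sig = 1.3 * np.cos(2 * np.pi * 1234.0 * t + 0.7)       # signal of interest

f_nco, phi_nco = 1200.0, 0.0                            # current reference (NCO)
ref = np.exp(1j * (2 * np.pi * f_nco * t + phi_nco))

z = 2.0 * sig * np.conj(ref)                            # I + jQ plus a fast image term
z_lp = np.convolve(z, np.ones(50) / 50, mode='same')    # crude low-pass removes the image

sl = slice(50, -50)                                     # drop filter edge effects
phase = np.unwrap(np.angle(z_lp[sl]))
Amat = np.column_stack([2 * np.pi * t[sl], np.ones(t[sl].size)])
(df_hat, dphi_hat), *_ = np.linalg.lstsq(Amat, phase, rcond=None)   # fit phase ramp
amp_hat = np.mean(np.abs(z_lp[sl]))

print(f"estimates: amplitude {amp_hat:.2f}, frequency {f_nco + df_hat:.1f} Hz, "
      f"phase {dphi_hat:.2f} rad")

# the reference would now be updated to drive the residual error toward zero
f_nco += df_hat
phi_nco = (phi_nco + dphi_hat) % (2 * np.pi)
```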

  1. Dynamically biased statistical model for the ortho/para conversion in the H2 + H3+ → H3+ + H2 reaction.

    PubMed

    Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio

    2012-09-07

    In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H(5)(+) complexes and, as a consequence, reduces the proportion of the exchange mechanism. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, so an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H(5)(+) complex, as is done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows one to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.

  2. Dynamically biased statistical model for the ortho/para conversion in the H2+H3+ --> H3++ H2 reaction

    NASA Astrophysics Data System (ADS)

    Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio

    2012-09-01

    In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, reduces the proportion of the exchange mechanism. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, so an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H_5^+ complex, as is done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows one to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.

  3. Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers

    PubMed Central

    Michel, Burlyn E.

    1979-01-01

    Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30°C. PMID:16660685

  4. Reliability of a Longitudinal Sequence of Scale Ratings

    ERIC Educational Resources Information Center

    Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony

    2009-01-01

    Reliability captures the influence of error on a measurement and, in the classical setting, is defined as one minus the ratio of the error variance to the total variance. Laenen, Alonso, and Molenberghs ("Psychometrika" 73:443-448, 2007) proposed an axiomatic definition of reliability and introduced the R[subscript T] coefficient, a measure of…
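
    For reference, the classical definition of reliability cited here can be written as follows (the symbols are our shorthand for the true-score and error variances, not the authors' notation):

```latex
R \;=\; 1 - \frac{\sigma^{2}_{\text{error}}}{\sigma^{2}_{\text{total}}}
  \;=\; \frac{\sigma^{2}_{\text{true}}}{\sigma^{2}_{\text{true}} + \sigma^{2}_{\text{error}}}
```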

  5. An Extremely Low Mid-infrared Extinction Law toward the Galactic Center and 4% Distance Precision to 55 Classical Cepheids

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodian; Wang, Shu; Deng, Licai; de Grijs, Richard

    2018-06-01

    Distances and extinction values are usually degenerate. To refine the distance to the general Galactic Center region, a carefully determined extinction law (taking into account the prevailing systematic errors) is urgently needed. We collected data for 55 classical Cepheids projected toward the Galactic Center region to derive the near- to mid-infrared extinction law using three different approaches. The relative extinction values obtained are A_J/A_Ks = 3.005, A_H/A_Ks = 1.717, A_[3.6]/A_Ks = 0.478, A_[4.5]/A_Ks = 0.341, A_[5.8]/A_Ks = 0.234, A_[8.0]/A_Ks = 0.321, A_W1/A_Ks = 0.506, and A_W2/A_Ks = 0.340. We also calculated the corresponding systematic errors. Compared with previous work, we report an extremely low and steep mid-infrared extinction law. Using a seven-passband "optimal distance" method, we improve the mean distance precision to our sample of 55 Cepheids to 4%. Based on four confirmed Galactic Center Cepheids, a solar Galactocentric distance of R_0 = 8.10 ± 0.19 ± 0.22 kpc is determined, featuring an uncertainty that is close to the limiting distance accuracy (2.8%) for Galactic Center Cepheids.

  6. System selects framing rate for spectrograph camera

    NASA Technical Reports Server (NTRS)

    1965-01-01

    A circuit uses zero-order light, reflected to a photomultiplier from the incoming radiation of a spectrograph monitor, to provide an error signal that controls the rate at which film is advanced and driven through the camera.

  7. Interferometric rotation sensor

    NASA Technical Reports Server (NTRS)

    Walsh, T. M.

    1972-01-01

    Sensor generates interference fringes varying in number (horizontally and vertically) as a function of the total angular deviation relative to the line-of-sight axis. The device eliminates zero- or null-shift errors caused by instability of the electrical circuitry.

  8. An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería

    NASA Astrophysics Data System (ADS)

    Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús

    2017-06-01

    The optical quality of a heliostat basically quantifies the difference between the scattering of the actual solar radiation reflected by its optical surface and the so-called canonical dispersion, that is, the radiation reflected by an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; so, any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface would be welcome. Those error sources are responsible for the final optical quality value, each with a different degree of influence. For the constructor of heliostats it will be extremely useful to know the value of the classical sources of error and their weight in the overall optical quality of a heliostat, such as the facets' geometry or focal length, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length, and facet misalignment, and also the possible dependence of these effects on mechanical and/or meteorological factors. It is the goal of the present paper to unfold these optical-quality error sources by directly exploring the reflecting surface of the heliostat with the help of a laser-scanner device and to link the result with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.

  9. A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.

    PubMed

    Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton

    2015-06-11

    We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of configurational spaces between MM and QM, we use a hybrid Monte Carlo approach to quickly generate correct ensembles of structures of intermediate states between an MM and a QM/MM description, hence taking into account a great fraction of the electronic polarization of the quantum system, while being able to use thermodynamic integration to compute the free energy of transition between the MM and QM/MM descriptions. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules used in this study. This approach requires large-scale DFT calculations, as the full QM systems involved the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.
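
    The final QM/MM → QM step described above is a one-step free energy perturbation; in its standard (Zwanzig) form, and with symbols that are our shorthand rather than the authors' notation, it reads:

```latex
\Delta A_{\mathrm{QM/MM}\to\mathrm{QM}}
  \;=\; -k_{\mathrm B}T \,
  \ln\Big\langle \exp\!\big[-\big(E_{\mathrm{QM}} - E_{\mathrm{QM/MM}}\big)/k_{\mathrm B}T\big]
  \Big\rangle_{\mathrm{QM/MM}}
```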

  10. The combination of circle topology and leaky integrator neurons remarkably improves the performance of echo state network on time series prediction.

    PubMed

    Xue, Fangzheng; Li, Qian; Li, Xiumin

    2017-01-01

    Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random structure and classical sigmoid units, simple circle topology and leaky integrator neurons have more advantages for the reservoir computing of an ESN. In this paper, we propose a new ESN model with both a circle reservoir structure and leaky integrator units. By comparing the prediction capability on the Mackey-Glass chaotic time series of four ESN models (classical ESN, circle ESN, traditional leaky integrator ESN, and circle leaky integrator ESN), we find that our circle leaky integrator ESN shows significantly better performance than the other ESNs, with roughly two orders of magnitude reduction of the predictive error. Moreover, this model has a stronger ability to approximate nonlinear dynamics and resist noise than the conventional ESN and ESNs with only a simple circle structure or leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity while decreasing the correlation of reservoir states, which contributes to the significant improvement of the computational performance of the echo state network on time series prediction.
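
    A minimal sketch of such a network, assuming a ring-shifted reservoir matrix, a standard leaky-integrator update, and a ridge-regression readout (all sizes, rates, and the toy input series below are illustrative, not the paper's settings), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in, leak, rho = 200, 1, 0.3, 0.9      # reservoir size, input dim, leak rate, spectral radius

# Circle (ring) topology: each unit is driven only by its predecessor.
W = np.zeros((n_res, n_res))
for i in range(n_res):
    W[i, (i - 1) % n_res] = rho                # a single ring weight sets the spectral radius
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

def run_reservoir(u):
    """Collect leaky-integrator reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        pre = W @ x + W_in @ u_t
        x = (1.0 - leak) * x + leak * np.tanh(pre)   # leaky-integrator update
        states[t] = x
    return states

# One-step-ahead prediction with a ridge-regression readout on a toy series
# (a sine wave stands in for Mackey-Glass here).
T = 2000
u = np.sin(0.2 * np.arange(T + 1))[:, None]
X, y = run_reservoir(u[:-1]), u[1:, 0]
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("RMS prediction error:", np.sqrt(np.mean((pred - y) ** 2)))
```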

  11. Persistence of plasmids, cholera toxin genes, and prophage DNA in classical Vibrio cholerae O1.

    PubMed

    Cook, W L; Wachsmuth, K; Johnson, S R; Birkness, K A; Samadi, A R

    1984-07-01

    Plasmid profiles, the location of cholera toxin subunit A genes, and the presence of the defective VcA1 prophage genome in classical Vibrio cholerae isolated from patients in Bangladesh in 1982 were compared with those in older classical strains isolated during the sixth pandemic and with those in selected eltor and nontoxigenic O1 isolates. Classical strains typically had two plasmids (21 and 3 megadaltons), eltor strains typically had no plasmids, and nontoxigenic O1 strains had zero to three plasmids. The old and new isolates of classical V. cholerae had two HindIII chromosomal digest fragments containing cholera toxin subunit A genes, whereas the eltor strains from Eastern countries had one fragment. The eltor strains from areas surrounding the Gulf of Mexico also had two subunit A gene fragments, which were smaller and easily distinguished from the classical pattern. All classical strains had 8 to 10 HindIII fragments containing the defective VcA1 prophage genome; none of the Eastern eltor strains had these genes, and the Gulf Coast eltor strains contained a different array of weakly hybridizing genes. These data suggest that the recent isolates of classical cholera in Bangladesh are closely related to the bacterial strain(s) which caused classical cholera during the sixth pandemic. These data do not support hypotheses that either the eltor or the nontoxigenic O1 strains are precursors of the new classical strains.

  12. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on the error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and the PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance of the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
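
    As a toy illustration of why the error type matters (a simple additive, linear stand-in rather than the paper's multiplicative-error Poisson GLM of emergency department visits), the sketch below contrasts classical and Berkson error in a regression slope:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_err = 5000, 0.4, 0.5

# "True" exposure and a linear health-effect model.
z_true = rng.normal(0.0, 1.0, n)
y = beta * z_true + rng.normal(0.0, 0.2, n)

# Classical error: the measured exposure is the truth plus noise.
z_classical = z_true + rng.normal(0.0, sigma_err, n)

# Berkson error: the truth scatters around the assigned (e.g. central-monitor) value.
z_assigned = rng.normal(0.0, 1.0, n)
z_true_b = z_assigned + rng.normal(0.0, sigma_err, n)
y_b = beta * z_true_b + rng.normal(0.0, 0.2, n)

slope = lambda x, t: np.polyfit(x, t, 1)[0]
print("classical-error slope:", slope(z_classical, y))   # attenuated toward zero
print("Berkson-error slope:  ", slope(z_assigned, y_b))  # roughly unbiased per unit
```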

  13. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
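
    The two advocated statistics are straightforward to compute from a vector of benchmark errors; a minimal sketch (the threshold, confidence level, and toy data are placeholders) is:

```python
import numpy as np

def ecdf_stats(errors, threshold, confidence=0.95):
    """Empirical-CDF-based performance statistics for a set of model errors.

    Returns:
      p_below : estimated probability that a new calculation has |error| < threshold
      q_conf  : error amplitude not exceeded with the chosen confidence level
    """
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err < threshold)
    q_conf = np.quantile(abs_err, confidence)
    return p_below, q_conf

# Toy usage with synthetic benchmark errors that are neither normal nor zero-centred.
errors = np.random.default_rng(2).normal(0.5, 1.0, size=200)
p1, q95 = ecdf_stats(errors, threshold=1.0)
print(f"P(|error| < 1.0) ~= {p1:.2f}; 95% of errors are below {q95:.2f}")
```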

  14. Secure Communication via a Recycling of Attenuated Classical Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, IV, Amos M.

    We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus, making continuous secure two-way communication via one-time pads practical.

  15. Secure Communication via a Recycling of Attenuated Classical Signals

    DOE PAGES

    Smith, IV, Amos M.

    2017-01-12

    We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus, making continuous secure two-way communication via one-time pads practical.

  16. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for characterizing the error in precipitation by SM2RAIN would be highly useful for the merging and the integration steps in its algorithm, i.e., the merging of multiple soil moisture derived products (e.g., SMAP, SMOS, ASCAT) and the integration of soil moisture derived and state of the art satellite precipitation products (e.g., GPM IMERG).
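
    For orientation, the bottom-up idea can be summarised as the inversion of a simple soil-water balance; a commonly quoted form of the SM2RAIN relation (our paraphrase, with Z* an effective soil-layer depth, s the relative saturation, and runoff r, evaporation e, and drainage g terms that are often simplified or neglected during rainfall) is roughly:

```latex
p(t) \;\approx\; Z^{*}\,\frac{\mathrm{d}s(t)}{\mathrm{d}t} \;+\; g(t) \;+\; e(t) \;+\; r(t)
```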

  17. Can the measurement of the cross-section of proton-capture on beryllium-7 be improved

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, C.

    1993-01-01

    The solar neutrino "problem" arises from the discrepancy between the observations of solar neutrino fluxes in experiments at Homestake and Kamiokande and the solar model predictions of those fluxes. Both experiments, which are sensitive mainly to high-energy neutrinos, observe fewer neutrinos than predicted by solar models. Most of the expected high-energy solar neutrinos come from the beta-decay of ^8B, which is produced in the reaction ^7Be(p,γ)^8B. A study of all of the measurements to date of the zero-energy S-factor for the reaction ^7Be(p,γ)^8B concludes that S_17(0) = 0.0224 ± 0.0021 keV-barn. Although a 10% error in S_17(0) alone will not solve the solar neutrino problem, it would still be useful to nail down all of the inputs of the solar models as well as possible. This serves to guard against the possibility that a conspiracy among the errors might be the source of the discrepancy and provides tighter constraints on the "new physics" interpretations of the experimentally measured solar neutrino spectrum. In this paper, we examine several ways of improving this measurement. None appear to offer a significant improvement over past experiments.

  18. Can the measurement of the cross-section of proton-capture on beryllium-7 be improved?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, C.

    1993-01-01

    The solar neutrino "problem" arises from the discrepancy between the observations of solar neutrino fluxes in experiments at Homestake and Kamiokande and the solar model predictions of those fluxes. Both experiments, which are sensitive mainly to high-energy neutrinos, observe fewer neutrinos than predicted by solar models. Most of the expected high-energy solar neutrinos come from the beta-decay of ^8B, which is produced in the reaction ^7Be(p,γ)^8B. A study of all of the measurements to date of the zero-energy S-factor for the reaction ^7Be(p,γ)^8B concludes that S_17(0) = 0.0224 ± 0.0021 keV-barn. Although a 10% error in S_17(0) alone will not solve the solar neutrino problem, it would still be useful to nail down all of the inputs of the solar models as well as possible. This serves to guard against the possibility that a conspiracy among the errors might be the source of the discrepancy and provides tighter constraints on the "new physics" interpretations of the experimentally measured solar neutrino spectrum. In this paper, we examine several ways of improving this measurement. None appear to offer a significant improvement over past experiments.

  19. Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Flegg, Mark B.; Hellander, Stefan; Erban, Radek

    2015-05-01

    In this paper, three multiscale methods for coupling mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; (ii) Δt → 0 and h → 0 such that √Δt/h is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.

  20. Rotational quenching of H2O by He: mixed quantum/classical theory and comparison with quantum results.

    PubMed

    Ivanov, Mikhail; Dubernet, Marie-Lise; Babikov, Dmitri

    2014-04-07

    The mixed quantum/classical theory (MQCT) formulated in the space-fixed reference frame is used to compute quenching cross sections of several rotationally excited states of the water molecule by impact of a He atom over a broad range of collision energies, and is tested against full-quantum calculations on the same potential energy surface. In the current implementation of the MQCT method, there are two major sources of error: one affects results at energies below 10 cm(-1), while the other shows up at energies above 500 cm(-1). Namely, when the collision energy E is below the state-to-state transition energy ΔE, the MQCT method becomes less accurate due to its intrinsic classical approximation, although employment of the average-velocity principle (scaling of the collision energy in order to satisfy microscopic reversibility) helps dramatically. At higher energies, MQCT is expected to be accurate, but in the current implementation, in order to make calculations computationally affordable, we had to cut off the basis set size. This can be avoided by using a more efficient body-fixed formulation of MQCT. Overall, the errors of the MQCT method are within 20% of the full-quantum results almost everywhere across the four-orders-of-magnitude range of collision energies, except near resonances, where the errors are somewhat larger.

  1. Unbounded number of channel uses may be required to detect quantum capacity.

    PubMed

    Cubitt, Toby; Elkouss, David; Matthews, William; Ozols, Maris; Pérez-García, David; Strelchuk, Sergii

    2015-03-31

    Transmitting data reliably over noisy communication channels is one of the most important applications of information theory, and it is well understood for channels modelled by classical physics. However, when quantum effects are involved, we do not know how to compute channel capacities. This is because the formula for the quantum capacity involves maximizing the coherent information over an unbounded number of channel uses. In fact, entanglement across channel uses can even increase the coherent information from zero to non-zero. Here we study the number of channel uses necessary to detect positive coherent information. In all previously known examples, two channel uses already sufficed. It might be that only a finite number of channel uses is always sufficient. We show that this is not the case: for any number of uses, there are channels for which the coherent information is zero, but which nonetheless have positive capacity.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.

    We consider the sign problem for classical spin models at complex $\beta = 1/g_0^2$ on $L \times L$ lattices. We show that the tensor renormalization group (TRG) method allows reliable calculations for larger Im $\beta$ than the reweighting Monte Carlo method. For the Ising model with complex $\beta$ we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the TRG method. We check the convergence of the TRG method for the O(2) model on $L \times L$ lattices as the number of states $D_s$ increases. We show that the finite-size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict their locations for larger volumes. The location of these zeros agrees with the Monte Carlo reweighting calculation for small volumes. The application of the method to the O(2) model with a chemical potential is briefly discussed.

  3. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
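
    A generic sketch of this kind of perturbation (not the actual model code) is to add a zero-mean, temporally correlated AR(1) noise field, with tunable amplitude and decorrelation time, to the temperature tendency at each step; all parameter values and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 3600.0              # model time step [s]
tau = 10 * 86400.0       # noise decorrelation time [s]
sigma = 0.05             # noise amplitude [K/day], illustrative
phi = np.exp(-dt / tau)  # AR(1) autocorrelation over one step

def step_temperature(T, deterministic_tendency, eta):
    """Advance temperature with the deterministic tendency plus a stochastic eddy term."""
    # Update the AR(1) noise field (zero mean, stationary standard deviation sigma).
    eta = phi * eta + np.sqrt(1.0 - phi**2) * rng.normal(0.0, sigma, size=T.shape)
    T_new = T + dt * (deterministic_tendency + eta / 86400.0)   # convert K/day -> K/s
    return T_new, eta

# Toy usage on a small 2-D temperature field with zero deterministic tendency.
T = np.full((10, 10), 15.0)
eta = np.zeros_like(T)
for _ in range(100):
    T, eta = step_temperature(T, deterministic_tendency=0.0, eta=eta)
```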

  4. Demonstrating the Difference between Classical Test Theory and Item Response Theory Using Derived Test Data

    ERIC Educational Resources Information Center

    Magno, Carlo

    2009-01-01

    The present report demonstrates the difference between the classical test theory (CTT) and item response theory (IRT) approaches using actual test data for junior high school chemistry students. The CTT and IRT approaches were compared across two samples and two forms of test on their item difficulty, internal consistency, and measurement errors. The specific…

  5. Identification of natural frequencies and modal damping ratios of aerospace structures from response data

    NASA Technical Reports Server (NTRS)

    Michalopoulos, C. D.

    1976-01-01

    An analysis of one- and multi-degree-of-freedom systems with classical damping is presented. The definition and minimization of error functions for each system are discussed. Systems with classical and nonclassical normal modes are studied, and results for first-order perturbation are given. An alternative method of matching power spectral densities is provided, and numerical results are reviewed.

  6. A Piece of Paper Falling Faster than Free Fall

    ERIC Educational Resources Information Center

    Vera, F.; Rivera, R.

    2011-01-01

    We report a simple experiment that clearly demonstrates a common error in the explanation of the classic experiment in which a small piece of paper is placed on top of a book and the system is allowed to fall. This classic demonstration is used in introductory physics courses to show that after eliminating the friction force with the air, the piece of paper falls…

  7. A method of immediate detection of objects with a near-zero apparent motion in series of CCD-frames

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Khlamov, S. V.; Vavilova, I. B.; Briukhovetskyi, A. B.; Pohorelov, A. V.; Mkrtichian, D. E.; Kudak, V. I.; Pakuliak, L. K.; Dikov, E. N.; Melnik, R. G.; Vlasenko, V. P.; Reichart, D. E.

    2018-01-01

    The paper deals with a computational method for the detection of solar system minor bodies (SSOs) whose inter-frame shifts in a series of CCD frames during the observation are commensurate with the errors in measuring their positions. These objects have velocities of apparent motion between CCD frames not exceeding three rms errors (3σ) of the measurements of their positions. About 15% of objects have a near-zero apparent motion in CCD frames, including objects beyond Jupiter's orbit as well as asteroids heading straight for the Earth. The proposed method for detecting an object's near-zero apparent motion in a series of CCD frames is based on the Fisher f-criterion, instead of the traditional decision rules based on the maximum likelihood criterion. We analyzed the quality indicators of the detection of near-zero apparent motion, applying statistical and in situ modeling techniques, in terms of the conditional probability of true detection of objects with a near-zero apparent motion. The efficiency of the method, implemented as a plugin for the Collection Light Technology (CoLiTec) software for automated asteroid and comet detection, has been demonstrated. Among the objects discovered with this plugin was the sungrazing comet C/2012 S1 (ISON). Within 26 min of observation, the comet's image moved by three pixels in a series of four CCD frames (the velocity of its apparent motion at the moment of discovery was 0.8 pixels per CCD frame; the image size on the frame was about five pixels). Subsequent verification with observations of asteroids with a near-zero apparent motion conducted with small telescopes confirmed the efficiency of the method even in bad conditions (strong backlight from the full Moon). We therefore recommend applying the proposed method to series of observations with four or more frames.
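
    One plausible reading of the Fisher f-criterion step (the exact decision rule in CoLiTec may differ) is an F-test comparing a static-source model against a linear-motion model fitted to the measured positions across frames; a sketch under that assumption:

```python
import numpy as np
from scipy import stats

def motion_f_test(t, x, y, alpha=0.01):
    """F-test of a linear-motion model against a static-source model.

    t : frame epochs; x, y : measured positions (same length).
    Returns (moving, p_value) using the pooled x/y residuals.
    """
    t, x, y = map(np.asarray, (t, x, y))
    n = len(t)

    def rss(coord, order):
        fit = np.polyval(np.polyfit(t, coord, order), t)
        return np.sum((coord - fit) ** 2)

    rss0 = rss(x, 0) + rss(y, 0)      # static model: 2 parameters (x0, y0)
    rss1 = rss(x, 1) + rss(y, 1)      # linear-motion model: 4 parameters
    df1, df2 = 2, 2 * n - 4           # extra parameters vs. residual degrees of freedom
    F = ((rss0 - rss1) / df1) / (rss1 / df2)
    p = stats.f.sf(F, df1, df2)
    return p < alpha, p

# Toy usage: four frames with a sub-pixel drift comparable to the measurement noise.
t = np.arange(4.0)
moving, p = motion_f_test(t, x=[10.0, 10.3, 10.5, 10.8], y=[5.0, 5.1, 5.3, 5.2])
```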

  8. An Introduction to the Sources of Delivery Error for Direct-Fire Ballistic Projectiles

    DTIC Science & Technology

    2013-07-01

    Ballistic mismatch has also been used to quantify the difference in target impacts using different gun tubes ...the angle between the local “upwards” direction of the gun tube and the vertical direction as defined by gravity. Cant results from the gun tube ...Determining Optimal Tube Shape for Reduction of Jump Error for Tank Fleets Using Fleet Zero. Presented at the 20th International Symposium on Ballistics

  9. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized using the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction and is therefore more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate. There is no strict requirement on the position of the calibration plate during the calibration process. The accuracy is improved significantly by the proposed method.

  10. Correcting for deformation in skin-based marker systems.

    PubMed

    Alexander, E J; Andriacchi, T P

    2001-03-01

    A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.

  11. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS when careful metrology setups are used. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low-removal-rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4Å RMS on Nickel-plated Aluminum and even <1.5Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free' all in one step. In this paper we discuss recent advancements in MRF technology and the ability to meet requirements for precision optics in the low, mid and high spatial frequency regimes, and how improved MRF performance addresses the need for achieving the tight specifications required for astronomical optics.

  12. Classical impurities and boundary Majorana zero modes in quantum chains

    NASA Astrophysics Data System (ADS)

    Müller, Markus; Nersesyan, Alexander A.

    2016-09-01

    We study the response of classical impurities in quantum Ising chains. The Z2 degeneracy they entail renders the existence of two decoupled Majorana modes at zero energy an exact property of a finite system at arbitrary values of its bulk parameters. We trace the evolution of these modes across the transition from the disordered phase to the ordered one and analyze the concomitant qualitative changes of local magnetic properties of an isolated impurity. In the disordered phase, the two ground states differ only close to the impurity, and they are related by the action of an explicitly constructed quasi-local operator. In this phase the local transverse spin susceptibility follows a Curie law. The critical response of a boundary impurity is logarithmically divergent and maps to the two-channel Kondo problem, while it saturates for critical bulk impurities, as well as in the ordered phase. The results for the Ising chain translate to the related problem of a resonant level coupled to a 1d p-wave superconductor or a Peierls chain, whereby the magnetic order is mapped to topological order. We find that the topological phase always exhibits a continuous impurity response to local fields as a result of the level repulsion of local levels from the boundary Majorana zero mode. In contrast, the disordered phase generically features a discontinuous magnetization or charging response. This difference constitutes a general thermodynamic fingerprint of topological order in phases with a bulk gap.

  13. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
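
    To make the linearised least-squares step concrete, here is a simplified, unconstrained sketch for the TOA case. The paper's CWLS estimator additionally enforces the constraint R = x² + y² and uses optimal weighting; the function name, simple 1/variance weights, and toy geometry below are our assumptions.

```python
import numpy as np

def toa_wls(anchors, ranges, var):
    """Linearised (unconstrained) weighted LS position estimate from TOA ranges.

    anchors : (m, 2) known station positions
    ranges  : (m,)  measured distances to the mobile
    var     : (m,)  range-error variances (weights = 1/var, a simplification)
    """
    anchors = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    # r_i^2 = ||p||^2 - 2 a_i . p + ||a_i||^2  ->  linear in (x, y, R) with R = ||p||^2
    A = np.column_stack([-2.0 * anchors, np.ones(len(r))])
    b = r**2 - np.sum(anchors**2, axis=1)
    W = np.diag(1.0 / np.asarray(var, float))
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return theta[:2]                       # estimated (x, y); theta[2] is R

# Toy usage with four anchors and mildly biased ranges.
anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
true_p = np.array([30.0, 60.0])
ranges = [np.linalg.norm(true_p - np.array(a)) + 0.5 for a in anchors]
print(toa_wls(anchors, ranges, var=[1.0] * 4))
```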

  14. Experiments on active isolation using distributed PVDF error sensors

    NASA Technical Reports Server (NTRS)

    Lefebvre, S.; Guigou, C.; Fuller, C. R.

    1992-01-01

    A control system based on a two-channel narrow-band LMS algorithm is used to isolate periodic vibration at low frequencies on a structure composed of a rigid top plate mounted on a flexible receiving plate. The control performance of distributed PVDF error sensors and accelerometer point sensors is compared. For both sensors, high levels of global reduction, up to 32 dB, have been obtained. It is found that, by driving the PVDF strip output voltage to zero, the controller may force the structure to vibrate so that the integral of the strain over the length of the PVDF strip is zero. This ability of the PVDF sensors to act as spatial filters is especially relevant in active control of sound radiation. It is concluded that the PVDF sensors are flexible, nonfragile, and inexpensive and can be used as strain sensors for active control applications of vibration isolation and sound radiation.

  15. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all the correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
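
    A compact numerical sketch of this two-step idea (an approximation of the scheme, with the histogram bin count, magnitude weighting, and the assumption that the distortion runs along the second image axis being our choices) is:

```python
import numpy as np

def phase_correct(img):
    """First- and zero-order phase correction of a complex 2-D image (illustrative).

    img : complex array whose phase is distorted along the second (readout) axis.
    """
    # First order: the phase of the lag-1 autocorrelation along the distorted axis
    # estimates the linear phase ramp per pixel.
    ac = np.sum(img[:, 1:] * np.conj(img[:, :-1]))
    ramp = np.angle(ac)
    cols = np.arange(img.shape[1])
    img1 = img * np.exp(-1j * ramp * cols)          # inverse multiplication of the ramp

    # Zero order: take the peak of the magnitude-weighted histogram of the remaining
    # pixel phases as the constant phase offset, and remove it.
    phases = np.angle(img1).ravel()
    weights = np.abs(img1).ravel()
    hist, edges = np.histogram(phases, bins=180, range=(-np.pi, np.pi), weights=weights)
    phi0 = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return img1 * np.exp(-1j * phi0)
```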

  16. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.

    PubMed

    Djordjevic, Ivan B

    2015-08-24

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences between these models and a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled.

  17. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    PubMed Central

    Djordjevic, Ivan B.

    2015-01-01

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences between these models and a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled. PMID:26305258

  18. Gaia Data Release 1. Testing parallaxes with local Cepheids and RR Lyrae stars

    NASA Astrophysics Data System (ADS)

    Gaia Collaboration; Clementini, G.; Eyer, L.; Ripepi, V.; Marconi, M.; Muraveva, T.; Garofalo, A.; Sarro, L. M.; Palmer, M.; Luri, X.; Molinaro, R.; Rimoldini, L.; Szabados, L.; Musella, I.; Anderson, R. I.; Prusti, T.; de Bruijne, J. H. J.; Brown, A. G. A.; Vallenari, A.; Babusiaux, C.; Bailer-Jones, C. A. L.; Bastian, U.; Biermann, M.; Evans, D. W.; Jansen, F.; Jordi, C.; Klioner, S. A.; Lammers, U.; Lindegren, L.; Mignard, F.; Panem, C.; Pourbaix, D.; Randich, S.; Sartoretti, P.; Siddiqui, H. I.; Soubiran, C.; Valette, V.; van Leeuwen, F.; Walton, N. A.; Aerts, C.; Arenou, F.; Cropper, M.; Drimmel, R.; Høg, E.; Katz, D.; Lattanzi, M. G.; O'Mullane, W.; Grebel, E. K.; Holland, A. D.; Huc, C.; Passot, X.; Perryman, M.; Bramante, L.; Cacciari, C.; Castañeda, J.; Chaoul, L.; Cheek, N.; De Angeli, F.; Fabricius, C.; Guerra, R.; Hernández, J.; Jean-Antoine-Piccolo, A.; Masana, E.; Messineo, R.; Mowlavi, N.; Nienartowicz, K.; Ordóñez-Blanco, D.; Panuzzo, P.; Portell, J.; Richards, P. J.; Riello, M.; Seabroke, G. M.; Tanga, P.; Thévenin, F.; Torra, J.; Els, S. G.; Gracia-Abril, G.; Comoretto, G.; Garcia-Reinaldos, M.; Lock, T.; Mercier, E.; Altmann, M.; Andrae, R.; Astraatmadja, T. L.; Bellas-Velidis, I.; Benson, K.; Berthier, J.; Blomme, R.; Busso, G.; Carry, B.; Cellino, A.; Cowell, S.; Creevey, O.; Cuypers, J.; Davidson, M.; De Ridder, J.; de Torres, A.; Delchambre, L.; Dell'Oro, A.; Ducourant, C.; Frémat, Y.; García-Torres, M.; Gosset, E.; Halbwachs, J.-L.; Hambly, N. C.; Harrison, D. L.; Hauser, M.; Hestroffer, D.; Hodgkin, S. T.; Huckle, H. E.; Hutton, A.; Jasniewicz, G.; Jordan, S.; Kontizas, M.; Korn, A. J.; Lanzafame, A. C.; Manteiga, M.; Moitinho, A.; Muinonen, K.; Osinde, J.; Pancino, E.; Pauwels, T.; Petit, J.-M.; Recio-Blanco, A.; Robin, A. C.; Siopis, C.; Smith, M.; Smith, K. W.; Sozzetti, A.; Thuillot, W.; van Reeven, W.; Viala, Y.; Abbas, U.; Abreu Aramburu, A.; Accart, S.; Aguado, J. J.; Allan, P. M.; Allasia, W.; Altavilla, G.; Álvarez, M. A.; Alves, J.; Andrei, A. H.; Anglada Varela, E.; Antiche, E.; Antoja, T.; Antón, S.; Arcay, B.; Bach, N.; Baker, S. G.; Balaguer-Núñez, L.; Barache, C.; Barata, C.; Barbier, A.; Barblan, F.; Barrado y Navascués, D.; Barros, M.; Barstow, M. A.; Becciani, U.; Bellazzini, M.; Bello García, A.; Belokurov, V.; Bendjoya, P.; Berihuete, A.; Bianchi, L.; Bienaymé, O.; Billebaud, F.; Blagorodnova, N.; Blanco-Cuaresma, S.; Boch, T.; Bombrun, A.; Borrachero, R.; Bouquillon, S.; Bourda, G.; Bragaglia, A.; Breddels, M. A.; Brouillet, N.; Brüsemeister, T.; Bucciarelli, B.; Burgess, P.; Burgon, R.; Burlacu, A.; Busonero, D.; Buzzi, R.; Caffau, E.; Cambras, J.; Campbell, H.; Cancelliere, R.; Cantat-Gaudin, T.; Carlucci, T.; Carrasco, J. M.; Castellani, M.; Charlot, P.; Charnas, J.; Chiavassa, A.; Clotet, M.; Cocozza, G.; Collins, R. S.; Costigan, G.; Crifo, F.; Cross, N. J. G.; Crosta, M.; Crowley, C.; Dafonte, C.; Damerdji, Y.; Dapergolas, A.; David, P.; David, M.; De Cat, P.; de Felice, F.; de Laverny, P.; De Luise, F.; De March, R.; de Souza, R.; Debosscher, J.; del Pozo, E.; Delbo, M.; Delgado, A.; Delgado, H. E.; Di Matteo, P.; Diakite, S.; Distefano, E.; Dolding, C.; Dos Anjos, S.; Drazinos, P.; Durán, J.; Dzigan, Y.; Edvardsson, B.; Enke, H.; Evans, N. W.; Eynard Bontemps, G.; Fabre, C.; Fabrizio, M.; Falcão, A. 
J.; Farràs Casas, M.; Federici, L.; Fedorets, G.; Fernández-Hernández, J.; Fernique, P.; Fienga, A.; Figueras, F.; Filippi, F.; Findeisen, K.; Fonti, A.; Fouesneau, M.; Fraile, E.; Fraser, M.; Fuchs, J.; Gai, M.; Galleti, S.; Galluccio, L.; Garabato, D.; García-Sedano, F.; Garralda, N.; Gavras, P.; Gerssen, J.; Geyer, R.; Gilmore, G.; Girona, S.; Giuffrida, G.; Gomes, M.; González-Marcos, A.; González-Núñez, J.; González-Vidal, J. J.; Granvik, M.; Guerrier, A.; Guillout, P.; Guiraud, J.; Gúrpide, A.; Gutiérrez-Sánchez, R.; Guy, L. P.; Haigron, R.; Hatzidimitriou, D.; Haywood, M.; Heiter, U.; Helmi, A.; Hobbs, D.; Hofmann, W.; Holl, B.; Holland, G.; Hunt, J. A. S.; Hypki, A.; Icardi, V.; Irwin, M.; Jevardat de Fombelle, G.; Jofré, P.; Jonker, P. G.; Jorissen, A.; Julbe, F.; Karampelas, A.; Kochoska, A.; Kohley, R.; Kolenberg, K.; Kontizas, E.; Koposov, S. E.; Kordopatis, G.; Koubsky, P.; Krone-Martins, A.; Kudryashova, M.; Bachchan, R. K.; Lacoste-Seris, F.; Lanza, A. F.; Lavigne, J.-B.; Le Poncin-Lafitte, C.; Lebreton, Y.; Lebzelter, T.; Leccia, S.; Leclerc, N.; Lecoeur-Taibi, I.; Lemaitre, V.; Lenhardt, H.; Leroux, F.; Liao, S.; Licata, E.; Lindstrøm, H. E. P.; Lister, T. A.; Livanou, E.; Lobel, A.; Löffler, W.; López, M.; Lorenz, D.; MacDonald, I.; Magalhães Fernandes, T.; Managau, S.; Mann, R. G.; Mantelet, G.; Marchal, O.; Marchant, J. M.; Marinoni, S.; Marrese, P. M.; Marschalkó, G.; Marshall, D. J.; Martín-Fleitas, J. M.; Martino, M.; Mary, N.; Matijevič, G.; McMillan, P. J.; Messina, S.; Michalik, D.; Millar, N. R.; Miranda, B. M. H.; Molina, D.; Molinaro, M.; Molnár, L.; Moniez, M.; Montegriffo, P.; Mor, R.; Mora, A.; Morbidelli, R.; Morel, T.; Morgenthaler, S.; Morris, D.; Mulone, A. F.; Narbonne, J.; Nelemans, G.; Nicastro, L.; Noval, L.; Ordénovic, C.; Ordieres-Meré, J.; Osborne, P.; Pagani, C.; Pagano, I.; Pailler, F.; Palacin, H.; Palaversa, L.; Parsons, P.; Pecoraro, M.; Pedrosa, R.; Pentikäinen, H.; Pichon, B.; Piersimoni, A. M.; Pineau, F.-X.; Plachy, E.; Plum, G.; Poujoulet, E.; Prša, A.; Pulone, L.; Ragaini, S.; Rago, S.; Rambaux, N.; Ramos-Lerate, M.; Ranalli, P.; Rauw, G.; Read, A.; Regibo, S.; Reylé, C.; Ribeiro, R. A.; Riva, A.; Rixon, G.; Roelens, M.; Romero-Gómez, M.; Rowell, N.; Royer, F.; Ruiz-Dern, L.; Sadowski, G.; Sagristà Sellés, T.; Sahlmann, J.; Salgado, J.; Salguero, E.; Sarasso, M.; Savietto, H.; Schultheis, M.; Sciacca, E.; Segol, M.; Segovia, J. C.; Segransan, D.; Shih, I.-C.; Smareglia, R.; Smart, R. L.; Solano, E.; Solitro, F.; Sordo, R.; Soria Nieto, S.; Souchay, J.; Spagna, A.; Spoto, F.; Stampa, U.; Steele, I. A.; Steidelmüller, H.; Stephenson, C. A.; Stoev, H.; Suess, F. F.; Süveges, M.; Surdej, J.; Szegedi-Elek, E.; Tapiador, D.; Taris, F.; Tauran, G.; Taylor, M. B.; Teixeira, R.; Terrett, D.; Tingley, B.; Trager, S. C.; Turon, C.; Ulla, A.; Utrilla, E.; Valentini, G.; van Elteren, A.; Van Hemelryck, E.; van Leeuwen, M.; Varadi, M.; Vecchiato, A.; Veljanoski, J.; Via, T.; Vicente, D.; Vogt, S.; Voss, H.; Votruba, V.; Voutsinas, S.; Walmsley, G.; Weiler, M.; Weingrill, K.; Wevers, T.; Wyrzykowski, Ł.; Yoldas, A.; Žerjal, M.; Zucker, S.; Zurbach, C.; Zwitter, T.; Alecu, A.; Allen, M.; Allende Prieto, C.; Amorim, A.; Anglada-Escudé, G.; Arsenijevic, V.; Azaz, S.; Balm, P.; Beck, M.; Bernstein, H.-H.; Bigot, L.; Bijaoui, A.; Blasco, C.; Bonfigli, M.; Bono, G.; Boudreault, S.; Bressan, A.; Brown, S.; Brunet, P.-M.; Bunclark, P.; Buonanno, R.; Butkevich, A. G.; Carret, C.; Carrion, C.; Chemin, L.; Chéreau, F.; Corcione, L.; Darmigny, E.; de Boer, K. 
S.; de Teodoro, P.; de Zeeuw, P. T.; Delle Luche, C.; Domingues, C. D.; Dubath, P.; Fodor, F.; Frézouls, B.; Fries, A.; Fustes, D.; Fyfe, D.; Gallardo, E.; Gallegos, J.; Gardiol, D.; Gebran, M.; Gomboc, A.; Gómez, A.; Grux, E.; Gueguen, A.; Heyrovsky, A.; Hoar, J.; Iannicola, G.; Isasi Parache, Y.; Janotto, A.-M.; Joliet, E.; Jonckheere, A.; Keil, R.; Kim, D.-W.; Klagyivik, P.; Klar, J.; Knude, J.; Kochukhov, O.; Kolka, I.; Kos, J.; Kutka, A.; Lainey, V.; LeBouquin, D.; Liu, C.; Loreggia, D.; Makarov, V. V.; Marseille, M. G.; Martayan, C.; Martinez-Rubi, O.; Massart, B.; Meynadier, F.; Mignot, S.; Munari, U.; Nguyen, A.-T.; Nordlander, T.; O'Flaherty, K. S.; Ocvirk, P.; Olias Sanz, A.; Ortiz, P.; Osorio, J.; Oszkiewicz, D.; Ouzounis, A.; Park, P.; Pasquato, E.; Peltzer, C.; Peralta, J.; Péturaud, F.; Pieniluoma, T.; Pigozzi, E.; Poels, J.; Prat, G.; Prod'homme, T.; Raison, F.; Rebordao, J. M.; Risquez, D.; Rocca-Volmerange, B.; Rosen, S.; Ruiz-Fuertes, M. I.; Russo, F.; Serraller Vizcaino, I.; Short, A.; Siebert, A.; Silva, H.; Sinachopoulos, D.; Slezak, E.; Soffel, M.; Sosnowska, D.; Straižys, V.; ter Linden, M.; Terrell, D.; Theil, S.; Tiede, C.; Troisi, L.; Tsalmantza, P.; Tur, D.; Vaccari, M.; Vachier, F.; Valles, P.; Van Hamme, W.; Veltz, L.; Virtanen, J.; Wallut, J.-M.; Wichmann, R.; Wilkinson, M. I.; Ziaeepour, H.; Zschocke, S.

    2017-09-01

    Context. Parallaxes for 331 classical Cepheids, 31 Type II Cepheids, and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). Aims: In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, which involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity (PL), period-Wesenheit (PW) relations for classical and Type II Cepheids and infrared PL, PL-metallicity (PLZ), and optical luminosity-metallicity (MV-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. Methods: Classical Cepheids were carefully selected in order to discard known or suspected binary systems. The final sample comprises 102 fundamental mode pulsators with periods ranging from 1.68 to 51.66 days (of which 33 with σϖ/ϖ< 0.5). The Type II Cepheids include a total of 26 W Virginis and BL Herculis stars spanning the period range from 1.16 to 30.00 days (of which only 7 with σϖ/ϖ< 0.5). The RR Lyrae stars include 200 sources with pulsation period ranging from 0.27 to 0.80 days (of which 112 with σϖ/ϖ< 0.5). The new relations were computed using multi-band (V,I,J,Ks) photometry and spectroscopic metal abundances available in the literature, and by applying three alternative approaches: (I) linear least-squares fitting of the absolute magnitudes inferred from direct transformation of the TGAS parallaxes; (II) adopting astrometry-based luminosities; and (III) using a Bayesian fitting approach. The last two methods work in parallax space where parallaxes are used directly, thus maintaining symmetrical errors and allowing negative parallaxes to be used. The TGAS-based PL,PW,PLZ, and MV- [Fe/H] relations are discussed by comparing the distance to the Large Magellanic Cloud provided by different types of pulsating stars and alternative fitting methods. Results: Good agreement is found from direct comparison of the parallaxes of RR Lyrae stars for which both TGAS and HST measurements are available. Similarly, very good agreement is found between the TGAS values and the parallaxes inferred from the absolute magnitudes of Cepheids and RR Lyrae stars analysed with the Baade-Wesselink method. TGAS values also compare favourably with the parallaxes inferred by theoretical model fitting of the multi-band light curves for two of the three classical Cepheids and one RR Lyrae star, which were analysed with this technique in our samples. The K-band PL relations show the significant improvement of the TGAS parallaxes for Cepheids and RR Lyrae stars with respect to the Hipparcos measurements. This is particularly true for the RR Lyrae stars for which improvement in quality and statistics is impressive. Conclusions: TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent the first Gaia-calibrated relations and form a work-in-progress milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia Data Release 2 (DR2) in 2018. Full Tables A.1-A.3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/605/A79

  19. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-06-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4 , our b4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  20. Zero suppression logic of the ALICE muon forward tracker pixel chip prototype PIXAM and associated readout electronics development

    NASA Astrophysics Data System (ADS)

    Flouzat, C.; Değerli, Y.; Guilloux, F.; Orsini, F.; Venault, P.

    2015-05-01

    In the framework of the ALICE experiment upgrade at HL-LHC, a new forward tracking detector, the Muon Forward Tracker (MFT), is foreseen to overcome the intrinsic limitations of the present Muon Spectrometer and will perform new measurements of general interest for the whole ALICE physics. To fulfill the new detector requirements, CMOS Monolithic Active Pixel Sensors (MAPS) provide an attractive trade-off between readout speed, spatial resolution, radiation hardness, granularity, power consumption and material budget. This technology has been chosen to equip the Muon Forward Tracker and also the vertex detector: the Inner Tracking System (ITS). For the past few years, an intensive R&D program has been carried out on the design of MAPS in the 0.18 μm CMOS Image Sensor (CIS) process. In order to avoid pile-up effects in the experiment, the classical rolling shutter readout system of MAPS has been improved to overcome the readout speed limitation. A zero suppression algorithm, based on a 3 by 3 cluster finding (position and data), has been chosen for the MFT. This algorithm allows adequate data compression for the sensor. This paper presents the large size prototype PIXAM, which represents 1/3 of the final chip, and focuses especially on the zero suppression block architecture. This chip has been designed and is under fabrication in the 0.18 μm CIS process. Finally, the readout electronics principle to send out the compressed data flow is also presented, taking into account the cluster occupancy per MFT plane for a single central Pb-Pb collision.
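
    The abstract does not give the on-chip logic, but the data-compression idea behind 3x3-window zero suppression can be illustrated with a minimal sketch. The function name, frame size, and seed criterion below are illustrative assumptions, not the PIXAM implementation; a real front end would additionally group neighbouring fired pixels into a single cluster record.

```python
import numpy as np

def zero_suppress_3x3(frame):
    """Toy zero suppression: keep only 3x3 windows centred on fired pixels.

    `frame` is a 2-D array of pixel hits (0/1).  For every fired pixel the
    window position and its 3x3 hit pattern are emitted; empty regions of
    the matrix produce no output, which is the data-compression effect.
    """
    rows, cols = frame.shape
    padded = np.pad(frame, 1)            # zero padding avoids edge checks
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if frame[r, c]:
                window = padded[r:r + 3, c:c + 3]
                clusters.append(((r, c), window.copy()))
    return clusters

# Example: a sparse 8x8 frame compresses to two cluster records.
frame = np.zeros((8, 8), dtype=int)
frame[2, 3] = 1
frame[6, 6] = 1
print(zero_suppress_3x3(frame))
```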

  1. Axion as a cold dark matter candidate: analysis to third order perturbation for classical axion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noh, Hyerim; Hwang, Jai-chan; Park, Chan-Gyung, E-mail: hr@kasi.re.kr, E-mail: jchan@knu.ac.kr, E-mail: park.chan.gyung@gmail.com

    2015-12-01

    We investigate aspects of axion as a coherently oscillating massive classical scalar field by analyzing third order perturbations in Einstein's gravity in the axion-comoving gauge. The axion fluid has its characteristic pressure term leading to an axion Jeans scale which is cosmologically negligible for a canonical axion mass. Our classically derived axion pressure term in Einstein's gravity is identical to the one derived in the non-relativistic quantum mechanical context in the literature. We present the general relativistic continuity and Euler equations for an axion fluid valid up to third order perturbation. The equations for the axion are exactly the same as those of a zero-pressure fluid in Einstein's gravity except for an axion pressure term in the Euler equation. Our analysis includes the cosmological constant.

  2. Insufficiency of the Young’s modulus for illustrating the mechanical behavior of GaN nanowires

    NASA Astrophysics Data System (ADS)

    Zamani Kouhpanji, Mohammad Reza; Behzadirad, Mahmoud; Feezell, Daniel; Busani, Tito

    2018-05-01

    We use a non-classical modified couple stress theory including the acceleration gradients (MCST-AG), to precisely demonstrate the size dependency of the mechanical properties of gallium nitride (GaN) nanowires (NWs). The fundamental elastic constants, Young’s modulus and length scales of the GaN NWs were estimated both experimentally, using a novel experimental technique applied to atomic force microscopy, and theoretically, using atomic simulations. The Young’s modulus, static and the dynamic length scales, calculated with the MCST-AG, were found to be 323 GPa, 13 and 14.5 nm, respectively, for GaN NWs from a few nanometers radii to bulk radii. Analyzing the experimental data using the classical continuum theory shows an improvement in the experimental results by introducing smaller error. Using the length scales determined in MCST-AG, we explain the inconsistency of the Young’s moduli reported in recent literature, and we prove the insufficiency of the Young’s modulus for predicting the mechanical behavior of GaN NWs.

  3. Examining the accuracy of astrophysical disk simulations with a generalized hydrodynamical test problem [The role of pressure and viscosity in SPH simulations of astrophysical disks

    DOE PAGES

    Raskin, Cody; Owen, J. Michael

    2016-10-24

    Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  4. Insufficiency of the Young's modulus for illustrating the mechanical behavior of GaN nanowires.

    PubMed

    Kouhpanji, Mohammad Reza Zamani; Behzadirad, Mahmoud; Feezell, Daniel; Busani, Tito

    2018-05-18

    We use a non-classical modified couple stress theory including the acceleration gradients (MCST-AG), to precisely demonstrate the size dependency of the mechanical properties of gallium nitride (GaN) nanowires (NWs). The fundamental elastic constants, Young's modulus and length scales of the GaN NWs were estimated both experimentally, using a novel experimental technique applied to atomic force microscopy, and theoretically, using atomic simulations. The Young's modulus, static and the dynamic length scales, calculated with the MCST-AG, were found to be 323 GPa, 13 and 14.5 nm, respectively, for GaN NWs from a few nanometers radii to bulk radii. Analyzing the experimental data using the classical continuum theory shows an improvement in the experimental results by introducing smaller error. Using the length scales determined in MCST-AG, we explain the inconsistency of the Young's moduli reported in recent literature, and we prove the insufficiency of the Young's modulus for predicting the mechanical behavior of GaN NWs.

  5. Non-linear quantum-classical scheme to simulate non-equilibrium strongly correlated fermionic many-body dynamics

    PubMed Central

    Kreula, J. M.; Clark, S. R.; Jaksch, D.

    2016-01-01

    We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673

  6. Quantum rewinding via phase estimation

    NASA Astrophysics Data System (ADS)

    Tabia, Gelo Noel

    2015-03-01

    In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to show the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewinded to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique in the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.

  7. Zero-Sum Matrix Game with Payoffs of Dempster-Shafer Belief Structures and Its Applications on Sensors

    PubMed Central

    Deng, Xinyang; Jiang, Wen; Zhang, Jiandong

    2017-01-01

    The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be inexact or uncertain, which requires that the model of matrix games has the ability to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possible computation-intensive issue in the proposed decomposition method, as an alternative solution, a Monte Carlo simulation approach is presented, as well. Finally, the proposed zero-sum matrix games with payoffs of Dempster–Shafer belief structures is illustratively applied to the sensor selection and intrusion detection of sensor networks, which shows its effectiveness and application process. PMID:28430156
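
    The paper's decomposition ultimately rests on solving ordinary zero-sum matrix games with crisp payoffs. As a hedged sketch of that classical building block (not of the Dempster-Shafer decomposition itself), the linear-programming solution for the row player can be written with SciPy as follows; the shift and function name are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value and optimal mixed strategy for the row player of payoff matrix A.

    Shift the matrix so the value is positive, then use the classical LP
    formulation: maximise v subject to A^T x >= v, sum(x) = 1, x >= 0.
    """
    A = np.asarray(A, dtype=float)
    shift = min(0.0, A.min()) - 1.0          # make all payoffs positive
    B = A - shift
    m, n = B.shape
    c = np.concatenate([np.zeros(m), [-1.0]])          # variables: x_1..x_m, v
    A_ub = np.hstack([-B.T, np.ones((n, 1))])          # v - x^T B[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x, v = res.x[:m], res.x[m]
    return v + shift, x

# Matching pennies: value 0, optimal strategy (0.5, 0.5).
print(solve_zero_sum([[1, -1], [-1, 1]]))
```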

  8. Explorations of the Gauss-Lucas Theorem

    ERIC Educational Resources Information Center

    Brilleslyper, Michael A.; Schaubroeck, Beth

    2017-01-01

    The Gauss-Lucas Theorem is a classical complex analysis result that states the critical points of a single-variable complex polynomial lie inside the closed convex hull of the zeros of the polynomial. Although the result is well-known, it is not typically presented in a first course in complex analysis. The ease with which modern technology allows…

  9. Measurement of static pressure on aircraft

    NASA Technical Reports Server (NTRS)

    Gracey, William

    1958-01-01

    Existing data on the errors involved in the measurement of static pressure by means of static-pressure tubes and fuselage vents are presented. The errors associated with the various design features of static-pressure tubes are discussed for the condition of zero angle of attack and for the case where the tube is inclined to flow. Errors which result from variations in the configuration of static-pressure vents are also presented. Errors due to the position of a static-pressure tube in the flow field of the airplane are given for locations ahead of the fuselage nose, ahead of the wing tip, and ahead of the vertical tail fin. The errors of static-pressure vents on the fuselage of an airplane are also presented. Various methods of calibrating static-pressure installations in flight are briefly discussed.

  10. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  11. Viète's Formula and an Error Bound without Taylor's Theorem

    ERIC Educational Resources Information Center

    Boucher, Chris

    2018-01-01

    This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
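
    A numerical sketch of the classical product makes the exponential convergence mentioned above easy to see. The nested-radical form used below is the standard one; the factor count and printed error comparison are illustrative.

```python
import math

def viete_pi(n_terms):
    """Approximate pi with the first n_terms factors of Viete's product.

    Uses the nested-radical form c_1 = sqrt(2), c_{k+1} = sqrt(2 + c_k),
    pi ~= 2 / prod_k (c_k / 2).  The error shrinks roughly by a factor of
    four per extra term, i.e. exponentially in n_terms.
    """
    c, product = 0.0, 1.0
    for _ in range(n_terms):
        c = math.sqrt(2.0 + c)
        product *= c / 2.0
    return 2.0 / product

for n in (5, 10, 20):
    approx = viete_pi(n)
    print(n, approx, abs(approx - math.pi))
```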

  12. Global stability of steady states in the classical Stefan problem for general boundary shapes

    PubMed Central

    Hadžić, Mahir; Shkoller, Steve

    2015-01-01

    The classical one-phase Stefan problem (without surface tension) allows for a continuum of steady-state solutions, given by an arbitrary (but sufficiently smooth) domain together with zero temperature. We prove global-in-time stability of such steady states, assuming a sufficient degree of smoothness on the initial domain, but without any a priori restriction on the convexity properties of the initial shape. This is an extension of our previous result (Hadžić & Shkoller 2014 Commun. Pure Appl. Math. 68, 689–757 (doi:10.1002/cpa.21522)) in which we studied nearly spherical shapes. PMID:26261359

  13. Strange nucleon electromagnetic form factors from lattice QCD

    NASA Astrophysics Data System (ADS)

    Alexandrou, C.; Constantinou, M.; Hadjiyiannakou, K.; Jansen, K.; Kallidonis, C.; Koutsou, G.; Avilés-Casco, A. Vaquero

    2018-05-01

    We evaluate the strange nucleon electromagnetic form factors using an ensemble of gauge configurations generated with two degenerate maximally twisted mass clover-improved fermions with mass tuned to approximately reproduce the physical pion mass. In addition, we present results for the disconnected light quark contributions to the nucleon electromagnetic form factors. Improved stochastic methods are employed leading to high-precision results. The momentum dependence of the disconnected contributions is fitted using the model-independent z-expansion. We extract the magnetic moment and the electric and magnetic radii of the proton and neutron by including both connected and disconnected contributions. We find that the disconnected light quark contributions to both electric and magnetic form factors are nonzero and at the few percent level as compared to the connected. The strange form factors are also at the percent level but more noisy yielding statistical errors that are typically within one standard deviation from a zero value.

  14. Development of a Voice Activity Controlled Noise Canceller

    PubMed Central

    Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini

    2012-01-01

    In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prevents the reference input of the ANC from containing any significant amount of the actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance evaluation of the proposed approach is quoted in terms of signal-to-noise ratio improvements as well as mean square error (MSE) convergence of the ANC. The new approach showed an improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational power of the adaptive process is reduced since the output of the adaptive filter is efficiently calculated only during non-speech periods. PMID:22778667
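
    A minimal sketch of the idea is given below, assuming a fixed energy-based VAD rather than the paper's variable thresholds driven by the residual; the frame length, step size, and threshold factor are illustrative assumptions.

```python
import numpy as np

def vad_gated_nlms(primary, reference, frame=160, taps=32, mu=0.5, eps=1e-8):
    """Two-sensor noise canceller whose NLMS adaptation is gated by a simple VAD.

    `primary` carries speech + noise, `reference` carries correlated noise only.
    A full-band energy threshold decides, frame by frame, whether the filter
    may adapt; the residual e is the enhanced speech estimate.
    """
    primary = np.asarray(primary, float)
    reference = np.asarray(reference, float)
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    noise_floor = np.mean(primary[:4 * frame] ** 2) + eps   # assume a noise-only lead-in
    speech = False
    for n in range(taps, len(primary)):
        if n % frame == 0:                                   # frame-level VAD decision
            speech = np.mean(primary[n:n + frame] ** 2) > 3.0 * noise_floor
        x = reference[n - taps:n][::-1]
        e = primary[n] - w @ x
        out[n] = e
        if not speech:                                       # adapt only during noise
            w += mu * e * x / (x @ x + eps)
    return out
```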

  15. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although several algorithms are in use, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in order to suppress artifacts in the reconstructed image. The simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean-filter-combined FBP, and median-filter-combined FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, i.e., mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on the db2 wavelet and Hanning filter at decomposition scale 2 performed best, with a lower MSE and a higher PSNR than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
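
    A hedged sketch of the pipeline (wavelet denoising of the projection data followed by filtered back-projection) is shown below using PyWavelets and scikit-image, with db2 at level 2 and a Hann filter as in the best case reported. The universal soft threshold, the noise level, and the use of the `filter_name` argument (recent scikit-image versions) are assumptions of this sketch, not details from the paper.

```python
import numpy as np
import pywt
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

def denoise_projection(p, wavelet="db2", level=2):
    """Soft-threshold one projection (detector profile) in the wavelet domain."""
    coeffs = pywt.wavedec(p, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(p)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(p)]

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)
sinogram += 0.05 * sinogram.max() * np.random.randn(*sinogram.shape)  # synthetic noise

denoised = np.apply_along_axis(denoise_projection, 0, sinogram)        # per-projection
recon = iradon(denoised, theta=theta, filter_name="hann")              # parallel-beam FBP
```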

  16. Evaluation of the Geopotential value for the Local Vertical Datum of China using GRACE/GOCE GGMs and GPS/Leveling Data

    NASA Astrophysics Data System (ADS)

    He, Lin; Li, Jiancheng; Chu, Yonghai; Zhang, Tengxu

    2017-04-01

    National height reference systems have conventionally been linked to the coastal local mean sea level observed at one tide gauge, such as the China national height datum 1985. Due to the effect of the local sea surface topography, the reference level surface of a local datum is inconsistent with the global datum or with other local datums. In order to unify or connect the local datum to the global height datum, it is necessary to obtain the zero-height geopotential value of the local datum or the height offset with respect to the global datum. The GRACE and GOCE satellite missions are promising for the unification of local vertical datums because, over the past ten years, they have brought a significant improvement in the modeling of the low- to medium-frequency part of the Earth's static gravity field. The focus of this work is the evaluation of most of the available Global Geopotential Models (GGMs) from GOCE and GRACE, both satellite-only and combined ones. From the evaluation with the 649 GPS/Levelling benchmarks (BMs) in China, the GOCE/GRACE GGMs provide an accuracy at the 42-52 cm level, up to their maximum degree and order. The latest release 5 DIR and TIM GGMs improve the accuracies by 6-10 cm compared to the release 1 models. The DIR_R1 model, although based on fewer GOCE data, performs equally well as the DIR_R4 and DIR_R5 models; this is attributed to the fact that its development used a priori information from EIGEN-51C. The zero-height geopotential value W0LVD for the China Local Vertical Datum (LVD) is 62636855.1606 m2s-2 from the original GOCE/GRACE GGMs. The GPS/Levelling data contain the full spectral information, whereas the GOCE-only or GRACE-GOCE combined models are limited to the long wavelengths. To improve the accuracy of the GGMs, it is indispensable to account for the remaining signal above this maximum degree, known as the omission error of the GGM. The effect of the GRACE/GOCE omission error is investigated by extending the models with the high-resolution gravity field model EGM2008. In China, the effect of the GRACE/GOCE GGMs omission error is at the decimeter level. The combined GGMs (up to degree and order 2160) could provide an accuracy at the 20 cm level, which is better than that from EGM2008. Meanwhile, if an appropriate degree and order is chosen for the GOCE-only or GRACE-GOCE combined GGMs to connect with EGM2008, the extended GGMs provide an accuracy at the 16 cm level. From the extended GGMs, the geopotential value W0LVD determined for the China local vertical datum is 62636853.4351 m2s-2, which indicates a bias of about 2.5649 m2s-2 compared to the conventional value of 62636856.0 m2s-2. This work is supported by the National Key Research and Development Program No. 2016YFB0501702. Keywords: Global Geopotential Models; GRACE; GOCE; GPS/Levelling; zero-height geopotential
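
    The quoted geopotential bias translates into a metric datum offset through the standard first-order relation ΔH ≈ ΔW/γ, with γ a mean normal gravity value; the sketch below uses the numbers from the abstract and an assumed γ of 9.80 m/s².

```python
# Convert the geopotential bias quoted above into a metric vertical datum offset.
W0_conventional = 62636856.0      # m**2 s**-2, conventional reference value
W0_LVD          = 62636853.4351   # m**2 s**-2, value from the extended GGMs
gamma_mean      = 9.80            # m s**-2, approximate mean normal gravity (assumption)

delta_W = W0_conventional - W0_LVD   # ~2.5649 m**2 s**-2
delta_H = delta_W / gamma_mean       # ~0.26 m offset of the local datum
print(delta_W, delta_H)
```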

  17. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves a significant reduction in computation, but its performance is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods for semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to the intramode prediction in P frames. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
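
    For orientation, the sketch below shows the baseline zero-block counting that the early-detection method avoids: transform and quantize each 4x4 block of a macroblock residual and count the blocks whose quantized coefficients are all zero. A plain orthonormal DCT with a uniform quantization step stands in for the H.264 integer transform and quantizer; the step size is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def count_zero_blocks(residual, qstep=16.0):
    """Count 4x4 blocks of a 16x16 residual whose quantized DCT coefficients are all zero."""
    zeros = 0
    for r in range(0, 16, 4):
        for c in range(0, 16, 4):
            block = residual[r:r + 4, c:c + 4]
            q = np.round(dctn(block, norm="ortho") / qstep)   # transform + quantize
            if not q.any():                                   # all coefficients zero
                zeros += 1
    return zeros

# A nearly flat residual quantizes to mostly zero-blocks.
residual = np.random.randn(16, 16) * 2.0
print(count_zero_blocks(residual))
```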

  18. Reducing number entry errors: solving a widespread, serious problem.

    PubMed

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
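
    A toy checker illustrates the kind of interface-level defence discussed: reject malformed keystroke sequences outright and flag entries that look like a misplaced decimal point against an expected value. The function names, the regular expression, and the tolerance are illustrative assumptions, not the authors' demonstration interface.

```python
import re

def parse_entry(keyed: str) -> float:
    """Accept a keyed number only if it is well formed; block ambiguous input."""
    if not re.fullmatch(r"\d+(\.\d+)?", keyed):      # e.g. '5..0' or '5.0.0' are rejected
        raise ValueError(f"malformed number entry: {keyed!r}")
    return float(keyed)

def out_by_ten(value: float, expected: float, tol: float = 0.25) -> bool:
    """Flag entries roughly a factor of ten above or below the expected value."""
    if expected <= 0 or value <= 0:
        return False
    ratio = value / expected
    return abs(ratio - 10.0) / 10.0 < tol or abs(ratio - 0.1) / 0.1 < tol

print(parse_entry("5.0"))                  # 5.0
print(out_by_ten(50.0, expected=5.0))      # True: likely out-by-10 slip
print(out_by_ten(5.2, expected=5.0))       # False
```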

  19. An improved input shaping design for an efficient sway control of a nonlinear 3D overhead crane with friction

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.

    2017-08-01

    This paper proposes an improved input shaping scheme for an efficient sway control of a nonlinear three dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. Using this approach, a higher payload sway reduction is obtained as the input shaper is designed based on a complete nonlinear model, as compared to the analytical-based input shaping scheme derived using a linear second order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are studied based on a maximum residual of payload sway and Integrated Absolute Error (IAE) values which indicate total payload sway of the crane. With experiments, the superiority of the proposed approach over the analytical-based is shown by 30-50% reductions of the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulations results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reductions of a 3D overhead crane with friction as compared to the commonly designed input shapers.
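
    The analytical ZV shaper referred to above has a standard closed form: for sway frequency f and damping ratio ζ, two impulses of amplitude 1/(1+K) and K/(1+K) are placed half a damped period apart, with K = exp(-ζπ/√(1-ζ²)). The sketch below implements that baseline and convolves it with a velocity command; it is not the PSO-optimized or DZV design, and the example frequency, damping, and sampling time are assumptions.

```python
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Two-impulse Zero Vibration shaper for natural frequency freq_hz and damping zeta."""
    wn = 2.0 * np.pi * freq_hz
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])
    return amps, times

def shape_command(command, dt, freq_hz, zeta):
    """Convolve a trolley/rail velocity command with the ZV impulse sequence."""
    amps, times = zv_shaper(freq_hz, zeta)
    kernel = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        kernel[int(round(t / dt))] += a
    return np.convolve(command, kernel)[: len(command)]

dt = 0.01
command = np.ones(500)                                   # a step velocity demand
shaped = shape_command(command, dt, freq_hz=0.5, zeta=0.02)
```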

  20. Wave-CAIPI ViSTa: highly accelerated whole-brain direct myelin water imaging with zero-padding reconstruction.

    PubMed

    Wu, Zhe; Bilgic, Berkin; He, Hongjian; Tong, Qiqi; Sun, Yi; Du, Yiping; Setsompop, Kawin; Zhong, Jianhui

    2018-09-01

    This study introduces a highly accelerated whole-brain direct visualization of short transverse relaxation time component (ViSTa) imaging using a wave controlled aliasing in parallel imaging (CAIPI) technique, for acquisition within a clinically acceptable scan time, with the preservation of high image quality and sufficient spatial resolution, and reduced residual point spread function artifacts. Double inversion RF pulses were applied to preserve the signal from short T1 components for directly extracting myelin water signal in ViSTa imaging. A 2D simultaneous multislice and a 3D acquisition of ViSTa images incorporating wave-encoding were used for data acquisition. Improvements brought by a zero-padding method in wave-CAIPI reconstruction were also investigated. The zero-padding method in wave-CAIPI reconstruction reduced the root-mean-square errors between the wave-encoded and Cartesian gradient echoes for all wave gradient configurations in simulation, and reduced the side-main lobe intensity ratio from 34.5% to 16% in the thin-slab in vivo ViSTa images. In a 4 × acceleration simultaneous-multislice scenario, wave-CAIPI ViSTa achieved negligible g-factors (gmean/gmax = 1.03/1.10), while retaining minimal interslice artifacts. An 8 × accelerated acquisition of 3D wave-CAIPI ViSTa imaging covering the whole brain with 1.1 × 1.1 × 3 mm3 voxel size was achieved within 15 minutes, and only incurred a small g-factor penalty (gmean/gmax = 1.05/1.16). Whole-brain ViSTa images were obtained within 15 minutes with negligible g-factor penalty by using wave-CAIPI acquisition and zero-padding reconstruction. The proposed zero-padding method was shown to be effective in reducing residual point spread function for wave-encoded images, particularly for ViSTa. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Color dithering methods for LEGO-like 3D printing

    NASA Astrophysics Data System (ADS)

    Sun, Pei-Li; Sie, Yuping

    2015-01-01

    Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building. It is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended as its image quality is good while the number of color primaries remains limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show the proposed multi-layer dithering method can indeed improve the image quality of LEGO-like 3D printing.
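
    The classic error diffusion that the first method modifies can be sketched as Floyd-Steinberg dithering onto a fixed six-primary RGBYKW palette. The palette values, nearest-color metric, and function name below are illustrative assumptions; the paper's brick-specific modification and the translucent multi-layer/LUT method are not reproduced.

```python
import numpy as np

# Six-primary palette: red, green, blue, yellow, black, white (0-255 per channel).
PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],
                    [255, 255, 0], [0, 0, 0], [255, 255, 255]], dtype=float)

def dither_rgbykw(img):
    """Classic Floyd-Steinberg error diffusion onto the RGBYKW palette.

    `img` is an HxWx3 float array in [0, 255]; returns a palette index per pixel.
    """
    work = img.astype(float).copy()
    h, w, _ = work.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = np.argmin(np.sum((PALETTE - old) ** 2, axis=1))   # nearest primary
            out[y, x] = idx
            err = old - PALETTE[idx]                                # diffuse the error
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out
```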

  2. Human reliability in petrochemical industry: an action research.

    PubMed

    Silva, João Alexandre Pinheiro; Camarotto, João Alberto

    2012-01-01

    This paper aims to identify conflicts and gaps between the operators' strategies and actions and the organizational managerial approach to human reliability. In order to achieve these goals, the research approach adopted encompasses a literature review, mixing action research methodology and Ergonomic Workplace Analysis in field research. The results suggest that the studied company has a classical and mechanistic point of view, focusing on error identification and on building barriers through procedures, checklists and other prescriptive alternatives to improve performance in the reliability area. However, the fundamental role of the worker as an agent of maintenance and construction of system reliability became evident during the action research cycle.

  3. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Martin, Corless

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error exponentially converges to zero.

  4. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting as they have the potential to solve complex forecasting problems. This is because ANN is a data-driven approach which can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely seasonal autoregressive integrated moving average (SARIMA) models, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that ANN produced the most accurate forecast when Box-Cox transformation was used as data preprocessing.
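
    The three accuracy measures named above, together with a Box-Cox transform as one preprocessing option, can be written compactly as follows; the toy price series and stand-in forecast are illustrative assumptions.

```python
import numpy as np
from scipy.stats import boxcox

def mad(actual, forecast):
    """Mean absolute deviation between actual and forecast values."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast)))

def rmse(actual, forecast):
    """Root mean square error."""
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2))

def mape(actual, forecast):
    """Mean absolute percentage error (in percent)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

prices = np.array([1200.0, 1215.0, 1230.0, 1225.0, 1240.0])   # toy gold price series
transformed, lam = boxcox(prices)                              # one preprocessing option
forecast = prices * 0.99                                       # stand-in forecast
print(mad(prices, forecast), rmse(prices, forecast), mape(prices, forecast))
```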

  5. On the decoding process in ternary error-correcting output codes.

    PubMed

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with these type of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and into a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
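
    A minimal decoder makes the role of the zero symbol concrete: positions marked "do not care" in a class codeword are skipped when measuring agreement with the binary classifier outputs, and the distance is normalised by the number of voting positions. The coding matrix and normalisation below are illustrative assumptions, not the decoding measures proposed in the paper.

```python
import numpy as np

# Ternary coding matrix for 3 classes and 4 binary classifiers:
# +1 / -1 assign the class to one side of a dichotomy, 0 means "do not care".
M = np.array([[+1, +1,  0, -1],
              [-1,  0, +1, +1],
              [ 0, -1, -1, +1]])

def decode(outputs, coding=M):
    """Pick the class whose codeword is closest to the classifier outputs.

    Zero entries of a codeword are skipped and the disagreement count is
    divided by the number of active positions, so classes with many zeros
    are neither favoured nor penalised by the decoding step.
    """
    outputs = np.asarray(outputs)
    dists = []
    for code in coding:
        active = code != 0
        disagreements = np.sum(code[active] != outputs[active])
        dists.append(disagreements / active.sum())
    return int(np.argmin(dists))

print(decode([+1, +1, -1, -1]))   # class 0
```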

  6. Antigravity and the big crunch/big bang transition

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  7. Simulations and observations of cloudtop processes

    NASA Technical Reports Server (NTRS)

    Siems, S. T.; Bretherton, C. S.; Baker, M. B.

    1990-01-01

    Turbulent entrainment at zero mean shear stratified interfaces has been studied extensively in the laboratory and theoretically for the classical situation in which density is a passive tracer of the mixing and the turbulent motions producing the entrainment are directed toward the interface. It is the purpose of the numerical simulations and data analysis to investigate these processes and, specifically, to focus on the following questions: (1) Can local cooling below cloudtop play an important role in setting up convective circulations within the cloud, and bringing about entrainment; (2) Can Cloudtop Entrainment Instability (CEI) alone lead to runaway entrainment under geophysically realistic conditions; and (3) What are the important mechanisms of entrainment at cloudtop under zero or low mean shear conditions.

  8. Monitoring Instrument Performance in Regional Broadband Seismic Network Using Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Ye, F.; Lyu, S.; Lin, J.

    2017-12-01

    In the past ten years, the number of seismic stations has increased significantly, and regional seismic networks with advanced technology have gradually been developed all over the world. The resulting broadband data help to improve seismological research. It is important to monitor the performance of broadband instruments in a new network over a long period of time to ensure the accuracy of seismic records. Here, we propose a method that uses ambient noise data in the period range 5-25 s to monitor instrument performance and check data quality in situ. The method is based on an analysis of amplitude and phase index parameters calculated from pairwise cross-correlations of three stations, which provides multiple references for reliable error estimates. Index parameters calculated daily during a two-year observation period are evaluated to identify stations with instrument response errors in near real time. During data processing, initial instrument responses are used in place of the available instrument responses to simulate instrument response errors, which are then used to verify our results. We also examine the feasibility of using the tailing noise with data from USArray stations at different locations, and analyze possible instrumental errors resulting in time-shifts, which are used to verify the method. Additionally, we show an application in which instrument response errors caused by pole-zero variations produce apparently statistically significant velocity perturbations, larger than the standard deviation, when monitoring temporal variations in crustal properties. The results indicate that monitoring seismic instrument performance helps eliminate data pollution before analysis begins.

  9. Error analysis of the Golay3 optical imaging system.

    PubMed

    Wu, Quanying; Fan, Junliu; Wu, Feng; Zhao, Jun; Qian, Lin

    2013-05-01

    We use aberration theory to derive a generalized pupil function of the Golay3 imaging system when astigmatisms exist in its submirrors. Theoretical analysis and numerical simulation using ZEMAX show that the point spread function (PSF) and the modulation transfer function (MTF) of the Golay3 sparse aperture system have a periodic change when there are piston errors. When the peak-valley value of the wavefront (PV(tilt)) due to the tilt error increases from zero to λ, the PSF and the MTF change significantly, and the change direction is determined by the location of the submirror with the tilt error. When PV(tilt) becomes larger than λ, the PSF and the MTF remain unvaried. We calculate the peaks of the signal-to-noise ratio (PSNR) resulting from the piston and tilt errors according to the Strehl ratio, and show that the PSNR decreases when the errors increase.

  10. Analysis of Fluid Gauge Sensor for Zero or Microgravity Conditions using Finite Element Method

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Doiron, Terence a.

    2007-01-01

    In this paper the Finite Element Method (FEM) is presented for mass/volume gauging of a fluid in a tank subjected to zero or microgravity conditions. In this approach first mutual capacitances between electrodes embedded inside the tank are measured. Assuming the medium properties the mutual capacitances are also estimated using FEM approach. Using proper non-linear optimization the assumed properties are updated by minimizing the mean square error between estimated and measured capacitances values. Numerical results are presented to validate the present approach.

  11. Simultaneous classical communication and quantum key distribution using continuous variables*

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2016-10-01

    Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
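
    The encoding idea can be illustrated with a purely classical toy model: a large binary displacement carries the classical bit, a small Gaussian modulation on top carries the key material, and a single quadrature measurement recovers both. All amplitudes and noise levels below are illustrative assumptions; no quantum noise model or security analysis from the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
A = 20.0            # classical bit amplitude (large displacement), assumed
sigma_key = 1.0     # Gaussian key modulation (small), assumed
sigma_noise = 0.5   # detector/channel noise, assumed

bits = rng.integers(0, 2, n)
key_vals = rng.normal(0.0, sigma_key, n)
tx = (2 * bits - 1) * A + key_vals                 # both encodings on one quadrature

rx = tx + rng.normal(0.0, sigma_noise, n)          # received quadrature values
bit_hat = (rx > 0).astype(int)                     # classical bit decision
key_hat = rx - (2 * bit_hat - 1) * A               # residual carries the key data

print("bit error rate:", np.mean(bit_hat != bits))
print("key correlation:", np.corrcoef(key_vals, key_hat)[0, 1])
```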

  12. A contemporary approach to the problem of determining physical parameters according to the results of measurements

    NASA Technical Reports Server (NTRS)

    Elyasberg, P. Y.

    1979-01-01

    The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem was approached with the assumption that the probabilities of error were known, as well as without knowledge of the distribution of the probabilities of error. The advantages of the newer approach are discussed.

  13. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

  14. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  15. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  16. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  17. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  18. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  19. An Improved Zero Potential Circuit for Readout of a Two-Dimensional Resistive Sensor Array

    PubMed Central

    Wu, Jian-Feng; Wang, Feng; Wang, Qi; Li, Jian-Qing; Song, Ai-Guo

    2016-01-01

    With one operational amplifier (op-amp) in negative feedback, the traditional zero potential circuit could access one element in the two-dimensional (2-D) resistive sensor array in the shared row-column fashion, but it suffered from the crosstalk problem caused by the non-scanned elements’ bypass currents, which were injected into the array’s non-scanned electrodes from zero potential. Firstly, to suppress the crosstalk problem, we designed a novel improved zero potential circuit with one more op-amp in negative feedback to sample the total bypass current and calculate the precision resistance of the element being tested (EBT) with it. The improved setting non-scanned-electrode zero potential circuit (S-NSE-ZPC) was given as an example for analyzing and verifying the performance of the improved zero potential circuit. Secondly, in the S-NSE-ZPC and the improved S-NSE-ZPC, the effects of different parameters of the resistive sensor arrays and their readout circuits on the EBT’s measurement accuracy were simulated with the NI Multisim 12. Thirdly, some features of the improved circuit were verified with experiments on a prototype circuit. Finally, the results were discussed and conclusions were drawn. The experimental results show that the improved circuit, though it requires one more op-amp, one more resistor and one more sampling channel, can access the EBT in the 2-D resistive sensor array more accurately. PMID:27929410

  20. An Improved Zero Potential Circuit for Readout of a Two-Dimensional Resistive Sensor Array.

    PubMed

    Wu, Jian-Feng; Wang, Feng; Wang, Qi; Li, Jian-Qing; Song, Ai-Guo

    2016-12-06

    With one operational amplifier (op-amp) in negative feedback, the traditional zero potential circuit could access one element in the two-dimensional (2-D) resistive sensor array in the shared row-column fashion, but it suffered from the crosstalk problem caused by the non-scanned elements' bypass currents, which were injected into the array's non-scanned electrodes from zero potential. Firstly, to suppress the crosstalk problem, we designed a novel improved zero potential circuit with one more op-amp in negative feedback to sample the total bypass current and calculate the precision resistance of the element being tested (EBT) with it. The improved setting non-scanned-electrode zero potential circuit (S-NSE-ZPC) was given as an example for analyzing and verifying the performance of the improved zero potential circuit. Secondly, in the S-NSE-ZPC and the improved S-NSE-ZPC, the effects of different parameters of the resistive sensor arrays and their readout circuits on the EBT's measurement accuracy were simulated with the NI Multisim 12. Thirdly, some features of the improved circuit were verified with experiments on a prototype circuit. Finally, the results were discussed and conclusions were drawn. The experimental results show that the improved circuit, though it requires one more op-amp, one more resistor and one more sampling channel, can access the EBT in the 2-D resistive sensor array more accurately.

  1. A statistical model for analyzing the rotational error of single isocenter for multiple targets technique.

    PubMed

    Chang, Jenghwa

    2017-06-01

    To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean, and standard deviations of σx, σy, σz. It was further assumed that the rotation of clinical target volume (CTV) about the isocenter happens randomly and follows a three-dimensional (3D) independent normal distribution with a zero mean and a uniform standard deviation of σδ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation of σR equal to the product of σδ·(π/180) and dI⇔T, the distance between the isocenter and CTV. Both (S and R) random vectors were summed, normalized, and transformed to the spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. PTV margin was determined using the critical value of this distribution for a 0.05 significance level so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σR and dI⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σx = σy = σz = 0.715 mm), a σR = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σδ that can be ignored is 0.45° (or 0.0079 rad) for dI⇔T = 50 mm or 0.23° (or 0.004 rad) for dI⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and target is large. © 2017 American Association of Physicists in Medicine.
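
    The margin arithmetic described above can be reproduced with a few lines, assuming an isotropic setup error: the rotational standard deviation follows from the angle (in radians) times the isocenter-to-target distance, and the radial error follows a chi distribution with three degrees of freedom. The function name and the example distances are illustrative; the baseline call below recovers the ~2 mm margin quoted for σ = 0.715 mm.

```python
import numpy as np
from scipy.stats import chi

def ptv_margin(sigma_setup, sigma_delta_deg, d_iso_to_target_mm, coverage=0.95):
    """PTV margin (mm) when an isotropic setup error combines with a rotational error.

    sigma_R = sigma_delta (rad) * distance; the radial error of the summed
    vector then follows a chi distribution with 3 degrees of freedom.
    """
    sigma_R = np.deg2rad(sigma_delta_deg) * d_iso_to_target_mm
    sigma_total = np.sqrt(sigma_setup ** 2 + sigma_R ** 2)
    return chi.ppf(coverage, df=3) * sigma_total

# Baseline from the abstract: isotropic 0.715 mm setup error gives a ~2 mm margin.
print(ptv_margin(sigma_setup=0.715, sigma_delta_deg=0.0, d_iso_to_target_mm=0.0))
# Adding a rotational component, e.g. 0.3 deg at 70 mm from the isocenter (illustrative).
print(ptv_margin(0.715, 0.3, 70.0))
```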

  2. Teleporting entanglements of cavity-field states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pires, Geisa; Baseia, B.; Almeida, N.G. de

    2004-08-01

    We present a scheme to teleport an entanglement of zero- and one-photon states from one cavity to another. The scheme, which has 100% success probability, relies on two perfect and identical bimodal cavities, a collection of two kinds of two-level atoms, a three-level atom in a ladder configuration driven by a classical field, Ramsey zones, and selective atomic-state detectors.

  3. Surface Impact Simulations of Helium Nanodroplets

    DTIC Science & Technology

    2015-06-30

    mechanical delocalization of the individual helium atoms in the droplet and the quantum statistical effects that accompany the interchange of identical...incorporates the effects of atomic delocalization by treating individual atoms as smeared-out probability distributions that move along classical...probability density distributions to give effective interatomic potential energy curves that have zero-point averaging effects built into them [25

  4. Equilibrium Fluid Interface Behavior Under Low- and Zero-Gravity Conditions. 2

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert

    1996-01-01

    The mathematical basis for the forthcoming Angular Liquid Bridge investigation on board Mir is described. Our mathematical work is based on the classical Young-Laplace-Gauss formulation for an equilibrium free surface of liquid partly filling a container or otherwise in contact with solid support surfaces. The anticipated liquid behavior used in the apparatus design is also illustrated.

  5. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The said model conforms to an ellipsoid restriction, the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by a different matrix decomposition method is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (angles between the geomagnetic field and gravitational field are fixed) to estimate R. The Tikhonov method is adopted to solve the problem that rounding errors or other errors may seriously affect the calculation results of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, the simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by a constant intersection angle method, and the signal-to-noise ratio is 50 dB. The actual experiment exhibits that the heading error is further corrected from ±0.8° calibrated by the classical ellipsoid fitting to ±0.3° calibrated by a constant intersection angle method. PMID:24831110

  6. Quantum chaos for nonstandard symmetry classes in the Feingold-Peres model of coupled tops

    NASA Astrophysics Data System (ADS)

    Fan, Yiyun; Gnutzmann, Sven; Liang, Yuqi

    2017-12-01

    We consider two coupled quantum tops with angular momentum vectors L and M. The coupling Hamiltonian defines the Feingold-Peres model, which is a known paradigm of quantum chaos. We show that this model has a nonstandard symmetry with respect to the Altland-Zirnbauer tenfold symmetry classification of quantum systems, which extends the well-known threefold way of Wigner and Dyson (referred to as "standard" symmetry classes here). We identify the nonstandard symmetry classes BDI0 (chiral orthogonal class with no zero modes), BDI1 (chiral orthogonal class with one zero mode), and CI (antichiral orthogonal class) as well as the standard symmetry class AI (orthogonal class). We numerically analyze the specific spectral quantum signatures of chaos related to the nonstandard symmetries. In the microscopic density of states and in the distribution of the lowest positive energy eigenvalue, we show that the Feingold-Peres model follows the predictions of the Gaussian ensembles of random-matrix theory in the appropriate symmetry class if the corresponding classical dynamics is chaotic. In a crossover to mixed and near-integrable classical dynamics, we show that these signatures disappear or strongly change.

  7. Quantum chaos for nonstandard symmetry classes in the Feingold-Peres model of coupled tops.

    PubMed

    Fan, Yiyun; Gnutzmann, Sven; Liang, Yuqi

    2017-12-01

    We consider two coupled quantum tops with angular momentum vectors L and M. The coupling Hamiltonian defines the Feingold-Peres model, which is a known paradigm of quantum chaos. We show that this model has a nonstandard symmetry with respect to the Altland-Zirnbauer tenfold symmetry classification of quantum systems, which extends the well-known threefold way of Wigner and Dyson (referred to as "standard" symmetry classes here). We identify the nonstandard symmetry classes BDI_{0} (chiral orthogonal class with no zero modes), BDI_{1} (chiral orthogonal class with one zero mode), and CI (antichiral orthogonal class) as well as the standard symmetry class AI (orthogonal class). We numerically analyze the specific spectral quantum signatures of chaos related to the nonstandard symmetries. In the microscopic density of states and in the distribution of the lowest positive energy eigenvalue, we show that the Feingold-Peres model follows the predictions of the Gaussian ensembles of random-matrix theory in the appropriate symmetry class if the corresponding classical dynamics is chaotic. In a crossover to mixed and near-integrable classical dynamics, we show that these signatures disappear or strongly change.
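
    As a generic illustration of the random-matrix predictions invoked in both records above (not a simulation of the Feingold-Peres Hamiltonian itself), the following sketch samples Gaussian orthogonal ensemble matrices and compares their nearest-neighbour level-spacing histogram with the Wigner surmise; the matrix size, sample count, crude bulk unfolding, and bin grid are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(1)
      N, n_samples = 200, 200
      spacings = []

      for _ in range(n_samples):
          # Draw a GOE matrix: symmetrized real Gaussian matrix.
          G = rng.normal(size=(N, N))
          H = (G + G.T) / 2.0
          E = np.sort(np.linalg.eigvalsh(H))
          # Keep the central half of the spectrum and normalize by the local mean
          # spacing (a crude unfolding).
          bulk = E[N // 4 : 3 * N // 4]
          s = np.diff(bulk)
          spacings.extend(s / s.mean())

      spacings = np.asarray(spacings)

      # Wigner surmise for the GOE: P(s) = (pi/2) s exp(-pi s^2 / 4)
      s_grid = np.linspace(0, 3, 31)
      hist, edges = np.histogram(spacings, bins=s_grid, density=True)
      centers = 0.5 * (edges[1:] + edges[:-1])
      wigner = 0.5 * np.pi * centers * np.exp(-np.pi * centers**2 / 4)

      for c, h, w in zip(centers[::5], hist[::5], wigner[::5]):
          print(f"s = {c:.2f}   empirical P(s) = {h:.2f}   Wigner surmise = {w:.2f}")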

  8. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and the Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
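
    A Monte Carlo cross-check of an average-SER figure can be sketched as follows. Note the hedges: log-normal turbulence and a Gaussian-beam pointing loss with Rayleigh-distributed radial jitter stand in for the M-distributed model analyzed in the paper, and the SNR, scintillation, beam, and jitter parameters are arbitrary illustrative values.

      import numpy as np

      rng = np.random.default_rng(2)
      M = 16                       # 16-QAM
      n_sym = 100_000
      snr_db = 25.0                # nominal SNR before fading (assumed)

      # Square-QAM constellation with unit average energy.
      m_side = int(np.sqrt(M))
      pam = 2 * np.arange(m_side) - (m_side - 1)            # [-3, -1, 1, 3]
      const = (pam[:, None] + 1j * pam[None, :]).ravel()
      const /= np.sqrt(np.mean(np.abs(const) ** 2))

      # Channel gain h = turbulence * pointing loss (simplified stand-ins).
      sigma_x = 0.2
      h_turb = np.exp(rng.normal(-sigma_x**2 / 2, sigma_x, n_sym))   # log-normal, mean ~1
      w_beam, sigma_point = 2.5, 0.3                                 # beam width / jitter
      r = np.hypot(rng.normal(0, sigma_point, n_sym),
                   rng.normal(0, sigma_point, n_sym))                # Rayleigh radial offset
      h_point = np.exp(-2 * r**2 / w_beam**2)
      h = h_turb * h_point

      # Transmit, fade, add AWGN, and detect with perfect channel knowledge.
      tx_idx = rng.integers(0, M, n_sym)
      tx = const[tx_idx]
      noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
      rx = h * tx + noise_std * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
      det_idx = np.argmin(np.abs(rx[:, None] - h[:, None] * const[None, :]), axis=1)

      ser = np.mean(det_idx != tx_idx)
      print(f"Monte Carlo average SER at a nominal {snr_db} dB: {ser:.3e}")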

  9. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
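
    A short simulation makes the excess-zeros point concrete: counts drawn from a zero-inflated Poisson model contain far more zeros than a plain Poisson model with the same mean predicts. The mixing probability and Poisson mean below are arbitrary illustrative values, not estimates from the sheep data.

      import numpy as np

      rng = np.random.default_rng(3)
      n, lam, pi_zero = 5000, 1.8, 0.35   # sample size, Poisson mean, extra-zero probability

      # Zero-inflated Poisson draws: with probability pi_zero the count is a structural
      # zero, otherwise it comes from Poisson(lam).
      structural_zero = rng.random(n) < pi_zero
      counts = np.where(structural_zero, 0, rng.poisson(lam, n))

      observed_zero_frac = np.mean(counts == 0)
      poisson_zero_frac = np.exp(-counts.mean())   # P(0) implied by a plain Poisson fit
      zip_zero_frac = pi_zero + (1 - pi_zero) * np.exp(-lam)

      print(f"observed P(0) = {observed_zero_frac:.3f}")
      print(f"plain Poisson with matched mean predicts P(0) = {poisson_zero_frac:.3f}")
      print(f"ZIP model predicts P(0) = {zip_zero_frac:.3f}")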

  10. Zero-Energy Optical Logic: Can It Be Practical?

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    The thermodynamic “permission” to build a device that can evaluate a sequence of logic operations that operate at zero energy has existed for about 40 years. That is, physics allows it in principle. Conceptual solutions have been explored ever since then. A great number of important concepts were developed in so doing. Over the last four years, my colleagues and I have explored the possibility of a constructive proof. And we finally succeeded. Somewhat unexpectedly, we found such a proof and found that lossless logic systems could actually be built. And, as we had anticipated, it can only be implemented by optics. That raises a new question: Might an optical zero-energy logic system actually be good enough to displace electronic versions in some cases? In this paper, I do not even try to answer that question, but I do lay out some problems now blocking practical applications and show some promising approaches to solving them. The problems addressed are speed, size, and error rate. The anticipated speed problem simply vanishes, as it was an inference from the implicit assumption that the logic would be electronic. But the other two problems are real and must be addressed if energy-free logic is to have any significant applications. Initial steps in solving the size and error rate are addressed in more detail.

  11. Empirically Calibrated Asteroseismic Masses and Radii for Red Giants in the Kepler Fields

    NASA Astrophysics Data System (ADS)

    Pinsonneault, Marc; Elsworth, Yvonne; Silva Aguirre, Victor; Chaplin, William J.; Garcia, Rafael A.; Hekker, Saskia; Holtzman, Jon; Huber, Daniel; Johnson, Jennifer; Kallinger, Thomas; Mosser, Benoit; Mathur, Savita; Serenelli, Aldo; Shetrone, Matthew; Stello, Dennis; Tayar, Jamie; Zinn, Joel; APOGEE Team, KASC Team, APOKASC Team

    2018-01-01

    We report on the joint asteroseismic and spectroscopic properties of a sample of 6048 evolved stars in the fields originally observed by the Kepler satellite. We use APOGEE spectroscopic data taken from Data Release 13 of the Sloan Digital Sky Survey, combined with asteroseismic data analyzed by members of the Kepler Asteroseismic Science Consortium. With high statistical significance, the different pipelines do not have relative zero points that are the same as the solar values, and red clump stars do not have the same empirical relative zero points as red giants. We employ theoretically motivated corrections to the scaling relation for the large frequency spacing, and adjust the zero point of the frequency of maximum power scaling relation to be consistent with masses and radii for members of star clusters. The scatter in calibrator masses is consistent with our error estimation. Systematic and random mass errors are explicitly separated and identified. The measurement scatter, and random uncertainties, are three times larger for red giants where one or more technique failed to return a value than for targets where all five methods could do so, and this is a substantial fraction of the sample (20% of red giants and 25% of red clump stars). Overall trends and future prospects are discussed.
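
    For reference, the uncorrected asteroseismic scaling relations that such analyses start from can be written down in a few lines; the solar reference values, the correction factor f_dnu, and the example red-giant inputs below are nominal assumptions, whereas the published work applies pipeline-specific zero points and cluster-based calibrations.

      # Stellar mass and radius from the standard asteroseismic scaling relations.
      NU_MAX_SUN = 3090.0    # muHz, nominal solar reference
      DNU_SUN = 135.1        # muHz, nominal solar reference
      TEFF_SUN = 5777.0      # K

      def scaling_mass_radius(nu_max, dnu, teff, f_dnu=1.0):
          """Return (M/Msun, R/Rsun) from the nu_max and Delta_nu scaling relations."""
          dnu_corr = f_dnu * dnu
          mass = (nu_max / NU_MAX_SUN) ** 3 * (dnu_corr / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
          radius = (nu_max / NU_MAX_SUN) * (dnu_corr / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
          return mass, radius

      # Example: a typical first-ascent red giant (illustrative inputs only).
      m, r = scaling_mass_radius(nu_max=30.0, dnu=3.9, teff=4800.0, f_dnu=0.98)
      print(f"M = {m:.2f} Msun, R = {r:.1f} Rsun")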

  12. The calibration of specular gloss meters and gloss plates

    NASA Astrophysics Data System (ADS)

    Li, Tiecheng; Lai, Lei; Yin, Dejin; Ji, Muyao; Lin, Fangsheng; Shi, Leibing; Xia, Ming; Fu, Yi

    2017-10-01

    Specular gloss is the perception by an observer of the mirror-like appearance of a surface. Specular gloss is usually measured by a gloss meter, which can be calibrated by a group of gloss plates according to JJG 696-2015. The characteristics of a gloss meter include stability, zero error, and error of indication. The characteristics of a gloss plate include the roughness and spectral transmissivity of a high-gloss plate and the spectral reflectivity of a ceramic gloss plate. The experimental results indicate that the calibration of both gloss meters and gloss plates should be performed carefully according to the latest verification regulation in order to reduce measurement error.

  13. Entangled trajectories Hamiltonian dynamics for treating quantum nuclear effects

    NASA Astrophysics Data System (ADS)

    Smith, Brendan; Akimov, Alexey V.

    2018-04-01

    A simple and robust methodology, dubbed Entangled Trajectories Hamiltonian Dynamics (ETHD), is developed to capture quantum nuclear effects such as tunneling and zero-point energy through the coupling of multiple classical trajectories. The approach reformulates the classically mapped second-order Quantized Hamiltonian Dynamics (QHD-2) in terms of coupled classical trajectories. The method partially enforces the uncertainty principle and facilitates tunneling. The applicability of the method is demonstrated by studying the dynamics in symmetric double well and cubic metastable state potentials. The methodology is validated using exact quantum simulations and is compared to QHD-2. We illustrate its relationship to the rigorous Bohmian quantum potential approach, from which ETHD can be derived. Our simulations show a remarkable agreement of the ETHD calculation with the quantum results, suggesting that ETHD may be a simple and inexpensive way of including quantum nuclear effects in molecular dynamics simulations.

  14. The energy separation between the classical and nonclassical isomers of protonated acetylene - An extensive study in one- and n-particle space saturation

    NASA Technical Reports Server (NTRS)

    Lindh, Roland; Rice, Julia E.; Lee, Timothy J.

    1991-01-01

    The energy separation between the classical and nonclassical forms of protonated acetylene has been reinvestigated in light of the recent experimentally deduced lower bound to this value of 6.0 kcal/mol. The objective of the present study is to use state-of-the-art ab initio quantum mechanical methods to establish this energy difference to within chemical accuracy (i.e., about 1 kcal/mol). The one-particle basis sets include up to g-type functions and the electron correlation methods include single and double excitation coupled-cluster (CCSD), the CCSD(T) extension, multireference configuration interaction, and the averaged coupled-pair functional methods. A correction for zero-point vibrational energies has also been included, yielding a best estimate for the energy difference between the classical and nonclassical forms of 3.7 ± 1.3 kcal/mol.

  15. Interpolating moving least-squares methods for fitting potential energy surfaces: using classical trajectories to explore configuration space.

    PubMed

    Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L

    2009-04-14

    We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis-->trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
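
    A one-dimensional toy version of a moving least-squares fit with a degree-difference error estimate conveys the mechanism: each query point gets its own weighted polynomial fit, and the disagreement between a low- and a high-degree fit flags where new data are needed. The Gaussian weight function, its length scale, the polynomial degrees, and the toy potential are illustrative choices rather than the IMLS settings used in the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      def imls_fit(x_query, x_data, y_data, degree, length=0.5):
          """Weighted polynomial fit centred on x_query (a 1D moving least-squares step)."""
          w = np.exp(-((x_data - x_query) / length) ** 2)     # illustrative Gaussian weights
          V = np.vander(x_data - x_query, degree + 1)         # local polynomial basis
          sw = np.sqrt(w)
          coef, *_ = np.linalg.lstsq(V * sw[:, None], y_data * sw, rcond=None)
          return coef[-1]                                     # value of the local fit at x_query

      # Toy "potential" sampled at scattered points.
      x_data = np.sort(rng.uniform(-2, 2, 40))
      y_data = np.cos(2 * x_data) + 0.1 * x_data**2

      for xq in (-1.5, 0.0, 1.2):
          lo = imls_fit(xq, x_data, y_data, degree=2)
          hi = imls_fit(xq, x_data, y_data, degree=4)
          err_est = abs(hi - lo)   # degree-difference estimate of the fitting error,
                                   # used to decide where new ab initio points are needed
          true = np.cos(2 * xq) + 0.1 * xq**2
          print(f"x = {xq:+.2f}: fit = {hi:.4f}, true = {true:.4f}, error estimate = {err_est:.1e}")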

  16. [The endpoint detection of cough signal in continuous speech].

    PubMed

    Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang

    2010-06-01

    The endpoint detection of cough signals in continuous speech was studied in order to improve the efficiency and accuracy of manual recognition or computer-based automatic recognition. First, the short-time zero-crossing rate (ZCR) is used to identify suspicious coughs, and the threshold of the short-time energy is derived from the acoustic characteristics of coughs. The short-time energy is then combined with the short-time ZCR to implement endpoint detection of coughs in continuous speech. To evaluate the method, the reference number of coughs in each recording was first identified by two experienced doctors using a graphical user interface (GUI). The recordings were then analyzed by the automatic endpoint-detection program under Matlab 7.0. Comparison of the two results showed that the rate of undetected coughs was 2.18%, and 98.13% of noise, silence, and speech was removed. The method of setting the short-time energy threshold is robust. The endpoint-detection program removes most speech and noise while maintaining a low error rate.
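
    A compact numpy sketch of the underlying framing, short-time energy, and zero-crossing-rate computation is given below; the frame length, hop, thresholds, and synthetic test signal are illustrative assumptions and not the calibrated settings of the study.

      import numpy as np

      def frame_signal(x, frame_len, hop):
          n_frames = 1 + (len(x) - frame_len) // hop
          idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
          return x[idx]

      def short_time_features(x, frame_len=400, hop=200):
          """Short-time energy and zero-crossing rate per frame (25 ms / 12.5 ms at 16 kHz)."""
          frames = frame_signal(x, frame_len, hop)
          energy = np.sum(frames ** 2, axis=1)
          zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
          return energy, zcr

      # Synthetic test signal: a low-frequency "voiced" background with a broadband
      # burst standing in for a cough.
      fs = 16000
      rng = np.random.default_rng(5)
      t = np.arange(2 * fs) / fs
      sig = 0.05 * np.sin(2 * np.pi * 200 * t)
      sig[fs // 2 : fs // 2 + 4000] += 0.5 * rng.normal(size=4000)

      energy, zcr = short_time_features(sig)
      energy_thr = 4.0 * np.median(energy)   # illustrative thresholds, not the
      zcr_thr = 0.1                          # calibrated values from the study
      candidate = (zcr >= zcr_thr) & (energy >= energy_thr)

      onsets = np.flatnonzero(np.diff(candidate.astype(int)) == 1) + 1
      offsets = np.flatnonzero(np.diff(candidate.astype(int)) == -1) + 1
      print("candidate segment frames:", list(zip(onsets, offsets)))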

  17. Event-triggered fault detection for a class of discrete-time linear systems using interval observers.

    PubMed

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-05-01

    This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the fault sensitivity are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of the slack matrix variables. This technique allows considering a more general Lyapunov function. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
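
    The residual-interval decision rule can be illustrated with a deliberately simplified scalar example: a Luenberger-type observer with a hand-picked gain, a recursively propagated error bound, and an alarm whenever the zero value leaves the residual interval. The event-triggering mechanism, the LMI-based interval-observer design, and the l1/H∞ shaping of the paper are not reproduced, and all numerical values are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      a, L = 0.9, 0.5            # plant pole and (assumed) observer gain, with 0 <= a - L < 1
      w_bar, v_bar = 0.05, 0.02  # known bounds on process and measurement disturbances
      n_steps, fault_at = 200, 120

      x, x_hat, e_bar = 0.0, 0.0, 1.0     # e_bar over-bounds the initial estimation error
      alarms = []
      for k in range(n_steps):
          w = rng.uniform(-w_bar, w_bar)
          v = rng.uniform(-v_bar, v_bar)
          fault = 0.3 if k >= fault_at else 0.0        # additive sensor fault
          y = x + v + fault

          r = y - x_hat                                # residual
          half_width = e_bar + v_bar                   # fault-free residual interval half-width
          if abs(r) > half_width:                      # zero no longer inside the residual interval
              alarms.append(k)

          # Plant and observer updates, plus the error-bound recursion |e_k| <= e_bar.
          x_hat = a * x_hat + L * r
          x = a * x + w
          e_bar = (a - L) * e_bar + w_bar + abs(L) * v_bar

      print("first alarm at step:", alarms[0] if alarms else None,
            "(fault injected at step", fault_at, ")")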

  18. Utilizing sensory prediction errors for movement intention decoding: A new methodology

    PubMed Central

    Nakamura, Keigo; Ando, Hideyuki

    2018-01-01

    We propose a new methodology for decoding movement intentions of humans. This methodology is motivated by the well-documented ability of the brain to predict sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user’s intended movement, and decode a user’s movement intention from his electroencephalography (EEG), by decoding for prediction errors—whether the sensory prediction corresponding to a user’s intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right direction, using a galvanic vestibular stimulator and show that the decoding for prediction errors from the EEG can radically improve movement intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users because the stimulation was subliminal. PMID:29750195

  19. Isobaric Reconstruction of the Baryonic Acoustic Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li

    2017-06-01

    In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune to the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.

  20. Design and demonstration of ultra-fast W-band photonic transmitter-mixer and detectors for 25 Gbits/sec error-free wireless linking.

    PubMed

    Chen, Nan-Wei; Shi, Jin-Wei; Tsai, Hsuan-Ju; Wun, Jhih-Min; Kuo, Fong-Ming; Hesler, Jeffery; Crowe, Thomas W; Bowers, John E

    2012-09-10

    A 25 Gbits/s error-free on-off-keying (OOK) wireless link between an ultra-high-speed W-band photonic transmitter-mixer (PTM) and a fast W-band envelope detector is demonstrated. At the transmission end, the high-speed PTM is developed with an active near-ballistic uni-traveling-carrier photodiode (NBUTC-PD) integrated with broadband front-end circuitry via the flip-chip bonding technique. Compared to our previous work, the wireless data rate is significantly increased through improvements in the bandwidth of the front-end circuitry together with a reduction of the intermediate-frequency (IF) driving voltage of the active NBUTC-PD. The demonstrated PTM has a record-wide IF modulation bandwidth (DC-25 GHz) and optical-to-electrical fractional bandwidth (68-128 GHz, ~67%). At the receiver end, demodulation is realized with an ultra-fast W-band envelope detector built with a zero-bias Schottky barrier diode with a record-wide video bandwidth (37 GHz) and excellent sensitivity. The demonstrated PTM is expected to find applications in multi-gigabit short-range wireless communication.
