Activation of zero-error classical capacity in low-dimensional quantum systems
NASA Astrophysics Data System (ADS)
Park, Jeonghoon; Heo, Jun
2018-06-01
Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.
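As a concrete handle on the quantity involved: the one-shot zero-error classical capacity of a channel is log2 of the independence number of its confusability graph, where two inputs are joined whenever their output distributions can overlap. Below is a brute-force Python sketch for toy channels; the 4-input channel is an illustration only, not an example from the paper.

```python
from itertools import combinations

def independence_number(n, confusable):
    """Largest set of inputs, no two of which are confusable.
    `confusable` holds frozensets {i, j} of input pairs whose outputs
    can overlap; log2 of the result is the one-shot zero-error
    classical capacity in bits."""
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) not in confusable
                   for p in combinations(subset, 2)):
                return size
    return 0

# Toy 4-input channel: inputs 0/1 and 2/3 are mutually confusable.
conf = {frozenset(p) for p in [(0, 1), (2, 3)]}
print(independence_number(4, conf))  # -> 2, i.e. one zero-error bit
```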
Role of memory errors in quantum repeaters
NASA Astrophysics Data System (ADS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.
2007-03-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing one to communicate over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timofeev, A. V.; Pomozov, D. I.; Makkaveev, A. P.
2007-05-15
Quantum cryptography systems combine two communication channels: a quantum and a classical one. (They can be physically implemented in the same fiber-optic link, which is employed as a quantum channel when one-photon states are transmitted and as a classical one when it carries classical data traffic.) Both channels are supposed to be insecure and accessible to an eavesdropper. Error correction in raw keys, interferometer balancing, and other procedures are performed by using the public classical channel. A discussion of the requirements to be met by the classical channel is presented.
Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels
NASA Astrophysics Data System (ADS)
Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis
2013-01-01
We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.
Zero Thermal Noise in Resistors at Zero Temperature
NASA Astrophysics Data System (ADS)
Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran
2016-06-01
The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory, which asserts a temperature-independent quantum zero-point (ZP) contribution to Johnson noise that dominates in the quantum regime, is controversial, and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist's classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes as the temperature approaches zero, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen's uncertainty relation argument.
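The two formulas being contrasted are easy to compare numerically. A minimal Python sketch follows, using standard physical constants and an arbitrary 50-ohm resistor; the zero-point term is the contribution the authors argue cannot be real noise.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant (J s)
k = 1.380649e-23     # Boltzmann constant (J/K)

def nyquist_psd(f, T, R):
    """Classical Johnson-Nyquist voltage PSD, S_V = 4kTR (V^2/Hz)."""
    return 4 * k * T * R * np.ones_like(f)

def callen_welton_psd(f, T, R):
    """Quantum FDT form: Planck term plus the temperature-independent
    zero-point term 4R(hf/2) whose physical reality is disputed here."""
    planck = np.zeros_like(f) if T == 0 else h * f / np.expm1(h * f / (k * T))
    return 4 * R * (planck + h * f / 2)

f = np.logspace(9, 13, 5)               # 1 GHz to 10 THz
print(nyquist_psd(f, 0.0, 50.0))        # -> zeros: classical noise dies at T = 0
print(callen_welton_psd(f, 0.0, 50.0))  # -> nonzero: the disputed ZP term
```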
Characterizing the performance of XOR games and the Shannon capacity of graphs.
Ramanathan, Ravishankar; Kay, Alastair; Murta, Gláucia; Horodecki, Paweł
2014-12-12
In this Letter we give a set of necessary and sufficient conditions such that quantum players of a two-party XOR game cannot perform any better than classical players. With any such game, we associate a graph and examine its zero-error communication capacity. This allows us to specify a broad new class of graphs for which the Shannon capacity can be calculated. The conditions also enable the parametrization of new families of games that have no quantum advantage for arbitrary input probability distributions, up to certain symmetries. In the future, these might be used in information-theoretic studies on reproducing the set of quantum nonlocal correlations.
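The textbook example behind such graph-capacity calculations is the pentagon C5: its independence number is 2, yet the strong product C5 ⊠ C5 contains an independent set of size 5, so the Shannon capacity of C5 is at least sqrt(5) (and exactly sqrt(5), by Lovász). The Python check below verifies that classic construction; it is included as background, not taken from the Letter.

```python
from itertools import combinations

def c5_adjacent(a, b):
    """Adjacency in the 5-cycle C5."""
    return (a - b) % 5 in (1, 4)

def confusable(u, v):
    """Strong-product adjacency: distinct vertices whose coordinates
    are each equal or adjacent in C5."""
    (a1, a2), (b1, b2) = u, v
    if u == v:
        return False
    return ((a1 == b1 or c5_adjacent(a1, b1)) and
            (a2 == b2 or c5_adjacent(a2, b2)))

# Lovász's independent set of size 5 in C5 x C5: {(i, 2i mod 5)}.
S = [(i, (2 * i) % 5) for i in range(5)]
assert not any(confusable(u, v) for u, v in combinations(S, 2))
print("alpha(strong product) >= 5, so capacity(C5) >= sqrt(5)")
```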
Simultaneous classical communication and quantum key distribution using continuous variables*
NASA Astrophysics Data System (ADS)
Qi, Bing
2016-10-01
Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^{-9} and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
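A toy numerical model of the idea, reduced to a single quadrature: a large plus-or-minus displacement carries the classical bit, and a small Gaussian modulation rides on top of it for key distribution. All amplitudes and noise levels below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
A = 20.0          # large classical displacement (shot-noise units, assumed)
sigma_q = 2.0     # std dev of the Gaussian QKD modulation (assumed)

bits = rng.integers(0, 2, n)                  # classical data
q_key = rng.normal(0.0, sigma_q, n)           # Gaussian key modulation
x_sent = (2 * bits - 1) * A + q_key           # both on one quadrature
x_recv = x_sent + rng.normal(0.0, 1.0, n)     # shot (vacuum) noise, var = 1

bits_hat = (x_recv > 0).astype(int)           # classical decision by sign
print("BER:", np.mean(bits_hat != bits))      # essentially zero for A >> sigma
key_obs = x_recv - (2 * bits_hat - 1) * A     # residual carries the key data
print("key correlation:", np.corrcoef(q_key, key_obs)[0, 1])  # high; shot noise remains
```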
Errata report on Herbert Goldstein's Classical Mechanics: Second edition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.; Hoffman, F.M.
This report describes errors in Herbert Goldstein's textbook Classical Mechanics, Second Edition (Copyright 1980, ISBN 0-201-02918-9). Some of the errors in current printings of the text were corrected in the second printing; however, after communicating with Addison Wesley, the publisher for Classical Mechanics, it was discovered that the corrected galley proofs had been lost by the printer and that no one had complained of any errors in the eleven years since the second printing. The errata sheet corrects errors from all printings of the second edition.
Simultaneous classical communication and quantum key distribution using continuous variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing
Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^{-9} and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
Simultaneous classical communication and quantum key distribution using continuous variables
Qi, Bing
2016-10-26
Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^{-9} and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
Two-Way Communication with a Single Quantum Particle.
Del Santo, Flavio; Dakić, Borivoje
2018-02-09
In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and show that, for all classical strategies, the probability of success asymptotically decreases to zero as the number of parties grows. In contrast, a quantum strategy allows players to win the game with certainty.
Two-Way Communication with a Single Quantum Particle
NASA Astrophysics Data System (ADS)
Del Santo, Flavio; Dakić, Borivoje
2018-02-01
In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and show that, for all classical strategies, the probability of success asymptotically decreases to zero as the number of parties grows. In contrast, a quantum strategy allows players to win the game with certainty.
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shifting keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems to extend system functionality in the future.
Reconstruction of finite-valued sparse signals
NASA Astrophysics Data System (ADS)
Keiper, Sandra; Kutyniok, Gitta; Lee, Dae Gwan; Pfander, Götz
2017-08-01
The need of reconstructing discrete-valued sparse signals from few measurements, that is, solving an underdetermined system of linear equations, appears frequently in science and engineering. Such signals appear, for example, in error-correcting codes as well as in massive Multiple-Input Multiple-Output (MIMO) channels and wideband spectrum sensing. A particular example is given by wireless communications, where the transmitted signals are sequences of bits, i.e., with entries in {0, 1}. Whereas classical compressed sensing algorithms do not incorporate the additional knowledge of the discrete nature of the signal, classical lattice decoding approaches do not utilize sparsity constraints. In this talk, we present an approach that incorporates a discrete-valued prior into basis pursuit. In particular, we address finite-valued sparse signals, i.e., sparse signals with entries in a finite alphabet. We will introduce an equivalent null space characterization and show that the phase transition takes place earlier than when using the classical basis pursuit approach. We will further discuss the robustness of the algorithm and show that the nonnegative case is very different from the bipolar one. One of our findings is that the positioning of the zero in the alphabet, i.e., whether it is a boundary element or not, is crucial.
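A minimal sketch of the core idea for the {0, 1} case: because the entries are known to lie in [0, 1] and are nonnegative, the l1 objective reduces to a sum and basis pursuit becomes a plain linear program with box constraints. The problem sizes below are arbitrary; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0   # sparse {0,1} signal
y = A @ x_true

# Box-constrained basis pursuit: min sum(x)  s.t.  Ax = y,  0 <= x <= 1.
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=[(0, 1)] * n,
              method="highs")
x_hat = np.round(res.x)     # snap the LP solution to the alphabet {0, 1}
# Often True at these sizes; the box constraint moves the phase transition.
print("exact recovery:", np.array_equal(x_hat, x_true))
```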
Secure Communication via a Recycling of Attenuated Classical Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, IV, Amos M.
We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, can detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus making continuous secure two-way communication via one-time pads practical.
Secure Communication via a Recycling of Attenuated Classical Signals
Smith, IV, Amos M.
2017-01-12
We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, can detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus making continuous secure two-way communication via one-time pads practical.
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
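The burst-spreading role of the interleaver can be seen in a toy Python sketch. A simple block interleaver with assumed sizes is shown; the FEC codes studied in the paper are not reproduced here.

```python
import numpy as np

def interleave(bits, depth):
    """Block interleaver: write codeword bits row-wise, read column-wise,
    so a burst of channel errors is scattered across many codewords."""
    return bits.reshape(depth, -1).T.reshape(-1)

def deinterleave(bits, depth):
    """Exact inverse of interleave()."""
    return bits.reshape(-1, depth).T.reshape(-1)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 64)            # 8 FEC codewords of 8 bits each
tx = interleave(data, depth=8)
tx[20:28] ^= 1                           # 8-bit burst error on the channel
rx = deinterleave(tx, depth=8)
errs = (rx != data).reshape(8, 8).sum(axis=1)
print(errs)   # burst is spread out: exactly 1 error per codeword
```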
Experimental multiplexing of quantum key distribution with classical optical communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Liu-Jun; Chen, Luo-Kan; Ju, Lei
2015-02-23
We demonstrate the realization of quantum key distribution (QKD) combined with classical optical communication and synchronous signals within a single optical fiber. In the experiment, the classical communication sources use Fabry-Pérot (FP) lasers, which are implemented extensively in optical access networks. To perform QKD, multistage band-stop filtering techniques are developed, and a wavelength-division multiplexing scheme is designed for the multi-longitudinal-mode FP lasers. We have managed to maintain sufficient isolation among the quantum channel, the synchronous channel and the classical channels to guarantee good QKD performance. Finally, the quantum bit error rate remains below a level of 2% across the entire practical application range. The proposed multiplexing scheme can ensure low classical light loss, and enables QKD over fiber lengths of up to 45 km simultaneously when the fibers are populated with bidirectional FP laser communications. Our demonstration paves the way for application of QKD to current optical access networks, where FP lasers are widely used by the end users.
NASA Astrophysics Data System (ADS)
Montina, Alberto; Wolf, Stefan
2014-07-01
We consider the process consisting of preparation, transmission through a quantum channel, and subsequent measurement of quantum states. The communication complexity of the channel is the minimal amount of classical communication required for classically simulating it. Recently, we reduced the computation of this quantity to a convex minimization problem with linear constraints. Every solution of the constraints provides an upper bound on the communication complexity. In this paper, we derive the dual maximization problem of the original one. The feasible points of the dual constraints, which are inequalities, give lower bounds on the communication complexity, as illustrated with an example. The optimal values of the two problems turn out to be equal (zero duality gap). By this property, we provide necessary and sufficient conditions for optimality in terms of a set of equalities and inequalities. We use these conditions and two reasonable but unproven hypotheses to derive the lower bound n × 2^{n-1} for a noiseless quantum channel with capacity equal to n qubits. This lower bound can have interesting consequences in the context of the recent debate on the reality of the quantum state.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
High Throughput via Cross-Layer Interference Alignment for Mobile Ad Hoc Networks
2013-08-26
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Some conservative estimates in quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2006-08-15
Relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
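The quoted ~11% figure can be reproduced by solving the transcendental equation numerically, reading H as the binary entropy and taking C̄(ρ)/2 = 1/2 bit for the standard BB84 ensemble (an assumption made here for illustration):

```python
from math import log2
from scipy.optimize import brentq

def H(q):
    """Binary entropy in bits."""
    return -q * log2(q) - (1 - q) * log2(1 - q)

# Threshold condition H(Q_c) = 1/2, assuming rho = I/2 so C(rho) = 1 bit.
Qc = brentq(lambda q: H(q) - 0.5, 1e-9, 0.5)
print(f"Q_c = {Qc:.4f}")   # ~ 0.1100, i.e. the ~11% bound
```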
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Scarani, Valerio; Acín, Antonio; Ribordy, Grégoire; Gisin, Nicolas
2004-02-06
We introduce a new class of quantum key distribution protocols, tailored to be robust against photon number splitting (PNS) attacks. We study one of these protocols, which differs from the original protocol by Bennett and Brassard (BB84) only in the classical sifting procedure. This protocol is provably better than BB84 against PNS attacks at zero error.
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by one recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are achieved for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement with boresight included or not, despite the values of P and Q. The performance enhancement owing to the increase of cooperative path (P) is more evident with nonzero boresight than that with zero boresight (jitter only), whereas the performance deterioration because of the increasing hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of ABER and outage probability expressions.
Characterizing quantum channels with non-separable states of classical light
NASA Astrophysics Data System (ADS)
Ndagano, Bienvenu; Perez-Garcia, Benjamin; Roux, Filippus S.; McLaren, Melanie; Rosales-Guzman, Carmelo; Zhang, Yingwen; Mouane, Othmane; Hernandez-Aranda, Raul I.; Konrad, Thomas; Forbes, Andrew
2017-04-01
High-dimensional entanglement with spatial modes of light promises increased security and information capacity over quantum channels. Unfortunately, entanglement decays due to perturbations, corrupting quantum links that cannot be repaired without performing quantum tomography on the channel. Paradoxically, the channel tomography itself is not possible without a working link. Here we overcome this problem with a robust approach to characterize quantum channels by means of classical light. Using free-space communication in a turbulent atmosphere as an example, we show that the state evolution of classically entangled degrees of freedom is equivalent to that of quantum entangled photons, thus providing new physical insights into the notion of classical entanglement. The analysis of quantum channels by means of classical light in real time unravels stochastic dynamics in terms of pure state trajectories, and thus enables precise quantum error correction in short- and long-haul optical communication, in both free space and fibre.
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
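The extrapolation step described above can be sketched generically: measure the observable at several artificially boosted error rates, fit the trend, and evaluate the fit at zero noise. The exponential decay model below is a stand-in for illustration, not the authors' simulator.

```python
import numpy as np

def extrapolate_to_zero(scales, values, order=1):
    """Richardson-style zero-noise extrapolation: fit expectation values
    measured at artificially boosted error rates and evaluate the fit
    at noise scale 0."""
    coeffs = np.polyfit(scales, values, order)
    return np.polyval(coeffs, 0.0)

# Toy model: true observable 1.0, decaying with the error-boost factor c.
true_value, gamma = 1.0, 0.15
scales = np.array([1.0, 2.0, 3.0])                # boost factors c >= 1
noisy = true_value * np.exp(-gamma * scales)      # simulated noisy <O>
print(extrapolate_to_zero(scales, noisy, order=2))  # ~0.997 vs 0.861 raw
```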
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene gas mixture analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not bias and RSS. The modification of SWLS reduced the bias, which showed a lower RSS than CLS, especially for small components.
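A compact sketch of one plausible reading of the selective scheme, in which each wavenumber enters a single weighted regression with either a unit (CLS) weight or an inverse-variance (WLS) weight, chosen by the absorbance threshold. The function interface and toy data are assumptions, not the authors' code.

```python
import numpy as np

def swls_fit(K, y, noise_var, threshold):
    """Selective weighted least squares, sketched as one weighted
    regression: wavenumbers with absorbance above the threshold get
    inverse-variance (WLS) weights, the rest get unit (CLS) weights.
    This is an illustrative reading, not the authors' exact estimator."""
    w = np.where(y > threshold, 1.0 / noise_var, 1.0)
    # Weighted normal equations: c = (K^T W K)^{-1} K^T W y
    return np.linalg.solve(K.T @ (w[:, None] * K), K.T @ (w * y))

# Toy two-component mixture with heteroscedastic noise (all assumed).
rng = np.random.default_rng(0)
K = np.abs(rng.standard_normal((200, 2)))      # pure-component spectra
c_true = np.array([0.3, 0.7])
a = K @ c_true                                  # noise-free absorbance
noise_var = 1e-4 * (1.0 + 10.0 * a)             # variance grows with signal
y = a + rng.normal(0.0, np.sqrt(noise_var))
print(swls_fit(K, y, noise_var, threshold=np.median(a)))  # ~ [0.3, 0.7]
```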
Surface code quantum communication.
Fowler, Austin G; Wang, David S; Hill, Charles D; Ladd, Thaddeus D; Van Meter, Rodney; Hollenberg, Lloyd C L
2010-05-07
Quantum communication typically involves a linear chain of repeater stations, each capable of reliable local quantum computation and connected to their nearest neighbors by unreliable communication links. The communication rate of existing protocols is low as two-way classical communication is used. By using a surface code across the repeater chain and generating Bell pairs between neighboring stations with probability of heralded success greater than 0.65 and fidelity greater than 0.96, we show that two-way communication can be avoided and quantum information can be sent over arbitrary distances with arbitrarily low error at a rate limited only by the local gate speed. This is achieved by using the unreliable Bell pairs to measure nonlocal stabilizers and feeding heralded failure information into post-transmission error correction. Our scheme also applies when the probability of heralded success is arbitrarily low.
Three-step semiquantum secure direct communication protocol
NASA Astrophysics Data System (ADS)
Zou, XiangFu; Qiu, DaoWen
2014-09-01
Quantum secure direct communication is the direct communication of secret messages without need for establishing a shared secret key first. In the existing schemes, quantum secure direct communication is possible only when both parties are quantum. In this paper, we construct a three-step semiquantum secure direct communication (SQSDC) protocol based on single-photon sources in which the sender Alice is classical. In a semiquantum protocol, a person is termed classical if he (she) can measure, prepare and send quantum states only with the fixed orthogonal quantum basis {|0>, |1>}. The security of the proposed SQSDC protocol is guaranteed by the complete robustness of semiquantum key distribution protocols and the unconditional security of classical one-time pad encryption. Therefore, the proposed SQSDC protocol is also completely robust. Complete robustness indicates that nonzero information acquired by an eavesdropper Eve on the secret message implies the nonzero probability that the legitimate participants can find errors on the bits tested by this protocol. In the proposed protocol, we suggest a method to check Eve's disturbance in the photons' returning phase such that Alice does not need to announce publicly any positions or their coded bit values after the photon transmission is completed. Moreover, the proposed SQSDC protocol can be implemented with the existing techniques. Compared with many quantum secure direct communication protocols, the proposed SQSDC protocol has two merits: first, the sender only needs classical capabilities; second, no additional classical information is needed to check Eve's disturbance after the transmission of quantum states.
Asymptotic state discrimination and a strict hierarchy in distinguishability norms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chitambar, Eric; Hsieh, Min-Hsiu
2014-11-15
In this paper, we consider the problem of discriminating quantum states by local operations and classical communication (LOCC) when an arbitrarily small amount of error is permitted. This paradigm is known as asymptotic state discrimination, and we derive necessary conditions for when two multipartite states of any size can be discriminated perfectly by asymptotic LOCC. We use this new criterion to prove a gap in the LOCC and separable distinguishability norms. We then turn to the operational advantage of using two-way classical communication over one-way communication in LOCC processing. With a simple two-qubit product state ensemble, we demonstrate a strict majorization of the two-way LOCC norm over the one-way norm.
Position-based coding and convex splitting for private communication over quantum channels
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2017-10-01
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
Automatic Locking of Laser Frequency to an Absorption Peak
NASA Technical Reports Server (NTRS)
Koch, Grady J.
2006-01-01
An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that constantly adjusts the frequency in an effort to drive the error to zero. When the laser frequency deviates from the midpeak value but remains within the locking range, the magnitude and sign of the error signal indicate the amount of detuning and the control circuitry adjusts the frequency by what it estimates to be the negative of this amount in an effort to bring the error to zero.
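A toy discrete-time version of the locking loop: the error signal is the derivative of a Lorentzian absorption line, linear near the peak as described above, and a proportional + integral + derivative update drives it to zero. The line shape, gains, and step count are all assumptions for illustration.

```python
def pid_lock(error_signal, f0, kp=0.5, ki=0.02, kd=0.05, steps=500, dt=1.0):
    """Toy P+I+D loop driving the derivative-shaped error signal to
    zero, i.e. locking the frequency to the absorption peak."""
    f, integral, prev_err = f0, 0.0, None
    for _ in range(steps):
        err = error_signal(f)
        integral += err * dt
        deriv = 0.0 if prev_err is None else (err - prev_err) / dt
        f += kp * err + ki * integral + kd * deriv   # push error toward 0
        prev_err = err
    return f

# Lorentzian absorption peak at f = 100; its frequency derivative
# (obtained in practice by dither modulation) is the error signal.
peak, width = 100.0, 5.0
absorb = lambda f: 1.0 / (1.0 + ((f - peak) / width) ** 2)
error = lambda f, h=1e-3: (absorb(f + h) - absorb(f - h)) / (2 * h)
print(pid_lock(error, f0=102.0))   # -> approximately 100.0
```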
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
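For the privacy-amplification step, Toeplitz hashing is the standard 2-universal choice in industrial systems; a Python sketch is given below as an illustration (the paper's exact construction may differ).

```python
import numpy as np

def toeplitz_hash(key_bits, out_len, seed):
    """Privacy amplification with a random Toeplitz matrix, a standard
    2-universal hash family; shown here purely as an illustration."""
    n = len(key_bits)
    rng = np.random.default_rng(seed)
    diags = rng.integers(0, 2, out_len + n - 1)   # defines the matrix
    T = np.array([[diags[i - j + n - 1] for j in range(n)]
                  for i in range(out_len)])
    return T.dot(key_bits) % 2

sifted = np.random.default_rng(7).integers(0, 2, 64)   # corrected key bits
print(toeplitz_hash(sifted, out_len=16, seed=42))      # shortened secret key
```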
Quantum fingerprinting with coherent states and a constant mean number of photons
NASA Astrophysics Data System (ADS)
Arrazola, Juan Miguel; Lütkenhaus, Norbert
2014-06-01
We present a protocol for quantum fingerprinting that is ready to be implemented with current technology and is robust to experimental errors. The basis of our scheme is an implementation of the signal states in terms of a coherent state in a superposition of time-bin modes. Experimentally, this requires only the ability to prepare coherent states of low amplitude and to interfere them in a balanced beam splitter. The states used in the protocol are arbitrarily close in trace distance to states of O(log_2 n) qubits, thus exhibiting an exponential separation in abstract communication complexity compared to the classical case. The protocol uses a number of optical modes that is proportional to the size n of the input bit strings but a total mean photon number that is constant and independent of n. Given the expended resources, our protocol achieves a task that is provably impossible using classical communication only. In fact, even in the presence of realistic experimental errors and loss, we show that there exists a large range of input sizes for which our quantum protocol transmits an amount of information that can be more than two orders of magnitude smaller than a classical fingerprinting protocol.
NASA Astrophysics Data System (ADS)
Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav
2016-09-01
In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and degrade communication performance severely. Traditional compressive-sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
Moderate Deviation Analysis for Classical Communication over Quantum Channels
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
NASA Astrophysics Data System (ADS)
Salathé, Yves; Kurpiers, Philipp; Karg, Thomas; Lang, Christian; Andersen, Christian Kraglund; Akin, Abdulkadir; Krinner, Sebastian; Eichler, Christopher; Wallraff, Andreas
2018-03-01
Quantum computing architectures rely on classical electronics for control and readout. Employing classical electronics in a feedback loop with the quantum system allows us to stabilize states, correct errors, and realize specific feedforward-based quantum computing and communication schemes such as deterministic quantum teleportation. These feedback and feedforward operations are required to be fast compared to the coherence time of the quantum system to minimize the probability of errors. We present a field-programmable-gate-array-based digital signal processing system capable of real-time quadrature demodulation, a determination of the qubit state, and a generation of state-dependent feedback trigger signals. The feedback trigger is generated with a latency of 110 ns with respect to the timing of the analog input signal. We characterize the performance of the system for an active qubit initialization protocol based on the dispersive readout of a superconducting qubit and discuss potential applications in feedback and feedforward algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akibue, Seiseki; Murao, Mio
2014-12-04
We investigate distributed implementation of two-qubit unitary operations over two primitive networks, the butterfly network and the ladder network, as a first step to apply network coding for quantum computation. By classifying two-qubit unitary operations in terms of the Kraus-Cirac number, the number of non-zero parameters describing the global part of two-qubit unitary operations, we analyze which class of two-qubit unitary operations is implementable over these networks with free classical communication. For the butterfly network, we show that two classes of two-qubit unitary operations, which contain all Clifford, controlled-unitary and matchgate operations, are implementable over the network. For the ladder network, we show that two-qubit unitary operations are implementable over the network if and only if their Kraus-Cirac number does not exceed the number of bridges of the ladder.
Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana
2016-01-01
Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handled them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725
Unbounded number of channel uses may be required to detect quantum capacity.
Cubitt, Toby; Elkouss, David; Matthews, William; Ozols, Maris; Pérez-García, David; Strelchuk, Sergii
2015-03-31
Transmitting data reliably over noisy communication channels is one of the most important applications of information theory, and is well understood for channels modelled by classical physics. However, when quantum effects are involved, we do not know how to compute channel capacities. This is because the formula for the quantum capacity involves maximizing the coherent information over an unbounded number of channel uses. In fact, entanglement across channel uses can even increase the coherent information from zero to non-zero. Here we study the number of channel uses necessary to detect positive coherent information. In all previously known examples, two channel uses already sufficed. It might be that only a finite number of channel uses is always sufficient. We show that this is not the case: for any number of uses, there are channels for which the coherent information is zero, but which nonetheless have capacity.
NASA Astrophysics Data System (ADS)
Prasitmeeboon, Pitcha
Repetitive control (RC) is a control method that specifically aims to converge to zero tracking error of a control systems that execute a periodic command or have periodic disturbances of known period. It uses the error of one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degree model phase error at all frequencies up to Nyquist. A zero-phase cutoff filter is normally used to robustify to high frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods to use data to make real time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and use of a real time projection algorithm from adaptive control for each frequency. The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of repetitive controller at +1 creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. The Min-Max cost function over the learning rate is presented. The Min-Max can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems to have a reasonable learning rate at DC. Although it was illustrated that using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge the quadratic penalty to the Min-Max cost function to increase robustness at high frequencies. 
The topic also considers limiting the Min-Max optimization to a frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning for frequencies above that interval.
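The moving-window recursive DFT mentioned in the second topic admits a constant-time-per-sample update. A minimal sketch follows; the class name and interface are hypothetical, chosen only to illustrate the recurrence.

    import numpy as np

    class SlidingDFT:
        """Moving-window recursive DFT: O(1) update per sample per tracked bin."""
        def __init__(self, window_len, bins):
            self.N = window_len
            self.bins = np.asarray(bins)
            self.twiddle = np.exp(2j * np.pi * self.bins / self.N)
            self.spectrum = np.zeros(len(self.bins), dtype=complex)
            self.buffer = np.zeros(window_len)   # last N samples
            self.idx = 0

        def update(self, x_new):
            x_old = self.buffer[self.idx]
            self.buffer[self.idx] = x_new
            self.idx = (self.idx + 1) % self.N
            # classic sliding-DFT recurrence: add the new sample, drop the
            # sample leaving the window, rotate by one bin of phase
            self.spectrum = self.twiddle * (self.spectrum + x_new - x_old)
            return self.spectrum

Each new sample updates every tracked frequency bin in constant time, which is what makes real-time frequency response model updates feasible inside a control loop.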
A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality
NASA Astrophysics Data System (ADS)
Cheung, KW; So, HC; Ma, W.-K.; Chan, YT
2006-12-01
The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve approximately zero bias and attain the Cramér-Rao lower bound when measurement error variances are small. The asymptotically optimal performance is also confirmed by simulation results.
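As a rough illustration of the underlying problem (not the paper's CWLS estimator, which adds a weighting matrix and a quadratic constraint), TOA equations can be linearized against a reference station and solved by ordinary least squares:

    import numpy as np

    def toa_least_squares(anchors, ranges):
        """Linearized, unweighted least-squares TOA position fix in 2-D.

        anchors: (m, 2) array of known base-station positions
        ranges:  (m,) array of measured distances to the terminal
        Subtracting the first range equation from the others removes the
        quadratic term ||p||^2 and leaves a linear system in p."""
        a0, d0 = anchors[0], ranges[0]
        A = 2.0 * (anchors[1:] - a0)
        b = (np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
             - ranges[1:]**2 + d0**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Example: four base stations, noisy ranges to a terminal at (3, 4)
    rng = np.random.default_rng(0)
    anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
    true_pos = np.array([3., 4.])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.05 * rng.standard_normal(4)
    print(toa_least_squares(anchors, ranges))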
The contrasting roles of Planck's constant in classical and quantum theories
NASA Astrophysics Data System (ADS)
Boyer, Timothy H.
2018-04-01
We trace the historical appearance of Planck's constant in physics, and we note that initially the constant did not appear in connection with quanta. Furthermore, we emphasize that Planck's constant can appear in both classical and quantum theories. In both theories, Planck's constant sets the scale of atomic phenomena. However, the roles it plays in the foundations of the two theories are sharply different. In quantum theory, Planck's constant is crucial to the structure of the theory. In classical electrodynamics, on the other hand, Planck's constant is optional, since it appears only as the scale factor for the (homogeneous) source-free contribution to the general solution of Maxwell's equations. Since classical electrodynamics can be solved while taking the homogeneous source-free contribution in the solution as zero or non-zero, there are naturally two different theories of classical electrodynamics, one in which Planck's constant is taken as zero and one where it is taken as non-zero. The textbooks of classical electromagnetism present only the version in which Planck's constant is taken to vanish.
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using a Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
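Analytical SER series of this kind are typically validated against Monte Carlo simulation. The sketch below estimates the SER of square M-QAM over a plain AWGN channel; the turbulence and misalignment statistics of the paper would enter as a random multiplicative channel coefficient on each symbol, which is omitted here for brevity.

    import numpy as np

    def qam_ser_mc(M, snr_db, n_sym=200_000, seed=1):
        """Monte Carlo symbol-error rate for square M-QAM in AWGN (no fading);
        a baseline against which fading-channel series expressions can be checked."""
        rng = np.random.default_rng(seed)
        m = int(np.sqrt(M))
        levels = 2 * np.arange(m) - (m - 1)        # e.g. [-3,-1,1,3] for 16-QAM
        sym_i = rng.choice(levels, n_sym)
        sym_q = rng.choice(levels, n_sym)
        s = sym_i + 1j * sym_q
        es = np.mean(np.abs(s)**2)                 # average symbol energy
        n0 = es / 10**(snr_db / 10)
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym)
                                   + 1j * rng.standard_normal(n_sym))
        r = s + noise
        # minimum-distance detection independently on each rail
        det_i = levels[np.argmin(np.abs(r.real[:, None] - levels), axis=1)]
        det_q = levels[np.argmin(np.abs(r.imag[:, None] - levels), axis=1)]
        return np.mean((det_i != sym_i) | (det_q != sym_q))

    print(qam_ser_mc(16, 15.0))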
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
Robust Timing Synchronization in Aeronautical Mobile Communication Systems
NASA Technical Reports Server (NTRS)
Xiong, Fu-Qin; Pinchak, Stanley
2004-01-01
This work details a study of robust synchronization schemes suitable for satellite-to-mobile aeronautical applications. A new scheme, the Modified Sliding Window Synchronizer (MSWS), is devised and compared with existing schemes, including the traditional Early-Late Gate Synchronizer (ELGS), the Gardner Zero-Crossing Detector (GZCD), and the Sliding Window Synchronizer (SWS). Performance of the synchronization schemes is evaluated by a set of metrics that indicate performance in digital communications systems. The metrics are convergence time, mean square phase error (or root mean-square phase error), lowest SNR for locking, initial frequency offset performance, midstream frequency offset performance, and system complexity. The performance of the synchronizers is evaluated by means of Matlab simulation models. A simulation platform is devised to model the satellite-to-mobile aeronautical channel, consisting of a Quadrature Phase Shift Keying modulator, an additive white Gaussian noise channel, and a demodulator front end. Simulation results show that the MSWS provides the most robust performance at the cost of system complexity. The GZCD provides a good tradeoff between robustness and system complexity for communication systems that require high symbol rates or low overall system costs. The ELGS has a high system complexity despite its average performance. Overall, the SWS, originally designed for multi-carrier systems, performs very poorly in single-carrier communications systems. Table 5.1 in Section 5 provides a ranking of each of the synchronization schemes in terms of the metrics set forth in Section 4.1. Details of the comparison are given in Section 5. Based on the results presented in Table 5.1, it is safe to say that the most robust synchronization scheme examined in this work is the high-sample-rate Modified Sliding Window Synchronizer. A close second is its low-sample-rate cousin. The tradeoff between complexity and lowest mean-square phase error determines the rankings of the Gardner Zero-Crossing Detector and both versions of the Early-Late Gate Synchronizer. The least robust models are the high- and low-sample-rate Sliding Window Synchronizers. Consequently, the recommended replacement synchronizer for NASA's Advanced Air Transportation Technologies mobile aeronautical communications system is the high-sample-rate Modified Sliding Window Synchronizer. By incorporating this synchronizer into its system, NASA can be assured that the system will be operational in extremely adverse conditions. The quick convergence time of the MSWS should allow the use of high-level protocols. However, if NASA feels that reduced system complexity is the most important aspect of the replacement synchronizer, the Gardner Zero-Crossing Detector would be the best choice.
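The Gardner zero-crossing detector evaluated in this study has a particularly compact form at two samples per symbol. A minimal sketch of the error detector alone follows; the loop filter and interpolator it would drive are omitted.

    import numpy as np

    def gardner_ted(samples):
        """Gardner zero-crossing timing error detector at 2 samples/symbol.
        `samples` is a complex baseband sequence; the output is positive or
        negative when sampling late or early, and zero at the optimum."""
        strobes = samples[2::2]      # on-time samples
        prev    = samples[:-2:2]    # previous on-time samples
        mid     = samples[1:-1:2]   # zero-crossing (midpoint) samples
        return np.real((strobes - prev) * np.conj(mid))

In a full receiver the averaged detector output drives a loop filter that adjusts the sampling phase, which is where the convergence-time and locking-SNR metrics of the study come from.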
Communication: Hypothetical ultralow-density ice polymorphs
NASA Astrophysics Data System (ADS)
Matsui, Takahiro; Hirata, Masanori; Yagasaki, Takuma; Matsumoto, Masakazu; Tanaka, Hideki
2017-09-01
More than 300 kinds of porous ice structures derived from zeolite frameworks and space fullerenes are examined using classical molecular dynamics simulations. It is found that a hypothetical zeolitic ice phase is less dense and more stable than the sparse ice structures reported by Huang et al. [Chem. Phys. Lett. 671, 186 (2017)]. In association with the zeolitic ice structure, even less dense structures, "aeroices," are proposed. It is found that aeroices are the most stable solid phases of water near the absolute zero temperature under negative pressure.
Quantum memory receiver for superadditive communication using binary coherent states
NASA Astrophysics Data System (ADS)
Klimek, Aleksandra; Jachura, Michał; Wasilewski, Wojciech; Banaszek, Konrad
2016-11-01
We propose a simple architecture based on multimode quantum memories for collective readout of classical information keyed using a pair of coherent states, exemplified by the well-known binary phase shift keying format. Such a configuration enables demonstration of the superadditivity effect in classical communication over quantum channels, where the transmission rate becomes enhanced through joint detection applied to multiple channel uses. The proposed scheme relies on the recently introduced idea to prepare Hadamard sequences of input symbols that are mapped by a linear optical transformation onto the pulse position modulation format [Guha, S. Phys. Rev. Lett. 2011, 106, 240502]. We analyze two versions of readout based on direct detection and an optional Dolinar receiver which implements the minimum-error measurement for individual detection of a binary coherent state alphabet.
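The Hadamard-to-PPM mapping at the heart of this scheme is easy to reproduce numerically. In the sketch below, a BPSK codeword taken from a row of a Hadamard matrix is passed through a balanced Hadamard transform, the unitary that a linear-optical network can implement, concentrating all the energy into a single time slot. Amplitudes are treated as classical numbers, so quantum noise is not modelled.

    import numpy as np
    from scipy.linalg import hadamard

    M = 8
    alpha = 0.5                    # coherent-state amplitude per slot
    H = hadamard(M)                # entries +1/-1, rows mutually orthogonal

    # BPSK Hadamard codeword k: amplitudes alpha*H[k] across M channel uses
    k = 3
    codeword = alpha * H[k]

    # The balanced Hadamard transform (unitary H/sqrt(M)) maps the codeword
    # to pulse-position modulation: all energy lands in slot k.
    ppm = (H / np.sqrt(M)) @ codeword
    print(np.round(ppm, 6))        # ~[0, ..., alpha*sqrt(M) in slot k, ..., 0]

Because the transformed word has amplitude alpha*sqrt(M) in one slot and zero elsewhere, a single direct-detection click identifies the codeword, which is what enables the joint-detection gain.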
Performance of convolutionally encoded noncoherent MFSK modem in fading channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1976-01-01
The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered, under both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and the noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combating channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
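Block interleaving, as studied here, is a purely deterministic permutation: write symbols into a matrix row-wise and transmit column-wise, so that a burst of channel errors is spread across many codewords. A minimal sketch:

    import numpy as np

    def block_interleave(bits, rows, cols):
        """Write a rows x cols block row-wise, read it column-wise.
        A burst of up to `rows` consecutive channel errors then touches each
        row (codeword) at most once, making the channel look memoryless
        to the decoder."""
        assert len(bits) == rows * cols
        return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

    def block_deinterleave(bits, rows, cols):
        """Inverse permutation applied at the receiver."""
        return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

    x = np.arange(12)
    y = block_interleave(x, 3, 4)
    assert np.array_equal(block_deinterleave(y, 3, 4), x)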
On-orbit assembly of a team of flexible spacecraft using potential field based method
NASA Astrophysics Data System (ADS)
Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping
2017-04-01
In this paper, a novel control strategy is developed based on an artificial potential field for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven first to a pre-assembly configuration and then to the assembly configuration. In order to design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a virtual leader in the form of a circle is introduced. The potential field mainly depends on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that the potential function has no local minima and that its global minimum is zero. If the function equals zero, the minimizer is not a single state but a set, and all states in the set correspond to desired configurations. A Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, another potential field is included to avoid inter-member collision. In the control design of the second step, only a small modification is made to the controller of the first step. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.
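The attractive/repulsive structure of an artificial potential field controller can be illustrated with a planar point-mass toy model; the paper's hub-beam dynamics, ellipse outlines, and virtual leader are not modelled here, and the gains and distances are arbitrary.

    import numpy as np

    def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0, dt=0.05):
        """One gradient-descent step on a classic attractive/repulsive potential.
        Attractive term: 0.5*k_att*|pos-goal|^2; repulsive term (active only
        within distance d0 of an obstacle): 0.5*k_rep*(1/d - 1/d0)^2."""
        grad = k_att * (pos - goal)
        for obs in obstacles:
            d = np.linalg.norm(pos - obs)
            if d < d0:
                # gradient of the repulsive term; pushes away from the obstacle
                grad += k_rep * (1/d0 - 1/d) / d**3 * (pos - obs)
        return pos - dt * grad

Iterating apf_step drives the point toward the goal while steering around obstacles, the same mechanism that here keeps spacecraft members from colliding during assembly.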
NASA Astrophysics Data System (ADS)
Liberal, Iñigo; Engheta, Nader
2018-02-01
Quantum emitters interacting through a waveguide setup have been proposed as a promising platform for basic research on light-matter interactions and quantum information processing. We propose to augment waveguide setups with the use of multiport devices. Specifically, we demonstrate theoretically the possibility of exciting N-qubit subradiant, maximally entangled states with the use of suitably designed N-port devices. Our general methodology is then applied to two different devices: an epsilon-and-mu-near-zero waveguide hub and a nonreciprocal circulator. A sensitivity analysis is carried out to assess the robustness of the system against a number of nonidealities. These findings link and merge the designs of devices for quantum state engineering with classical communication network methodologies.
Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate
NASA Astrophysics Data System (ADS)
Chau, H. F.
2002-12-01
A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.
New approach for identifying the zero-order fringe in variable wavelength interferometry
NASA Astrophysics Data System (ADS)
Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek
2016-12-01
The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects when the optical path difference in the interference system exceeds a few wavelengths. The classical approach, which consists in measuring the deflection of interference fringes, fails because of strong edge effects: broken continuity of the interference fringes prevents correct identification of the zero-order fringe, which leads to significant errors. The family of these methods was originally proposed by Professor Pluta in the 1980s, but at that time image processing facilities and computers were hardly available. Automated devices unfold a completely new approach to the classical measurement procedures. The Institute team has taken that new opportunity and transformed the technique into fully automated measurement devices of commercial, industry-grade quality. The method itself has been modified, and new solutions and algorithms have simultaneously extended the field of application. This has concerned both construction aspects of the systems and software development in the context of creating computerized instruments. The VAWI collection of instruments now constitutes the core of the Institute's commercial offer. It is practically applicable in industrial environments for measuring textile and optical fibers and strips of thin films, and for testing wave plates and nonlinear effects in different materials. This paper describes new algorithms for identifying the zero-order fringe, which increase the performance of the system as a whole, and presents some examples of measurements of optical elements.
Detecting incapacity of a quantum channel.
Smith, Graeme; Smolin, John A
2012-06-08
Using unreliable or noisy components for reliable communication requires error correction. But which noise processes can support information transmission, and which are too destructive? For classical systems, any channel whose output depends on its input has the capacity for communication, but the situation is substantially more complicated in the quantum setting. We find a generic test for incapacity based on any suitable forbidden transformation: a protocol for communication with a channel passing our test would also allow one to implement the associated forbidden transformation. Our approach includes both known quantum incapacity tests, positive partial transposition and antidegradability (no cloning), as special cases, putting them both on the same footing.
Toward simulating complex systems with quantum effects
NASA Astrophysics Data System (ADS)
Kenion-Hanrath, Rachel Lynn
Quantum effects like tunneling, coherence, and zero-point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit-electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer-sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero-point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system, rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.
Polar codes for achieving the classical capacity of a quantum channel
NASA Astrophysics Data System (ADS)
Guha, Saikat; Wilde, Mark
2012-02-01
We construct the first near-explicit linear polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by "channel combining" and "channel splitting," such that a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, with encoding complexity O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
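The channel-combining step underlying polar coding is the recursive transform x = u F^{⊗n} over GF(2), with F = [[1,0],[1,1]]. A minimal encoder sketch, with the bit-reversal permutation omitted, follows.

    import numpy as np

    def polar_encode(u):
        """Arikan channel-combining transform x = u * F^{kron n} over GF(2),
        implemented as an O(N log N) in-place butterfly (no bit reversal)."""
        x = np.array(u, dtype=int) % 2
        n = x.size
        assert n & (n - 1) == 0, "blocklength must be a power of two"
        step = 1
        while step < n:
            for i in range(0, n, 2 * step):
                # combine adjacent sub-blocks: (a, b) -> (a XOR b, b)
                x[i:i+step] ^= x[i+step:i+2*step]
            step *= 2
        return x

    print(polar_encode([1, 0, 1, 1, 0, 0, 1, 0]))

Repeating this doubling step n = log2(N) times is exactly what polarizes the synthesized channels into nearly perfect and nearly useless ones; the code then carries information only on the good indices.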
A second generation 50 Mbps VLSI level zero processing system prototype
NASA Technical Reports Server (NTRS)
Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby
1994-01-01
Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASICs) and a Mass Storage System (MSS) based on the High-Performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second-generation LZPS prototype based upon VLSI technologies.
NASA Astrophysics Data System (ADS)
Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Zheng, Yong-hui; Shan, Cong-miao
2013-08-01
Based on the cat-eye effect of an optical system, free-space optical communication using a cat-eye modulating retro-reflector can build a communication link rapidly. Compared with a classical free-space optical communication system, a system based on a cat-eye modulating retro-reflector has advantages such as more rapid link establishment and a passive terminal that is smaller, lighter, and consumes less power. The incidence angle is an important factor in the cat-eye effect, so it affects the retro-reflecting communication link. In this paper, the principle and work flow of free-space optical communication based on a cat-eye modulating retro-reflector are introduced. Then, using geometric optics, an equivalent model of the modulating retro-reflector with incidence angle is presented, and analytical solutions for the active area and retro-reflected light intensity of the cat-eye modulating retro-reflector are given. The noise of a PIN photodetector is analyzed, based on which the bit error rate of free-space optical communication using a cat-eye modulating retro-reflector is presented. Finally, simulations were done to study the effect of the incidence angle on the communication. The simulation results show that the incidence angle has little effect on the active area and retro-reflected light intensity when the incident beam is within the active field angle of the cat-eye modulating retro-reflector. For a given system and conditions, the communication link can be built rapidly when the incident light beam is within the field angle, and the bit error rate increases greatly with link range. When the link range is smaller than 35 km, the bit error rate is less than 10^-16.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Zero-point energy constraint in quasi-classical trajectory calculations.
Xie, Zhen; Bowman, Joel M
2006-04-27
A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.
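A smooth energy-dependent switch of the kind described might look as follows; the tanh form and the width parameter are illustrative assumptions, not the functional form used by the authors.

    import numpy as np

    def coupling_switch(e_mode, e_zpe, width=0.1):
        """Smooth switch s(E) multiplying the mode-coupling terms in the
        Hamiltonian: ~1 when the mode energy is above its zero-point value,
        ~0 below it, preventing unphysical flow of zero-point energy.
        (Illustrative tanh ramp; not the paper's exact switching function.)"""
        return 0.5 * (1.0 + np.tanh((e_mode - e_zpe) / (width * e_zpe)))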
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects quantum states against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
Analytical minimization of synchronicity errors in stochastic identification
NASA Astrophysics Data System (ADS)
Bernal, D.
2018-01-01
An approach to minimizing error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time-domain signals so that the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first-mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. Selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, is a product of the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
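The core alignment step, shifting each record so that the fundamental-eigenvector phases become zero, can be sketched as a frequency-domain delay. The paper's threshold test for when shifting is justified is omitted, and the function and argument names are hypothetical.

    import numpy as np

    def align_to_first_mode(signals, phases, f1, fs):
        """Shift each channel so its phase at the first-mode frequency is zero.

        signals: (n_ch, n_samp) time records; phases: eigenvector phases (rad)
        at the first-mode frequency f1 (Hz); fs: sampling rate (Hz). Each
        phase is converted to a time shift and applied as a pure delay via a
        linear phase ramp in the frequency domain."""
        n = signals.shape[1]
        freqs = np.fft.rfftfreq(n, d=1/fs)
        delays = phases / (2 * np.pi * f1)         # phase -> equivalent delay
        spec = np.fft.rfft(signals, axis=1)
        ramp = np.exp(-2j * np.pi * np.outer(delays, freqs))
        return np.fft.irfft(spec * ramp, n=n, axis=1)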
Digital scrambling for shuttle communication links: Do drawbacks outweigh advantages?
NASA Technical Reports Server (NTRS)
Dessouky, K.
1985-01-01
Digital data scrambling has been considered for communication systems using NRZ (non-return to zero) symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System links. Conclusions regarding the usefulness of scrambling are also given.
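The transition-enriching scramblers considered for NRZ links are typically LFSR-based. A minimal additive scrambler sketch follows; the polynomial x^7 + x^4 + 1 is a common maximal-length choice, used here purely for illustration.

    def scramble(bits, taps=(7, 4), state=0b1111111):
        """Additive (synchronous) scrambler: XOR the data with an LFSR
        keystream to break up long runs of identical NRZ symbols.
        The register length and taps model x^7 + x^4 + 1 (illustrative)."""
        mask = (1 << taps[0]) - 1
        out = []
        for b in bits:
            key = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
            out.append(b ^ key)
            state = ((state << 1) | key) & mask
        return out

    data = [0] * 16               # all-zero data has no NRZ transitions...
    print(scramble(data))         # ...but the scrambled stream has many

Running the same routine with the same seed descrambles, since the keystream is independent of the data. The error-multiplication mechanism behind the BER increase discussed above is most pronounced in the self-synchronizing (multiplicative) variant, where each channel bit error reappears once per feedback tap after descrambling.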
An integrated nonlinear optical loop mirror in silicon photonics for all-optical signal processing
NASA Astrophysics Data System (ADS)
Wang, Zifei; Glesk, Ivan; Chen, Lawrence R.
2018-02-01
The nonlinear optical loop mirror (NOLM) has been studied for several decades and has attracted considerable attention for applications in high-data-rate optical communications and all-optical signal processing. The majority of NOLM research has focused on silica-fiber-based implementations. While various fiber designs have been considered to increase the nonlinearity and manage dispersion, several meters to hundreds of meters of fiber are still required. On the other hand, there is increasing interest in developing photonic integrated circuits for realizing signal processing functions. In this paper, we realize the first passive integrated NOLM in silicon photonics and demonstrate its application to all-optical signal processing. In particular, we show wavelength conversion of 10 Gb/s return-to-zero on-off keying (RZ-OOK) signals over a wavelength range of 30 nm with error-free operation and a power penalty of less than 2.5 dB; we achieve error-free nonreturn-to-zero (NRZ)-to-RZ modulation format conversion at 10 Gb/s, also with a power penalty of less than 2.8 dB; and we obtain error-free all-optical time-division demultiplexing of a 40 Gb/s RZ-OOK data signal into its 10 Gb/s tributary channels with a maximum power penalty of 3.5 dB.
Entanglement of Distillation for Lattice Gauge Theories.
Van Acoleyen, Karel; Bultinck, Nick; Haegeman, Jutho; Marien, Michael; Scholz, Volkher B; Verstraete, Frank
2016-09-23
We study the entanglement structure of lattice gauge theories from the local operational point of view, and, similar to Soni and Trivedi [J. High Energy Phys. 1 (2016) 1], we show that the usual entanglement entropy for a spatial bipartition can be written as the sum of an undistillable gauge part and of another part corresponding to the local operations and classical communication distillable entanglement, which is obtained by depolarizing the local superselection sectors. We demonstrate that the distillable entanglement is zero for pure Abelian gauge theories at zero gauge coupling, while it is in general nonzero for the non-Abelian case. We also consider gauge theories with matter, and show in a perturbative approach how area laws, including a topological correction, emerge for the distillable entanglement. Finally, we also discuss the entanglement entropy of gauge-fixed states and show that it has no relation to the physical distillable entropy.
Local tuning of the order parameter in superconducting weak links: A zero-inductance nanodevice
NASA Astrophysics Data System (ADS)
Winik, Roni; Holzman, Itamar; Dalla Torre, Emanuele G.; Buks, Eyal; Ivry, Yachin
2018-03-01
Controlling both the amplitude and the phase of the superconducting quantum order parameter (ψ) in nanostructures is important for next-generation information and communication technologies. The lack of electric resistance in superconductors, which may be advantageous for some technologies, hinders convenient voltage-bias tuning and hence limits the tunability of ψ at the microscopic scale. Here, we demonstrate the local tunability of the phase and amplitude of ψ, obtained by patterning with a single lithography step a Nb nano-superconducting quantum interference device (nano-SQUID) that is biased at its nanobridges. We accompany our experimental results by a semi-classical linearized model that is valid for generic nano-SQUIDs with multiple ports and helps simplify the modelling of non-linear couplings among the Josephson junctions. Our design helped us reveal unusual electric characteristics with effective zero inductance, which is promising for nanoscale magnetic sensing and quantum technologies.
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
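Of the closed-form criteria mentioned, zero-forcing is the simplest to sketch: a pseudo-inverse precoder nulls the interference seen by each stream. The following generic MIMO example illustrates the idea; it is not the paper's joint Tx/Rx self-interference design, and the dimensions are arbitrary.

    import numpy as np

    def zf_precoder(H):
        """Zero-forcing transmit precoder W = H^H (H H^H)^{-1}, columns
        normalized to unit power; drives inter-stream interference to zero
        whenever the channel matrix H has full row rank."""
        W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
        return W / np.linalg.norm(W, axis=0, keepdims=True)

    rng = np.random.default_rng(0)
    # 2 receive streams, 8 transmit antennas, i.i.d. Rayleigh entries
    H = (rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8))) / np.sqrt(2)
    W = zf_precoder(H)
    print(np.round(np.abs(H @ W), 8))   # ~diagonal: streams are decoupled

ZF buys interference suppression at the price of noise enhancement in poorly conditioned channels, which is why the paper also compares MMSE and MRT solutions.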
A game theory approach to target tracking in sensor networks.
Gu, Dongbing
2011-02-01
In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money... Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in error? We are not liable for any deposits of...
Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique
2014-01-01
Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at reducing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, tele-surgery needs satisfactorily high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several high performances, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors that converge to zero as time goes to infinity, can be achieved with error-constrained control. Finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Kopinga, K.; Delica, T.; Leschke, H.
1990-05-01
New results of a variant of the numerically exact quantum transfer matrix method have been compared with experimental data on the static properties of [C6H11NH3]CuBr3 (CHAB), a ferromagnetic system with about 5% easy-plane anisotropy. Above T=3.5 K, the available data on the zero-field heat capacity, the excess heat capacity ΔC=C(B)-C(B=0), and the magnetization are described with an accuracy comparable to the experimental error. Calculations of the spin-spin correlation functions reveal that the good description of the experimental correlation length in CHAB by a classical spin model is largely accidental. The zero-field susceptibility, which can be deduced from these correlation functions, is in fair agreement with the reported experimental data between 4 and 100 K. The method also seems to yield accurate results for the chlorine isomorph, CHAC, a system with about 2% uniaxial anisotropy.
Parallel processing spacecraft communication system
NASA Technical Reports Server (NTRS)
Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)
1998-01-01
An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one that is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID, and each channel ID can be processed separately in parallel. This obviates the problem of waiting for error correction processing. If the channel number is zero, however, it indicates that the frame of data represents a critical command only; that data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks
NASA Astrophysics Data System (ADS)
Shimizu, Kaoru; Imoto, Nobuyuki
2002-03-01
This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For the ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes of short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and more tolerant of bit-flip errors.
Generation of a non-zero discord bipartite state with classical second-order interference.
Choi, Yujun; Hong, Kang-Hee; Lim, Hyang-Tag; Yune, Jiwon; Kwon, Osung; Han, Sang-Wook; Oh, Kyunghwan; Kim, Yoon-Ho; Kim, Yong-Su; Moon, Sung
2017-02-06
We report an investigation of quantum discord in classical second-order interference. In particular, we theoretically show that a bipartite state with discord D = 0.311 can be generated via classical second-order interference. We also experimentally verify the theory by obtaining a non-zero discord state with D = 0.197 ± 0.060. Together with the fact that the nonclassicalities originating from physical constraints and from information-theoretic perspectives are not equivalent, this result provides insight into the nature of quantum discord.
Jin, Dafei; Lu, Ling; Wang, Zhong; Fang, Chen; Joannopoulos, John D.; Soljačić, Marin; Fu, Liang; Fang, Nicholas X.
2016-01-01
Classical wave fields are real-valued, ensuring the wave states at opposite frequencies and momenta to be inherently identical. Such a particle–hole symmetry can open up new possibilities for topological phenomena in classical systems. Here we show that the historically studied two-dimensional (2D) magnetoplasmon, which bears gapped bulk states and gapless one-way edge states near-zero frequency, is topologically analogous to the 2D topological p+ip superconductor with chiral Majorana edge states and zero modes. We further predict a new type of one-way edge magnetoplasmon at the interface of opposite magnetic domains, and demonstrate the existence of zero-frequency modes bounded at the peripheries of a hollow disk. These findings can be readily verified in experiment, and can greatly enrich the topological phases in bosonic and classical systems. PMID:27892453
An entangled-LED-driven quantum relay over 1 km
NASA Astrophysics Data System (ADS)
Varnava, Christiana; Stevenson, R. Mark; Nilsson, Jonas; Skiba-Szymanska, Joanna; Dzurňák, Branislav; Lucamarini, Marco; Penty, Richard V.; Farrer, Ian; Ritchie, David A.; Shields, Andrew J.
2016-03-01
Quantum cryptography allows confidential information to be communicated between two parties, with secrecy guaranteed by the laws of nature alone. However, upholding guaranteed secrecy over networks poses a further challenge, as classical receive-and-resend routing nodes can only be used conditional on trust by the communicating parties, which arguably diminishes the value of the underlying quantum cryptography. Quantum relays offer a potential solution by teleporting qubits from a sender to a receiver, without demanding additional trust from end users. Here we demonstrate the operation of a quantum relay over 1 km of optical fibre, which teleports a sequence of photonic quantum bits to a receiver by utilising entangled photons emitted by a semiconductor light-emitting diode. The average relay fidelity of the link is 0.90±0.03, exceeding the classical bound of 0.75 for the set of states used, and sufficiently high to allow error correction.
OptoRadio: a method of wireless communication using orthogonal M-ary PSK (OMPSK) modulation
NASA Astrophysics Data System (ADS)
Gaire, Sunil Kumar; Faruque, Saleh; Ahamed, Md. Maruf
2016-09-01
A laser-based radio communication system, OptoRadio, using an orthogonal M-ary PSK (OMPSK) modulation scheme is presented in this paper. In this scheme, when a block of data needs to be transmitted, the corresponding block of a biorthogonal code is transmitted by means of multi-phase shift keying. At the receiver, two photodiodes are cross-coupled, so the net output power due to ambient light is close to zero. The laser signal is transmitted into only one of the receivers; with all other signals cancelled out, the laser signal is overwhelmingly dominant. The detailed design, bit error correction capabilities, and bandwidth efficiency are presented to illustrate the concept.
Regalado, Carlos M; Ritter, Axel
2007-08-01
Calibration of the Granier thermal dissipation technique for measuring stem sap flow in trees requires determination of the temperature difference (ΔT) between a heated and an unheated probe when sap flow is zero (ΔTmax). Classically, ΔTmax has been estimated from the maximum predawn ΔT, assuming that sap flow is negligible at night. However, because sap flow may continue during the night, the maximum predawn ΔT value may underestimate the true ΔTmax. No alternative method has yet been proposed to estimate ΔTmax when sap flow is non-zero at night. A sensitivity analysis is presented showing that errors in ΔTmax may be amplified through the sap flux density computations in Granier's approach, such that small amounts of undetected nighttime sap flow may lead to large diurnal sap flux density errors; hence the need for a correct estimate of ΔTmax. By rearranging Granier's original formula, an optimization method is presented to compute ΔTmax from simultaneous measurements of diurnal ΔT and micrometeorological variables, without assuming that sap flow is negligible at night. Illustrative examples are shown for sap flow measurements carried out on individuals of Erica arborea L., which has needle-like leaves, and Myrica faya Ait., a broadleaf species. We show that, although ΔTmax values obtained by the proposed method may be similar in some instances to the ΔTmax predicted at night, in general the values differ. The procedure presented has the potential of being applied not only to Granier's method, but also to other heat-based sap flow systems that require a zero-flow calibration, such as the Cermák et al. (1973) heat balance method and the T-max heat pulse system of Green et al. (2003).
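For context, Granier's original empirical calibration, which the described optimization rearranges, maps the probe signal to sap flux density as follows. The coefficients are the standard published values; ΔTmax is exactly the quantity whose estimation the paper addresses.

    def granier_sap_flux(dT, dT_max):
        """Granier (1985) empirical calibration: sap flux density u
        (m^3 m^-2 s^-1) from the heated/unheated probe temperature difference.
        K = (dT_max - dT) / dT;  u = 118.99e-6 * K**1.231."""
        K = (dT_max - dT) / dT
        return 118.99e-6 * K ** 1.231

Perturbing dT_max in this formula shows directly how a small underestimate of ΔTmax propagates, nonlinearly through the exponent, into the diurnal flux errors whose amplification the abstract describes.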
ERIC Educational Resources Information Center
Boyer, Timothy H.
1985-01-01
The classical vacuum of physics is not empty, but contains a distinctive pattern of electromagnetic fields. Discovery of the vacuum, thermal spectrum, classical electron theory, zero-point spectrum, and effects of acceleration are discussed. Connection between thermal radiation and the classical vacuum reveals unexpected unity in the laws of…
Broc, Lucie; Bernicot, Josie; Olive, Thierry; Favart, Monik; Reilly, Judy; Quémart, Pauline; Uzé, Joël
2013-10-01
The goal of this study was to compare the lexical spelling performance of children and adolescents with specific language impairment (SLI) in two contrasting writing situations: a dictation of isolated words (a classic evaluative situation) and a narrative of a personal event (a communicative situation). Twenty-four children with SLI and 48 typically developing children participated in the study, split into two age groups: 7-11 and 12-18 years of age. Although participants with SLI made more spelling errors per word than typically developing participants of the same chronological age, there was a smaller difference between the two groups in the narratives than in the dictations. Two of the findings are particularly noteworthy: (1) between 12 and 18 years of age, in communicative narration, the number of spelling errors of the SLI group was not different from that of the typically developing group; (2) in communicative narration, the participants with SLI did not make specific spelling errors (phonologically unacceptable), contrary to what was shown in the dictation. From an educational perspective, or that of a remediation program, it must be stressed that communicative narration provides children, and especially adolescents, with SLI an opportunity to demonstrate their improved lexical spelling abilities. Furthermore, the results encourage long-term lexical spelling education, as adolescents with SLI continue to show improvement between 12 and 18 years of age.
NASA Astrophysics Data System (ADS)
Dür, Wolfgang; Lamprecht, Raphael; Heusler, Stefan
2017-07-01
A long-range quantum communication network is among the most promising applications of emerging quantum technologies. We discuss the potential of such a quantum internet for the secure transmission of classical and quantum information, as well as theoretical and experimental approaches and recent advances to realize them. We illustrate the involved concepts such as error correction, teleportation or quantum repeaters and consider an approach to this topic based on catchy visualizations as a context-based, modern treatment of quantum theory at high school.
High-dimensional free-space optical communications based on orbital angular momentum coding
NASA Astrophysics Data System (ADS)
Zou, Li; Gu, Xiaofan; Wang, Le
2018-03-01
In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser consisting of a Mach-Zehnder interferometer with a rotating Dove prism, a photoelectric detector, and a computer carrying out the fast Fourier transform. The scheme can realize high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link transmitting a 256-gray-scale (16-gray-scale) picture. The results show that zero bit error rate performance has been achieved.
Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia
2012-01-01
In this paper, we describe a behaviour pattern similar to the "A-not-B" error found in human infants and young apes in a monkey species, the common marmoset (Callithrix jacchus). In contrast to the classical explanation, recently it has been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species including young non-human primates that commit the A-not-B error.
A Reliable Wireless Control System for Tomato Hydroponics
Ibayashi, Hirofumi; Kaneda, Yukimasa; Imahara, Jungo; Oishi, Naoki; Kuroda, Masahiro; Mineno, Hiroshi
2016-01-01
Agricultural systems using advanced information and communication (ICT) technology can produce high-quality crops in a stable environment while decreasing the need for manual labor. The system collects a wide variety of environmental data and provides the precise cultivation control needed to produce high value-added crops; however, there are the problems of packet transmission errors in wireless sensor networks or system failure due to having the equipment in a hot and humid environment. In this paper, we propose a reliable wireless control system for hydroponic tomato cultivation using the 400 MHz wireless band and the IEEE 802.15.6 standard. The 400 MHz band, which is lower than the 2.4 GHz band, has good obstacle diffraction, and zero-data-loss communication is realized using the guaranteed time-slot method supported by the IEEE 802.15.6 standard. In addition, this system has fault tolerance and a self-healing function to recover from faults such as packet transmission failures due to deterioration of the wireless communication quality. In our basic experiments, the 400 MHz band wireless communication was not affected by the plants’ growth, and the packet error rate was less than that of the 2.4 GHz band. In summary, we achieved a real-time hydroponic liquid supply control with no data loss by applying a 400 MHz band WSN to hydroponic tomato cultivation. PMID:27164105
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. An existing method uses, for example, 32 wavelengths in the NRZ code with an average power of 16 conventional units (on average, 16 ones and 16 zeros) and a rate of 32 bits/cycle. In the new method, at every 1/16 of a cycle one of 124 wavelengths is transmitted, each lasting one full cycle, so that no more than 16 distinct wavelengths are present at any time instant; each transmitted wavelength carries 4 bits, giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing deletions and errors simultaneously.
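A back-of-the-envelope check of the quoted rates, with the slot and bit counts taken directly from the abstract; the log2(124) line merely shows that the 124-wavelength alphabet is far larger than the 16 symbols (4 bits) used per slot, presumably to accommodate the constraint on wavelengths already in flight.

```python
import math

slots_per_cycle = 16          # a new wavelength starts every 1/16 of a cycle
bits_per_slot = 4             # stated capacity of each transmitted wavelength
print(slots_per_cycle * bits_per_slot)  # 64 bits/cycle, vs 32 bits/cycle for NRZ
print(math.log2(124))         # ~6.95 bits: raw size of the wavelength alphabet
```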
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping
2017-12-01
In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for the simultaneous transmission of quantum and classical communication.
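A toy numerical sketch of the sign/value encoding just described, under an idealized additive-Gaussian channel: the bit rides on the sign of a quadrature whose magnitude is the Gaussian random number used by the key-distribution layer. The noise level and threshold detector are assumptions for illustration and ignore the coherent-state and reconciliation machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
bits = rng.integers(0, 2, n)                  # encrypted classical bits
gauss = np.abs(rng.normal(0.0, 1.0, n))       # Gaussian values for the QKD layer
quadrature = np.where(bits == 1, gauss, -gauss)   # sign carries the bit

received = quadrature + rng.normal(0.0, 0.3, n)   # illustrative channel noise
bits_hat = (received > 0).astype(int)             # classical layer: sign detection
print("classical BER:", np.mean(bits_hat != bits))
```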
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
Product-State Approximations to Quantum States
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Harrow, Aram W.
2016-02-01
We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems is not possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.
Epistemic View of Quantum States and Communication Complexity of Quantum Channels
NASA Astrophysics Data System (ADS)
Montina, Alberto
2012-09-01
The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previously known upper bound was 1.85 bits.
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions, namely baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
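A minimal sketch of ML decoding for a chaotic analog code, using the 1-D stretch-and-fold map B(x) = 2x mod 1 as a stand-in for the baker-type encoders above. The grid search, noise level, and iteration count are illustrative assumptions, not the paper's mirrored or single-input constructions.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode(x0, n_iter=5):
    """Transmit the analog source together with its chaotic iterates."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append((2.0 * xs[-1]) % 1.0)       # stretch-and-fold dynamics
    return np.array(xs)

x0 = rng.random()                             # analog source in [0, 1)
rx = encode(x0) + rng.normal(0, 0.05, 6)      # AWGN channel

grid = np.linspace(0.0, 1.0, 20_001)[:-1]     # candidate source values
cand = np.stack([encode(g) for g in grid])    # all candidate codewords
x_ml = grid[np.argmin(((cand - rx) ** 2).sum(axis=1))]   # ML = nearest codeword
print(f"true {x0:.5f}, ML estimate {x_ml:.5f}")
```

Under AWGN, maximum-likelihood decoding reduces to the nearest-codeword search in Euclidean distance, which is why the brute-force grid above suffices for a toy example.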
Signals of Opportunity Navigation Using Wi-Fi Signals
2011-03-24
The mean value method (MVM) and scaled differential method (SDM) are compared as packet-index identification approaches. An error was logged if the correlation algorithm identified an incorrect packet index. Notable from the results is that a window of 50 packets appears to provide zero errors for MVM and near-zero errors for SDM.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, raising serious doubts about the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
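The phenomenon the authors target can be made concrete with a toy zero-count comparison: simulate a zero-inflated Poisson sample, fit a plain Poisson by its mean, and compare observed against Poisson-implied zero fractions. This is only an illustration of what inflated zeros look like, not the authors' test statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, pi0 = 2_000, 2.0, 0.15                 # pi0: excess-zero probability
y = rng.poisson(lam, n) * (rng.random(n) > pi0)   # zero-inflated Poisson sample

lam_hat = y.mean()                             # naive Poisson fit (biased here)
p0_pois = np.exp(-lam_hat)                     # zeros a Poisson(lam_hat) predicts
p0_obs = np.mean(y == 0)
print(f"observed zeros {p0_obs:.3f} vs Poisson-implied {p0_pois:.3f}")
```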
Polarization errors associated with birefringent waveplates
NASA Technical Reports Server (NTRS)
West, Edward A.; Smith, Matthew H.
1995-01-01
Although zero-order quartz waveplates are widely used in instrumentation that needs good temperature and field-of-view characteristics, the residual errors associated with these devices can be very important in high-resolution polarimetry measurements. We discuss how the field-of-view characteristics are affected by retardation errors and by the misalignment of optic axes in a double-crystal waveplate. We also discuss retardation measurements made on zero-order quartz and single-order 'achromatic' waveplates, and how the misalignment errors affect those measurements.
Trajectory-based understanding of the quantum-classical transition for barrier scattering
NASA Astrophysics Data System (ADS)
Chou, Chia-Chun
2018-06-01
The quantum-classical transition of wave packet barrier scattering is investigated using a hydrodynamic description in the framework of a nonlinear Schrödinger equation. The nonlinear equation provides a continuous description for the quantum-classical transition of physical systems by introducing a degree of quantumness. Based on the transition equation, the transition trajectory formalism is developed to establish the connection between classical and quantum trajectories. The quantum-classical transition is then analyzed for the scattering of a Gaussian wave packet from an Eckart barrier and the decay of a metastable state. Computational results for the evolution of the wave packet and the transmission probabilities indicate that classical results are recovered when the degree of quantumness tends to zero. Classical trajectories are in excellent agreement with the transition trajectories in the classical limit, except in some regions where transition trajectories cannot cross because of the single-valuedness of the transition wave function. As the computational results demonstrate, the process that the Planck constant tends to zero is equivalent to the gradual removal of quantum effects originating from the quantum potential. This study provides an insightful trajectory interpretation for the quantum-classical transition of wave packet barrier scattering.
33 CFR 154.2181 - Alternative testing program-Test requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
The CE test must check the calibrated range of each analyzer using a lower (zero) and upper (span) calibration gas; R denotes the reference value of the zero or high-level calibration gas introduced into the monitoring system. Results are tabulated as zero and span differences over three runs (1-Zero, 1-Span, 2-Zero, 2-Span, 3-Zero, 3-Span), from which the mean difference and the calibration error (%) are computed.
Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.
Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro
2018-05-28
In this work, we propose a novel visible light communication (VLC) scheme utilizing multiple red-green-blue (RGB) LED triplets, each with different red, green and blue emission spectra, to mitigate inter-color interference under spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed so as to demonstrate how metameric properties have been taken into account. At the receiver, multiple photodiodes, each with a color filter tuned to one transmitting light emitting diode (LED), are employed. Three detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
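A compact sketch of the zero-forcing and linear-MMSE detectors named above, for a 3x3 crosstalk channel between the color channels; the channel matrix, noise variance, and unit-power symbol assumption in the MMSE filter are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
H = np.array([[1.0, 0.2, 0.1],                 # diagonal: matched color filters
              [0.2, 1.0, 0.2],                 # off-diagonal: color crosstalk
              [0.1, 0.2, 1.0]])
sigma2 = 0.05                                  # receiver noise variance

x = rng.integers(0, 2, 3).astype(float)        # OOK symbols from the 3 LEDs
y = H @ x + rng.normal(0, np.sqrt(sigma2), 3)

x_zf = np.linalg.pinv(H) @ y                   # zero forcing: invert the channel
W = H.T @ np.linalg.inv(H @ H.T + sigma2 * np.eye(3))
x_mmse = W @ y                                 # linear MMSE (unit-power symbols)
print((x_zf > 0.5).astype(int), (x_mmse > 0.5).astype(int), x.astype(int))
```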
Classical and quantum communication without a shared reference frame.
Bartlett, Stephen D; Rudolph, Terry; Spekkens, Robert W
2003-07-11
We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.
Beating the classical limits of information transmission using a quantum decoder
NASA Astrophysics Data System (ADS)
Chapman, Robert J.; Karim, Akib; Huang, Zixin; Flammia, Steven T.; Tomamichel, Marco; Peruzzo, Alberto
2018-01-01
Encoding schemes and error-correcting codes are widely used in information technology to improve the reliability of data transmission over real-world communication channels. Quantum information protocols can further enhance the performance in data transmission by encoding a message in quantum states; however, most proposals to date have focused on the regime of a large number of uses of the noisy channel, which is unfeasible with current quantum technology. We experimentally demonstrate quantum enhanced communication over an amplitude damping noisy channel with only two uses of the channel per bit and a single entangling gate at the decoder. By simulating the channel using a photonic interferometric setup, we experimentally increase the reliability of transmitting a data bit by more than 20% for a certain damping range over classically sending the message twice. We show how our methodology can be extended to larger systems by simulating the transmission of a single bit with up to eight uses of the channel and a two-bit message with three uses of the channel, predicting a quantum enhancement in all cases.
Analyzing communication errors in an air medical transport service.
Dalto, Joseph D; Weir, Charlene; Thomas, Frank
2013-01-01
Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (i.e., levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (n = 21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Farouk, Ahmed; Batle, J.; Elhoseny, M.; Naseri, Mosayeb; Lone, Muzaffar; Fedorov, Alex; Alkhambashi, Majid; Ahmed, Syed Hassan; Abdel-Aty, M.
2018-04-01
Quantum communication provides an enormous advantage over its classical counterpart: security of communications based on the very principles of quantum mechanics. Researchers have proposed several approaches for user identity authentication via entanglement. Unfortunately, these protocols fail because an attacker can capture some of the particles in a transmitted sequence and send what is left to the receiver through a quantum channel. Subsequently, the attacker can restore some of the confidential messages, giving rise to the possibility of information leakage. Here we present a new robust general N-user authentication protocol based on N-particle Greenberger-Horne-Zeilinger (GHZ) states, which makes eavesdropping detection more effective and secure, as compared to some current authentication protocols. The security analysis of our protocol for various kinds of attacks verifies that it is unconditionally secure, and that an attacker will not obtain any information about the transmitted key. Moreover, as the number of transferred key bits N becomes larger, while the number of users for transmitting the information is increased, the probability of effectively obtaining the transmitted authentication keys is reduced to zero.
Ren, Yongxiong; Liu, Cong; Pang, Kai; Zhao, Jiapeng; Cao, Yinwen; Xie, Guodong; Li, Long; Liao, Peicheng; Zhao, Zhe; Tur, Moshe; Boyd, Robert W; Willner, Alan E
2017-12-01
We experimentally demonstrate spatial multiplexing of an orbital angular momentum (OAM)-encoded quantum channel and a classical Gaussian beam with a different wavelength and orthogonal polarization. Data rates as large as 100 MHz are achieved by encoding on two different OAM states by employing a combination of independently modulated laser diodes and helical phase holograms. The influence of OAM mode spacing, encoding bandwidth, and interference from the co-propagating Gaussian beam on registered photon count rates and quantum bit error rates is investigated. Our results show that the deleterious effects of intermodal crosstalk on system performance become less important for OAM mode spacing Δ≥2 (corresponding to a crosstalk value of less than -18.5 dB). The use of the OAM domain can additionally offer at least 10.4 dB of isolation besides that provided by wavelength and polarization, leading to a further suppression of interference from the classical channel.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Smith, Everett V., Jr.
2007-01-01
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
The Zero Product Principle Error.
ERIC Educational Resources Information Center
Padula, Janice
1996-01-01
Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)
Security of coherent-state quantum cryptography in the presence of Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heid, Matthias; Luetkenhaus, Norbert
2007-08-15
We investigate the security against collective attacks of a continuous variable quantum key distribution scheme in the asymptotic key limit for a realistic setting. The quantum channel connecting the two honest parties is assumed to be lossy and imposes Gaussian noise on the observed quadrature distributions. Secret key rates are given for direct and reverse reconciliation schemes including post-selection in the collective attack scenario. The effect of a nonideal error correction and two-way communication in the classical post-processing step is also taken into account.
Effect of Slice Error of Glass on Zero Offset of Capacitive Accelerometer
NASA Astrophysics Data System (ADS)
Hao, R.; Yu, H. J.; Zhou, W.; Peng, B.; Guo, J.
2018-03-01
The packaging process of a capacitive accelerometer was studied. Silicon-glass bonding was adopted to join the sensor chip and the glass, and the chip-glass assembly was adhered to a ceramic substrate. The resulting three-layer structure curved due to thermal mismatch, and the slice error of the glass led to an asymmetrical curvature of the sensor chip. Thus, the sensitive mass of the accelerometer deviated along the sensitive direction, which caused zero offset drift. It was therefore meaningful to quantify the influence of the slice error of the glass; the simulation results showed that the zero output drift was 12.3×10⁻³ m/s² when the deviation was 40 μm.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
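The classical-error attenuation at the heart of this kind of bias analysis can be shown in a single-covariate toy model, far simpler than the mixed-model setting above; treating the reliability ratio as known is itself an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 5_000, 2.0
x = rng.normal(0, 1, n)                        # true exposure
w = x + rng.normal(0, 0.8, n)                  # classical error: w = x + u
y = beta * x + rng.normal(0, 1, n)

cwy = np.cov(w, y)
beta_naive = cwy[0, 1] / cwy[0, 0]             # attenuated toward zero
lam = np.var(x) / (np.var(x) + 0.8 ** 2)       # reliability ratio (assumed known)
beta_corr = beta_naive / lam                   # method-of-moments correction
print(f"naive {beta_naive:.2f}, corrected {beta_corr:.2f}, true {beta:.1f}")
```

Under pure Berkson error, by contrast, the naive slope in this simple linear setting would be unbiased, which is one reason the mixture of the two error types above calls for separate treatment.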
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10⁻⁶, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
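A generic QC-LDPC construction can be sketched by expanding a small exponent matrix into circulant permutation blocks, with the shifts derived from powers of a primitive element of GF(p); this mirrors the multiplicative-group idea but is not the paper's exact QC-LDPC(5334,4962) code.

```python
import numpy as np

p = 7                                          # small prime; GF(7)* is cyclic
alpha = 3                                      # a primitive element mod 7
J, L = 3, 4                                    # rows/cols of the base matrix
E = [[(pow(alpha, j, p) * pow(alpha, l, p)) % p for l in range(L)]
     for j in range(J)]                        # shift exponents alpha^(j+l)

def circulant(shift, size):
    """size x size identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

H = np.block([[circulant(E[j][l], p) for l in range(L)] for j in range(J)])
print(H.shape, H.sum(axis=0)[:5])              # (21, 28); every column weight 3
```

The quasi-cyclic block structure is what keeps encoding and decoding complexity low: each block is a shifted identity, so the parity-check matrix never has to be stored densely.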
Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case
Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique
2017-01-01
This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
Tsai, Cheng-Yu; Jiang, Jhih-Shan
2018-01-01
A micro-projection enabled short-range communication (SRC) approach using red-, green- and blue-based light-emitting diodes (RGB-LEDs) was recently demonstrated experimentally, showing that micro-projection and high-speed data transmission can be performed simultaneously. In this research, a reconfigurable design of a polarization modulated image system based on the use of a Liquid Crystal on Silicon based Spatial Light Modulator (LCoS-based SLM), serving as a portable optical terminal capable of micro-projection and bidirectional multi-wavelength communications, is proposed and experimentally demonstrated. For a proof of concept, the system performance was evaluated through a bidirectional communication link at a transmission distance of over 0.65 m. To make the proposed communication system architecture compatible with the data modulation formats of possible future wireless communication systems, baseband modulation schemes, i.e., Non-Return-to-Zero On-Off-Keying (NRZ_OOK), M-ary Phase Shift Keying (M-PSK) and M-ary Quadrature Amplitude Modulation (M-QAM), were used to investigate the system transmission performance. The experimental results show that an acceptable BER (satisfying the limit of the Forward Error Correction, FEC, standard) and crosstalk can all be achieved in the bidirectional multi-wavelength communication scenario. PMID:29587457
NASA Astrophysics Data System (ADS)
Mata-Machuca, Juan L.; Aguilar-López, Ricardo
2018-01-01
This work deals with the adaptive synchronization of complex dynamical networks with fractional-order nodes and its application in secure communications employing chaotic parameter modulation. The complex network is composed of multiple fractional-order systems with mismatched parameters, and the coupling functions are given to realize the network synchronization. We introduce a fractional algebraic synchronizability condition (FASC) and a fractional algebraic identifiability condition (FAIC), which are used to determine whether the synchronization and parameter estimation problems can be solved. To overcome these problems, an adaptive synchronization methodology is designed; the strategy consists of proposing multiple receiver systems which tend to follow asymptotically the uncertain transmitter systems. The coupling functions and parameters of the receiver systems are adjusted continually according to a convenient sigmoid-like adaptive controller (SLAC), until the measurable output errors converge to zero; hence, synchronization between transmitter and receivers is achieved and the message signals are recovered. Indeed, the stability analysis of the synchronization error is based on the fractional Lyapunov direct method. Finally, numerical results corroborate the satisfactory performance of the proposed scheme by means of the synchronization of a complex network consisting of several fractional-order unified chaotic systems.
NASA Astrophysics Data System (ADS)
Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.
2017-11-01
In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to the finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
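A rough sketch of the underlying zero-crossing idea on synthetic data: count the zero crossings of the velocity fluctuation, take the Liepmann scale 1/(π n₀) as a proxy for the Taylor microscale, and apply the isotropic relation ε = 15 ν u'²/λ². The scale equivalence, the synthetic signal, and all numbers are assumptions; the paper's estimators additionally correct for finite sampling and spectral cutoffs.

```python
import numpy as np

rng = np.random.default_rng(4)
nu = 1.5e-5                                    # kinematic viscosity of air, m^2/s
dx = 1e-3                                      # sample spacing, m (illustrative)
u = rng.normal(0, 1, 200_000)
u = np.convolve(u, np.ones(20) / 20, "same")   # crude smoothing: fake turbulence

fluct = u - u.mean()
crossings = np.count_nonzero(np.diff(np.sign(fluct)) != 0)
n0 = crossings / (len(u) * dx)                 # zero crossings per unit length
lam = 1.0 / (np.pi * n0)                       # Liepmann scale ~ Taylor microscale
eps = 15.0 * nu * np.var(fluct) / lam ** 2     # isotropic dissipation estimate
print(f"lambda ~ {lam:.4g} m, eps ~ {eps:.4g} m^2/s^3")
```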
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
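The correction step lends itself to a small linear-algebra sketch: if the island drive at each targeted rational surface responds (near-)linearly to the correction currents, the minimum-norm currents cancelling the error drive follow from an SVD pseudo-inverse. The matrix A and drive vector b below are hypothetical placeholders, not outputs of VMEC or any island-width calculation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_surf, n_coil = 4, 10                         # rational surfaces, correction knobs
A = rng.normal(size=(n_surf, n_coil))          # island drive per unit coil current
b = rng.normal(size=n_surf)                    # error-field drive at each surface

# Minimum-norm solution of A @ c = -b via the pseudo-inverse (SVD underneath);
# truncating small singular values (rcond) avoids demanding huge currents.
c = -np.linalg.pinv(A, rcond=1e-6) @ b
print("residual island drive:", np.linalg.norm(A @ c + b))
```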
A full-duplex optical access system with hybrid 64/16/4QAM-OFDM downlink
NASA Astrophysics Data System (ADS)
He, Chao; Tan, Ze-fu; Shao, Yu-feng; Cai, Li; Pu, He-sheng; Zhu, Yun-le; Huang, Si-si; Liu, Yu
2016-09-01
A full-duplex optical passive access scheme is proposed and verified by simulation, in which a hybrid 64/16/4-quadrature amplitude modulation (64/16/4QAM) orthogonal frequency division multiplexing (OFDM) optical signal is used for downstream transmission and a non-return-to-zero (NRZ) optical signal is used for upstream transmission. For the transmitting and receiving of the downlink optical signal, in-phase/quadrature-phase (I/Q) modulation based on a Mach-Zehnder modulator (MZM) and homodyne coherent detection technology are employed, respectively. The simulation results show that a bit error ratio (BER) below the hardware-decision forward error correction (HD-FEC) threshold is successfully obtained over a transmission path of 20-km-long standard single mode fiber (SSMF) for the hybrid downlink modulation OFDM optical signal. In addition, by dividing the system bandwidth into several subchannels consisting of some continuous subcarriers, it is convenient for users to select different channels depending on communication requirements.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
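The instrumental-variable step can be caricatured with a two-stage-least-squares-style ratio estimator: the biodosimeter reading serves as the instrument for the error-prone physical dose. This toy linear sketch is purely illustrative; the dissertation's likelihood-based Monte Carlo EM machinery, Berkson components, and radiosensitivity variables are all omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 5_000, 1.5
d = rng.gamma(2.0, 1.0, n)                     # true dose
w = d + rng.normal(0, 0.7, n)                  # physical dosimetry, classical error
z = 3.0 * d + rng.normal(0, 1.0, n)            # biodosimeter reading (instrument)
y = beta * d + rng.normal(0, 1.0, n)           # outcome

cwy = np.cov(w, y)
beta_naive = cwy[0, 1] / cwy[0, 0]             # attenuated by classical error
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]   # IV ratio estimator
print(f"naive {beta_naive:.2f}, IV {beta_iv:.2f}, true {beta:.1f}")
```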
Quantum and classical ripples in graphene
NASA Astrophysics Data System (ADS)
Hašík, Juraj; Tosatti, Erio; Martoňák, Roman
2018-04-01
Thermal ripples of graphene are well understood at room temperature, but their quantum counterparts at low temperatures are in need of a realistic quantitative description. Here we present atomistic path-integral Monte Carlo simulations of freestanding graphene, which show upon cooling a striking classical-quantum evolution of height and angular fluctuations. The crossover takes place at ever-decreasing temperatures for ever-increasing wavelengths so that a completely quantum regime is never attained. Zero-temperature quantum graphene is flatter and smoother than classical graphene at large scales yet rougher at short scales. The angular fluctuation distribution of the normals can be quantitatively described by the coexistence of two Gaussians, one classical and strongly T-dependent, and one quantum, about 2° wide, of zero-point character. The quantum evolution of ripple-induced height and angular spread should be observable in electron diffraction in graphene and other two-dimensional materials, such as MoS2, bilayer graphene, boron nitride, etc.
NASA Astrophysics Data System (ADS)
Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.
2016-06-01
This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel $\mathcal{N}$ as the optimal Type II error exponent when discriminating between a large number of independent instances of $\mathcal{N}$ and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
High-sensitivity DPSK receiver for high-bandwidth free-space optical communication links.
Juarez, Juan C; Young, David W; Sluz, Joseph E; Stotts, Larry B
2011-05-23
A high-sensitivity modem and high-dynamic range optical automatic gain controller (OAGC) have been developed to provide maximum link margin and to overcome the dynamic nature of free-space optical links. A sensitivity of -48.9 dBm (10 photons per bit) at 10 Gbps was achieved employing a return-to-zero differential phase shift keying based modem and a commercial Reed-Solomon forward error correction system. Low-noise optical gain was provided by an OAGC with a noise figure of 4.1 dB (including system-required input losses) and a dynamic range of greater than 60 dB.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. The equations are proposed with universal empirical coefficients. The calculation error lies within ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
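The two-point model described above amounts to a straight line through the zero-flow loading factor and the design point; a minimal sketch, with invented numbers rather than the paper's coefficients:

```python
def loading_factor(phi, psi_zero=0.85, phi_des=0.08, psi_des=0.60):
    """Linear psi(phi) through (0, psi_zero) and (phi_des, psi_des)."""
    slope = (psi_des - psi_zero) / phi_des     # fixes the inclination angle
    return psi_zero + slope * phi

for phi in (0.0, 0.04, 0.08):
    print(f"phi = {phi:.2f} -> psi = {loading_factor(phi):.3f}")
```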
Zero in the brain: A voxel-based lesion-symptom mapping study in right hemisphere damaged patients.
Benavides-Varela, Silvia; Passarini, Laura; Butterworth, Brian; Rolma, Giuseppe; Burgio, Francesca; Pitteri, Marco; Meneghello, Francesca; Shallice, Tim; Semenza, Carlo
2016-04-01
Transcoding numerals containing zero is more problematic than transcoding numbers formed by non-zero digits. However, it is currently unknown whether this is due to zeros requiring brain areas other than those traditionally associated with number representation. Here we hypothesize that transcoding zeros entails visuo-spatial and integrative processes typically associated with the right hemisphere. The investigation involved 22 right-brain-damaged patients and 20 healthy controls who completed tests of reading and writing Arabic numbers. As expected, the most significant deficit among patients involved a failure to cope with zeros. Moreover, a voxel-based lesion-symptom mapping (VLSM) analysis showed that the most common zero-errors were maximally associated with the right insula, which was previously related to sensorimotor integration, attention, and response selection, yet is here for the first time linked to transcoding processes. Error categories involving other digits corresponded to so-called Neglect errors, which, however, constituted only about 10% of the total reading and 3% of the writing mistakes made by the patients. We argue that damage to the right hemisphere impairs the mechanism of parsing, and the ability to set up empty-slot structures required for processing zeros in complex numbers; moreover, we suggest that the brain areas located in proximity to the right insula play a role in the integration of the information resulting from the temporary application of transcoding procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.
2006-12-01
We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.
Classical noise, quantum noise and secure communication
NASA Astrophysics Data System (ADS)
Tannous, C.; Langlois, J.
2016-01-01
Secure communication based on message encryption might be performed by combining the message with controlled noise (called pseudo-noise), as is done in the spread-spectrum communication presently used in Wi-Fi and smartphone telecommunication systems. Quantum communication based on entanglement is another route for securing communications, as demonstrated by several important experiments described in this work. The central role played by the photon in unifying the description of classical and quantum noise as major ingredients of secure communication systems is highlighted and described on the basis of the classical and quantum fluctuation dissipation theorems.
Classical many-particle systems with unique disordered ground states
NASA Astrophysics Data System (ADS)
Zhang, G.; Stillinger, F. H.; Torquato, S.
2017-10-01
Classical ground states (global energy-minimizing configurations) of many-particle systems are typically unique crystalline structures, implying zero enumeration entropy of distinct patterns (aside from trivial symmetry operations). By contrast, the few previously known disordered classical ground states of many-particle systems are all high-entropy (highly degenerate) states. Here we show computationally that our recently proposed "perfect-glass" many-particle model [Sci. Rep. 6, 36963 (2016), 10.1038/srep36963] possesses disordered classical ground states with a zero entropy: a highly counterintuitive situation. For all of the system sizes, parameters, and space dimensions that we have numerically investigated, the disordered ground states are unique such that they can always be superposed onto each other or their mirror image. At low energies, the density of states obtained from simulations matches those calculated from the harmonic approximation near a single ground state, further confirming ground-state uniqueness. Our discovery provides singular examples in which entropy and disorder are at odds with one another. The zero-entropy ground states provide a unique perspective on the celebrated Kauzmann-entropy crisis in which the extrapolated entropy of a supercooled liquid drops below that of the crystal. We expect our disordered unique patterns to be of value in fields beyond glass physics, including applications in cryptography as pseudorandom functions with tunable computational complexity.
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and the others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model, where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded-error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one non-clean qubit model.
Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea
2013-01-01
This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the study.
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials
Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels
2013-01-01
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no efficient decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki
2014-05-21
In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noises and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating the accuracy, parameter dependencies, and stability in applications to liquid systems. To conduct this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis for the discussions on the numerical investigations. It also gave a new viewpoint between the excess energy error and the damping effect by the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chloride ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for a larger value of multipole moment l currently induced until l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is wholly effective in the total accuracy if the theoretical moment l is smaller than or equal to a system intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed less accuracy for l = 4. We demonstrated the origins of parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. On raising the moment l, we observed that smaller values of the damping parameter provided more accurate results and smoother behaviors with respect to the cutoff length. These features can be explained, on the basis of the theoretical error analyses, such that the excess energy accuracy is improved with increasing l and that the total accuracy improvement within l ⩽ L is facilitated by a small damping parameter. Although the accuracy was fundamentally similar to the ion system, the bulk water system exhibited distinguishable quantitative behaviors. A smaller damping parameter was effective at all practical cutoff distances, and this fact can be interpreted by the reduction of the excess subset. A lower moment was advantageous in the energy accuracy, where l = 1 was slightly superior to l = 2 in this system. However, the method with l = 2 (viz., the zero-quadrupole sum) gave accurate results for the radial distribution function. We confirmed the stability in the numerical integration for MD simulations employing the ZM scheme. This result is supported by the sufficient smoothness of the energy function. Along with the smoothness, the pairwise feature and the allowance of the atom-based cutoff mode in the energy formula lead to the exact zero total-force, ensuring the total-momentum conservation for typical MD equations of motion.
NASA Astrophysics Data System (ADS)
Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki
2014-05-01
In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point-charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noise and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating its accuracy, parameter dependencies, and stability in applications to liquid systems. To this end, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis for the discussion of the numerical investigations. It also offered a new viewpoint on the relation between the excess-energy error and the damping effect of the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated through molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chloride ion system and a pure water system. In the ion system, the energy accuracy, compared with the Ewald summation, improved as the multipole moment l increased, up to l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, the improvement carries over to the total accuracy only if the theoretical moment l is smaller than or equal to a system-intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed reduced accuracy at l = 4. We traced the origins of the parameter dependencies appearing in the crossing behavior and the oscillations of the energy-error curves. On raising the moment l, we observed that smaller values of the damping parameter provided more accurate results and smoother behavior with respect to the cutoff length. On the basis of the theoretical error analyses, these features can be explained by the facts that the excess-energy accuracy improves with increasing l and that the total-accuracy improvement within l ⩽ L is facilitated by a small damping parameter. Although the accuracy was fundamentally similar to that of the ion system, the bulk water system exhibited distinguishable quantitative behaviors. A smaller damping parameter was effective over all practical cutoff distances, a fact that can be interpreted by the reduction of the excess subset. A lower moment was advantageous for the energy accuracy, with l = 1 slightly superior to l = 2 in this system. However, the method with l = 2 (viz., the zero-quadrupole sum) gave accurate results for the radial distribution function. We confirmed the stability of the numerical integration in MD simulations employing the ZM scheme, a result supported by the sufficient smoothness of the energy function. Along with this smoothness, the pairwise form and the allowance of the atom-based cutoff mode in the energy formula lead to an exactly zero total force, ensuring total-momentum conservation for typical MD equations of motion.
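The flavor of such pairwise cutoff electrostatics can be illustrated with a minimal sketch: a damped, shifted Coulomb pair sum (Wolf-type) whose pair term vanishes at the cutoff, pursuing the same design goal — suppressing truncation artifacts in a strictly pairwise form — as the ZM method. This is not the authors' ZM energy formula; the function and its parameters are illustrative only.

```python
import numpy as np
from scipy.special import erfc

def damped_shifted_pair_energy(positions, charges, rc, alpha):
    """Illustrative damped, shifted pairwise Coulomb energy (Wolf-type).
    The shift makes each pair term vanish at the cutoff rc, suppressing
    truncation artifacts -- the same goal the ZM method achieves by
    zeroing low-order multipoles (this is NOT the exact ZM formula)."""
    shift = erfc(alpha * rc) / rc
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < rc:
                energy += charges[i] * charges[j] * (erfc(alpha * r) / r - shift)
    return energy

# Toy NaCl-like pair: unit charges 2.8 apart, cutoff 12, damping 0.2
pos = np.array([[0.0, 0.0, 0.0], [2.8, 0.0, 0.0]])
print(damped_shifted_pair_energy(pos, np.array([1.0, -1.0]), rc=12.0, alpha=0.2))
```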
Exact, E = 0, classical and quantum solutions for general power-law oscillators
NASA Technical Reports Server (NTRS)
Nieto, Michael Martin; Daboul, Jamil
1995-01-01
For zero energy, E = 0, we derive exact, classical and quantum solutions for all power-law oscillators with potentials V(r) = -γ r^(-ν), where γ > 0 and -∞ < ν < ∞. When the angular momentum is nonzero, these solutions lead to the classical orbits r(t) = [cos μ(φ(t) - φ₀)]^(1/μ), with μ = ν/2 - 1 ≠ 0. For ν > 2, the orbits are bound and go through the origin. We calculate the periods and precessions of these bound orbits and graph a number of specific examples. The unbound orbits are also discussed in detail. Quantum mechanically, this system is also exactly solvable. We find that when ν > 2 the solutions are normalizable (bound), as in the classical case. Further, there are normalizable discrete, yet unbound, states. They correspond to unbound classical particles which reach infinity in a finite time. Finally, the number of space dimensions of the system can determine whether or not an E = 0 state is bound. These and other interesting comparisons to the classical system are discussed.
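The closed orbit formula is easy to evaluate numerically. The sketch below, a minimal illustration assuming the reconstructed form r = [cos μ(φ - φ₀)]^(1/μ), evaluates the orbit radius over the angular range where the orbit exists; the function name is ours.

```python
import numpy as np

def orbit_radius(phi, nu, phi0=0.0):
    """E = 0 orbit r = [cos(mu*(phi - phi0))]**(1/mu) for the potential
    V(r) = -gamma * r**(-nu), mu = nu/2 - 1 != 0.  Returns NaN where
    cos(...) <= 0, i.e. outside the angular range of the orbit."""
    mu = nu / 2.0 - 1.0
    c = np.cos(mu * (phi - phi0))
    with np.errstate(invalid="ignore"):
        return np.where(c > 0, c ** (1.0 / mu), np.nan)

# nu = 4 gives mu = 1, so r = cos(phi): a circle through the origin.
phi = np.linspace(-np.pi / 2, np.pi / 2, 5)
print(orbit_radius(phi, nu=4.0))
```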
NASA Astrophysics Data System (ADS)
Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito
2017-10-01
The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality-reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation results show that the proposed methods simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods under low-bitrate transmission.
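The zero-reconstruction-error property of a linear autoencoder is easy to verify: the optimal linear encoder/decoder pair is given by an SVD (equivalently, PCA), and when the code retains as many coefficients as the patch has pixels, the map is invertible. A minimal sketch (our own construction, not the paper's DLA architecture):

```python
import numpy as np

def linear_autoencode(patches, code_dim):
    """Optimal linear autoencoder via SVD (equivalent to PCA): encode
    each patch into code_dim coefficients and decode back.  With
    code_dim equal to the patch dimension, reconstruction is exact."""
    mean = patches.mean(axis=0)
    X = patches - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:code_dim].T                     # encoder (X @ V) and decoder (.. @ V.T)
    return (X @ V) @ V.T + mean

rng = np.random.default_rng(0)
patches = rng.random((500, 64))             # 500 flattened 8x8 patches
recon = linear_autoencode(patches, code_dim=64)
print(np.max(np.abs(recon - patches)))      # ~1e-15: zero reconstruction error
```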
1984-05-01
The control ignored any error of 1/10th degree or less; this was done by setting the error term E and the integral sum PREINT to zero. The surviving listing fragments compare the signs of two errors (jumping if they are equal, otherwise clearing PREINT) and fetch the absolute value of OAT-RAT. The apparatus includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F).
NASA Astrophysics Data System (ADS)
García, Isaac A.; Llibre, Jaume; Maza, Susanna
2018-06-01
In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role that the multiple zeros of the first non-zero averaged function play. The outcome is that these multiple zeros fall into two different classes depending on whether or not they belong to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetherian ring of all real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z₀. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results, and they are compared with the classical theory, branching theory, and the singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
Making classical ground-state spin computing fault-tolerant.
Crosson, I J; Bacon, D; Brown, K R
2010-09-01
We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.
Modeling number of claims and prediction of total claim amount
NASA Astrophysics Data System (ADS)
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson model and the negative binomial model, the zero-inflated Poisson model and the zero-inflated negative binomial model are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of the total claim amount, the predictive performances of the candidate models are compared by using the root mean square error (RMSE) and mean absolute error (MAE) criteria.
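A zero-inflated Poisson (ZIP) model mixes a point mass at zero with an ordinary Poisson count, so P(0) = π + (1-π)e^(-λ) while P(k) = (1-π)·Poisson(k; λ) for k ≥ 1. A minimal sketch of fitting it by maximum likelihood (self-contained, not the paper's actuarial model; the simulated data and parameter names are ours):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson:
    P(0) = pi + (1-pi)*exp(-lam), P(k) = (1-pi)*Poisson(k; lam) for k >= 1."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))   # logit-parametrized mixing weight
    lam = np.exp(params[1])                 # log-parametrized Poisson mean
    zero = y == 0
    ll = np.sum(zero * np.log(pi + (1 - pi) * np.exp(-lam)))
    ll += np.sum(~zero * (np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)))
    return -ll

rng = np.random.default_rng(0)
y = np.where(rng.random(1000) < 0.4, 0, rng.poisson(2.5, 1000))  # 40% structural zeros
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
print(1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1]))             # recovered pi, lambda
```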
Trajectory description of the quantum–classical transition for wave packet interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
2016-08-15
The quantum–classical transition for wave packet interference is investigated using a hydrodynamic description. A nonlinear quantum–classical transition equation is obtained by introducing a degree of quantumness, ranging from zero to one, into the classical time-dependent Schrödinger equation. This equation provides a continuous description of the transition of physical systems from the purely quantum to the purely classical regime. In this study, the transition trajectory formalism is developed to provide a hydrodynamic description of the quantum–classical transition. The flow momentum of transition trajectories is defined by the gradient of the action function in the transition wave function, and these trajectories follow the main features of the evolving probability density. The transition trajectory formalism is then employed to analyze the quantum–classical transition of wave packet interference. For collision-like wave packet interference, where the propagation velocity is faster than the spreading speed of the wave packet, the interference process remains collision-like for all degrees of quantumness. However, the interference features demonstrated by transition trajectories gradually disappear as the degree of quantumness approaches zero. For diffraction-like wave packet interference, the interference process changes continuously from the diffraction-like to the collision-like case as the degree of quantumness gradually decreases. This study provides an insightful trajectory interpretation of the quantum–classical transition of wave packet interference.
Cola soft drinks for evaluating the bioaccessibility of uranium in contaminated mine soils.
Lottermoser, Bernd G; Schnug, Ewald; Haneklaus, Silvia
2011-08-15
There is a rising need for scientifically sound and quantitative as well as simple, rapid, cheap and readily available soil testing procedures. The purpose of this study was to explore selected soft drinks (Coca-Cola Classic®, Diet Coke®, Coke Zero®) as indicators of bioaccessible uranium and other trace elements (As, Ce, Cu, La, Mn, Ni, Pb, Th, Y, Zn) in contaminated soils of the Mary Kathleen uranium mine site, Australia. Data of single extraction tests using Coca-Cola Classic®, Diet Coke® and Coke Zero® demonstrate that extractable arsenic, copper, lanthanum, manganese, nickel, yttrium and zinc concentrations correlate significantly with DTPA- and CaCl₂-extractable metals. Moreover, the correlation between DTPA-extractable uranium and that extracted using Coca-Cola Classic® is close to unity (+0.98), with reduced correlations for Diet Coke® (+0.66) and Coke Zero® (+0.55). Also, Coca-Cola Classic® extracts uranium concentrations near identical to DTPA, whereas distinctly higher uranium fractions were extracted using Diet Coke® and Coke Zero®. Results of this study demonstrate that the use of Coca-Cola Classic® in single extraction tests provided an excellent indication of bioaccessible uranium in the analysed soils and of uranium uptake into leaves and stems of the Sodom apple (Calotropis procera). Moreover, the unconventional reagent is superior in terms of availability, costs, preparation and disposal compared to traditional chemicals. Contaminated site assessments and rehabilitation of uranium mine sites require a solid understanding of the chemical speciation of environmentally significant elements for estimating their translocation in soils and plant uptake. Therefore, Cola soft drinks have potential applications in single extraction tests of uranium contaminated soils and may be used for environmental impact assessments of uranium mine sites, nuclear fuel processing plants and waste storage and disposal facilities.
On a method for generating inequalities for the zeros of certain functions
NASA Astrophysics Data System (ADS)
Gatteschi, Luigi; Giordano, Carla
2007-10-01
In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with a bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3) (1952), Indag. Math. 14 (1952) 224-229] and more recently [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transforms and Special Functions 10 (2000) 41-56] to the zeros of the Bessel functions of the first kind. Here, we present the results of applying the method to obtain inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.
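McMahon-type two-term approximations of Bessel zeros, which the cited error-bound work sharpens into inequalities, are easy to check numerically. A small sketch (the standard textbook expansion for J₀, not the function treated in this paper):

```python
import numpy as np
from scipy.special import jn_zeros

# McMahon's two-term approximation for the k-th positive zero of J_0:
# j_{0,k} ~ beta + 1/(8*beta), with beta = (k - 1/4)*pi.
k = np.arange(1, 6)
beta = (k - 0.25) * np.pi
approx = beta + 1.0 / (8.0 * beta)
print(np.abs(approx - jn_zeros(0, 5)))   # errors shrink rapidly with k
```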
Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series
NASA Astrophysics Data System (ADS)
Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.
2018-03-01
Proper uncertainty estimation for data series with a high proportion of zero and near-zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near-zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as modeling runoff generation on hillslopes and in ephemeral catchments.
Quantum repeaters using continuous-variable teleportation
NASA Astrophysics Data System (ADS)
Dias, Josephine; Ralph, T. C.
2017-02-01
Quantum optical states are fragile and can become corrupted when passed through a lossy communication channel. Unlike for classical signals, optical amplifiers cannot be used to recover quantum signals. Quantum repeaters have been proposed as a way of reducing errors and hence increasing the range of quantum communications. Current protocols target specific discrete encodings, for example quantum bits encoded on the polarization of single photons. We introduce a more general approach that can reduce the effect of loss on any quantum optical encoding, including those based on continuous variables such as the field amplitudes. We show that in principle the protocol incurs a resource cost that scales polynomially with distance. We analyze the simplest implementation and find that while its range is limited it can still achieve useful improvements in the distance over which quantum entanglement of field amplitudes can be distributed.
Active tracking system for visible light communication using a GaN-based micro-LED and NRZ-OOK.
Lu, Zhijian; Tian, Pengfei; Chen, Hong; Baranowski, Izak; Fu, Houqiang; Huang, Xuanqi; Montes, Jossue; Fan, Youyou; Wang, Hongyi; Liu, Xiaoyan; Liu, Ran; Zhao, Yuji
2017-07-24
Visible light communication (VLC) holds the promise of a high-speed wireless network for indoor applications and competes with 5G radio frequency (RF) systems. Although the breakthrough of gallium nitride (GaN) based micro-light-emitting diodes (micro-LEDs) increases the -3 dB modulation bandwidth exceptionally, from tens of MHz to hundreds of MHz, the light collected onto a fast photoreceiver drops dramatically, which determines the signal-to-noise ratio (SNR) of the VLC link. Fully implementing a practical high-data-rate VLC link enabled by a GaN-based micro-LED therefore requires focusing optics and a tracking system. In this paper, we demonstrate an active on-chip tracking system for VLC using a GaN-based micro-LED and non-return-to-zero on-off keying (NRZ-OOK). Using this technique, the field of view (FOV) was enlarged to 120° and data rates up to 600 Mbps at a bit error rate (BER) of 2.1×10⁻⁴ were achieved without manual focusing. This paper demonstrates the establishment of a VLC physical link whose communication quality is enhanced by orders of magnitude, making it suitable for practical communication applications.
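For OOK with additive Gaussian noise, the BER follows directly from the Q-factor, BER = ½ erfc(Q/√2); the reported 2.1×10⁻⁴ corresponds to Q ≈ 3.5. A quick check (standard formula; the numbers below are our own illustration):

```python
import math

def ook_ber(q_factor):
    """BER of on-off keying with Gaussian noise,
    Q = (mu1 - mu0) / (sigma1 + sigma0)."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

print(ook_ber(3.5))   # ~2.3e-4, near the reported BER of 2.1e-4
```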
Du, Jing; Wang, Jian
2015-11-01
Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ) (l = 0, ±1, ±2, …), where φ is the azimuthal angle and l is the topological number, are mutually orthogonal. This feature of Bessel beams provides a new dimension for coding/decoding data information on the OAM state of light, and the theoretical infinity of the topological number enables high-dimensional structured-light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams with the ability to recover by themselves behind obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate obstruction-free optical m-ary coding/decoding over a 12-m free-space optical communication link using visible Bessel beams. We also study the bit-error-rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER for hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of the light.
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
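The first step of the described procedure, a coarse least-squares grid search that seeds the nonlinear refinement, can be sketched for a simplified two-pole seismometer response; the model, parameter names, and synthetic data below are our own illustration, not the published IU calibration pipeline:

```python
import numpy as np

def response(f, f0, h):
    """Simplified velocity-flat seismometer response with one conjugate
    pole pair set by corner frequency f0 (Hz) and damping h."""
    s = 2j * np.pi * f
    w0 = 2 * np.pi * f0
    return s**2 / (s**2 + 2 * h * w0 * s + w0**2)

def grid_search(f, observed, f0_grid, h_grid):
    """Coarse least-squares grid search (step 1), reducing the risk of a
    local-minimum solution before iterative nonlinear refinement."""
    best, best_misfit = None, np.inf
    for f0 in f0_grid:
        for h in h_grid:
            misfit = np.sum(np.abs(observed - response(f, f0, h)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (f0, h), misfit
    return best

f = np.logspace(-3, 1, 200)
observed = response(f, 1 / 120.0, 0.707)   # synthetic "calibration" data
print(grid_search(f, observed, np.logspace(-3, 0, 40), np.linspace(0.3, 1.0, 15)))
```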
Code of Federal Regulations, 2013 CFR
2013-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben
2011-03-01
Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human-factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCARs) for characteristics of verbal communication errors between hospital staff from an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralised communication and information exchange via telephone, related to transfer between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of, or contributing factors to, severe patient safety incidents. The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
O'Connell, Timothy; Chang, Debra
2012-01-01
While on call, radiology residents review imaging studies and issue preliminary reports to referring clinicians. In the absence of an integrated reporting system at the training sites of the authors' institution, residents were typing and faxing preliminary reports. To partially automate the on-call resident workflow, a Web-based system for resident reporting was developed by using the free open-source xAMP Web application framework and an open-source DICOM (Digital Imaging and Communications in Medicine) software toolkit, with the goals of reducing errors and lowering barriers to education. This reporting system integrates with the picture archiving and communication system to display a worklist of studies. Patient data are automatically entered in the preliminary report to prevent identification errors and simplify the report creation process. When the final report for a resident's on-call study is available, the reporting system queries the report broker for the final report, and then displays the preliminary report side by side with the final report, thus simplifying the review process and encouraging review of all of the resident's reports. The xAMP Web application framework should be considered for development of radiology department informatics projects owing to its zero cost, minimal hardware requirements, ease of programming, and large support community.
Using CAS to Solve Classical Mathematics Problems
ERIC Educational Resources Information Center
Burke, Maurice J.; Burroughs, Elizabeth A.
2009-01-01
Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…
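Newton's method itself is a two-line iteration; the sketch below applies it to a polynomial, evaluating the value and derivative together by Horner's rule (a standard construction, ours rather than the article's worked classroom example):

```python
def newton_poly(coeffs, x0, tol=1e-12, max_iter=50):
    """Newton's method for a zero of the polynomial with coefficients
    [a_n, ..., a_1, a_0]; value and derivative via Horner's rule."""
    def horner(c, x):
        p, dp = 0.0, 0.0
        for a in c:
            dp = dp * x + p   # derivative accumulates one step behind
            p = p * x + a
        return p, dp

    x = x0
    for _ in range(max_iter):
        p, dp = horner(coeffs, x)
        if abs(p) < tol:
            break
        x -= p / dp
    return x

print(newton_poly([1.0, 0.0, -2.0], 1.0))   # x^2 - 2 = 0 -> 1.41421356...
```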
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
NASA Astrophysics Data System (ADS)
Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.
2015-11-01
In this paper, we generalize a secured direct communication process between N users with partial and full cooperation of a quantum server. Thus, N - 1 disjoint users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by an EPR entangled pair and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them through the full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proves that an attacker cannot gain any information by intercepting either the authentication or the communication process. Hence, the security of the transmitted message among the N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement.
Dimensional discontinuity in quantum communication complexity at dimension seven
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Pawłowski, Marcin; Żukowski, Marek; Bourennane, Mohamed
2017-02-01
Entanglement-assisted classical communication and transmission of a quantum system are the two quantum resources for information processing. Many information tasks can be performed using either quantum resource. However, this equivalence is not always present since entanglement-assisted classical communication is sometimes known to be the better performing resource. Here, we show not only the opposite phenomenon, that there exist tasks for which transmission of a quantum system is a more powerful resource than entanglement-assisted classical communication, but also that such phenomena can have a surprisingly strong dependence on the dimension of Hilbert space. We introduce a family of communication complexity problems parametrized by the dimension of Hilbert space and study the performance of each quantum resource. Under an additional assumption of a linear strategy for the receiving party, we find that for low dimensions the two resources perform equally well, whereas for dimension seven and above the equivalence is suddenly broken and transmission of a quantum system becomes more powerful than entanglement-assisted classical communication. Moreover, we find that transmission of a quantum system may even outperform classical communication assisted by the stronger-than-quantum correlations obtained from the principle of macroscopic locality.
NASA Astrophysics Data System (ADS)
Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian
2018-03-01
The performance of a decode-and-forward dual-hop mixed radio frequency / free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution, and the FSO link is described by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, ABER results without pointing errors (PE) and with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) of the RF link is derived with the help of the hypergeometric function, and that of the FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained from the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed for different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that, with ZBPE and NBPE considered, the FSO link suffers severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in an urban area. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.
Communication: Correct charge transfer in CT complexes from the Becke'05 density functional
NASA Astrophysics Data System (ADS)
Becke, Axel D.; Dale, Stephen G.; Johnson, Erin R.
2018-06-01
It has been known for over twenty years that density functionals of the generalized-gradient approximation (GGA) type and exact-exchange-GGA hybrids with low exact-exchange mixing fraction yield enormous errors in the properties of charge-transfer (CT) complexes. Manifestations of this error have also plagued computations of CT excitation energies. GGAs transfer far too much charge in CT complexes. This error has therefore come to be called "delocalization" error. It remains, to this day, a vexing unsolved problem in density-functional theory (DFT). Here we report that a 100% exact-exchange-based density functional known as Becke'05 or "B05" [A. D. Becke, J. Chem. Phys. 119, 2972 (2003); 122, 064101 (2005)] predicts excellent charge transfers in classic CT complexes involving the electron donors NH3, C2H4, HCN, and C2H2 and electron acceptors F2 and Cl2. Our approach is variational, as in our recent "B05min" dipole moments paper [Dale et al., J. Chem. Phys. 147, 154103 (2017)]. Therefore B05 is not only an accurate DFT for thermochemistry but is promising as a solution to the delocalization problem as well.
NASA Astrophysics Data System (ADS)
Xu, Chi; Senaratne, Charutha L.; Culbertson, Robert J.; Kouvetakis, John; Menéndez, José
2017-09-01
The compositional dependence of the lattice parameter in Ge1-ySny alloys has been determined from combined X-ray diffraction and Rutherford Backscattering (RBS) measurements of a large set of epitaxial films with compositions in the 0 < y < 0.14 range. In view of contradictory prior results, a critical analysis of this method has been carried out, with emphasis on nonlinear elasticity corrections and systematic errors in popular RBS simulation codes. The approach followed is validated by showing that measurements of Ge1-xSix films yield a bowing parameter θGeSi = -0.0253(30) Å, in excellent agreement with the classic work by Dismukes. When the same methodology is applied to Ge1-ySny alloy films, it is found that the bowing parameter θGeSn is zero within experimental error, so that the system follows Vegard's law. This is in qualitative agreement with ab initio theory, but the value of the experimental bowing parameter is significantly smaller than the theoretical prediction. Possible reasons for this discrepancy are discussed in detail.
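Vegard's law with a bowing correction is the one-line relation a(y) = (1-y)·a_Ge + y·a_Sn + θ·y(1-y); the paper's finding is θGeSn ≈ 0. A minimal sketch using the standard diamond-cubic lattice constants (values assumed from the literature, not quoted from this paper):

```python
def gesn_lattice_parameter(y, theta=0.0):
    """Relaxed Ge(1-y)Sn(y) lattice parameter (Angstrom): Vegard's law
    plus an optional bowing term theta*y*(1-y).  theta = 0 reflects the
    finding that the bowing vanishes within experimental error."""
    a_ge, a_sn = 5.6579, 6.4892   # standard Ge and alpha-Sn values (assumed)
    return (1 - y) * a_ge + y * a_sn + theta * y * (1 - y)

print(gesn_lattice_parameter(0.10))   # ~5.741 A at 10% Sn
```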
Secret-key-assisted private classical communication capacity over quantum channels
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Luo, Zhicheng; Brun, Todd
2008-10-01
We prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak (e-print arXiv:quant-ph/0512015) on the entanglement-assisted quantum communication capacity. The formula provides a family of protocols, the private father protocol, under the resource inequality framework, which includes private classical communication without secret-key assistance as a child protocol.
Nonperturbative interpretation of the Bloch vector's path beyond the rotating-wave approximation
NASA Astrophysics Data System (ADS)
Benenti, Giuliano; Siccardi, Stefano; Strini, Giuliano
2013-09-01
The Bloch vector's path of a two-level system exposed to a monochromatic field exhibits, in the regime of strong coupling, complex corkscrew trajectories. By considering the infinitesimal evolution of the two-level system when the field is treated as a classical object, we show that the Bloch vector's rotation speed oscillates between zero and twice the rotation speed predicted by the rotating-wave approximation. Cusps appear where the rotation speed vanishes. We prove analytically that at the cusps the curvature of the Bloch vector's path diverges. On the other hand, numerical data show that the curvature is very large even for a quantum field in the deep quantum regime with mean photon number n̄ ≲ 1. We finally compute numerically the typical error size in a quantum gate when the terms beyond the rotating-wave approximation are neglected.
NASA Technical Reports Server (NTRS)
Allman, Mark; Ostermann, Shawn; Kruse, Hans
1996-01-01
In several experiments using NASA's Advanced Communications Technology Satellite (ACTS), investigators have reported disappointing throughput using the transmission control protocol/Internet protocol (TCP/IP) protocol suite over 1.536 Mbit/s (T1) satellite circuits. A detailed analysis of file transfer protocol (FTP) transfers reveals that both the TCP window size and the TCP slow-start algorithm contribute to the observed limits in throughput. In this paper we summarize the experimental and theoretical analysis of the throughput limit imposed by TCP on the satellite circuit. We then discuss in detail the implementation of a multi-socket FTP, the XFTP client and server. XFTP has been tested using the ACTS system. Finally, we discuss a preliminary set of tests on a link with non-zero bit error rates. XFTP shows promising performance under these conditions, suggesting that a multi-socket application may be less affected by bit errors than a single, large-window TCP connection.
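The window limit is just the bandwidth-delay product: a single TCP connection cannot exceed window/RTT. With the classic 64 KiB window and a geostationary round-trip time of roughly 540 ms (an assumed figure for ACTS-like links), the ceiling falls below the T1 rate, which is why opening several sockets helps. A quick check:

```python
def tcp_throughput_limit(window_bytes, rtt_s):
    """Maximum single-connection TCP throughput in bit/s: window / RTT."""
    return 8 * window_bytes / rtt_s

# 64 KiB window, ~540 ms geostationary RTT (assumed):
print(tcp_throughput_limit(65535, 0.54) / 1e6, "Mbit/s")   # ~0.97 < 1.536 (T1)
```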
Investigation of homodyne demodulation of RZ-BPSK signal based on an optical Costas loop
NASA Astrophysics Data System (ADS)
Zhou, Haijun; Zhu, Zunzhen; Xie, Weilin; Dong, Yi
2018-01-01
We demonstrate the coherent detection of a 10 Gb/s return-to-zero (RZ) binary phase-shift keying (BPSK) signal based on a homodyne Costas optical phase-locked loop (OPLL). The system tolerates a time misalignment between the pulse carver and the phase modulator of ±10% of the bit period, i.e., -20 to +20 ps for the 5 Gb/s RZ-BPSK signal and -10 to +10 ps for the 10 Gb/s RZ-BPSK signal. Moreover, the Costas coherent receiver shows a 2.5 dB sensitivity improvement over conventional 5 Gb/s NRZ-BPSK and 1.4 dB over 10 Gb/s NRZ-BPSK, at the cost of only a slightly higher residual phase error. These merits of RZ-BPSK modulation, namely sufficient tolerance to misalignment, higher receiver sensitivity, and low residual phase error, are beneficial for free-space optical (FSO) communication, enabling a higher link budget and longer transmission distance.
Normal forms for Hopf-Zero singularities with nonconservative nonlinear part
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh; Sanders, Jan A.
In this paper we are concerned with the simplest normal form computation of the systems ẋ = 2x f(x, y² + z²), ẏ = z + y f(x, y² + z²), ż = -y + z f(x, y² + z²), where f is a formal function with real coefficients and without any constant term. These are the classical normal forms of a larger family of systems with Hopf-Zero singularity. Indeed, these are defined such that this family would be a Lie subalgebra for the space of all classical normal form vector fields with Hopf-Zero singularity. The simplest normal forms and simplest orbital normal forms of this family with nonzero quadratic part are computed. We also obtain the simplest parametric normal form of any non-degenerate perturbation of this family within the Lie subalgebra. The symmetry group of the simplest normal forms is also discussed. This is a part of our results in decomposing the normal forms of Hopf-Zero singular systems into systems with a first integral and nonconservative systems.
Solving the patient zero inverse problem by using generalized simulated annealing
NASA Astrophysics Data System (ADS)
Menin, Olavo H.; Bauch, Chris T.
2018-01-01
Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing algorithm (GSA) to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) on a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm's performance with respect to the generalization parameter q̃v and the fraction ρ of infected nodes for whom infection was ascertained is assessed. Numerical experiments show that the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence to support that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.
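The structure of such a search is a standard annealing loop over source hypotheses. A minimal skeleton under stated assumptions: `candidates` and `misfit` are placeholders to be supplied (the misfit would score the disagreement between the outbreak simulated from a hypothesis and the observed infection pattern), and the classical exponential acceptance rule is used, whereas GSA replaces the acceptance and visiting rules with Tsallis q-statistics.

```python
import math
import random

def anneal_source(candidates, misfit, steps=5000, t0=1.0, seed=0):
    """Classical simulated-annealing skeleton for source finding:
    `candidates` lists (node, infection_time) hypotheses and `misfit`
    scores a hypothesis against the observed infection pattern."""
    rng = random.Random(seed)
    state = rng.choice(candidates)
    energy = misfit(state)
    for k in range(1, steps + 1):
        temp = t0 / k                    # simple cooling schedule
        trial = rng.choice(candidates)   # blind proposal; GSA uses heavy-tailed jumps
        e_trial = misfit(trial)
        if e_trial < energy or rng.random() < math.exp(-(e_trial - energy) / temp):
            state, energy = trial, e_trial
    return state, energy
```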
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, decoded by an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeros between ones and k is the maximum allowable number of zeros between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, ∞) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
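The information-theoretic ceiling of a (d, k) constraint is the log of the largest eigenvalue of its state-transition matrix, where the states count the zeros emitted since the last one. A minimal sketch (the standard runlength-limited construction, not the specific codes designed here):

```python
import numpy as np

def rll_capacity(d, k):
    """Shannon capacity (bits/slot) of the (d,k) runlength constraint:
    log2 of the largest eigenvalue of the state-transition matrix whose
    state i counts the zeros emitted since the last one."""
    n = k + 1
    A = np.zeros((n, n))
    for i in range(n):
        if i < k:
            A[i, i + 1] = 1.0   # emit a 0 (one more slot of dead time)
        if i >= d:
            A[i, 0] = 1.0       # emit a 1 (pulse), allowed after d zeros
    return np.log2(max(abs(np.linalg.eigvals(A))))

print(rll_capacity(1, 3))   # ~0.55 bits per slot for the (1,3) constraint
```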
A comparison of zero-order, first-order, and monod biotransformation models
Bekins, B.A.; Warren, E.; Godsy, E.M.
1998-01-01
Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks, and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
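The stated error bounds are easy to verify numerically: with the Monod rate μ = μmax·S/(Ks + S), the first-order approximation μmax·S/Ks overshoots by exactly S/Ks in relative terms, and the zero-order approximation μmax by Ks/S. A short check (illustrative parameter values of our own):

```python
import numpy as np

Ks, mu_max = 1.0, 1.0
S = np.array([0.01, 0.1, 1.0, 10.0, 100.0])   # substrate conc. in units of Ks
monod = mu_max * S / (Ks + S)
first_order = mu_max * S / Ks                  # valid for S << Ks
zero_order = np.full_like(S, mu_max)           # valid for S >> Ks
print((first_order - monod) / monod)           # relative error = S/Ks exactly
print((zero_order - monod) / monod)            # relative error = Ks/S exactly
```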
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. The results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, indicating that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
Authentication Based on Pole-zero Models of Signature Velocity
Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad
2013-01-01
With the increase of communication and financial transactions through the internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authenticity and authorization in modernized society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is to model the velocity signal, whose pattern and properties are stable for a given person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen-window, and support vector machine classifiers, the signature verification technique was tested on a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database and the SVC2004 and Sabanci University (SUSIG) benchmark databases. Experimental results based on the Persian, SVC2004 and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62% and 3.91% on skilled forgeries, respectively.
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
NASA Astrophysics Data System (ADS)
Roszak, Katarzyna; Cywiński, Łukasz
2018-01-01
We find that when a qubit initialized in a pure state experiences pure dephasing due to interaction with an environment, separable qubit-environment states generated during the evolution also have zero quantum discord with respect to the environment. It follows that the set of separable states which can be reached during the evolution has zero volume, and hence effects such as sudden death of qubit-environment entanglement are very unlikely. In the case of the discord with respect to the qubit, the vast majority of qubit-environment separable states are discordant, but in specific situations zero-discord states are possible. This is conceptually important since there is a connection between the discordance with respect to a given subsystem and the possibility of describing the evolution of this subsystem using completely positive maps. Finally, we use the formalism to find an exemplary evolution of an entangled state of two qubits that is completely positive and occurs solely due to the interaction of only one of the qubits with its environment (so one could guess that it corresponds to a local operation, since it is local in a physical sense), but which nevertheless causes an enhancement of entanglement between the qubits. While this simply means that the considered evolution is completely positive but does not belong to local operations and classical communication, it shows how much caution has to be exercised when identifying evolution channels that belong to that class.
A power-efficient ZF precoding scheme for multi-user indoor visible light communication systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Deng, Lijun; Kang, Bochao
2017-02-01
In this study, we propose a power-efficient ZF precoding scheme for visible light communication (VLC) downlink multi-user multiple-input-single-output (MU-MISO) systems, which combines zero-forcing (ZF) with the characteristics of VLC systems. The main idea of this scheme is that the channel matrix used for the pseudoinverse comes from the set of optical Access Points (APs) shared by more than one user, instead of the set of all involved serving APs that existing ZF precoding schemes often use. By doing this, the waste of power caused by transmitting one user's data through APs that do not serve that user can be avoided. In addition, the channel matrix that needs to be pseudoinverted becomes smaller, which helps to reduce the computational complexity. Simulation results in two scenarios show that the proposed ZF precoding scheme has higher power efficiency, better bit error rate (BER) performance, and lower computational complexity compared with traditional ZF precoding schemes.
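The standard zero-forcing baseline that the scheme refines can be shown in a few lines: precoding with the right pseudoinverse of the channel matrix nulls inter-user interference. This is only a sketch of plain ZF with an illustrative random channel; the paper's power-efficient variant would instead build the pseudoinverse only from the APs shared by more than one user.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_aps = 3, 6
H = rng.random((n_users, n_aps))  # toy VLC channel gains from 6 APs to 3 users

# Classical ZF: precode with the right pseudoinverse of H so that
# H @ W = I and inter-user interference is nulled.
W = np.linalg.pinv(H)

symbols = np.array([1.0, -1.0, 1.0])  # one data symbol per user
tx = W @ symbols                      # signals driven onto the APs
rx = H @ tx                           # what each user's photodiode sees
print(np.round(rx, 6))                # ~[ 1. -1.  1.]: interference-free
```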
Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
2001-01-01
Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z^[I] → ∞, and retain high stability efficiency in the absence of stiffness, z^[I] → 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
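The additive splitting at the heart of ARK methods can be illustrated with a first-order implicit-explicit (IMEX) step on a scalar test problem: the stiff term is treated implicitly and the nonstiff term explicitly. This sketch is not the paper's third- to fifth-order ESDIRK/ERK pairs; the coefficients and test problem are illustrative.

```python
def imex_euler(u0, dt, steps, a, c):
    # First-order IMEX step for u' = a*u + c*u, with a*u stiff (implicit)
    # and c*u nonstiff (explicit); ARK2 methods generalize this additive
    # splitting to 3rd- to 5th-order ESDIRK/ERK pairs.
    u = u0
    for _ in range(steps):
        # u_new = u + dt*c*u + dt*a*u_new  =>  solve for u_new:
        u = (u + dt * c * u) / (1.0 - dt * a)
    return u

# Stable even though the stiff eigenvalue a = -1000 would blow up a fully
# explicit method at this step size.
print(imex_euler(1.0, 0.1, 50, a=-1000.0, c=0.5))
```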
NASA Technical Reports Server (NTRS)
Rueda, A.
1985-01-01
That particles may be accelerated by vacuum effects in quantum field theory has been repeatedly proposed in the last few years. A natural upshot of this is a mechanism for the acceleration of cosmic ray (CR) primaries. A mechanism for acceleration by the zero-point field (ZPF), when the ZPF is taken in a realistic sense (in opposition to a virtual field), was considered. Originally the idea was developed within a semiclassical context. The classical Einstein-Hopf model (EHM) was used to show that free, isolated, electromagnetically interacting particles perform a random walk in phase space, and more importantly in momentum space, when subjected to the perennial action of the so-called classical electromagnetic ZPF.
Two classes of capillary optical fibers: refractive and photonic
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2008-11-01
This paper is a digest tutorial on some properties of capillary optical fibers (COF). Two basic types of capillary optical fibers are clearly distinguished, with the classification based on the propagation mechanism of the optical wave. The refractive, singlemode COF guides a dark hollow beam of light (DHB) with zero intensity on the fiber axis. The photonic, singlemode COF carries a nearly perfect axial Gaussian beam with maximum intensity on the fiber axis. The subject of the paper is these two basic kinds of capillary optical fibers, with purely refractive and purely photonic mechanisms of guided wave transmission. In a real capillary the wave may be transmitted by a mixed mechanism, refractive and photonic, with strong interaction of photonic and refractive guided wave modes. Refractive capillary optical fibers are used widely in photonic instrumentation applications, while photonic capillary optical fibers are considered for trunk optical communications. Replacing the classical, single mode, dispersion shifted, 1550 nm optimized optical fibers used for communications with photonic capillaries would potentially cause the next serious revolution in optical communications. Predictions say that such a revolution may happen within this decade, but this dream has not been fulfilled yet. The paper compares guided modes in both kinds of optical fiber capillaries, refractive and photonic. The differences are emphasized, indicating prospective application areas of these fibers.
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified using small-angle attitude errors. However, this simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide a precise attitude, especially for low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) that introduces the error of the DCM; the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and that the PF with the NNEM can effectively restrain the errors of the system states, especially the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
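The role of the zero-velocity reference as a measurement innovation can be sketched with a toy one-dimensional particle filter, assuming NumPy: in the quasi-stationary condition the true velocity is known to be zero, and that pseudo-measurement reweights the particles. The noise levels are illustrative; the full NNEM with DCM error states is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, steps = 500, 100
v = rng.normal(0.0, 0.5, n_particles)   # particle velocities (initial uncertainty)

for _ in range(steps):
    v += rng.normal(0.0, 0.05, n_particles)          # propagate with inertial sensor noise
    # Zero-velocity update: the quasi-stationary condition says the true
    # velocity is 0, observed with pseudo-measurement noise sigma_z.
    sigma_z, z = 0.1, 0.0
    w = np.exp(-0.5 * ((z - v) / sigma_z) ** 2)      # measurement likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)  # resample by weight
    v = v[idx]

print(f"velocity estimate: {v.mean():.4f} +/- {v.std():.4f}")  # pulled toward zero
```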
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is calibrated by its error, so error measurement is an important problem in the production and application of inductosyns. At present, the error of an inductosyn is mainly obtained by manual measurement, which has unavoidable disadvantages such as high labour intensity for the operator, easily introduced errors and poor repeatability. To solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. Measuring and calculating errors caused by human factors are overcome by this method, making the measuring process quicker, more exact and more reliable. Experiments prove that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak).
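The measurement principle reduces to subtracting the optical dividing head's reference angle from the inductosyn reading while the shaft rotates, and reporting the resulting error curve. A toy version with a synthetic one-arc-second zero error (all values illustrative):

```python
import numpy as np

# Reference angles from the optical dividing head, and inductosyn readings
# carrying a synthetic 1 arc-second periodic zero error.
theta_ref = np.linspace(0.0, 360.0, 3600, endpoint=False)             # degrees
theta_ind = theta_ref + (1.0 / 3600.0) * np.sin(np.radians(2 * theta_ref))

error_arcsec = (theta_ind - theta_ref) * 3600.0  # zero-error curve in arc-seconds
print(f"peak-to-peak error: {np.ptp(error_arcsec):.2f} arc-sec")
```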
Wuchty, Stefan; Uzzi, Brian
2011-01-01
Digital communication data has created opportunities to advance the knowledge of human dynamics in many areas, including national security, behavioral health, and consumerism. While digital data uniquely captures the totality of a person's communication, past research consistently shows that a subset of contacts makes up a person's "social network" of unique resource providers. To address this gap, we analyzed the correspondence between self-reported social network data and email communication data with the objective of identifying the dynamics in e-communication that correlate with a person's perception of a significant network tie. First, we examined the predictive utility of three popular methods to derive social network data from email data based on volume and reciprocity of bilateral email exchanges. Second, we observed differences in the response dynamics along self-reported ties, allowing us to introduce and test a new method that incorporates time-resolved exchange data. Using a range of robustness checks for measurement and misreporting errors in self-report and email data, we find that the methods have similar predictive utility. Although e-communication has lowered communication costs with large numbers of persons, and potentially extended our number of, and reach to contacts, our case results suggest that underlying behavioral patterns indicative of friendship or professional contacts continue to operate in a classical fashion in email interactions.
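A minimal version of the volume-and-reciprocity heuristics for deriving ties from email logs might look as follows; the threshold k is illustrative, not one of the paper's calibrated cutoffs, and the time-resolved method the authors introduce is not reproduced.

```python
from collections import Counter

# (sender, recipient) email events; threshold k is illustrative.
emails = [("a", "b"), ("b", "a"), ("a", "b"), ("a", "c"), ("c", "a"), ("b", "c")]
counts = Counter(emails)

def is_tie(i, j, k=1):
    # Reciprocity rule: a tie exists only if both directions carry
    # at least k messages.
    return counts[(i, j)] >= k and counts[(j, i)] >= k

pairs = {tuple(sorted(p)) for p in counts}
print(sorted(p for p in pairs if is_tie(*p)))  # [('a', 'b'), ('a', 'c')]
```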
Differences among Job Positions Related to Communication Errors at Construction Sites
NASA Astrophysics Data System (ADS)
Takahashi, Akiko; Ishida, Toshiro
In a previous study, we classified the communication errors at construction sites into the faulty intention and message pattern, the inadequate channel pattern, and the faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
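A small example of the kind of comparison the paper performs, assuming a recent statsmodels with `ZeroInflatedPoisson` in `statsmodels.discrete.count_model`: simulate counts with structural zeros, then compare the AIC of a plain Poisson fit against a zero-inflated Poisson fit. The simulation parameters are illustrative, and the paper's full battery (hurdle and negative binomial variants, Vuong tests) is not shown.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
X = sm.add_constant(x)

# Simulated OTU-like counts: structural zeros with probability 0.4,
# otherwise Poisson counts whose log-mean depends on x.
lam = np.exp(0.5 + 0.8 * x)
y = rng.poisson(lam) * (rng.random(n) > 0.4)

poisson_fit = sm.Poisson(y, X).fit(disp=False)
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)

# The zero-inflated model should win on AIC when excess zeros are present.
print(f"Poisson AIC: {poisson_fit.aic:.1f}   ZIP AIC: {zip_fit.aic:.1f}")
```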
Mancini, John S; Bowman, Joel M
2013-03-28
We report a global, full-dimensional, ab initio potential energy surface describing the HCl-H2O dimer. The potential is constructed from a permutationally invariant fit, using Morse-like variables, to over 44,000 CCSD(T)-F12b/aug-cc-pVTZ energies. The surface describes the complex and dissociated monomers with a total RMS fitting error of 24 cm^-1. The normal modes of the minima, low-energy saddle point and separated monomers, the double-minimum isomerization pathway and the electronic dissociation energy are accurately described by the surface. Rigorous quantum mechanical diffusion Monte Carlo (DMC) calculations are performed to determine the zero-point energy and wavefunction of the complex and the separated fragments. The calculated zero-point energies, together with a D_e value calculated from CCSD(T) with a complete basis set extrapolation, give a D_0 value of 1348 ± 3 cm^-1, in good agreement with the recent experimentally reported value of 1334 ± 10 cm^-1 [B. E. Casterline, A. K. Mollner, L. C. Ch'ng, and H. Reisler, J. Phys. Chem. A 114, 9774 (2010)]. Examination of the DMC wavefunction allows for confident characterization of the zero-point geometry to be dominant at the C_2v double-well saddle point and not the C_s global minimum. Additional support for the delocalized zero-point geometry is given by numerical solutions to the 1D Schrödinger equation along the imaginary-frequency out-of-plane bending mode, where the zero-point energy is calculated to be 52 cm^-1 above the isomerization barrier. The D_0 of the fully deuterated isotopologue is calculated to be 1476 ± 3 cm^-1, which we hope will stand as a benchmark for future experimental work.
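The final check described here, solving a 1-D Schrödinger equation along a double-well coordinate and comparing the zero-point level to the barrier, can be sketched with a finite-difference Hamiltonian in dimensionless units (hbar = m = 1). The potential below is a toy double well, not the HCl-H2O bending-mode potential.

```python
import numpy as np

# Finite-difference ground state of a toy 1-D double well (hbar = m = 1).
n, L = 1000, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.05 * (x**2 - 2.0) ** 2      # wells at x = +/-sqrt(2), barrier V(0) = 0.2

# H = -(1/2) d^2/dx^2 + V discretized on the grid.
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))

E0 = np.linalg.eigvalsh(H)[0]
# If E0 exceeds the barrier, the ground state is delocalized over both
# wells, the situation found for the HCl-H2O bending mode.
print(f"zero-point energy {E0:.3f} vs barrier {V[n // 2]:.3f}")
```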
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
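For a feel of the Reed-Solomon error correction studied in area (1), here is an illustration using the third-party `reedsolo` package (an assumption; it is unrelated to the project's dual-basis decoder): ten parity bytes let the code repair up to five corrupted bytes.

```python
# Requires the third-party `reedsolo` package (pip install reedsolo); this
# illustrates plain RS error correction, not the project's dual-basis decoder.
from reedsolo import RSCodec

rs = RSCodec(10)                       # 10 parity bytes -> corrects up to 5 byte errors
codeword = rs.encode(b"satellite telemetry frame")

corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF                   # channel flips two bytes
corrupted[7] ^= 0xFF

# Recent reedsolo versions return (message, full codeword, error positions).
decoded, _, _ = rs.decode(bytes(corrupted))
print(decoded)                         # bytearray(b'satellite telemetry frame')
```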
Free-space quantum key distribution by rotation-invariant twisted photons.
Vallone, Giuseppe; D'Ambrosio, Vincenzo; Sponselli, Anna; Slussarenko, Sergei; Marrucci, Lorenzo; Sciarrino, Fabio; Villoresi, Paolo
2014-08-08
"Twisted photons" are photons carrying a well-defined nonzero value of orbital angular momentum (OAM). The associated optical wave exhibits a helical shape of the wavefront (hence the name) and an optical vortex at the beam axis. The OAM of light is attracting a growing interest for its potential in photonic applications ranging from particle manipulation, microscopy, and nanotechnologies to fundamental tests of quantum mechanics, classical data multiplexing, and quantum communication. Hitherto, however, all results obtained with optical OAM were limited to laboratory scale. Here, we report the experimental demonstration of a link for free-space quantum communication with OAM operating over a distance of 210 m. Our method exploits OAM in combination with optical polarization to encode the information in rotation-invariant photonic states, so as to guarantee full independence of the communication from the local reference frames of the transmitting and receiving units. In particular, we implement quantum key distribution, a protocol exploiting the features of quantum mechanics to guarantee unconditional security in cryptographic communication, demonstrating error-rate performances that are fully compatible with real-world application requirements. Our results extend previous achievements of OAM-based quantum communication by over 2 orders of magnitude in the link scale, providing an important step forward in achieving the vision of a worldwide quantum network.
Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.
2015-01-01
In this paper, we generalize a secured direct communication process between N users with partial and full cooperation of a quantum server. N − 1 disjoint users u_1, u_2, …, u_(N-1) can transmit a secret message of classical bits to a remote user u_N by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by EPR entangled pairs and CNOT gates. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process means that N − 1 users can transmit a secret message directly to a remote user u_N through a quantum channel. Furthermore, with the full cooperation process, N − 1 users and a remote user u_N can communicate without an established quantum channel among them. The security analysis of the authentication and communication processes against many types of attacks proved that an attacker cannot gain any information by intercepting either process. Hence, the security of the transmitted message among the N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement. PMID:26577473
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
"Simulated molecular evolution" or computer-generated artifacts?
Darius, F; Rojas, R
1994-11-01
1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place, which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. Classical statistical methods are, for this kind of problem with so few data points, clearly superior to the neural networks used as a "black box" by the authors, which in the way they are structured provide a model with an error margin as large as the numbers being computed. 7. Finally, even if someone provided us with a function that perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would still not be better than random selection. Since a perfect fit would only produce exactly ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any kind of directional information for new iterations; we would just skip from one point to another in a typical random-walk manner.
Capacities of quantum amplifier channels
NASA Astrophysics Data System (ADS)
Qi, Haoyu; Wilde, Mark M.
2017-01-01
Quantum amplifier channels are at the core of several physical processes. Not only do they model the optical process of spontaneous parametric down-conversion, but the transformation corresponding to an amplifier channel also describes the physics of the dynamical Casimir effect in superconducting circuits, the Unruh effect, and Hawking radiation. Here we study the communication capabilities of quantum amplifier channels. Invoking a recently established minimum output-entropy theorem for single-mode phase-insensitive Gaussian channels, we determine capacities of quantum-limited amplifier channels in three different scenarios. First, we establish the capacities of quantum-limited amplifier channels for one of the most general communication tasks, characterized by the trade-off between classical communication, quantum communication, and entanglement generation or consumption. Second, we establish capacities of quantum-limited amplifier channels for the trade-off between public classical communication, private classical communication, and secret key generation. Third, we determine the capacity region for a broadcast channel induced by the quantum-limited amplifier channel, and we also show that a fully quantum strategy outperforms those achieved by classical coherent-detection strategies. In all three scenarios, we find that the capacities significantly outperform communication rates achieved with a naive time-sharing strategy.
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy with considering the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of El Niño-Southern Oscillation (ENSO) through using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results through assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the missed physical processes of the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step by the developed stochastic model-error model during the 12-month forecasting process, and add the zero-mean perturbations into the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differentiated by whether they consider the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skills during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
Agogo, George O.
2017-01-01
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as 24-hour recall (24HR). The 24HR intakes are used as response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity posing serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. Similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as good as the standard method. PMID:27704599
Unifying distance-based goodness-of-fit indicators for hydrologic model assessment
NASA Astrophysics Data System (ADS)
Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim
2014-05-01
The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model unifies all recent distance-based goodness-of-fit indicators and reveals that the mean square error (MSE) and the mean absolute error (MAE), which are widely used goodness-of-fit indicators, imply the statistical assumptions that the model errors follow a Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g. the sensitivity to high flow of goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors in the assumed distribution of these indicators. To assess the effect of the BC-GED model parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics baseflow badly, whereas calibration with the class β ≤ 1 mimics baseflow very well. This is because, first, the larger the value of β, the greater the emphasis put on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, in which case maximum likelihood estimation forces the model errors to approach zero as closely as possible. The BC-GED method, which estimates the parameters of the BC-GED model (λ and β) as well as the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also fulfill the statistical assumptions best. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the parameters of the BC-GED model, because the model validation of the MAE is best.
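A minimal sketch of the BC-GED criterion, assuming NumPy and SciPy: Box-Cox transform observed and simulated flows, then score the residuals with the log-likelihood of a zero-mean GED. Setting β = 2 recovers a Gaussian (MSE-like) criterion and β = 1 a Laplace (MAE-like) one; λ, β and α below are illustrative values, not calibrated ones, and the Box-Cox Jacobian term is omitted.

```python
import numpy as np
from scipy.special import gammaln

def boxcox(y, lam):
    # Box-Cox transform; removes heteroscedasticity of flow errors.
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def bc_ged_loglik(obs, sim, lam=0.3, beta=1.0, alpha=1.0):
    # Log-likelihood of zero-mean GED residuals after the BC transform:
    # beta = 2 -> Gaussian (MSE-like), beta = 1 -> Laplace (MAE-like).
    e = boxcox(obs, lam) - boxcox(sim, lam)
    logc = np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta)
    return e.size * logc - np.sum((np.abs(e) / alpha) ** beta)

obs = np.array([1.2, 3.4, 0.8, 5.1, 2.2])   # observed flows (illustrative)
sim = np.array([1.0, 3.9, 0.7, 4.6, 2.5])   # simulated flows
print(f"BC-GED log-likelihood: {bc_ged_loglik(obs, sim):.3f}")
```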
Kish, Laszlo B; Abbott, Derek; Granqvist, Claes G
2013-01-01
Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law-Johnson-noise (KLJN) classical physical cryptographic exchange method, in an effort to disprove the security of the KLJN scheme. They attempted to demonstrate this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR's scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. All our analyses are based on a technically unlimited Eve with infinitely accurate and fast measurements, limited only by the laws of physics and statistics. In non-ideal situations and under active (invasive) attacks, the uncertainty relation between measurement duration and statistical errors makes it impossible for Eve to extract the key regardless of the accuracy or speed of her measurements. To show that thermodynamics and noise are essential for security, we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not work against the KLJN scheme, which employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations describing zero security do not apply to the KLJN scheme. Finally, we give mathematical security proofs for each BR attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Secure quantum communication using classical correlated channel
NASA Astrophysics Data System (ADS)
Costa, D.; de Almeida, N. G.; Villas-Boas, C. J.
2016-10-01
We propose a secure protocol to send quantum information from one part to another without a quantum channel. In our protocol, which resembles quantum teleportation, a sender (Alice) and a receiver (Bob) share classical correlated states instead of EPR ones, with Alice performing measurements in two different bases and then communicating her results to Bob through a classical channel. Our secure quantum communication protocol requires the same amount of classical bits as the standard quantum teleportation protocol. In our scheme, as in the usual quantum teleportation protocol, once the classical channel is established in a secure way, a spy (Eve) will never be able to recover the information of the unknown quantum state, even if she is aware of Alice's measurement results. Security, advantages, and limitations of our protocol are discussed and compared with the standard quantum teleportation protocol.
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
A toolkit for measurement error correction, with a focus on nutritional epidemiology
Keogh, Ruth H; White, Ian R
2014-01-01
Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations, it is not feasible to observe the true exposure, but there may be available one or more repeated exposure measurements, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
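The regression-calibration recipe for classical error with repeated measurements is short enough to sketch in full, assuming NumPy: estimate the error variance from replicate differences, shrink the mean exposure toward its grand mean by the attenuation factor, and regress the outcome on the calibrated exposure. The true effect size and variances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
T = rng.normal(0.0, 1.0, n)              # true long-term exposure (unobserved)
W1 = T + rng.normal(0.0, 0.8, n)         # two replicate measurements with
W2 = T + rng.normal(0.0, 0.8, n)         # independent classical error
y = 0.5 * T + rng.normal(0.0, 1.0, n)    # outcome; true beta = 0.5

Wbar = (W1 + W2) / 2.0
sigma2_u = np.var(W1 - W2, ddof=1) / 2.0          # error variance from replicates
lam = (np.var(Wbar, ddof=1) - sigma2_u / 2.0) / np.var(Wbar, ddof=1)
X = Wbar.mean() + lam * (Wbar - Wbar.mean())      # calibrated exposure E[T | Wbar]

beta_naive = np.polyfit(Wbar, y, 1)[0]            # attenuated
beta_cal = np.polyfit(X, y, 1)[0]                 # corrected
print(f"naive beta: {beta_naive:.3f}  calibrated: {beta_cal:.3f}  (true 0.5)")
```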
The Process and Effects of Mass Communication. Revised Edition.
ERIC Educational Resources Information Center
Schramm, Wilbur, Ed.; Roberts, Donald F., Ed.
Composed of a mixture of old classics, new classics, reports on state of the art in important areas, and speculations about the future, this second edition of the reader in communication research provides an introduction to questions about how communication works and what it does. Papers by prominent researchers and writers in the field comprise…
Deurenberg, P; Andreoli, A; de Lorenzo, A
1996-01-01
Total body water and extracellular water were measured by deuterium oxide and bromide dilution respectively in 23 healthy males and 25 healthy females. In addition, total body impedance was measured at 17 frequencies, ranging from 1 kHz to 1350 kHz. Modelling programs were used to extrapolate impedance values to frequency zero (extracellular resistance) and frequency infinity (total body water resistance). Impedance indexes (height^2/Z_f) were computed at all 17 frequencies. The estimation errors of extracellular resistance and total body water resistance were 1% and 3%, respectively. Impedance and impedance index at low frequency were correlated with extracellular water, independent of the amount of total body water. Total body water showed the greatest correlation with impedance and impedance index at high frequencies. Extrapolated impedance values did not show a higher correlation compared to measured values. Prediction formulas from the literature applied to fixed frequencies showed the best mean and individual predictions for both extracellular water and total body water. It is concluded that, at least in healthy individuals with normal body water distribution, modelling impedance data has no advantage over impedance values measured at fixed frequencies, probably due to estimation errors in the modelled data.
On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping
Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A.; Wang, Yi; Spincemaille, Pascal
2016-01-01
Purpose Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. Materials and Methods High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Results Both the accuracy and apparent spatial resolution of the pre-zero padded QSM was higher than that of mid-zero padded QSM (p < 0.001; p < 0.001), which was higher than that of post-zero padded QSM (p < 0.001; p < 0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p = 0.004; p < 0.001). Conclusion Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. PMID:27587225
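The underlying operation, zero padding the centered k-space before inverting the Fourier transform, and the reason order matters around a nonlinear step, can both be sketched with NumPy FFTs; the squaring below is a stand-in for the nonlinear QSM steps, not the actual field-to-susceptibility inversion.

```python
import numpy as np

def zeropad_kspace(img, factor=2):
    # Sinc-interpolate an image by zero padding its centered k-space.
    k = np.fft.fftshift(np.fft.fft2(img))
    n = img.shape[0]
    p = (factor * n - n) // 2
    kpad = np.pad(k, p)                  # zeros around the centered spectrum
    return np.fft.ifft2(np.fft.ifftshift(kpad)).real * factor**2

rng = np.random.default_rng(5)
img = rng.random((32, 32))
nonlin = lambda x: x**2                  # stand-in for a nonlinear recon step

pre = nonlin(zeropad_kspace(img))        # zero pad, then nonlinear op
post = zeropad_kspace(nonlin(img))       # nonlinear op, then zero pad
print(f"max |pre - post| = {np.abs(pre - post).max():.4f}")  # nonzero: order matters
```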
Observable signatures of a classical transition
NASA Astrophysics Data System (ADS)
Johnson, Matthew C.; Lin, Wei
2016-03-01
Eternal inflation arising from a potential landscape predicts that our universe is one realization of many possible cosmological histories. One way to access different cosmological histories is via the nucleation of bubble universes from a metastable false vacuum. Another way to sample different cosmological histories is via classical transitions, the creation of pocket universes through the collision between bubbles. Using relativistic numerical simulations, we examine the possibility of observationally determining if our observable universe resulted from a classical transition. We find that classical transitions produce spatially infinite, approximately open Friedman-Robertson-Walker universes. The leading set of observables in the aftermath of a classical transition are negative spatial curvature and a contribution to the Cosmic Microwave Background temperature quadrupole. The level of curvature and magnitude of the quadrupole are dependent on the position of the observer, and we determine the possible range of observables for two classes of single-scalar field models. For the first class, where the inflationary phase has a lower energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generally falls to zero with distance from the collision while the spatial curvature grows to a constant. For the second class, where the inflationary phase has a higher energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generically falls to zero with distance from the collision while the spatial curvature grows without bound. We find that the magnitude of the quadrupole and curvature grow with increasing centre of mass energy of the collision, and explore variations of the parameters in the scalar field lagrangian.
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. These new findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
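For readers who want to experiment, a minimal sketch of one naming-game interaction with a learning-error rate is given below. The network, the `error_rate` value, and the word bookkeeping are illustrative assumptions, not the NGLE model's exact rules.

```python
import random
from itertools import count

import networkx as nx

_new_word = count()  # generator of fresh word labels

def naming_game_step(G, lexicon, error_rate):
    """One pair-wise interaction with learning errors (sketch).

    lexicon: dict mapping node -> set of word labels. With probability
    error_rate the hearer stores a distorted (brand-new) word instead
    of the spoken one.
    """
    speaker = random.choice(list(G.nodes))
    hearer = random.choice(list(G[speaker]))
    if not lexicon[speaker]:                       # invent a name if empty
        lexicon[speaker].add(next(_new_word))
    word = random.choice(tuple(lexicon[speaker]))
    heard = next(_new_word) if random.random() < error_rate else word
    if heard in lexicon[hearer]:                   # success: both collapse
        lexicon[speaker] = {heard}
        lexicon[hearer] = {heard}
    else:                                          # failure: hearer learns it
        lexicon[hearer].add(heard)

# Usage: run on a small-world network and inspect lexicon sizes.
G = nx.watts_strogatz_graph(200, 6, 0.1)
lexicon = {v: set() for v in G}
for _ in range(50_000):
    naming_game_step(G, lexicon, error_rate=0.05)
```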
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
Quantum illumination for enhanced detection of Rayleigh-fading targets
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver—which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase—fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801]—whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation—outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(M κ̄ N_S/N_B) when M κ̄ N_S/N_B ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, N_S ≪ 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.
Amplification, Decoherence, and the Acquisition of Information by Spin Environments
Zwolak, Michael; Riedel, C. Jess; Zurek, Wojciech H.
2016-01-01
Quantum Darwinism recognizes the role of the environment as a communication channel: Decoherence can selectively amplify information about the pointer states of a system of interest (preventing access to complementary information about their superpositions) and can make records of this information accessible to many observers. This redundancy explains the emergence of objective, classical reality in our quantum Universe. Here, we demonstrate that the amplification of information in realistic spin environments can be quantified by the quantum Chernoff information, which characterizes the distinguishability of partial records in individual environment subsystems. We show that, except for a set of initial states of measure zero, the environment always acquires redundant information. Moreover, the Chernoff information captures the rich behavior of amplification in both finite and infinite spin environments, from quadratic growth of the redundancy to oscillatory behavior. These results will considerably simplify experimental testing of quantum Darwinism, e.g., using nitrogen vacancies in diamond. PMID:27193389
Crossover Patterning by the Beam-Film Model: Analysis and Implications
Zhang, Liangran; Liang, Zhangyi; Hutchinson, John; Kleckner, Nancy
2014-01-01
Crossing-over is a central feature of meiosis. Meiotic crossover (CO) sites are spatially patterned along chromosomes. CO-designation at one position disfavors subsequent CO-designation(s) nearby, as described by the classical phenomenon of CO interference. If multiple designations occur, COs tend to be evenly spaced. We have previously proposed a mechanical model by which CO patterning could occur. The central feature of a mechanical mechanism is that communication along the chromosomes, as required for CO interference, can occur by redistribution of mechanical stress. Here we further explore the nature of the beam-film model, its ability to quantitatively explain CO patterns in detail in several organisms, and its implications for three important patterning-related phenomena: CO homeostasis, the fact that the level of zero-CO bivalents can be low (the “obligatory CO”), and the occurrence of non-interfering COs. Relationships to other models are discussed. PMID:24497834
Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan
Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to be the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
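The variational loop itself is easy to sketch classically. The toy single-qubit Hamiltonian and ansatz below are assumptions for illustration, not the paper's channel model: a classical optimizer tunes a state-preparation angle to minimize the measured energy.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal variational-eigensolver sketch: find the ground-state energy
# of a toy Hamiltonian H by optimizing |psi(theta)> = Ry(theta)|0>.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X            # toy Hamiltonian (assumed for illustration)

def energy(theta):
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])
    return float(psi @ H @ psi)   # <psi|H|psi>, real for this real ansatz

result = minimize(energy, x0=[0.1], method="COBYLA")
print(result.fun, np.linalg.eigvalsh(H)[0])  # variational vs exact minimum
```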
Quantum money with classical verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavinsky, Dmitry
We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification; nevertheless, none of the earlier quantum money constructions is known to possess it.
Broadband Epsilon-Near-Zero (ENZ) and Mu-Near-Zero (MNZ) Active Metamaterial
2011-08-01
Krois, Ivan; Bonic, Aleksandar; Kiricenko; Ugarte Munoz, Eduardo. University of Zagreb, Faculty of Electrical Engineering and Computing, Department of Wireless Communications, Unska 3, Zagreb, Croatia, HR 10 000. EOARD Grant 10-3030. Final Report for 24 August 2010 to 24 ...
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-05-01
This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the sensitivity to faults are improved by introducing l_1 and H_∞ performance indices. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of slack matrix variables. This technique allows a more general Lyapunov function to be considered. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
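The FD decision rule described above reduces to a containment test on the residual interval. A minimal sketch, with hypothetical residual bounds as inputs:

```python
import numpy as np

def fd_decision(r_lower, r_upper):
    """Fault-detection rule (sketch): a fault is declared when the zero
    value no longer lies inside the residual interval produced by the
    interval observer."""
    return not (np.all(r_lower <= 0.0) and np.all(0.0 <= r_upper))
```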
Chen, Nan-Wei; Shi, Jin-Wei; Tsai, Hsuan-Ju; Wun, Jhih-Min; Kuo, Fong-Ming; Hesler, Jeffery; Crowe, Thomas W; Bowers, John E
2012-09-10
A 25 Gbit/s error-free on-off-keying (OOK) wireless link between an ultra-high-speed W-band photonic transmitter-mixer (PTM) and a fast W-band envelope detector is demonstrated. At the transmission end, the high-speed PTM is developed with an active near-ballistic uni-traveling-carrier photodiode (NBUTC-PD) integrated with broadband front-end circuitry via the flip-chip bonding technique. Compared to our previous work, the wireless data rate is significantly increased through improvements in the bandwidth of the front-end circuitry together with a reduction of the intermediate-frequency (IF) driving voltage of the active NBUTC-PD. The demonstrated PTM has a record-wide IF modulation bandwidth (DC-25 GHz) and optical-to-electrical fractional bandwidth (68-128 GHz, ~67%). At the receiver end, demodulation is realized with an ultra-fast W-band envelope detector built with a zero-bias Schottky barrier diode with a record-wide video bandwidth (37 GHz) and excellent sensitivity. The demonstrated PTM is expected to find applications in multi-gigabit short-range wireless communication.
Coherent-state constellations and polar codes for thermal Gaussian channels
NASA Astrophysics Data System (ADS)
Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.
2017-06-01
Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.
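The polar-code ingredient can be sketched independently of the optics. Below is a standard polar encoding butterfly (bit-reversal permutation omitted), shown for orientation; it is not the paper's full coherent-state construction.

```python
import numpy as np

def polar_encode(u):
    """Polar encoding sketch: apply the n-fold Kronecker power of the
    kernel [[1, 0], [1, 1]] over GF(2) to an input block u of length
    2**n."""
    x = np.array(u, dtype=int)
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

print(polar_encode([1, 0, 1, 1]))   # -> [1 1 0 1]
```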
Quantum channels and memory effects
NASA Astrophysics Data System (ADS)
Caruso, Filippo; Giovannetti, Vittorio; Lupo, Cosmo; Mancini, Stefano
2014-10-01
Any physical process can be represented as a quantum channel mapping an initial state to a final state. Hence it can be characterized from the point of view of communication theory, i.e., in terms of its ability to transfer information. Quantum information provides a theoretical framework and the proper mathematical tools to accomplish this. In this context the notions of codes and communication capacities have been introduced by generalizing them from the classical Shannon theory of information transmission and error correction. The underlying assumption of this approach is to consider the channel not as acting on a single system, but on sequences of systems which, when properly initialized, allow one to overcome the noisy effects induced by the physical process under consideration. While most of the work produced so far has been focused on the case in which a given channel transformation acts identically and independently on the various elements of the sequence (the memoryless configuration, in jargon), correlated error models appear to be a more realistic way to approach the problem. A slightly different, yet conceptually related, notion of correlated errors applies to a single quantum system which evolves continuously in time under the influence of an external disturbance that acts on it in a non-Markovian fashion. This leads to the study of memory effects in quantum channels: a fertile ground where interesting novel phenomena emerge at the intersection of quantum information theory and other branches of physics. We survey the field of quantum channel theory while also embracing these specific and complex settings.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown that generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
The Photon Shell Game and the Quantum von Neumann Architecture with Superconducting Circuits
NASA Astrophysics Data System (ADS)
Mariantoni, Matteo
2012-02-01
Superconducting quantum circuits have made significant advances over the past decade, allowing more complex and integrated circuits that perform with good fidelity. We have recently implemented a machine comprising seven quantum channels, with three superconducting resonators, two phase qubits, and two zeroing registers. I will explain the design and operation of this machine, first showing how a single microwave photon |1> can be prepared in one resonator and coherently transferred between the three resonators. I will also show how more exotic states such as double photon states |2> and superposition states |0> + |1> can be shuffled among the resonators as well [1]. I will then demonstrate how this machine can be used as the quantum-mechanical analog of the von Neumann computer architecture, which for a classical computer comprises a central processing unit and a memory holding both instructions and data. The quantum version comprises a quantum central processing unit (quCPU) that exchanges data with a quantum random-access memory (quRAM) integrated on one chip, with instructions stored on a classical computer. I will also present a proof-of-concept demonstration of a code that involves all seven quantum elements: (1) preparing an entangled state in the quCPU, (2) writing it to the quRAM, (3) preparing a second state in the quCPU, (4) zeroing it, and (5) reading out the first state stored in the quRAM [2]. Finally, I will demonstrate that the quantum von Neumann machine provides one unit cell of a two-dimensional qubit-resonator array that can be used for surface code quantum computing. This will allow the realization of a scalable, fault-tolerant quantum processor with the most forgiving error rates to date. [1] M. Mariantoni et al., Nature Physics 7, 287-293 (2011). [2] M. Mariantoni et al., Science 334, 61-65 (2011).
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
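The syndrome test described above can be illustrated with a toy binary code. The [7,4] Hamming parity-check matrix below is an illustrative stand-in, not the patent's chip-kill code: all-zero syndromes mean no detected error, and a nonzero syndrome locates a single-bit error.

```python
import numpy as np

def syndromes(H, received):
    """Compute binary syndromes s = H r (mod 2): the check described
    above, where all-zero syndromes indicate error-free data."""
    return (H @ received) % 2

# Toy [7,4] Hamming parity-check matrix (columns are binary 1..7).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

received = np.zeros(7, dtype=int)
received[4] = 1                      # single-bit error in position 5
print(syndromes(H, received))        # [1 0 1] -> binary 5, locating the error
```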
Patterson, Mark E; Pace, Heather A; Fincham, Jack E
2013-09-01
Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores compared with lower scores were 40% more likely to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were not any more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). Hospital work climates that encourage pharmacists to freely communicate about problems related to patient safety are conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.
Algebraic Riccati equations in zero-sum differential games
NASA Technical Reports Server (NTRS)
Johnson, T. L.; Chao, A.
1974-01-01
The procedure for finding the closed-loop Nash equilibrium solution of two-player zero-sum linear time-invariant differential games with quadratic performance criteria and classical information pattern may be reduced in most cases to the solution of an algebraic Riccati equation. Based on the results obtained by Willems, necessary and sufficient conditions for existence of solutions to these equations are derived, and explicit conditions for a scalar example are given.
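A sketch of the reduction in practice: the closed-loop saddle-point gains follow from an algebraic Riccati equation, which can be solved numerically from the stable invariant subspace of the associated Hamiltonian matrix. All matrices below are toy assumptions for illustration, with both players' weighting matrices set to the identity.

```python
import numpy as np
from scipy.linalg import schur

def game_are(A, S, Q):
    """Solve A^T P + P A + Q - P S P = 0 via the stable invariant
    subspace of the Hamiltonian matrix (standard construction)."""
    n = A.shape[0]
    H = np.block([[A, -S], [-Q, -A.T]])
    _, Z, _ = schur(H, output="real", sort="lhp")
    U1, U2 = Z[:n, :n], Z[n:, :n]
    return U2 @ np.linalg.inv(U1)

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B1 = np.array([[0.0], [1.0]])            # minimizing player
B2 = np.array([[0.0], [0.5]])            # maximizing player
Q = np.eye(2)
S = B1 @ B1.T - B2 @ B2.T                # R1 = R2 = I (toy choice)
P = game_are(A, S, Q)
K1 = -B1.T @ P    # minimizer's equilibrium feedback: u1 = K1 @ x
K2 = B2.T @ P     # maximizer's equilibrium feedback: u2 = K2 @ x
```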
Feedback attitude sliding mode regulation control of spacecraft using arm motion
NASA Astrophysics Data System (ADS)
Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu
2013-09-01
The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention in both engineering and academic fields. Most solutions to the manipulator's motion tracking problem achieve only asymptotic stabilization, so these controllers cannot realize precise attitude regulation in the presence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it will be restricted to this plane permanently even in the presence of external disturbances. Thus, precise attitude regulation can be achieved. Furthermore, taking the non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted in the end. The results show that the spacecraft's attitude can be regulated to the desired position by using the proposed algorithm, with a steady state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion, and improves the precision of spacecraft attitude regulation.
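The smoothing step described above, replacing the sign function by a saturation function inside a boundary layer, can be sketched as follows. The gains and boundary-layer width are illustrative assumptions, not the paper's design values.

```python
import numpy as np

def sat(s, boundary):
    """Saturation function used in place of sign() to suppress chattering."""
    return np.clip(s / boundary, -1.0, 1.0)

def sliding_mode_torque(e, e_dot, lam=2.0, k=5.0, boundary=0.05):
    """Sketch of a smoothed sliding-mode law.

    s = e_dot + lam * e defines the sliding surface; the control drives
    s into a boundary layer around zero instead of switching on sign(s).
    """
    s = e_dot + lam * e
    return -k * sat(s, boundary)
```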
Do semiclassical zero temperature black holes exist?
Anderson, P R; Hiscock, W A; Taylor, B E
2000-09-18
The semiclassical Einstein equations are solved to first order in ε = ℏ/M² for the case of a Reissner-Nordström black hole perturbed by the vacuum stress energy of quantized free fields. Massless and massive fields of spin 0, 1/2, and 1 are considered. We show that in all physically realistic cases, macroscopic zero temperature black hole solutions do not exist. Any static zero temperature semiclassical black hole solutions must then be microscopic and isolated in the space of solutions; they do not join smoothly onto the classical extreme Reissner-Nordström solution as ε → 0.
Cooling in reduced period optical lattices: Non-zero Raman detuning
NASA Astrophysics Data System (ADS)
Malinovsky, V. S.; Berman, P. R.
2006-08-01
In a previous paper [Phys. Rev. A 72 (2005) 033415], it was shown that sub-Doppler cooling occurs in a standing-wave Raman scheme (SWRS) that can lead to reduced period optical lattices. These calculations are extended to allow for non-zero detuning of the Raman transitions. New physical phenomena are encountered, including cooling to non-zero velocities, combinations of Sisyphus and "corkscrew" polarization cooling, and somewhat unusual origins of the friction force. The calculations are carried out in a semi-classical approximation and a dressed state picture is introduced to aid in the interpretation of the results.
Adaptive control: Myths and realities
NASA Technical Reports Server (NTRS)
Athans, M.; Valavani, L.
1984-01-01
It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law, and the need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeroes) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as the absence of nonminimum phase plant zeros, on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption, which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.
Indirect learning control for nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Ryu, Yeong Soon; Longman, Richard W.
1993-01-01
In a previous paper, learning control algorithms were developed based on adaptive control ideas for linear time variant systems. The learning control methods were shown to have certain advantages over their adaptive control counterparts, such as the ability to produce zero tracking error in time varying systems, and the ability to eliminate repetitive disturbances. In recent years, certain adaptive control algorithms have been developed for multi-body dynamic systems such as robots, with globally guaranteed convergence to zero tracking error for the nonlinear system equations. In this paper we study the relationship between such adaptive control methods designed for this specific class of nonlinear systems, and the learning control problem for such systems, seeking to converge to zero tracking error in following a specific command repeatedly, starting from the same initial conditions each time. The extension of these methods from the adaptive control problem to the learning control problem is seen to be trivial. The advantages and disadvantages of using learning control based on such adaptive control concepts for nonlinear systems, and the use of other currently available learning control algorithms, are discussed.
Quantum entanglement percolation
NASA Astrophysics Data System (ADS)
Siomau, Michael
2016-09-01
Quantum communication demands efficient distribution of quantum entanglement across a network of connected partners. The search for efficient strategies for the entanglement distribution may be based on percolation theory, which describes evolution of network connectivity with respect to some network parameters. In this framework, the probability to establish perfect entanglement between two remote partners decays exponentially with the distance between them before the percolation transition point, which unambiguously defines percolation properties of any classical network or lattice. Here we introduce quantum networks created with local operations and classical communication, which exhibit non-classical percolation transition points leading to striking communication advantages over those offered by the corresponding classical networks. We show, in particular, how to establish perfect entanglement between any two nodes in the simplest possible network—the 1D chain—using imperfectly entangled pairs of qubits.
Preventing medication errors in cancer chemotherapy.
Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M
1996-04-01
Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort.
Prediction of final error level in learning and repetitive control
NASA Astrophysics Data System (ADS)
Levoci, Peter A.
Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, to isolate fine pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC) which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real world plant and measurement noise and quantization noise (from analog to digital and digital to analog converters) in these control methods are acted on as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first order ILC, of higher order ILC including current cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering that is very important in creating practical designs. Also, time domain methods are developed for higher order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.
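A sketch of the kind of first-order ILC update under discussion, including the zero-phase filtering the text notes is vital in practical designs. The learning gain and filter cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ilc_update(u, e, gain=0.5):
    """First-order ILC law with zero-phase filtering (sketch).

    u, e: input and tracking-error signals from the last repetition.
    filtfilt applies the low-pass filter forward and backward, so the
    filtering adds no phase lag (zero-phase filtering).
    """
    b, a = butter(2, 0.2)          # cutoff is an illustrative assumption
    return filtfilt(b, a, u + gain * e)
```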
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear, and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed at improving and standardizing communication of standardized neuropsychological test scores. Research is needed to further evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
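A sketch of how such a descriptor mapping might look in code. The cut points below follow common conventions for standardized scores with mean 100 and SD 15 and are assumptions; the article defines the exact ranges.

```python
def q_simple(standard_score):
    """Map a standardized test score to a Q-Simple-style qualitative
    descriptor. Cut points are illustrative assumptions."""
    bands = [(130, "very superior"), (120, "superior"),
             (110, "high average"), (90, "average"),
             (80, "low average"), (70, "borderline")]
    for cut, label in bands:
        if standard_score >= cut:
            return label
    return "abnormal/impaired"

print(q_simple(85))   # -> "low average"
```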
Wang, Qiuying; Guo, Zheng; Sun, Zhiguo; Cui, Xufei; Liu, Kaiyue
2018-01-01
Pedestrian-positioning technology based on the foot-mounted micro inertial measurement unit (MIMU) plays an important role in the field of indoor navigation and has received extensive attention in recent years. However, the positioning accuracy of inertial-based pedestrian positioning degrades rapidly because of the relatively low measurement accuracy of the sensors. The zero-velocity update (ZUPT) is an error-correction method that exploits the fact that the foot is regularly stationary during ordinary gait in order to limit the growth of the position error. However, the traditional ZUPT performs poorly when pedestrians move faster, because the foot touchdown time becomes short, which decreases the positioning accuracy. Considering these problems, a forward and reverse calculation method based on adaptive zero-velocity interval adjustment for the foot-mounted MIMU location method is proposed in this paper. To address the inaccuracy of the zero-velocity interval detector during fast pedestrian movement, where the contact time of the foot on the ground is short, an adaptive zero-velocity interval detection algorithm based on fuzzy logic reasoning is presented. In addition, to improve the effectiveness of the ZUPT algorithm, forward and reverse multiple solutions are presented. Finally, after presenting the basic principles and derivation of this method, tests are completed using the MTi-G710 produced by the XSENS company. The experimental results verify the correctness and applicability of the proposed method. PMID:29883399
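A naive fixed-threshold stance-phase detector, the kind of baseline the paper's adaptive fuzzy-logic detector improves upon, can be sketched as follows. The thresholds are illustrative assumptions.

```python
import numpy as np

def zero_velocity_intervals(acc, gyro, acc_thresh=0.3, gyro_thresh=0.2):
    """Naive stance detector (sketch, not the paper's fuzzy-logic one):
    flag samples where the deviation of the acceleration magnitude from
    gravity and the gyro magnitude are both small.

    acc, gyro: (N, 3) arrays in m/s^2 and rad/s.
    """
    g = 9.81
    acc_mag = np.linalg.norm(acc, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_mag - g) < acc_thresh) & (gyro_mag < gyro_thresh)
```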
Common errors of drug administration in infants: causes and avoidance.
Anderson, B J; Ellis, J F
1999-01-01
Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.
NASA Technical Reports Server (NTRS)
Payne, M. H.
1973-01-01
A computer program is described for the calculation of the zeroes of the associated Legendre functions, P_n^m, and their derivatives, for the calculation of the extrema of P_n^m, and also the integral between pairs of successive zeroes. The program has been run for all n,m from (0,0) to (20,20) and selected cases beyond that for n up to 40. Up to (20,20), the program (written in double precision) retains nearly full accuracy, and indications are that up to (40,40) there is still sufficient precision (4-5 decimal digits for a 54-bit mantissa) for estimation of various bounds and errors involved in geopotential modelling, the purpose for which the program was written.
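A modern equivalent of the computation is straightforward with SciPy. This sketch brackets sign changes of P_n^m on a grid and refines them with Brent's method; it is not the original program.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lpmv

def legendre_zeros(n, m, grid=2000):
    """Interior zeros of the associated Legendre function P_n^m(x) on
    (-1, 1), found by bracketing sign changes and refining with Brent's
    method (sketch)."""
    x = np.linspace(-1 + 1e-9, 1 - 1e-9, grid)
    y = lpmv(m, n, x)
    roots = []
    for i in np.flatnonzero(np.sign(y[:-1]) * np.sign(y[1:]) < 0):
        roots.append(brentq(lambda t: lpmv(m, n, t), x[i], x[i + 1]))
    return np.array(roots)

print(legendre_zeros(5, 2))   # interior zeros of P_5^2
```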
Sampling command generator corrects for noise and dropouts in recorded data
NASA Technical Reports Server (NTRS)
Anderson, T. O.
1973-01-01
The generator measures the period between zero crossings of a reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to the nominal time predicted from the last accepted command. Unidirectional crossover points are used exclusively, so errors from analog nonsymmetry of the crossover detector are avoided.
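The acceptance logic can be sketched as follows; the function and its parameters are illustrative, not the original hardware design.

```python
def accept_crossings(crossings, period, tolerance):
    """Sketch of the correction logic: a zero crossing is accepted only
    if it falls within `tolerance` of a time predicted from the last
    accepted crossing; dropouts are bridged by allowing an integer
    number of periods to elapse."""
    accepted = [crossings[0]]
    for t in crossings[1:]:
        k = max(1, round((t - accepted[-1]) / period))
        predicted = accepted[-1] + k * period
        if abs(t - predicted) <= tolerance:
            accepted.append(t)
    return accepted
```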
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
NASA Astrophysics Data System (ADS)
N'Diaye, Mamadou; Choquet, Elodie; Carlotti, Alexis; Pueyo, Laurent; Egron, Sylvain; Leboulleux, Lucie; Levecq, Olivier; Perrin, Marshall D.; Wallace, J. Kent; Long, Chris; Lajoie, Rachel; Lajoie, Charles-Philippe; Eldorado Riggs, A. J.; Zimmerman, Neil T.; Groff, Tyler Dean; Kasdin, N. Jeremy; Vanderbei, Robert J.; Mawet, Dimitri; Macintosh, Bruce; Shaklan, Stuart; Soummer, Remi
2015-01-01
HiCAT is a high-contrast imaging testbed designed to provide complete solutions in wavefront sensing, control and starlight suppression with complex aperture telescopes. Primary mirror segmentation, central obstruction, and spiders in the pupil of an on-axis telescope introduce additional diffraction features in the point spread function, which make high-contrast imaging very challenging. The testbed alignment was completed in the summer of 2014, exceeding specifications with a total wavefront error of 12 nm rms over an 18 mm pupil. Two deformable mirrors are to be installed for wavefront control in the fall of 2014. In this communication, we report on the first testbed results using a classical Lyot coronagraph. We have developed novel coronagraph designs combining an Apodized Pupil Lyot Coronagraph (APLC) with shaped-pupil type optimizations. We present the results of these new APLC-type solutions with two-dimensional shaped-pupil apodizers for the HiCAT geometry. These solutions render the system quasi-insensitive to jitter and low-order aberrations, while improving the performance in terms of inner working angle, bandpass and contrast over a classical APLC.
NASA Astrophysics Data System (ADS)
N'Diaye, Mamadou; Mazoyer, Johan; Choquet, Élodie; Pueyo, Laurent; Perrin, Marshall D.; Egron, Sylvain; Leboulleux, Lucie; Levecq, Olivier; Carlotti, Alexis; Long, Chris A.; Lajoie, Rachel; Soummer, Rémi
2015-09-01
HiCAT is a high-contrast imaging testbed designed to provide complete solutions in wavefront sensing, control and starlight suppression with complex aperture telescopes. The pupil geometry of such observatories includes primary mirror segmentation, central obstruction, and spider vanes, which make the direct imaging of habitable worlds very challenging. The testbed alignment was completed in the summer of 2014, exceeding specifications with a total wavefront error of 12 nm rms over an 18 mm pupil. The installation of two deformable mirrors for wavefront control is to be completed in the winter of 2015. In this communication, we report on the first testbed results using a classical Lyot coronagraph. We also present the coronagraph design for the HiCAT geometry, based on our recent development of the Apodized Pupil Lyot Coronagraph (APLC) with shaped-pupil type optimizations. These new APLC-type solutions using a two-dimensional shaped-pupil apodizer render the system quasi-insensitive to jitter and low-order aberrations, while improving the performance in terms of inner working angle, bandpass and contrast over a classical APLC.
Error Identification, Labeling, and Correction in Written Business Communication
ERIC Educational Resources Information Center
Quible, Zane K.
2004-01-01
This article used a writing sample that contained 27 sentence-level errors of the type found by corporate America to be annoying and bothersome. Five categories of errors were included in the sample: grammar, punctuation, spelling, writing style, and business communication concepts. Students in a written business communication course were asked…
Planetary Transmission Diagnostics
NASA Technical Reports Server (NTRS)
Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.
2004-01-01
This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
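The prediction-error idea at the heart of the lifting scheme can be illustrated with a single linear-predictor lifting step. This generic sketch omits the adaptivity and the slope/curvature constraints described above.

```python
import numpy as np

def lifting_detail(signal):
    """One lifting step (sketch): split the signal into even and odd
    samples, predict each odd sample from its even neighbors, and
    return the prediction error (detail) whose growth flags local
    wave-form change such as gear damage."""
    even, odd = signal[0::2], signal[1::2]
    n = min(len(even) - 1, len(odd))
    prediction = 0.5 * (even[:n] + even[1:n + 1])   # linear predictor
    return odd[:n] - prediction
```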
Simulation: learning from mistakes while building communication and teamwork.
Kuehster, Christina R; Hall, Carla D
2010-01-01
Medical errors are one of the leading causes of death annually in the United States. Many of these errors are related to poor communication and/or lack of teamwork. Using simulation as a teaching modality provides a dual role in helping to reduce these errors. Thorough integration of clinical practice with teamwork and communication in a safe environment increases the likelihood of reducing the error rates in medicine. By allowing practitioners to make potential errors in a safe environment, such as simulation, these valuable lessons improve retention and will rarely be repeated.
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research is conducted on tackling and reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even when perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
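A fixed-threshold version of the QSF detection idea can be sketched as follows; the window length and threshold are illustrative assumptions rather than the paper's values.

```python
import numpy as np

def quasi_static_field(mag, window=50, thresh=0.5):
    """QSF detector sketch: flag samples where the magnetic field
    magnitude is locally stable (low moving standard deviation), so
    magnetometer measurements can be trusted for attitude and gyroscope
    error estimation.

    mag: (N, 3) array of magnetometer readings.
    """
    mag_norm = np.linalg.norm(mag, axis=1)
    half = window // 2
    std = np.array([mag_norm[max(0, i - half):i + half + 1].std()
                    for i in range(len(mag_norm))])
    return std < thresh
```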
[Discussion on six errors of formulas corresponding to syndromes in using the classic formulas].
Bao, Yan-ju; Hua, Bao-jin
2012-12-01
The theory of formulas corresponding to syndromes is one of the characteristics of the Treatise on Cold Damage and Miscellaneous Diseases (Shanghan Zabing Lun) and one of the main principles in applying classic prescriptions. Achieving a therapeutic effect depends on following the principle of formulas corresponding to syndromes. However, some medical practitioners find that the actual clinical effect is far less than expected. Six errors in the use of classic prescriptions under the theory of formulas corresponding to syndromes are the most important causes to consider: paying attention only to the local syndromes while neglecting the whole; only to formulas corresponding to syndromes while neglecting the pathogenesis; only to syndromes while neglecting the pulse diagnosis; only to a unilateral prescription while neglecting combined prescriptions; only to classic prescriptions while neglecting modern formulas; and only to the formulas while neglecting the drug dosage. Therefore, clinical application of classic prescriptions and the theory of formulas corresponding to syndromes requires attention not only to the patients' clinical syndromes but also to the combination of the main syndrome and the pathogenesis. In addition, comprehensive syndrome differentiation, modern formulas, current prescriptions, combined prescriptions, and drug dosage all help to avoid clinical errors and improve clinical effects.
Quantum friction on monoatomic layers and its classical analog
NASA Astrophysics Data System (ADS)
Maslovski, Stanislav I.; Silveirinha, Mário G.
2013-07-01
We consider the effect of quantum friction at zero absolute temperature resulting from polaritonic interactions in closely positioned two-dimensional arrays of polarizable atoms (e.g., graphene sheets) or thin dielectric sheets modeled as such arrays. The arrays move relative to one another with a nonrelativistic velocity v ≪ c. We confirm that quantum friction is inevitably related to material dispersion, and that such friction vanishes in nondispersive media. In addition, we consider a classical analog of quantum friction, which allows us to establish a link between the phenomena of quantum friction and classical parametric generation. In particular, we demonstrate how the quasiparticle generation rate typically obtained from the quantum Fermi golden rule can be calculated classically.
Rota-Baxter operators on sl (2,C) and solutions of the classical Yang-Baxter equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Jun, E-mail: peitsun@163.com; Bai, Chengming, E-mail: baicm@nankai.edu.cn; Guo, Li, E-mail: liguo@rutgers.edu
2014-02-15
We explicitly determine all Rota-Baxter operators (of weight zero) on sl(2,C) under the Cartan-Weyl basis. For the skew-symmetric operators, we give the corresponding skew-symmetric solutions of the classical Yang-Baxter equation in sl(2,C), confirming the related study by Semenov-Tian-Shansky. In general, these Rota-Baxter operators give a family of solutions of the classical Yang-Baxter equation in the six-dimensional Lie algebra sl(2,C) ⋉_{ad*} sl(2,C)*. They also give rise to three-dimensional pre-Lie algebras which in turn yield solutions of the classical Yang-Baxter equation in other six-dimensional Lie algebras.
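For reference, a weight-zero Rota-Baxter operator R on a Lie algebra g and the classical Yang-Baxter equation satisfied by r in g⊗g take the following standard forms (textbook definitions, not notation quoted from the paper itself):

\[
[R(x),R(y)] = R\big([R(x),y] + [x,R(y)]\big) \quad \text{for all } x,y \in \mathfrak{g},
\]
\[
[r_{12},r_{13}] + [r_{12},r_{23}] + [r_{13},r_{23}] = 0.
\]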
Bosco, Francesca M; Angeleri, Romina; Sacco, Katiuscia; Bara, Bruno G
2015-01-01
The purpose of this study is to investigate the pragmatic abilities of individuals with traumatic brain injury (TBI). Several studies in the literature have previously reported communicative deficits in individuals with TBI; however, such research has focused principally on communicative deficits in general, without providing an analysis of the errors committed in understanding and expressing communicative acts. Within the theoretical framework of Cognitive Pragmatics theory and the Cooperative principle, we focused on intermediate communicative errors that occur in both the comprehension and the production of various pragmatic phenomena, expressed through both linguistic and extralinguistic communicative modalities. A group of 30 individuals with TBI and a matched control group took part in the experiment. They were presented with a series of videotaped vignettes depicting everyday communicative exchanges, and were tested on the comprehension and production of various kinds of communicative acts (standard communicative acts, deceit and irony). The participants' answers were evaluated as correct or incorrect. Incorrect answers were then further evaluated with regard to the presence of different intermediate errors. Individuals with TBI performed worse than control participants on all the tasks investigated when considering correct versus incorrect answers. Furthermore, a series of logistic regression analyses showed that group membership (TBI versus controls) significantly predicted the occurrence of intermediate errors. This result holds in both the comprehension and production tasks, and in both linguistic and extralinguistic modalities. Participants with TBI tend to have difficulty in managing different types of communicative acts, and they make more intermediate errors than the control participants. Intermediate errors concern the comprehension and production of the expression act, the comprehension of the actor's meaning, as well as adherence to the Cooperative principle. © 2014 Royal College of Speech and Language Therapists.
ERIC Educational Resources Information Center
Huprich, Julia; Green, Ravonne
2007-01-01
The Council on Public Liberal Arts Colleges (COPLAC) library websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I error rates that were close to the nominal level, and reasonable power. Reasonable control of type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
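A small sketch of this kind of model comparison using statsmodels on simulated zero-inflated counts; the data-generating parameters are arbitrary stand-ins, not the trial data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(0)
n = 500
x = sm.add_constant(rng.normal(size=n))                 # intercept + one covariate
lam = np.exp(0.3 + 0.5 * x[:, 1])
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(lam))  # counts with excess zeros

fits = {
    "Poisson": sm.Poisson(y, x).fit(disp=0),
    "NegBin":  sm.NegativeBinomial(y, x).fit(disp=0),
    "ZIP":     ZeroInflatedPoisson(y, x).fit(disp=0),
    "ZINB":    ZeroInflatedNegativeBinomialP(y, x).fit(disp=0),
}
for name, res in fits.items():
    print(f"{name:8s} AIC = {res.aic:.1f}")
```

Type I error and power would then be estimated by repeating such fits over bootstrap resamples, as the authors describe.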
Recognizing and managing errors of cognitive underspecification.
Duthie, Elizabeth A
2014-03-01
James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch arises in bridging that gap, with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition of this error, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little-recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management depends upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.
Stochastic Evolution Equations Driven by Fractional Noises
2016-11-28
…rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions for rough differential equations and for stochastic differential equations driven by fractional Brownian motion, and the variations of these time-discrete Taylor schemes.
The Birth and Death of Redundancy in Decoherence and Quantum Darwinism
NASA Astrophysics Data System (ADS)
Riedel, Charles; Zurek, Wojciech; Zwolak, Michael
2012-02-01
Understanding the quantum-classical transition and the identification of a preferred classical domain through quantum Darwinism is based on recognizing high-redundancy states as both ubiquitous and exceptional. They are produced ubiquitously during decoherence, as has been demonstrated by the recent identification of very general conditions under which high-redundancy states develop. They are exceptional in that high-redundancy states occupy a very narrow corner of the global Hilbert space; states selected at random are overwhelmingly likely to exhibit zero redundancy. In this letter, we examine the conditions and time scales for the transition from high-redundancy states to zero-redundancy states in many-body dynamics. We identify sufficient conditions for the development of redundancy from product states and show that the destruction of redundancy can be accomplished even with highly constrained interactions.
Photogrammetric mobile satellite service prediction
NASA Technical Reports Server (NTRS)
Akturan, Riza; Vogel, Wolfhard J.
1994-01-01
Photographic images of the sky were taken with a camera through a fisheye lens with a 180 deg field-of-view. The images of rural, suburban, and urban scenes were analyzed on a computer to derive quantitative information about the elevation angles at which the sky becomes visible. Such knowledge is needed by designers of mobile and personal satellite communications systems and is desired by customers of these systems. The 90th percentile elevation angle of the skyline was found to be 10 deg, 17 deg, and 51 deg in the three environments. At 8 deg, 75 percent, 75 percent, and 35 percent of the sky was visible, respectively. The elevation autocorrelation fell to zero with a 72 deg lag in the rural and urban environment and a 40 deg lag in the suburb. Mean estimation errors are below 4 deg.
Quantum communication complexity advantage implies violation of a Bell inequality
Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii
2016-01-01
We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600
Structural zeros in high-dimensional data with applications to microbiome studies.
Kaul, Abhishek; Davidov, Ori; Peddada, Shyamal D
2017-07-01
This paper is motivated by the recent interest in the analysis of high-dimensional microbiome data. A key feature of these data is the presence of "structural zeros" which are microbes missing from an observation vector due to an underlying biological process and not due to error in measurement. Typical notions of missingness are unable to model these structural zeros. We define a general framework which allows for structural zeros in the model and propose methods of estimating sparse high-dimensional covariance and precision matrices under this setup. We establish error bounds in the spectral and Frobenius norms for the proposed estimators and empirically verify them with a simulation study. The proposed methodology is illustrated by applying it to the global gut microbiome data of Yatsunenko and others (2012. Human gut microbiome viewed across age and geography. Nature 486, 222-227). Using our methodology we classify subjects according to the geographical location on the basis of their gut microbiome. © The Author 2017. Published by Oxford University Press. All rights reserved.
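The setup can be illustrated with a toy estimator that treats structural zeros as missing rather than as observed zeros when computing a covariance matrix pairwise; this is only an illustration of the framework, not the authors' sparse estimator:

```python
import numpy as np

def pairwise_cov(X, structural_zero):
    """Covariance using, for each pair (j, k), only the samples where
    neither variable is a structural zero. X: (n, p) data matrix;
    structural_zero: (n, p) boolean mask of structural zeros."""
    n, p = X.shape
    S = np.full((p, p), np.nan)
    obs = ~structural_zero        # True where the entry is genuinely observed
    for j in range(p):
        for k in range(j, p):
            both = obs[:, j] & obs[:, k]
            if both.sum() > 1:
                xj, xk = X[both, j], X[both, k]
                S[j, k] = S[k, j] = np.mean((xj - xj.mean()) * (xk - xk.mean()))
    return S
```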
Aspects of Geodesical Motion with Fisher-Rao Metric: Classical and Quantum
NASA Astrophysics Data System (ADS)
Ciaglia, Florio M.; Cosmo, Fabio Di; Felice, Domenico; Mancini, Stefano; Marmo, Giuseppe; Pérez-Pardo, Juan M.
The purpose of this paper is to exploit the geometric structure of quantum mechanics and of statistical manifolds to study the qualitative effect that the quantum properties have in the statistical description of a system. We show that the end points of geodesics in the classical setting coincide with the probability distributions that minimise Shannon’s entropy, i.e. with distributions of zero dispersion. In the quantum setting this happens only for particular initial conditions, which in turn correspond to classical submanifolds. This result can be interpreted as a geometric manifestation of the uncertainty principle.
Quantum versus classical hyperfine-induced dynamics in a quantum dot
NASA Astrophysics Data System (ADS)
Coish, W. A.; Loss, Daniel; Yuzbashyan, E. A.; Altshuler, B. L.
2007-04-01
In this article we analyze spin dynamics for electrons confined to semiconductor quantum dots due to the contact hyperfine interaction. We compare the mean-field (classical) evolution of an electron spin in the presence of a nuclear field with the exact quantum evolution for the special case of uniform hyperfine coupling constants. We find that (in this special case) the zero-magnetic-field dynamics from the mean-field approximation and the quantum evolution are similar. However, in a finite magnetic field, the quantum and classical solutions agree only up to a certain time scale t < τ_c, after which they differ markedly.
Modelling wetting and drying effects over complex topography
NASA Astrophysics Data System (ADS)
Tchamen, G. W.; Kahawita, R. A.
1998-06-01
The numerical simulation of free surface flows that alternately flood and dry out over complex topography is a formidable task. The model equation set generally used for this purpose is the two-dimensional (2D) shallow water wave model (SWWM). Simplified forms of this system, such as the zero inertia model (ZIM), can accommodate specific situations like slowly evolving floods over gentle slopes. Classical numerical techniques, such as finite differences (FD) and finite elements (FE), have been used for their integration over the last 20-30 years. Most of these schemes experience some kind of instability and usually fail when a particular domain under specific flow conditions is treated. The numerical instability generally manifests itself in the form of an unphysical negative depth that subsequently causes a run-time error in the computation of the celerity and/or the friction slope. The origins of this behaviour are diverse and may generally be attributed to:
1. The use of a scheme that is inappropriate for such complex flow conditions (mixed regimes).
2. Improper treatment of a friction source term or a large local curvature in topography.
3. Mishandling of a cell that is partially wet/dry.
In this paper, an attempt has been made to gain a better understanding of the genesis of the instabilities, their implications and the limits of the proposed solutions. Frequently, robustness is enforced at the expense of accuracy. The need for a positive scheme, that is, a scheme that always predicts positive depths when run within the constraints of some practical stability limits, is fundamental. It is shown here how a carefully chosen scheme (in this case, an adaptation of the solver to the SWWM) can preserve positive values of water depth under both explicit and implicit time integration, high velocities and complex topography that may include dry areas. However, the treatment of the source terms (friction, Coriolis and particularly the bathymetry) is also of prime importance and must not be overlooked. Linearization combined with switching between explicit and implicit integration can overcome the stiffness of the friction and Coriolis terms and provide stable numerical integration. The treatment of the bathymetry source term is much more delicate. For cells undergoing a transient wet-dry process, the imposition of zero velocity stabilizes most of the approximations. However, this artificial zero-velocity condition can be the cause of considerable error, especially when fast-moving fronts are involved. Besides these difficulties, linked to the position of the front within a cell versus the limited resolution of a numerical grid, it appears that the second derivative that defines whether the bed is locally convex or concave is a key indicator for stability: a convex bottom may lead to unbounded solutions. This behaviour appears not to be linked to the numerics (numerical scheme) but rather to the mathematical theory of the SWWM. These concerns about stability have taken precedence, until now, over the crucial and related question of accuracy, especially near a moving front, and how possible inaccuracies at the leading edge may affect the solution at interior points within the domain. This paper presents an in-depth, fully two-dimensional analysis of the aforementioned problem that has not been addressed before.
The purpose of the present communication is not to propose what could be viewed as a final solution, but rather to provide some key considerations that may reveal the ingredients and insight necessary for the development of accurate and robust solutions in the future.
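A schematic of the two stabilizing ingredients discussed above (clamping depth to non-negative values and imposing zero velocity in transient wet/dry cells), written as a post-step fix-up on a 1D grid; the threshold and data layout are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

H_DRY = 1e-6  # wetting threshold [m]; an assumed, grid-dependent value

def wet_dry_fixup(h, hu):
    """h: cell depths, hu: cell momenta, both 1D arrays.
    Clamp depth so the scheme stays positive, then zero the momentum
    in dry cells and in wet cells adjacent to dry ones (the transient
    wet/dry cells), trading accuracy at the front for stability."""
    h = np.maximum(h, 0.0)
    dry = h < H_DRY
    front = np.zeros_like(dry)
    front[1:] |= dry[:-1]
    front[:-1] |= dry[1:]
    hu = hu.copy()
    hu[dry | front] = 0.0
    return h, hu
```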
Transfer Error and Correction Approach in Mobile Network
NASA Astrophysics Data System (ADS)
Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou
With the development of information technology and social progress, human demand for information has become increasingly diverse: people want to be able to communicate easily, quickly and flexibly, wherever and whenever, via voice, data, images, video and other means. Visual information gives people a direct and vivid impression, so image/video transmission has also received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communications the main business of wireless communications, real wireless and IP channels introduce errors, such as those generated by multipath fading in wireless channels and by packet loss in IP networks. Because of channel bandwidth limitations, video data must be heavily compressed, and compressed data is very sensitive to transmission errors, which cause a serious decline in image quality.
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems of being susceptible to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme is able to detect and correct errors in real time. In order to guarantee security, a fractional-order complex chaotic system with shifting of the order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
NASA Astrophysics Data System (ADS)
Gao, Zhi-yu; Kang, Yu; Li, Yan-shuai; Meng, Chao; Pan, Tao
2018-04-01
The elevated-temperature flow behavior of a novel Ni-Cr-Mo-B ultra-heavy-plate steel was investigated by conducting hot compressive deformation tests on a Gleeble-3800 thermo-mechanical simulator over a temperature range of 1123 K to 1423 K, with strain rates from 0.01 s⁻¹ to 10 s⁻¹ and a height reduction of 70%. Based on the experimental results, a classic strain-compensated Arrhenius-type model, a new revised strain-compensated Arrhenius-type model and a classic modified Johnson-Cook constitutive model were developed for predicting the high-temperature deformation behavior of the steel. The predictability of these models was comparatively evaluated in terms of statistical parameters including the correlation coefficient (R), average absolute relative error (AARE), root mean square error (RMSE), normalized mean bias error (NMBE) and relative error. The statistical results indicate that the new revised strain-compensated Arrhenius-type model accurately predicts the elevated-temperature flow stress of the steel over the entire range of process conditions. The predictions of the classic modified Johnson-Cook model, however, did not agree well with the experimental values; the classic strain-compensated Arrhenius-type model tracked the deformation behavior more accurately than the modified Johnson-Cook model, but less accurately than the new revised strain-compensated Arrhenius-type model. In addition, the reasons for the differences in predictability of these models are discussed in detail.
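The evaluation statistics named above have standard definitions; a short sketch, assuming sigma_exp and sigma_pred are matched arrays of measured and predicted flow stresses (the NMBE normalization shown is one common convention):

```python
import numpy as np

def fit_statistics(sigma_exp, sigma_pred):
    """R, AARE (%), RMSE and NMBE (%) between measured and
    predicted flow stresses."""
    e = np.asarray(sigma_exp, dtype=float)
    p = np.asarray(sigma_pred, dtype=float)
    R = np.corrcoef(e, p)[0, 1]                  # correlation coefficient
    aare = 100.0 * np.mean(np.abs((e - p) / e))  # avg absolute relative error
    rmse = np.sqrt(np.mean((e - p) ** 2))        # root mean square error
    nmbe = 100.0 * np.mean(p - e) / np.mean(e)   # normalized mean bias error
    return R, aare, rmse, nmbe
```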
The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.
1985-06-10
The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem. … The problem is not thought to present much of a challenge at zero cavitation number; in this case, the classical method of Levi-Civita [7] can be applied.
Absolute metrology for space interferometers
NASA Astrophysics Data System (ADS)
Salvadé, Yves; Courteville, Alain; Dändliker, René
2017-11-01
A crucial issue for space-based interferometers is the laser interferometric metrology system that monitors optical path differences with very high accuracy. Although classical high-resolution laser interferometers using a single wavelength are well developed, this type of incremental interferometer has a severe drawback: any interruption of the interferometer signal results in the loss of the zero reference, which requires a new calibration starting at zero optical path difference. We propose in this paper an absolute metrology system based on multiple-wavelength interferometry.
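The standard idea behind multiple-wavelength interferometry is to combine two single-wavelength phase measurements into a much longer synthetic wavelength, which enlarges the non-ambiguity range so that an interrupted signal no longer destroys the absolute reference. A sketch of the generic relation (the paper's actual wavelengths are not given here):

```python
def synthetic_wavelength(lambda1, lambda2):
    """Synthetic wavelength of a two-wavelength interferometer;
    the non-ambiguity range for distance is Lambda / 2."""
    return lambda1 * lambda2 / abs(lambda1 - lambda2)

# Example: two lines 1 nm apart near 1550 nm give Lambda of about 2.4 mm,
# versus a non-ambiguity range of only ~0.78 um at a single wavelength.
Lam = synthetic_wavelength(1550e-9, 1551e-9)
```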
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinson, Jake L.; Kathmann, Shawn M.; Ford, Ian J.
2014-01-14
The nucleation of particles from trace gases in the atmosphere is an important source of cloud condensation nuclei (CCN), and these are vital for the formation of clouds in view of the high supersaturations required for homogeneous water droplet nucleation. The methods of quantum chemistry have increasingly been employed to model nucleation due to their high accuracy and efficiency in calculating configurational energies, and nucleation rates can be obtained from the associated free energies of particle formation. However, even in such advanced approaches, it is typically assumed that the nuclei have a classical nature, which is questionable for some systems. The importance of zero-point motion (also known as quantum nuclear dynamics) in modelling small clusters of sulphuric acid and water is tested here using the path integral molecular dynamics (PIMD) method at the density functional theory (DFT) level of theory. We observe a small zero-point effect on the equilibrium structures of certain clusters. One configuration is found to display a bimodal behaviour at 300 K, in contrast to the stable ionised state suggested by a zero-temperature classical geometry optimisation. The general effect of zero-point motion is to promote the extent of proton transfer with respect to classical behaviour.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Digital Filters for Digital Phase-locked Loops
NASA Technical Reports Server (NTRS)
Simon, M.; Mileant, A.
1985-01-01
An s/z hybrid model for a general phase-locked loop is proposed. The impact of the loop filter on the stability, gain margin, noise-equivalent bandwidth, steady-state error and time response is investigated. A specific digital filter is selected which maximizes the overall gain margin of the loop. This filter can have any desired number of integrators. Three integrators are sufficient to track a phase jerk with zero steady-state error at loop update instants. The filter has one zero near z = 1.0 for each integrator, and its total number of poles equals the number of integrators plus two.
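One common realisation consistent with this description is a proportional path plus three cascaded accumulators; a sketch with placeholder gains (the paper's gain-margin-maximizing gain selection is not reproduced):

```python
class LoopFilter3:
    """Digital loop filter F(z) = K1 + K2/(1-z^-1) + K3/(1-z^-1)^2
    + K4/(1-z^-1)^3. The three accumulators (integrators) let the
    loop track a phase jerk with zero steady-state error at the
    update instants."""
    def __init__(self, K1, K2, K3, K4):
        self.K = (K1, K2, K3, K4)
        self.i1 = self.i2 = self.i3 = 0.0

    def update(self, e):
        """e: phase error sample; returns the filter output."""
        K1, K2, K3, K4 = self.K
        self.i1 += e        # first integrator
        self.i2 += self.i1  # second integrator
        self.i3 += self.i2  # third integrator
        return K1 * e + K2 * self.i1 + K3 * self.i2 + K4 * self.i3
```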
The Preliminary Development of a Robotic Laser System Used for Ophthalmic Surgery
1988-01-01
…In the proposed design, there is not sufficient computer time to ensure a zero probability of error. … 4.3.6 The User Interface: to put things in perspective, the step-by-step procedure for using the routine … was measured from the identified slice. The sectional area was measured using a Summagraphic digitizing pad and the Sigma-Scan program from Jandel.
Noise management to achieve superiority in quantum information systems
NASA Astrophysics Data System (ADS)
Nemoto, Kae; Devitt, Simon; Munro, William J.
2017-06-01
Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'.
Noise management to achieve superiority in quantum information systems.
Nemoto, Kae; Devitt, Simon; Munro, William J
2017-08-06
Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'. © 2017 The Author(s).
ERIC Educational Resources Information Center
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco
2017-11-01
Error avoidance in medicine follows rules similar to those that apply within the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss, to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rates of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular, deferential or distant mindset.
Bell, Sigall K; White, Andrew A; Yi, Jean C; Yi-Frazier, Joyce P; Gallagher, Thomas H
2017-12-01
Transparent communication after medical error includes disclosing the mistake to the patient, discussing the event with colleagues, and reporting to the institution. Little is known about whether attitudes about these transparency practices are related. Understanding these relationships could inform educational and organizational strategies to promote transparency. We analyzed responses of 3038 US and Canadian physicians to a medical error communication survey. We used bivariate correlations, principal components analysis, and linear regression to determine whether and how physician attitudes about transparent communication with patients, peers, and the institution after error were related. Physician attitudes about disclosing errors to patients, peers, and institutions were correlated (all P's < 0.001) and represented 2 principal components analysis factors, namely, communication with patients and communication with peers/institution. Predictors of attitudes supporting transparent communication with patients and peers/institution included female sex, US (vs Canadian) doctors, academic (vs private) practice, the belief that disclosure decreased likelihood of litigation, and the belief that system changes occur after error reporting. In addition, younger physicians, surgeons, and those with previous experience disclosing a serious error were more likely to agree with disclosure to patients. In comparison, doctors who believed that disclosure would decrease patient trust were less likely to agree with error disclosure to patients. Previous disclosure education was associated with attitudes supporting greater transparency with peers/institution. Physician attitudes about discussing errors with patients, colleagues, and institutions are related. Several predictors of transparency affect all 3 practices and are potentially modifiable by educational and institutional strategies.
Decoherence can relax cosmic acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markkanen, Tommi
In this work we investigate the semi-classical backreaction for a quantised conformal scalar field and classical vacuum energy. In contrast to the usual approximation of a closed system, our analysis includes an environmental sector such that a quantum-to-classical transition can take place. We show that when the system decoheres into a mixed state with particle number as the classical observable, de Sitter space is destabilized, which is observable as a gradually decreasing Hubble rate. In particular we show that at late times this mechanism can drive the curvature of the Universe to zero and has an interpretation as the decay of the vacuum energy, demonstrating that quantum effects can be relevant for the fate of the Universe.
A Simple Example of ``Quantum Darwinism'': Redundant Information Storage in Many-Spin Environments
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin; Zurek, Wojciech H.
2005-11-01
As quantum information science approaches the goal of constructing quantum computers, understanding loss of information through decoherence becomes increasingly important. The information about a system that can be obtained from its environment can facilitate quantum control and error correction. Moreover, observers gain most of their information indirectly, by monitoring (primarily photon) environments of the "objects of interest." Exactly how this information is inscribed in the environment is essential for the emergence of "the classical" from the quantum substrate. In this paper, we examine how many-qubit (or many-spin) environments can store information about a single system. The information lost to the environment can be stored redundantly, or it can be encoded in entangled modes of the environment. We go on to show that randomly chosen states of the environment almost always encode the information so that an observer must capture a majority of the environment to deduce the system's state. Conversely, in the states produced by a typical decoherence process, information about a particular observable of the system is stored redundantly. This selective proliferation of "the fittest information" (known as Quantum Darwinism) plays a key role in choosing the preferred, effectively classical observables of macroscopic systems. The developing appreciation that the environment functions not just as a garbage dump, but as a communication channel, is extending our understanding of the environment's role in the quantum-classical transition beyond the traditional paradigm of decoherence.
Randmaa, Maria; Mårtensson, Gunilla; Leo Swenne, Christine; Engström, Maria
2014-01-21
We aimed to examine staff members' perceptions of communication within and between different professions, safety attitudes and psychological empowerment, prior to and after implementation of the communication tool Situation-Background-Assessment-Recommendation (SBAR) at an anaesthetic clinic. The aim was also to study whether there was any change in the proportion of incident reports caused by communication errors. A prospective intervention study with comparison group using preassessments and postassessments. Questionnaire data were collected from staff in an intervention (n=100) and a comparison group (n=69) at the anaesthetic clinic in two hospitals prior to (2011) and after (2012) implementation of SBAR. The proportion of incident reports due to communication errors was calculated during a 1-year period prior to and after implementation. Anaesthetic clinics at two hospitals in Sweden. All licensed practical nurses, registered nurses and physicians working in the operating theatres, intensive care units and postanaesthesia care units at anaesthetic clinics in two hospitals were invited to participate. Implementation of SBAR in an anaesthetic clinic. The primary outcomes were staff members' perception of communication within and between different professions, as well as their perceptions of safety attitudes. Secondary outcomes were psychological empowerment and incident reports due to error of communication. In the intervention group, there were statistically significant improvements in the factors 'Between-group communication accuracy' (p=0.039) and 'Safety climate' (p=0.011). The proportion of incident reports due to communication errors decreased significantly (p<0.0001) in the intervention group, from 31% to 11%. Implementing the communication tool SBAR in anaesthetic clinics was associated with improvement in staff members' perception of communication between professionals and their perception of the safety climate as well as with a decreased proportion of incident reports related to communication errors. ISRCTN37251313.
Zero-point energy effects in anion solvation shells.
Habershon, Scott
2014-05-21
By comparing classical and quantum-mechanical (path-integral-based) molecular simulations of solvated halide anions X(-) [X = F, Cl, Br and I], we identify an ion-specific quantum contribution to anion-water hydrogen-bond dynamics; this effect has not been identified in previous simulation studies. For anions such as fluoride, which strongly bind water molecules in the first solvation shell, quantum simulations exhibit hydrogen-bond dynamics nearly 40% faster than the corresponding classical results, whereas those anions which form a weakly bound solvation shell, such as iodide, exhibit a quantum effect of around 10%. This observation can be rationalized by considering the different zero-point energy (ZPE) of the water vibrational modes in the first solvation shell; for strongly binding anions, the ZPE of bound water molecules is larger, giving rise to faster dynamics in quantum simulations. These results are consistent with experimental investigations of anion-bound water vibrational and reorientational motion.
The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory.
1983-01-25
The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem, in which one … In this case, the classical method of Levi-Civita [7] can be applied. (Numbers in square brackets indicate citations in the references listed below.)
Luigi Gatteschi's work on asymptotics of special functions and their zeros
NASA Astrophysics Data System (ADS)
Gautschi, Walter; Giordano, Carla
2008-12-01
A good portion of Gatteschi's research publications (about 65%) is devoted to asymptotics of special functions and their zeros. Most prominent among the special functions studied are classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and, by implication, Hermite polynomials. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, in both Tricomi's and Whittaker's forms. This work is reviewed here and organized along methodological lines.
Device-Independent Tests of Entropy
NASA Astrophysics Data System (ADS)
Chaves, Rafael; Brask, Jonatan Bohr; Brunner, Nicolas
2015-09-01
We show that the entropy of a message can be tested in a device-independent way. Specifically, we consider a prepare-and-measure scenario with classical or quantum communication, and develop two different methods for placing lower bounds on the communication entropy, given observable data. The first method is based on the framework of causal inference networks. The second technique, based on convex optimization, shows that quantum communication provides an advantage over classical communication, in the sense of requiring a lower entropy to reproduce given data. These ideas may serve as a basis for novel applications in device-independent quantum information processing.
Revisiting Deng et al.'s Multiparty Quantum Secret Sharing Protocol
NASA Astrophysics Data System (ADS)
Hwang, Tzonelih; Hwang, Cheng-Chieh; Yang, Chun-Wei; Li, Chuan-Ming
2011-09-01
The multiparty quantum secret sharing protocol [Deng et al. in Chin. Phys. Lett. 23: 1084-1087, 2006] is revisited in this study. It is found that the performance of Deng et al.'s protocol can be much improved by using the techniques of block-transmission and decoy single photons. As a result, the qubit efficiency is improved 2.4 times and only one classical communication, a public discussion, and two quantum communications between each agent and the secret holder are needed rather than n classical communications, n public discussions, and 3n/2 quantum communications required in the original scheme.
Empirical Analysis of Systematic Communication Errors.
1981-09-01
…human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links.) … phase of the human communication process, and focuses on the linkage between a specific piece of information (and the receiver) and the transmission … communication flow. (2) Exchange: the next phase in human communication, entailing a concerted effort on the part of the sender and receiver to share …
In-hospital fellow coverage reduces communication errors in the surgical intensive care unit.
Williams, Mallory; Alban, Rodrigo F; Hardy, James P; Oxman, David A; Garcia, Edward R; Hevelone, Nathanael; Frendl, Gyorgy; Rogers, Selwyn O
2014-06-01
Staff coverage strategies of intensive care units (ICUs) impact clinical outcomes. High-intensity staff coverage strategies are associated with lower morbidity and mortality. Accessible clinical expertise, team work, and effective communication have all been attributed to the success of this coverage strategy. We evaluate the impact of in-hospital fellow coverage (IHFC) on improving communication of cardiorespiratory events. A prospective observational study performed in an academic tertiary care center with high-intensity staff coverage. The main outcome measure was resident to fellow communication of cardiorespiratory events during IHFC vs home coverage (HC) periods. Three hundred twelve cardiorespiratory events were collected in 114 surgical ICU patients in 134 study days. Complete data were available for 306 events. One hundred three communication errors occurred. IHFC was associated with significantly better communication of events compared to HC (P<.0001). Residents communicated 89% of events during IHFC vs 51% of events during HC (P<.001). Communication patterns of junior and midlevel residents were similar. Midlevel residents communicated 68% of all on-call events (87% IHFC vs 50% HC, P<.001). Junior residents communicated 66% of events (94% IHFC vs 52% HC, P<.001). Communication errors were lower in all ICUs during IHFC (P<.001). IHFC reduced communication errors. Copyright © 2014 Elsevier Inc. All rights reserved.
Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.
ERIC Educational Resources Information Center
Kratochvil, Byron
1980-01-01
Presents an undergraduate experiment demonstrating sampling error. A mixture of potassium hydrogen phthalate and sucrose is selected as the sampling system; a self-zeroing, automatically refillable buret minimizes the titration time for multiple samples, and a dilute back-titrant gives high end-point precision. (CS)
On axiomatizations of the Shapley value for bi-cooperative games
NASA Astrophysics Data System (ADS)
Meirong, Wu; Shaochen, Cao; Huazhen, Zhu
2016-06-01
In bi-cooperative games, each participant has three choices available, which allows such games to depict real-life situations accurately. This paper studies the Shapley value of bi-cooperative games and establishes its unique characterization, using axioms similar to those of classical cooperative games. In addition, it introduces a structural axiom and a zero-excluded axiom in place of the efficiency axiom of classical cooperative games.
ERIC Educational Resources Information Center
Gotay, Jose Antonio
2013-01-01
This qualitative, classical Delphi study served to explore the apparent lack of corporate commitment to prioritized Green Information Communication Technologies (ICTs), which could delay the economic and social benefits for maximizing the use of natural energy resources in a weak economy. The purpose of this study was to examine the leadership…
Communication cost of simulating Bell correlations.
Toner, B F; Bacon, D
2003-10-31
What classical resources are required to simulate quantum correlations? For the simplest and most important case of local projective measurements on an entangled Bell pair state, we show that exact simulation is possible using local hidden variables augmented by just one bit of classical communication. Certain quantum teleportation experiments, which teleport a single qubit, therefore admit a local hidden variables model.
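The published Toner-Bacon protocol is easy to verify by Monte Carlo: two shared random unit vectors plus one communicated bit reproduce the singlet correlation E(a, b) = -a·b exactly. A sketch (the protocol steps follow the paper; the sample size and angle are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_units(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def singlet_correlation(a, b, n=200_000):
    l1, l2 = random_units(n), random_units(n)   # shared hidden variables
    s1 = np.sign(l1 @ a)
    alice = -s1                                  # Alice's +/-1 outcome
    c = s1 * np.sign(l2 @ a)                     # the single classical bit
    bob = np.sign((l1 + c[:, None] * l2) @ b)    # Bob's +/-1 outcome
    return np.mean(alice * bob)

theta = 0.7
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])
print(singlet_correlation(a, b), -np.cos(theta))  # should nearly agree
```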
NASA Astrophysics Data System (ADS)
Tan, Xiaoqing; Zhang, Xiaoqian
2016-05-01
We propose two controlled quantum secure communication schemes, based on entanglement distillation and on generalized measurement, respectively. The sender Alice, the receiver Bob and the controllers David and Cliff take part in the schemes. The supervisors David and Cliff can control the information transmitted from Alice to Bob by adjusting the local measurement angles θ_4 and θ_3. Bob can verify his secret information via a classical one-way function after communication. The average amount of information is analyzed and compared for the two methods using MATLAB; the generalized-measurement scheme performs better. Our schemes are secure against some well-known attacks because classical encryption and decoy states are used to ensure the security of the classical channel and the quantum channel.
Paul, Amit K; Hase, William L
2016-01-28
A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model, trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint, the dissociation of CH4* is exponential in time, as expected for intrinsic RRKM dynamics, and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical, not the quantum, RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important, and the rate constant from the ZPE-constrained classical trajectory simulation may not represent the complete anharmonicity of the quantum RRKM dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and for connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
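Schematically, the constraint is an acceptance loop around each dissociation event. The sketch below assumes a hypothetical trajectory interface (propagate_to_dissociation, ch3_internal_energy, return_to_reactant_region) and is only a restatement of the model's logic, not the authors' code:

```python
def lifetime_with_zpe_constraint(traj, zpe_ch3, max_returns=100):
    """A trajectory only counts as dissociated once the CH3 fragment
    carries at least its zero-point energy; otherwise it is returned
    to the CH4* region and propagated further. The reported lifetime
    includes the time spent in all earlier, rejected attempts."""
    for _ in range(max_returns):
        t_dissociation = traj.propagate_to_dissociation()
        if traj.ch3_internal_energy() >= zpe_ch3:
            return t_dissociation
        traj.return_to_reactant_region()
    return None  # never dissociated with ZPE within the allowed attempts
```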
Gossip algorithms in quantum networks
NASA Astrophysics Data System (ADS)
Siomau, Michael
2017-01-01
Gossip algorithms is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing such data using classical linear models is mostly inappropriate, even after transformation of the outcome variable, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance on both datasets, and they captured the zero counts better than the other models. This paper shows that zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.
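For clarity, the two zero-adjusted families differ in how they generate zeros. With pi the probability from the binary (zero) process and f a Poisson or negative binomial pmf, the standard forms are:

\[
\text{Hurdle:}\quad P(Y=0)=\pi, \qquad P(Y=y)=(1-\pi)\,\frac{f(y)}{1-f(0)}, \quad y\ge 1,
\]
\[
\text{Zero-inflated:}\quad P(Y=0)=\pi+(1-\pi)f(0), \qquad P(Y=y)=(1-\pi)f(y), \quad y\ge 1.
\]

In a hurdle model every zero comes from the binary part, whereas a zero-inflated model mixes structural zeros with zeros from the count distribution itself, which is why the choice between them should follow the study's aims.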
High-speed noise-free optical quantum memory
NASA Astrophysics Data System (ADS)
Kaczmarek, K. T.; Ledingham, P. M.; Brecht, B.; Thomas, S. E.; Thekkadath, G. S.; Lazo-Arjona, O.; Munns, J. H. D.; Poem, E.; Feizpour, A.; Saunders, D. J.; Nunn, J.; Walmsley, I. A.
2018-04-01
Optical quantum memories are devices that store and recall quantum light and are vital to the realization of future photonic quantum networks. To date, much effort has been put into improving storage times and efficiencies of such devices to enable long-distance communications. However, less attention has been devoted to building quantum memories which add zero noise to the output. Even small additional noise can render the memory classical by destroying the fragile quantum signatures of the stored light. Therefore, noise performance is a critical parameter for all quantum memories. Here we introduce an intrinsically noise-free quantum memory protocol based on two-photon off-resonant cascaded absorption (ORCA). We demonstrate successful storage of GHz-bandwidth heralded single photons in a warm atomic vapor with no added noise, confirmed by the unaltered photon-number statistics upon recall. Our ORCA memory meets the stringent noise requirements for quantum memories while combining high-speed and room-temperature operation with technical simplicity, and therefore is immediately applicable to low-latency quantum networks.
Contextual Advantage for State Discrimination
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.
2018-02-01
Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
Interactive simulations for quantum key distribution
NASA Astrophysics Data System (ADS)
Kohnle, Antje; Rizzoli, Aluna
2017-05-01
Secure communication protocols are becoming increasingly important, e.g. for internet-based communication. Quantum key distribution (QKD) allows two parties, commonly called Alice and Bob, to generate a secret sequence of 0s and 1s called a key that is only known to themselves. Classically, Alice and Bob could never be certain that their communication was not compromised by a malicious eavesdropper. Quantum mechanics however makes secure communication possible. The fundamental principle of quantum mechanics that taking a measurement perturbs the system (unless the measurement is compatible with the quantum state) also applies to an eavesdropper. Using appropriate protocols to create the key, Alice and Bob can detect the presence of an eavesdropper by errors in their measurements. As part of the QuVis Quantum Mechanics Visualisation Project, we have developed a suite of four interactive simulations that demonstrate the basic principles of three different QKD protocols. The simulations use either polarised photons or spin 1/2 particles as physical realisations. The simulations and accompanying activities are freely available for use online or download, and run on a wide range of devices including tablets and PCs. Evaluation with students over three years was used to refine the simulations and activities. Preliminary studies show that the refined simulations and activities help students learn the basic principles of QKD at both the introductory and advanced undergraduate levels.
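A toy version of the intercept-resend scenario that such simulations illustrate: without an eavesdropper the sifted key is error-free, while an intercept-resend Eve induces roughly 25% errors, which Alice and Bob detect. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def bb84_qber(n=10_000, eve=False):
    bits = rng.integers(0, 2, n)                 # Alice's raw key bits
    a_bases = rng.integers(0, 2, n)              # 0 rectilinear, 1 diagonal
    sent_bits, sent_bases = bits, a_bases
    if eve:                                      # intercept-resend attack
        e_bases = rng.integers(0, 2, n)
        e_bits = np.where(e_bases == a_bases, bits, rng.integers(0, 2, n))
        sent_bits, sent_bases = e_bits, e_bases
    b_bases = rng.integers(0, 2, n)
    b_bits = np.where(b_bases == sent_bases, sent_bits, rng.integers(0, 2, n))
    keep = b_bases == a_bases                    # basis sifting
    return np.mean(bits[keep] != b_bits[keep])   # quantum bit error rate

print(bb84_qber(eve=False), bb84_qber(eve=True))  # ~0.0 vs ~0.25
```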
NASA Technical Reports Server (NTRS)
Griebeler, Elmer L.
2011-01-01
Binary communication through long cables, opto-isolators, isolating transformers, or repeaters can become distorted in characteristic ways. The usual solution is to slow the communication rate, change to a different method, or improve the communication media. It would help if the characteristic distortions could be accommodated at the receiving end to ease the communication problem. The distortions come from loss of the high-frequency content, which adds slopes to the transitions from ones to zeroes and zeroes to ones. This weakens the definition of the ones and zeroes in the time domain. The other major distortion is the reduction of low frequency, which causes the voltage that defines the ones or zeroes to drift out of recognizable range. This development describes a method for recovering a binary data stream from a signal that has been subjected to a loss of both higher-frequency content and low-frequency content that is essential to define the difference between ones and zeroes. The method makes use of the frequency structure of the waveform created by the data stream, and then enhances the characteristics related to the data to reconstruct the binary switching pattern. A major issue is simplicity. The approach taken here is to take the first derivative of the signal and then feed it to a hysteresis switch. This is equivalent in practice to using a non-resonant band pass filter feeding a Schmitt trigger. Obviously, the derivative signal needs to be offset to halfway between the thresholds of the hysteresis switch, and amplified so that the derivatives reliably exceed the thresholds. A transition from a zero to a one is the most substantial, fastest plus movement of voltage, and therefore will create the largest plus first derivative pulse. Since the quiet state of the derivative is sitting between the hysteresis thresholds, the plus pulse exceeds the plus threshold, switching the hysteresis switch plus, which re-establishes the data zero to one transition, except now at the logic levels of the receiving circuit. Similarly, a transition from a one to a zero will be the most substantial and fastest minus movement of voltage and therefore will create the largest minus first derivative pulse. The minus pulse exceeds the minus threshold, switching the hysteresis switch minus, which re-establishes the data one to zero transition. This innovation has a large increase in tolerance for the degradation of the binary pattern of ones and zeroes, and can reject the introduction of noise in the form of low frequencies that can cause the voltage pattern to drift up or down, and also higher frequencies that are beyond the recognizable content in the binary transitions.
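The method lends itself to a compact digital illustration. The sketch below is a rough Python rendition of the approach described above, not the original implementation: it differentiates a sampled waveform and feeds the result to a software hysteresis (Schmitt-trigger) switch. The function name and threshold arguments are hypothetical.

```python
import numpy as np

def recover_bits(samples, hi_threshold, lo_threshold):
    """Recover a binary stream from a distorted waveform: differentiate,
    then feed the derivative to a hysteresis (Schmitt-trigger) switch."""
    deriv = np.diff(samples)        # transitions become large +/- pulses;
                                    # slow baseline drift differentiates to ~0
    out = np.empty(len(deriv), dtype=int)
    state = 0
    for i, d in enumerate(deriv):
        if d > hi_threshold:        # strong positive pulse: a 0 -> 1 transition
            state = 1
        elif d < lo_threshold:      # strong negative pulse: a 1 -> 0 transition
            state = 0
        out[i] = state              # between pulses, hold the last logic level
    return out
```

Because differentiation suppresses low-frequency drift and the hysteresis switch holds state between pulses, the sketch mirrors the tolerance to both kinds of distortion claimed above.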
NASA Astrophysics Data System (ADS)
Wang, Wenji; Zhao, Yi
2017-07-01
Methane dissociation is a prototypical system for the study of surface reaction dynamics. The dissociation and recombination rates of CH4 on the Ni(111) surface are calculated by using the quantum instanton method with an analytical potential energy surface. The Ni(111) lattice is treated rigidly, classically, and quantum mechanically so as to reveal the effect of lattice motion. The results demonstrate that it is the lateral displacements rather than the upward and downward movements of the surface nickel atoms that significantly affect the rates. Compared with the rigid lattice, the classical relaxation of the lattice can increase the rates by lowering the free energy barriers. For instance, at 300 K, the dissociation and recombination rates with the classical lattice exceed the ones with the rigid lattice by 6 and 10 orders of magnitude, respectively. Compared with the classical lattice, the quantum delocalization rather than the zero-point energy of the Ni atoms further enhances the rates by widening the reaction path. For instance, the dissociation rate with the quantum lattice is about 10 times larger than that with the classical lattice at 300 K. On the rigid lattice, due to the zero-point energy difference between CH4 and CD4, the kinetic isotope effects are larger than 1 for the dissociation process, while they are smaller than 1 for the recombination process. The increasing kinetic isotope effect with decreasing temperature demonstrates that the quantum tunneling effect is remarkable for the dissociation process.
Nucleation theory - Is replacement free energy needed?. [error analysis of capillary approximation
NASA Technical Reports Server (NTRS)
Doremus, R. H.
1982-01-01
It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
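For reference, the classical (Volmer-Weber) nucleation rate defended above takes the standard textbook form below; the notation is ours, not necessarily the paper's.

```latex
J = K \exp\!\left(-\frac{\Delta G^{*}}{kT}\right),
\qquad
\Delta G^{*} = \frac{16\pi\,\sigma^{3} v_{m}^{2}}{3\,(kT \ln S)^{2}},
```

where J is the nucleation rate, K a kinetic prefactor set by the collision rate of gas molecules with the nucleus surface, σ the surface tension, v_m the molecular volume, and S the supersaturation ratio.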
NASA Astrophysics Data System (ADS)
Ganji, Farid
This dissertation presents novel nonlinear adaptive formation controllers for a heterogeneous group of holonomic planetary exploration rovers navigating over flat terrains with unknown soil types and surface conditions. A leader-follower formation control architecture is employed. In the first part, using a point-mass model for robots and a Coulomb-viscous friction model for terrain resistance, direct adaptive control laws and a formation speed-adaptation strategy are developed for formation navigation over unknown and changing terrain in the presence of actuator saturation. On-line estimates of terrain frictional parameters compensate for unknown terrain resistance and its variations. In saturation events over difficult terrain, the formation speed is reduced based on the speed of the slowest saturated robot, using internal fleet communication and a speed-adaptation strategy, so that the formation error stays bounded and small. A formal proof for asymptotic stability of the formation system in non-saturated conditions is given. The performance of the robot controllers is verified using a modular 3-robot formation simulator. Simulations show that the formation errors reduce to zero asymptotically under non-saturated conditions as is guaranteed by the theoretical proof. In the second part, the proposed adaptive control methodology is extended for formation control of a class of omnidirectional rovers with three independently-driven universal holonomic rigid wheels, where the rovers' rigid-body dynamics, drive-system electromechanical characteristics, and wheel-ground interaction mechanics are incorporated. Holonomic rovers have the ability to move simultaneously and independently in translation and rotation, rendering great maneuverability and agility, which makes them suitable for formation navigation. Novel nonlinear adaptive control laws are designed for the input voltages of the three wheel-drive motors. The motion resistance, which is due to the sinkage of rover wheels in soft planetary terrain, is modeled using classical terramechanics theory. The unknown system parameters for adaptive estimation pertain to the rolling resistance forces and scrubbing resistance torques at the wheel-terrain interfaces. Novel terramechanical formulas for terrain resistance forces and torques are derived by considering the universal holonomic wheels as rigid toroidal wheels moving forward and/or sideways as well as turning on soft ground. The asymptotic stability of the formation control system is rigorously proved using Lyapunov's direct method.
DOT National Transportation Integrated Search
1998-08-01
The purpose of this study was to identify the factors that contribute to pilot-controller communication errors. Reports submitted to the Aviation Safety Reporting System (ASRS) offer detailed accounts of specific types of errors and a great deal of ...
"Apologies" from pathologists: why, when, and how to say "sorry" after committing a medical error.
Dewar, Rajan; Parkash, Vinita; Forrow, Lachlan; Truog, Robert D
2014-05-01
How pathologists communicate an error is complicated by the absence of a direct physician-patient relationship. Using 2 examples, we elaborate on how other physician colleagues routinely play an intermediary role in our day-to-day transactions and in the communication of a pathologist error to the patient. The concept of a "dual-hybrid" mind-set in the intermediary physician and its role in representing the pathologists' viewpoint adequately is considered. In a dual-hybrid mind-set, the intermediary physician can align with the patients' philosophy and like the patient, consider the smallest deviation from norm to be an error. Alternatively, they might embrace the traditional physician philosophy and communicate only those errors that resulted in a clinically inappropriate outcome. Neither may effectively reflect the pathologists' interests. We propose that pathologists develop strategies to communicate errors that include considerations of meeting with the patients directly. Such interactions promote healing for the patient and are relieving to the well-intentioned pathologist.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-21
... designed to protect investors and the public interest. Granting Market Makers more time to request a review... addresses errors in series with zero or no bid. Specifically, the Exchange proposes replacing reference to ``series quoted no bid on the Exchange'' with ``series where the NBBO bid is zero.'' This is being done to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-21
... addresses errors in series with zero or no bid. Specifically, the Exchange proposes replacing reference to ``series quoted no bid on the Exchange'' with ``series where the NBBO bid is zero.'' This is being done to... Exchange proposes to amend the times in which certain ATP Holders are required to notify the Exchange in...
2006-03-01
... included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant ... [table-of-contents residue: Multicollinearity; Testing OLS Assumptions] ... programming styles used by developers (Stamelos and others, 2003:733). Kemerer tested to see how models utilizing SLOC as an independent variable ...
On-chip WDM mode-division multiplexing interconnection with optional demodulation function.
Ye, Mengyuan; Yu, Yu; Chen, Guanyu; Luo, Yuchan; Zhang, Xinliang
2015-12-14
We propose and fabricate a wavelength-division-multiplexing (WDM) compatible and multi-functional mode-division-multiplexing (MDM) integrated circuit, which can perform the mode conversion and multiplexing for the incoming multipath WDM signals, avoiding the wavelength conflict. A phase-to-intensity demodulation function can be optionally applied within the circuit while performing the mode multiplexing. For demonstration, 4 × 10 Gb/s non-return-to-zero differential phase shift keying (NRZ-DPSK) signals are successfully processed, with open and clear eye diagrams. Measured bit error ratio (BER) results show less than 1 dB receiver sensitivity variation for three modes and four wavelengths with demodulation. In the case without demodulation, the average power penalties at 4 wavelengths are -1.5, -3 and -3.5 dB for TE₀-TE₀, TE₀-TE₁ and TE₀-TE₂ mode conversions, respectively. The proposed flexible scheme can be used at the interface of long-haul and on-chip communication systems.
Retrieving the polarization information for satellite-to-ground light communication
NASA Astrophysics Data System (ADS)
Tao, Qiangqiang; Guo, Zhongyi; Xu, Qiang; Jiao, Weiyan; Wang, Xinshun; Qu, Shiliang; Gao, Jun
2015-08-01
In this paper, we have investigated the reconstruction of the polarization states (degree of polarization (DoP) and angle of polarization (AoP)) of the incident light which passed through a 10 km atmospheric medium between the satellite and the Earth. Here, we proposed a more practical atmospheric model in which the 10 km atmospheric medium is divided into ten layers to be appropriate for the Monte Carlo simulation algorithm. Based on this model, the polarization retrieve (PR) method can be used for reconstructing the initial polarization information effectively, and the simulated results demonstrate that the mean errors of the retrieved DoP and AoP are very close to zero. Moreover, the results also show that although the atmospheric medium system is fixed, the Mueller matrices for the downlink and uplink are completely different, which shows that the light transmissions in the two links are irreversible in the layered atmospheric medium system.
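In standard Stokes-vector notation (our notation; the paper's may differ), the two reconstructed quantities are

```latex
\mathrm{DoP} = \frac{\sqrt{Q^{2}+U^{2}+V^{2}}}{I},
\qquad
\mathrm{AoP} = \tfrac{1}{2}\arctan\!\left(\frac{U}{Q}\right),
```

with (I, Q, U, V) the Stokes parameters of the received light.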
Suez Canal Clearance Operation, Task Force 65
1975-05-01
... supporting minesweepers, GARDENIA, GIROFLEE, AJONC, and LILAS, and two minehunters CERES and CALLIOPE. TG SIX FIVE POINT ZERO. This Task Group designation...circle search line, buoy line, tether, zero visibility, and no communication with the surface, created a hazardous situation for open circuit scuba...from essentially zero to several hundred thousand in Port Said and Suez City, and to a lesser degree in Ismailia. This occurred without a concomitant
The advanced progress of precoding technology in 5g system
NASA Astrophysics Data System (ADS)
An, Chenyi
2017-09-01
As technology develops, users place ever higher demands on mobile systems, and the emergence of 5G has reshaped the trajectory of mobile communication technology. Among the core technologies of 5G mobile communication, large-scale MIMO and precoding are research hotspots. This paper surveys linear precoding methods for 5G systems, including the maximum ratio transmission (MRT), zero-forcing (ZF), and minimum mean square error (MMSE) precoding algorithms, as well as precoding based on the maximum signal-to-leakage-and-noise ratio (SLNR); each algorithm is analyzed and summarized in detail. Nonlinear precoding methods, such as dirty paper coding and the Tomlinson-Harashima precoding (THP) algorithm, are also reviewed. Through this analysis, the advantages, disadvantages, and development trends of each algorithm emerge, giving a picture of the current state of precoding technology in 5G systems. The results and data of this paper can therefore serve as a reference for the further development of 5G precoding technology.
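Two of the surveyed linear precoders are easy to state concretely. The numpy sketch below is a minimal illustration of ZF and MMSE (regularized ZF) precoding for a multi-user MIMO downlink; the shapes and the simple Frobenius power normalization are illustrative choices of ours.

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing: right pseudo-inverse of the K x M downlink channel H,
    so that H @ W is (up to a scale factor) the identity; no inter-user
    interference remains."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, "fro")          # unit total transmit power

def mmse_precoder(H, snr):
    """Regularized ZF / MMSE: trades residual interference against noise."""
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / snr) * np.eye(K))
    return W / np.linalg.norm(W, "fro")

# toy example: 4 antennas serving 2 single-antenna users
H = (np.random.randn(2, 4) + 1j * np.random.randn(2, 4)) / np.sqrt(2)
print(np.round(H @ zf_precoder(H), 3))           # ~ diagonal: interference nulled
```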
On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound
NASA Astrophysics Data System (ADS)
Li, Ruihu; Li, Xueliang; Guo, Luobin
2015-12-01
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that is EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for the existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except for four codes, our [[n,k,d_{ea};c
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
NASA Technical Reports Server (NTRS)
Natarajan, Suresh; Gardner, C. S.
1987-01-01
Receiver timing synchronization of an optical Pulse-Position Modulation (PPM) communication system can be achieved using a phase-locked loop (PLL), provided the photodetector output is suitably processed. The magnitude of the PLL phase error is a good indicator of the timing error at the receiver decoder. The statistics of the phase error are investigated while varying several key system parameters such as PPM order, signal and background strengths, and PLL bandwidth. A practical optical communication system utilizing a laser diode transmitter and an avalanche photodiode in the receiver is described, and the sampled phase error data are presented. A linear regression analysis is applied to the data to obtain estimates of the relational constants involving the phase error variance and incident signal power.
Satellite-based quantum communication terminal employing state-of-the-art technology
NASA Astrophysics Data System (ADS)
Pfennigbauer, Martin; Aspelmeyer, Markus; Leeb, Walter R.; Baister, Guy; Dreischer, Thomas; Jennewein, Thomas; Neckamm, Gregor; Perdigues, Josep M.; Weinfurter, Harald; Zeilinger, Anton
2005-09-01
Feature Issue on Optical Wireless Communications (OWC) We investigate the design and the accommodation of a quantum communication transceiver in an existing classical optical communication terminal on board a satellite. Operation from a low earth orbit (LEO) platform (e.g., the International Space Station) would allow transmission of single photons and pairs of entangled photons to ground stations and hence permit quantum communication applications such as quantum cryptography on a global scale. Integration of a source generating entangled photon pairs and single-photon detection into existing optical terminal designs is feasible. Even more, major subunits of the classical terminals such as those for pointing, acquisition, and tracking as well as those providing the required electronic, thermal, and structural backbone can be adapted so as to meet the quantum communication terminal needs.
Broad, Joanna; Wells, Sue; Marshall, Roger; Jackson, Rod
2007-01-01
Background Most blood pressure recordings end with a zero end-digit despite guidelines recommending measurement to the nearest 2 mmHg. The impact of rounding on management of cardiovascular disease (CVD) risk is unknown. Aim To document the use of rounding to zero end-digit and assess its potential impact on eligibility for pharmacologic management of CVD risk. Design of study Cross-sectional study. Setting A total of 23 676 patients having opportunistic CVD risk assessment in primary care practices in New Zealand. Method To simulate rounding in practice, for patients with systolic blood pressures recorded without a zero end-digit, a second blood pressure measure was generated by arithmetically rounding to the nearest zero end-digit. A 10-year Framingham CVD risk score was estimated using actual and rounded blood pressures. Eligibility for pharmacologic treatment was then determined using the Joint British Societies' JBS2 and the British Hypertension Society BHS–IV guidelines based on actual and rounded blood pressure values. Results Zero end-digits were recorded in 64% of systolic and 62% of diastolic blood pressures. When eligibility for drug treatment was based only on a Framingham 10-year CVD risk threshold of 20% or more, rounding misclassified one in 41 of all those patients subject to this error. Under the two guidelines which use different combinations of CVD risk and blood pressure thresholds, one in 19 would be misclassified under JBS2 and one in 12 under the BHS–IV guidelines mostly towards increased treatment. Conclusion Zero end-digit preference significantly increases a patient's likelihood of being classified as eligible for drug treatment. Guidelines that base treatment decisions primarily on absolute CVD risk are less susceptible to these errors. PMID:17976291
Broad, Joanna; Wells, Sue; Marshall, Roger; Jackson, Rod
2007-11-01
Most blood pressure recordings end with a zero end-digit despite guidelines recommending measurement to the nearest 2 mmHg. The impact of rounding on management of cardiovascular disease (CVD) risk is unknown. To document the use of rounding to zero end-digit and assess its potential impact on eligibility for pharmacologic management of CVD risk. Cross-sectional study. A total of 23,676 patients having opportunistic CVD risk assessment in primary care practices in New Zealand. To simulate rounding in practice, for patients with systolic blood pressures recorded without a zero end-digit, a second blood pressure measure was generated by arithmetically rounding to the nearest zero end-digit. A 10-year Framingham CVD risk score was estimated using actual and rounded blood pressures. Eligibility for pharmacologic treatment was then determined using the Joint British Societies' JBS2 and the British Hypertension Society BHS-IV guidelines based on actual and rounded blood pressure values. Zero end-digits were recorded in 64% of systolic and 62% of diastolic blood pressures. When eligibility for drug treatment was based only on a Framingham 10-year CVD risk threshold of 20% or more, rounding misclassified one in 41 of all those patients subject to this error. Under the two guidelines which use different combinations of CVD risk and blood pressure thresholds, one in 19 would be misclassified under JBS2 and one in 12 under the BHS-IV guidelines mostly towards increased treatment. Zero end-digit preference significantly increases a patient's likelihood of being classified as eligible for drug treatment. Guidelines that base treatment decisions primarily on absolute CVD risk are less susceptible to these errors.
Normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2015-01-01
The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to a conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied on the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.
Impact of Measurement Error on Statistical Power: Review of an Old Paradox.
ERIC Educational Resources Information Center
Williams, Richard H.; And Others
1995-01-01
The paradox that a Student t-test based on pretest-posttest differences can attain its greatest power when the difference score reliability is zero was explained by demonstrating that power is not a mathematical function of reliability unless either true score variance or error score variance is constant. (SLD)
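In standard classical-test-theory notation (ours, not the article's), the paradox can be sketched as follows. For the difference score D, the reliability and the one-sample t statistic are

```latex
\rho_{D} = \frac{\sigma^{2}_{T_D}}{\sigma^{2}_{T_D}+\sigma^{2}_{E_D}},
\qquad
t = \frac{\bar{D}}{s_{D}/\sqrt{n}} .
```

If every examinee has the same true gain, then the true-score variance of the differences is zero, so ρ_D = 0; yet s_D then reflects error variance alone, which can be small, so the test's power can be at its greatest precisely when the reliability is zero.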
Biases and Standard Errors of Standardized Regression Coefficients
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2011-01-01
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique
2016-10-01
The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error and power of the test of time effect. The type I error was controlled for classical test theory and the Rasch model whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model performed better than the classical test theory approach regarding the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly for power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data.
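For reference, the dichotomous Rasch model underlying the comparison has the standard form (our notation):

```latex
P(X_{pi}=1 \mid \theta_{p}, b_{i}) = \frac{\exp(\theta_{p}-b_{i})}{1+\exp(\theta_{p}-b_{i})},
```

where θ_p is the latent trait of subject p and b_i the difficulty of item i. Because each observed item contributes its own term to the likelihood, intermittently missing items simply drop out of the estimation instead of distorting a sum score, which is consistent with the advantage reported above.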
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends the Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of influenza incidence is not readily available (for instance, owing to delays in communication with health informants) and the prediction must be constructed by correcting the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators yield better estimates of the regression model parameters than the classical models do. They obtain extremely good predictive results and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) is also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen
2018-01-01
The navigation accuracy of the inertial navigation system (INS) can be greatly improved when the inertial measurement unit (IMU) is effectively calibrated and compensated, such as gyro drifts and accelerometer biases. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. The actual experiment verifies that the method can identify all error parameters of HINS and this method has equivalent accuracy to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible. PMID:29695041
Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen
2018-04-24
The navigation accuracy of the inertial navigation system (INS) can be greatly improved when the inertial measurement unit (IMU) is effectively calibrated and compensated, such as gyro drifts and accelerometer biases. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. The actual experiment verifies that the method can identify all error parameters of HINS and this method has equivalent accuracy to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' law, and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. When the second event is smaller than the first event, the maximum dependence is less than 1, as defined by Bayes' law. As such, alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades, and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
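A minimal sketch of the probabilistic bounds being contrasted with THERP's discrete levels, assuming the standard Fréchet-Hoeffding bounds on a joint probability; the function name and example values are ours.

```python
def dependence_bounds(p_a, p_b):
    """Bounds on the joint probability P(A and B) for given marginals."""
    p_max_neg = max(0.0, p_a + p_b - 1.0)   # maximum negative dependence
    p_indep   = p_a * p_b                   # independence
    p_max_pos = min(p_a, p_b)               # maximum (positive) dependence
    return p_max_neg, p_indep, p_max_pos

# two rare human failure events: the maximum negative dependence collapses
# to 0, and the maximum conditional probability P(B | A) can still be below 1
p_a, p_b = 1e-2, 5e-3
lo, ind, hi = dependence_bounds(p_a, p_b)
print(lo, ind, hi)          # 0.0, 5e-05, 0.005
print(hi / p_a)             # max P(B|A) = 0.5 < 1, as Bayes' law requires
```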
Cultural Variability in Crew Discourse
NASA Technical Reports Server (NTRS)
Fischer, Ute
1999-01-01
Four studies were conducted to determine features of effective crew communication in response to errors during flight. Study One examined whether US captains and first officers use different communication strategies to correct errors and problems on the flight deck, and whether their communications are affected by the two situation variables, level of risk and degree of face-threat involved in challenging an error. Study Two was the cross-cultural extension of Study One and involved pilots from three European countries. Study Three compared communication strategies of female and male air carrier pilots who were matched in terms of years and type of aircraft experience. The final study assessed the effectiveness of the communication strategies observed in Study One.
Objectified quantification of uncertainties in Bayesian atmospheric inversions
NASA Astrophysics Data System (ADS)
Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.
2015-05-01
Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of the maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
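For orientation, the classical Gaussian Bayesian update that the marginalization wraps can be written in the usual form (our notation):

```latex
\hat{x} = x_{b} + \mathbf{B}\mathbf{H}^{\mathsf T}
\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf T}+\mathbf{R}\right)^{-1}
\left(y-\mathbf{H}x_{b}\right),
```

with x_b the prior fluxes, H the observation operator, and B and R the prior and observation error covariances. The marginalization described above amounts to averaging this estimate over a set of plausible (B, R) pairs weighted by their approximate likelihood, rather than freezing a single expert-chosen pair.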
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
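As a representative example of the error-probability expressions such surveys summarize, coherent BPSK over an additive white Gaussian noise channel has the standard bit error probability

```latex
P_{b} = Q\!\left(\sqrt{\frac{2E_{b}}{N_{0}}}\right),
\qquad
Q(x) = \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty} e^{-t^{2}/2}\,dt,
```

where E_b is the energy per bit and N_0 the one-sided noise spectral density; other modulation schemes trade bandwidth and complexity against this error rate.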
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
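The effect is easy to reproduce numerically. Below is a small sketch, assuming a smooth test integrand of our choosing with known integral 1, comparing plain Monte Carlo against a scrambled Sobol sequence via scipy.stats.qmc:

```python
import numpy as np
from scipy.stats import qmc

d, n = 4, 2**12
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)   # integral over [0,1]^4 is 1

rng = np.random.default_rng(0)
mc_err = abs(f(rng.random((n, d))).mean() - 1.0)        # classical Monte Carlo

sobol = qmc.Sobol(d, scramble=True, seed=0)             # quasi-random point set
qmc_err = abs(f(sobol.random(n)).mean() - 1.0)

print(f"MC error  ~ {mc_err:.2e}")    # O(n^-1/2) stochastic scaling
print(f"QMC error ~ {qmc_err:.2e}")   # typically much smaller for smooth f
```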
Quantum Secure Conditional Direct Communication via EPR Pairs
NASA Astrophysics Data System (ADS)
Gao, Ting; Yan, Fengli; Wang, Zhixi
Two schemes for quantum secure conditional direct communication are proposed, where a set of EPR pairs of maximally entangled particles in Bell states, initially made by the supervisor Charlie, but shared by the sender Alice and the receiver Bob, functions as quantum information channels for faithful transmission. After ensuring the security of the quantum channel and obtaining the permission of Charlie (i.e., Charlie is trustworthy and cooperative, which means the "conditional" in the two schemes), Alice and Bob begin their private communication under the control of Charlie. In the first scheme, Alice transmits a secret message to Bob in a deterministic manner with the help of Charlie by means of Alice's local unitary transformations, both Alice and Bob's local measurements, and both Alice and Charlie's public classical communication. In the second scheme, the secure communication between Alice and Bob can be achieved via public classical communication of Charlie and Alice, and the local measurements of both Alice and Bob. The common feature of these protocols is that the communications between the two communication parties Alice and Bob depend on the agreement of the third side Charlie. Moreover, to transmit one bit of secret message, the sender Alice only needs to apply a local operation on her one qubit and send one bit of classical information. We also show that the two schemes are completely secure if the quantum channels are perfect.
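For reference, the four Bell states that serve as the channel are the standard maximally entangled two-qubit states:

```latex
|\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle \pm |11\rangle\right),
\qquad
|\Psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\left(|01\rangle \pm |10\rangle\right).
```

A local Pauli operation on Alice's half maps one Bell state to another, which is why a single local operation plus one classical bit suffices to convey one bit of the message.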
Performance of a laser microsatellite network with an optical preamplifier.
Arnon, Shlomi
2005-04-01
Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the line of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which the optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors of σ²χG > 0.05 penalize communication performance dramatically for all combinations of optical amplifier gains and noise figures that were calculated.
Small massless excitations against a nontrivial background
NASA Astrophysics Data System (ADS)
Khariton, N. G.; Svetovoy, V. B.
1994-03-01
We propose a systematic approach for finding bosonic zero modes of nontrivial classical solutions in a gauge theory. The method allows us to find all the modes connected with the broken space-time and gauge symmetries. The ground state is supposed to be dependent on some space coordinates y_α and independent of the rest of the coordinates x_i. The main problem which is solved is how to construct the zero modes corresponding to the broken x_i-y_α rotations in vacuum and which boundary conditions specify them. It is found that the rotational modes are typically singular at the origin or at infinity, but their energy remains finite. They behave as massless vector fields in x-space. We analyze local and global symmetries affecting the zero modes. An algorithm for constructing the zero mode excitations is formulated. The main results are illustrated in the Abelian Higgs model with the string background.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the start of the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, the Microsoft Foundation Class (MFC) framework in Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
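As an illustration of one of the listed methods, here is a minimal exponential smoothing (ESM) sketch in Python; the accident-rate series is hypothetical, not data from the paper.

```python
def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing; returns the smoothed level after the
    last observation, which also serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# hypothetical annual accident rates (%) after a zero-accident campaign starts
rates = [1.9, 1.6, 1.4, 1.1, 0.9, 0.8]
print(f"next-year forecast: {exp_smooth(rates):.2f}%")
```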
Speed Versus Accuracy: A Zero Sum Game
2009-05-11
... sacrificed to mitigate the risk to credibility. In Ongoing Crisis Communication: Planning, Managing, and Responding, Timothy W. Coombs presents speed in ... [footnote residue: Carlson and Abelson, 24; Coombs, Crisis Management and Communication, http://www.instituteforpr.org/essential_knowledge/detail...crisi_management_and_communications (accessed 30 April 2009); Coombs, Ongoing Crisis Communication, 2nd ed.]
Accurate Singular Values and Differential QD Algorithms
1992-07-01
of the Cholesky Algorithm 5 4 The Quotient Difference Algorithm 8 5 Incorporation of Shifts 11 5.1 Shifted qd Algorithms...Effects of Finite Precision 18 7.1 Error Analysis - Overview ........ ........................... 18 7.2 High Relative Accuracy in the Presence of...showing that it was preferable to replace the DK zero-shift QR transform by two steps of zero-shift LR implemented in a qd (quotient- difference ) format
Quantum error correction assisted by two-way noisy communication
Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C. H.
2014-01-01
Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1
Quantum error correction assisted by two-way noisy communication.
Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C H
2014-11-26
Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1
Why do adult dogs (Canis familiaris) commit the A-not-B search error?
Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József
2014-02-01
It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use the human ostensive referential communication as a source of information or the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.
High resolution study of magnetic ordering at absolute zero.
Lee, M; Husmann, A; Rosenbaum, T F; Aeppli, G
2004-05-07
High resolution pressure measurements in the zero-temperature limit provide a unique opportunity to study the behavior of strongly interacting, itinerant electrons with coupled spin and charge degrees of freedom. Approaching the precision that has become the hallmark of experiments on classical critical phenomena, we characterize the quantum critical behavior of the model, elemental antiferromagnet chromium, lightly doped with vanadium. We resolve the sharp doubling of the Hall coefficient at the quantum critical point and trace the dominating effects of quantum fluctuations up to surprisingly high temperatures.
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions, and then transmitting them over different channels in a packet network or an error-prone wireless environment to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover a lost description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors. The first bit error may cause the decoder to discard all subsequent bits whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet loss rate.
NASA Astrophysics Data System (ADS)
Li, Xin; Hong, Yifeng; Wang, Jinfang; Liu, Yang; Sun, Xun; Li, Mi
2018-01-01
The numerous communication techniques and optical devices successfully applied in space optical communication systems indicate their good portability. With this portability, the typical coherent demodulation technique of the Costas loop can easily be adopted in a space optical communication system. As one component of the pointing error, jitter plays an important role in the communication quality of such a system. Here, we obtain the probability density functions (PDFs) for different degrees of jitter and explain their essential effect on the bit error rate (BER) of a space optical communication system. Under the effect of jitter, we also study the BER of a space coherent optical communication system using a Costas loop for different system parameters: transmission power, divergence angle, receiving diameter, avalanche photodiode (APD) gain, and the phase deviation caused by the Costas loop. Through a numerical simulation of this kind of communication system, we demonstrate the relationship between the BER and these system parameters, and corresponding methods of system optimization are presented to enhance the communication quality.
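A hedged numerical sketch of how a jitter PDF enters the average BER: we assume Rayleigh-distributed radial jitter, a Gaussian-beam pointing loss scaling the received SNR, and the conditional BPSK error rate Q(sqrt(2*SNR)) appropriate to ideal Costas-loop demodulation. These modeling choices are ours, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def mean_ber(snr0, sigma_j, w=1.0):
    """Average BER of coherent BPSK under residual pointing jitter.
    Radial jitter r is Rayleigh(sigma_j); a Gaussian beam attenuates the
    received SNR by exp(-2 r^2 / w^2); conditional BER is Q(sqrt(2*SNR))."""
    def integrand(r):
        snr = snr0 * np.exp(-2.0 * r**2 / w**2)
        pdf = (r / sigma_j**2) * np.exp(-r**2 / (2.0 * sigma_j**2))
        return norm.sf(np.sqrt(2.0 * snr)) * pdf      # norm.sf is the Q-function
    return quad(integrand, 0.0, 12.0 * sigma_j)[0]

for s in (0.05, 0.15, 0.30):                # jitter std, in beam-radius units
    print(s, f"{mean_ber(snr0=10.0, sigma_j=s):.2e}")  # BER degrades with jitter
```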
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates.
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
The Effects of Tense Continuity and Subject-Verb Agreement Errors on Communication.
ERIC Educational Resources Information Center
Porton, Vicki M.
This study explored the dichotomy between global errors, that is, those violating rules of overall sentence structure, and local errors, that is, those violating rules within a particular constituent of a sentence, and the relationship of these to communication breakdown. The focus was tense continuity across clauses (TC) and subject-verb…
Damage Initiation in Two-Dimensional, Woven, Carbon-Carbon Composites
1988-12-01
biaxial stress interaction were themselves a function of the applied biaxial stress ratio and thus the error in measuring F12 depended on F12. To find the...the supported directions. Discretizing the model will tend to induce error in the computed nodal displacements when compared to an exact continuum...solution, however, for an increasing number of elements in the structural model, the net error should converge to zero (3:94). The inherent flexibility in
Dark Signal Characterization of 1.7 micron cutoff devices for SNAP
NASA Astrophysics Data System (ADS)
Smith, R. M.; SNAP Collaboration
2004-12-01
We report initial progress characterizing non-photometric sources of error -- dark current, noise, and zero point drift -- for 1.7 micron cutoff HgCdTe and InGaAs detectors under development by Raytheon, Rockwell, and Sensors Unlimited for SNAP. Dark current specifications can already be met with several detector types. Changes to the manufacturing process are being explored to improve the noise reduction available through multiple sampling. In some cases, a significant number of pixels suffer from popcorn noise, with a few percent of all pixels exhibiting a tenfold noise increase. A careful study of zero point drifts is also under way, since these errors can dominate dark current, and may contribute to the noise degradation seen in long exposures.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g., spacecraft fields and misalignments, and from inside, e.g., zero-offset and scale-factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
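For readers unfamiliar with "absolute mean plus 3 sigma" accuracy budgets, here is a minimal Monte Carlo sketch in the spirit of the analysis described; the error magnitudes are entirely assumed and this is not the GOES-R simulation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
B_true = 100.0                                # quiet-time field, nT

offset = rng.normal(0.0, 0.4, n)              # zero-offset error, nT (assumed)
scale = rng.normal(0.0, 2e-3, n) * B_true     # scale-factor error (assumed 0.2%)
misalign = rng.normal(0.0, 0.3, n)            # misalignment contribution, nT (assumed)

err = offset + scale + misalign
accuracy = abs(err.mean()) + 3.0 * err.std()  # quiet-time accuracy metric
print(f"mean + 3 sigma: {accuracy:.2f} nT (requirement: 1.7 nT)")
```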
2012-06-01
µV RTI (relative to input) or ±1.0 mV RTO (relative to output). On a gain of 1,000, this is ±2.0 mV or ±0.02% FS and is a systematic error. Zero...Stability, Time: ±5 µV RTI, ±1.0 mV RTO. On a gain of 1,000, this is ±6 mV or ±0.06% FS. However, since the manufacturer’s specification is for 1-year...This is a random error assumed to be normally distributed. Zero Stability, Temperature: ±1 µV RTI, ±0.2 mV RTO/°C. On a gain of 1,000 and for a
A zero-error operational video data compression system
NASA Technical Reports Server (NTRS)
Kutz, R. L.
1973-01-01
A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system has become the only source of near-realtime very high resolution radiometer image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.
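The essence of such zero-error (lossless) compression can be shown in a few lines: delta-encode the quantized samples, verify exact reconstruction, and compare entropies to see roughly where a factor-of-two throughput gain can come from. This is only a toy sketch with synthetic data, not the flight algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
# Smooth synthetic scan line quantized to 6 bits (0..63) -- an assumption.
line = np.clip(np.cumsum(rng.integers(-2, 3, 4096)) + 32, 0, 63)

deltas = np.diff(line, prepend=0)      # small values on smooth imagery

def entropy_bits(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"raw: {entropy_bits(line):.2f} bits/sample, "
      f"delta: {entropy_bits(deltas):.2f} bits/sample")

# Zero error: the cumulative sum inverts the delta coding exactly.
assert np.array_equal(np.cumsum(deltas), line)
```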
Modular Battery Charge Controller
NASA Technical Reports Server (NTRS)
Button, Robert; Gonzalez, Marcelo
2009-01-01
A new approach to masterless, distributed, digital-charge control for batteries requiring charge control has been developed and implemented. This approach is required in battery chemistries that need cell-level charge control for safety and is characterized by the use of one controller per cell, resulting in redundant sensors for critical components, such as voltage, temperature, and current. The charge controllers in a given battery interact in a masterless fashion for the purpose of cell balancing, charge control, and state-of-charge estimation. This makes the battery system inherently fault-tolerant. The single-point failure associated with the use of a single charge controller (CC) was solved by implementing one CC per cell and linking them via an isolated communication bus [e.g., controller area network (CAN)] in a masterless fashion so that the failure of one or more CCs will not impact the remaining functional CCs. Each micro-controller-based CC digitizes the cell voltage (V(sub cell)), two cell temperatures, and the voltage across the switch (V); the latter variable is used in conjunction with V(sub cell) to estimate the bypass current for a given bypass resistor. Furthermore, CC1 digitizes the battery current (I1) and battery voltage (V(sub batt)), and CC5 digitizes a second battery current (I2). As a result, redundant readings are taken for temperature, battery current, and battery voltage through the summation of the individual cell voltages, given that each CC knows the voltage of the other cells. For the purpose of cell balancing, each CC periodically and independently transmits its cell voltage and stores the received cell voltages of the other cells in an array. The position in the array depends on the identifier (ID) of the transmitting CC. After eight cell voltage receptions, the array is checked to see if one or more cells did not transmit. If one or more transmissions are missing, the missing cell(s) is (are) eliminated from cell-balancing calculations. The cell-balancing algorithm is based on the error between a cell's voltage and those of the other cells and is categorized into four zones of operation. The algorithm is executed every second and, if cell balancing is activated, the error variable is set to a negative low value. The largest error between the cell and the other cells is found and the zone of operation determined. If the error is zero or negative, then the cell is at the lowest voltage and no balancing action is needed. If the error is less than a predetermined negative value, a Cell Bad Flag is set. If the error is positive, then cell balancing is needed, but a hysteretic zone is added to prevent the bypass circuit from triggering repeatedly near zero error. This approach keeps the cells within a predetermined voltage range.
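A minimal sketch of the zone-based balancing decision described above; the voltage thresholds and hysteresis band are assumptions, not the controller's actual values.

```python
CELL_BAD_LIMIT = -0.50   # V; a cell this far below the others is flagged bad (assumed)
BALANCE_ON = 0.05        # V; start bypassing above this error (assumed)
BALANCE_OFF = 0.02       # V; stop bypassing below this error (hysteresis, assumed)

def balance_action(v_cell, v_others, bypass_on):
    """Zone logic for one cell, given the other cells' reported voltages."""
    error = v_cell - min(v_others)          # largest error vs. the other cells
    if error <= CELL_BAD_LIMIT:
        return "CELL_BAD_FLAG", False       # zone 1: far below the pack
    if error <= 0.0:
        return "NO_ACTION", False           # zone 2: lowest cell, nothing to bleed
    # Zones 3/4: hysteresis keeps the bypass from chattering near zero error.
    bypass_on = error > (BALANCE_OFF if bypass_on else BALANCE_ON)
    return ("BYPASS" if bypass_on else "NO_ACTION"), bypass_on

print(balance_action(3.95, [3.80, 3.82, 3.81], bypass_on=False))  # ('BYPASS', True)
```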
Zero-point energy conservation in classical trajectory simulations: Application to H2CO
NASA Astrophysics Data System (ADS)
Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.
2018-05-01
A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on whether, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined with four, on average, giving values within 4 kJ/mol—chemical accuracy—for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff and classical simulations of kinetic energy release should therefore be viewed with caution.
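The construction Veff = V + ZPE can be mimicked in one dimension with a zeroth-order Shepard (inverse-distance-weighted) interpolation of a few local ZPE values; the paper uses a second-order modified Shepard scheme on the full-dimensional PES, so everything below (potential, reference data, weights) is an illustrative assumption.

```python
import numpy as np

ref_q = np.array([-1.0, 0.0, 1.0])       # reference geometries (assumed)
ref_zpe = np.array([0.30, 0.25, 0.32])   # local harmonic ZPE there (assumed)

def V(q):
    return q**4 - q**2                   # toy double-well PES

def zpe_shepard(q, p=4):
    w = 1.0 / (np.abs(q - ref_q)**p + 1e-12)   # inverse-distance weights
    return float(np.sum(w * ref_zpe) / np.sum(w))

def V_eff(q):
    return V(q) + zpe_shepard(q)         # ZPE-corrected effective potential

for q in (-1.0, -0.5, 0.0, 0.5):
    print(f"q = {q:+.1f}   V = {V(q):+.3f}   V_eff = {V_eff(q):+.3f}")
```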
A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors
Liu, Shibin
2018-01-01
Nonlinearity is a prominent limitation to the calibration performance for two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then the harmonic decomposition method is utilized to estimate the compensation coefficients. Meanwhile, the proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT obtained after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° achieved by the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
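Harmonic decomposition of a compass-style calibration can be sketched as a small least-squares fit: offset errors produce a first-harmonic heading error, while scale and orthogonality errors produce a second harmonic. The error amplitudes below are invented, and the sketch assumes a known reference heading (e.g., from a turntable) rather than the paper's exact procedure.

```python
import numpy as np

theta = np.linspace(0, 2*np.pi, 360, endpoint=False)       # reference heading
err = (np.deg2rad(1.2) * np.sin(theta + 0.3)                # offset-type error (assumed)
       + np.deg2rad(0.8) * np.sin(2*theta - 0.5))           # scale-type error (assumed)
measured = theta + err

# Fit first and second harmonics by least squares to get compensation terms.
A = np.column_stack([np.sin(theta), np.sin(2*theta),
                     np.cos(theta), np.cos(2*theta)])
coef, *_ = np.linalg.lstsq(A, measured - theta, rcond=None)

corrected = measured - A @ coef
print(f"max heading error before: {np.rad2deg(np.abs(err).max()):.2f} deg, "
      f"after: {np.rad2deg(np.abs(corrected - theta).max()):.2e} deg")
```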
Preparation and measurement of three-qubit entanglement in a superconducting circuit.
Dicarlo, L; Reed, M D; Sun, L; Johnson, B R; Chow, J M; Gambetta, J M; Frunzio, L; Girvin, S M; Devoret, M H; Schoelkopf, R J
2010-09-30
Traditionally, quantum entanglement has been central to foundational discussions of quantum mechanics. The measurement of correlations between entangled particles can have results at odds with classical behaviour. These discrepancies grow exponentially with the number of entangled particles. With the ample experimental confirmation of quantum mechanical predictions, entanglement has evolved from a philosophical conundrum into a key resource for technologies such as quantum communication and computation. Although entanglement in superconducting circuits has been limited so far to two qubits, the extension of entanglement to three, eight and ten qubits has been achieved among spins, ions and photons, respectively. A key question for solid-state quantum information processing is whether an engineered system could display the multi-qubit entanglement necessary for quantum error correction, which starts with tripartite entanglement. Here, using a circuit quantum electrodynamics architecture, we demonstrate deterministic production of three-qubit Greenberger-Horne-Zeilinger (GHZ) states with fidelity of 88 per cent, measured with quantum state tomography. Several entanglement witnesses detect genuine three-qubit entanglement by violating biseparable bounds by 830 ± 80 per cent. We demonstrate the first step of basic quantum error correction, namely the encoding of a logical qubit into a manifold of GHZ-like states using a repetition code. The integration of this encoding with decoding and error-correcting steps in a feedback loop will be the next step for quantum computing with integrated circuits.
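The 88% figure is a state fidelity of the kind that is easy to reproduce numerically: build the GHZ state, apply a toy noise model, and evaluate ⟨GHZ|ρ|GHZ⟩. The depolarizing weight below is an assumption chosen only to land near the experimental value.

```python
import numpy as np

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)                      # (|000> + |111>)/sqrt(2)
rho_ideal = np.outer(ghz, ghz)

p = 0.12                                              # assumed depolarizing weight
rho_noisy = (1 - p) * rho_ideal + p * np.eye(8) / 8   # toy noise model

fidelity = ghz @ rho_noisy @ ghz                      # <GHZ|rho|GHZ> for a pure target
print(f"GHZ fidelity: {fidelity:.3f}")                # ~0.9, near the reported 88%
```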
Optical truss and retroreflector modeling for picometer laser metrology
NASA Astrophysics Data System (ADS)
Hines, Braden E.
1993-09-01
Space-based astrometric interferometer concepts typically have a requirement for the measurement of the internal dimensions of the instrument to accuracies in the picometer range. While this level of resolution has already been achieved for certain special types of laser gauges, techniques for picometer-level accuracy need to be developed to enable all the various kinds of laser gauges needed for space-based interferometers. Systematic errors due to retroreflector imperfections become important as soon as the retroreflector is allowed to either translate in position or articulate in angle away from its nominal zero-point. Also, when combining several laser interferometers to form a three-dimensional laser gauge (a laser optical truss), systematic errors due to imperfect knowledge of the truss geometry are important as the retroreflector translates away from its nominal zero-point. In order to assess the astrometric performance of a proposed instrument, it is necessary to determine how the effects of an imperfect laser metrology system impact the astrometric accuracy. This paper shows the development of an error-propagation model from errors in the 1-D metrology measurements through to their impact on the overall astrometric accuracy for OSI. Simulations based on this model are then presented, which were used to define a multiplier that determines the 1-D metrology accuracy required to produce a given amount of fringe position error.
Improved Airborne Gravity Results Using New Relative Gravity Sensor Technology
NASA Astrophysics Data System (ADS)
Brady, N.
2013-12-01
Airborne gravity data has contributed greatly to our knowledge of subsurface geophysics particularly in rugged and otherwise inaccessible areas such as Antarctica. Reliable high quality GPS data has renewed interest in improving the accuracy of airborne gravity systems and recent improvements in the electronic control of the sensor have increased the accuracy and ability of the classic Lacoste and Romberg zero length spring gravity meters to operate in turbulent air conditions. Lacoste and Romberg type gravity meters provide increased sensitivity over other relative gravity meters by utilizing a mass attached to a horizontal beam which is balanced by a 'zero-length spring'. This type of dynamic gravity sensor is capable of measuring gravity changes on the order of 0.05 milliGals in laboratory conditions but more commonly 0.7 to 1 milliGal in survey use. The sensor may have errors induced by the electronics used to read the beam position as well as noise induced by unwanted accelerations, commonly turbulence, which moves the beam away from its ideal balance position otherwise known as the reading line. The sensor relies on a measuring screw controlled by a computer which attempts to bring the beam back to the reading line position. The beam is also heavily damped so that it does not react to most unwanted high frequency accelerations. However this heavily damped system is slow to react, particularly in turns where there are very high Eotvos effects. New sensor technology utilizes magnetic damping of the beam coupled with an active feedback system which acts to effectively keep the beam locked at the reading line position. The feedback system operates over the entire range of the system so there is now no requirement for a measuring screw. The feedback system operates at very high speed so that even large turbulent events have minimal impact on data quality and very little, if any, survey line data is lost because of large beam displacement errors. Airborne testing along with results from ground based van testing and laboratory results have shown that the new sensor provides more consistent gravity data, as measured by repeated line surveys, as well as preserving the inherent sensitivity of the Lacoste and Romberg zero length spring design. The sensor also provides reliability during survey operation as there is no mechanical counter screw. Results will be presented which show the advantages of the new sensor system over the current technology in both data quality and survey productivity. Applications include high resolution geoid mapping, crustal structure investigations and resource mapping of minerals, oil and gas.
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Line of sight pointing technology for laser communication system between aircrafts
NASA Astrophysics Data System (ADS)
Zhao, Xin; Liu, Yunqing; Song, Yansong
2017-12-01
In space optical communications, it is important to obtain the most efficient performance of the line-of-sight (LOS) pointing system. Errors in position (latitude, longitude, and altitude), attitude angles (pitch, yaw, and roll), and installation angles among the different coordinate systems are usually unavoidable when assembling and operating an aircraft optical communication terminal. These errors lead to pointing errors and make it difficult for the LOS system to point at the partner terminal to establish a communication link. The LOS pointing technology of an aircraft optical communication system has been studied using a transformation matrix between the coordinate systems of two aircraft terminals, and a method of LOS calibration has been proposed to reduce the pointing error. In a flight test, a successful 144-km link was established between two aircraft. The position and attitude angles of the aircraft, provided by a double-antenna GPS/INS system, were used to calculate the pointing angles in azimuth and elevation. The size of the field of uncertainty (FOU) and the pointing accuracy are analyzed based on error theory and were also measured using an observation camera installed next to the optical LOS. Our results show that the FOU of aircraft optical communications is 10 mrad without a filter, which forms the basis for the acquisition strategy and scanning time.
Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.
Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P
2016-03-01
Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
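For orientation, the baseline (non-robust) LS-SVM that the paper modifies reduces to a single linear solve of its KKT system; the kernel width and regularization below are assumptions, and the robust reweighting itself is not reproduced.

```python
import numpy as np

def rbf(X1, X2, s=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sinc(X[:, 0]) + rng.normal(0, 0.1, 80)   # toy data with Gaussian noise

gamma = 10.0                                    # regularization (assumed)
n = len(y)
K = rbf(X, X)
# LS-SVM KKT system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

X_test = np.linspace(-3, 3, 5)[:, None]
print(np.round(rbf(X_test, X) @ alpha + b, 3))  # smoothed predictions
```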
NASA Astrophysics Data System (ADS)
Su, Zhaofeng; Guan, Ji; Li, Lvzhou
2018-01-01
Quantum entanglement is an indispensable resource for many significant quantum information processing tasks. However, in practice, it is difficult to distribute quantum entanglement over a long distance, due to the absorption and noise in quantum channels. A solution to this challenge is a quantum repeater, which can extend the distance of entanglement distribution. In this scheme, the time consumed by classical communication and local operations plays an important role in the overall time efficiency. Motivated by this observation, we consider a basic quantum repeater scheme that focuses on not only the optimal rate of entanglement concentration but also the complexity of local operations and classical communication. First, we consider the case where two different two-qubit pure states are initially distributed in the scenario. We construct a protocol with the optimal entanglement-concentration rate and less consumption of local operations and classical communication. We also find a criterion for the projective measurements to achieve the optimal probability of creating a maximally entangled state between the two ends. Second, we consider the case in which two general pure states are prepared and general measurements are allowed. We get an upper bound on the probability for a successful measurement operation to produce a maximally entangled state without any further local operations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... whose rent has been waived or reduced to zero under § 2806.15 of this subpart: (1) BLM will exclude exempted uses or uses whose rent has been waived or reduced to zero (see §§ 2806.14 and 2806.15 of this... from rent or whose rent has been waived or reduced to zero (see §§ 2806.14 and 2806.15 of this subpart...
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
Adaptive control based on retrospective cost optimization
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)
2012-01-01
A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.
Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming
2011-01-01
Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, exact fit, and lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field so that our findings can be easily generalized to the real settings. Applications of the methodology are demonstrated by empirical analyses on the data from a well-known alcohol study. PMID:21563207
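To make the model concrete, here is a direct maximum-likelihood fit of a zero-inflated Poisson model on simulated data (no covariates, and none of the paper's SCAD-penalized variable selection); the true parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n, pi_true, lam_true = 2000, 0.35, 2.0           # structural-zero prob., Poisson mean
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

def nll(params):
    pi = 1 / (1 + np.exp(-params[0]))            # logit/log transforms keep
    lam = np.exp(params[1])                      # parameters in valid ranges
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))   # structural or sampling zero
    ll_pos = np.log1p(-pi) + log_pois
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(nll, x0=[0.0, 0.0])
pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
print(f"pi = {pi_hat:.3f} (true {pi_true}), lambda = {lam_hat:.3f} (true {lam_true})")
```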
Saccà, Maria Ludovica; Fajardo, Carmen; Costa, Gonzalo; Lobo, Carmen; Nande, Mar; Martin, Margarita
2014-06-01
Nanosized zero-valent iron (nZVI) is a new option for the remediation of contaminated soil and groundwater, but the effect of nZVI on soil biota is mostly unknown. In this work, nanotoxicological studies were performed in vitro and in two different standard soils to assess the effect of nZVI on autochthonous soil organisms by integrating classical and molecular analysis. Standardised ecotoxicity testing methods using Caenorhabditis elegans were applied in vitro and in soil experiments and changes in microbial biodiversity and biomarker gene expression were used to assess the responses of the microbial community to nZVI. The classical tests conducted in soil ruled out a toxic impact of nZVI on the soil nematode C. elegans in the test soils. The molecular analysis applied to soil microorganisms, however, revealed significant changes in the expression of the proposed biomarkers of exposure. These changes were related not only to the nZVI treatment but also to the soil characteristics, highlighting the importance of considering the soil matrix on a case by case basis. Furthermore, due to the temporal shift between transcriptional responses and the development of the corresponding phenotype, the molecular approach could anticipate adverse effects on environmental biota. Copyright © 2013 Elsevier Ltd. All rights reserved.
Horodecki, Michał; Oppenheim, Jonathan; Winter, Andreas
2005-08-04
Information--be it classical or quantum--is measured by the amount of communication needed to convey it. In the classical case, if the receiver has some prior information about the messages being conveyed, less communication is needed. Here we explore the concept of prior quantum information: given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the partial information one system needs, conditioned on its prior information. We find that it is given by the conditional entropy--a quantity that was known previously, but lacked an operational meaning. In the classical case, partial information must always be positive, but we find that in the quantum world this physical quantity can be negative. If the partial information is positive, its sender needs to communicate this number of quantum bits to the receiver; if it is negative, then sender and receiver instead gain the corresponding potential for future quantum communication. We introduce a protocol that we term 'quantum state merging' which optimally transfers partial information. We show how it enables a systematic understanding of quantum network theory, and discuss several important applications including distributed compression, noiseless coding with side information, multiple access channels and assisted entanglement distillation.
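The abstract's central quantity is easy to evaluate numerically: for a Bell pair, S(A|B) = S(AB) − S(B) = −1, the negative conditional entropy that signals a gained potential for future quantum communication.

```python
import numpy as np

def S(rho):                                   # von Neumann entropy in bits
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
rho_ab = np.outer(bell, bell)

# Partial trace over A: reshape to (i, j, k, l) and contract the A indices.
rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(f"S(A|B) = {S(rho_ab) - S(rho_b):.1f}")  # -1.0 for a maximally entangled pair
```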
Locally indistinguishable subspaces spanned by three-qubit unextendible product bases
NASA Astrophysics Data System (ADS)
Duan, Runyao; Xin, Yu; Ying, Mingsheng
2010-03-01
We study the local distinguishability of general multiqubit states and show that local projective measurements and classical communication are as powerful as the most general local measurements and classical communication. Remarkably, this indicates that the local distinguishability of multiqubit states can be decided efficiently. Another useful consequence is that a set of orthogonal n-qubit states is locally distinguishable only if the summation of their orthogonal Schmidt numbers is less than the total dimension 2^n. Employing these results, we show that any orthonormal basis of a subspace spanned by arbitrary three-qubit orthogonal unextendible product bases (UPB) cannot be exactly distinguishable by local operations and classical communication. This not only reveals another intrinsic property of three-qubit orthogonal UPB but also provides a class of locally indistinguishable subspaces with dimension 4. We also explicitly construct locally indistinguishable subspaces with dimensions 3 and 5, respectively. Similar to the bipartite case, these results on multipartite locally indistinguishable subspaces can be used to estimate the one-shot environment-assisted classical capacity of a class of quantum broadcast channels.
Entanglement-Assisted Communication System for NASA's Deep-Space Missions
NASA Technical Reports Server (NTRS)
Kwiat, Paul; Bernstein, Herb; Javadi, Hamid
2016-01-01
For this project we have studied various forms of quantum communication, and quantum-enhanced classical communication. In particular, we have performed the first realization of a novel quantum protocol, superdense teleportation. We have also shown that in some cases the advantages of superdense coding (which enhances classical channel capacity by up to a factor of two) can be realized without the use of entanglement. Finally, we considered some more advanced protocols, with the goal of realizing 'superactivation' (two entangled channels having capabilities beyond the sum of the individual channels), and conclude that more study is needed in this area.
NASA Astrophysics Data System (ADS)
Bonhommeau, David; Truhlar, Donald G.
2008-07-01
The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the à electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU /SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU /SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.
The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest
ERIC Educational Resources Information Center
Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher
2009-01-01
Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…
Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.
Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J
2001-09-01
In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
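The practical difference between the two error structures is easy to simulate: classical multiplicative error (the observed dose varies around the truth) attenuates a naive dose-response slope, while Berkson multiplicative error (the true dose varies around the assigned dose) leaves it roughly unbiased. The numbers below are illustrative, and the paper's additional missing-data component is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20_000, 2.0
u = rng.lognormal(0, 0.4, n)
u /= u.mean()                                    # multiplicative error with mean 1

# Classical: true dose x, observed w = x * u.
x = rng.lognormal(0, 0.5, n)
y = beta * x + rng.normal(0, 0.5, n)
print("classical, naive slope:", round(np.polyfit(x * u, y, 1)[0], 2))   # attenuated

# Berkson: assigned dose w, true dose x = w * u.
w = rng.lognormal(0, 0.5, n)
y2 = beta * w * u + rng.normal(0, 0.5, n)
print("Berkson, naive slope:  ", round(np.polyfit(w, y2, 1)[0], 2))      # ~ beta
```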
Achieving minimum-error discrimination of an arbitrary set of laser-light pulses
NASA Astrophysics Data System (ADS)
da Silva, Marcus P.; Guha, Saikat; Dutton, Zachary
2013-05-01
Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states—the quantum states of light emitted by an ideal laser—has immense practical importance. Due to fundamental limits imposed by quantum mechanics, such discrimination has a finite minimum probability of error. While concrete optical circuits for the optimal discrimination between two coherent states are well known, the generalization to larger sets of coherent states has been challenging. In this paper, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multicopy quantum hypotheses [Blume-Kohout, Croke, and Zwolak, arXiv:1201.6625]. As illustrative examples, we analyze the performance of discriminating a ternary alphabet and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show that our result can be used to achieve the quantum limit on the rate of classical information transmission on a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers.
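For the binary base case that this work generalizes, the minimum error probability is given in closed form by the Helstrom bound, using |⟨α|β⟩|² = exp(−|α−β|²) for coherent states with equal priors:

```python
import numpy as np

def helstrom_coherent(alpha, beta):
    """Minimum error probability for equal-prior discrimination of |alpha>, |beta>."""
    overlap_sq = np.exp(-abs(alpha - beta) ** 2)
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq))

# BPSK-like pair (+a, -a): the error falls rapidly with mean photon number a^2.
for a in (0.3, 0.5, 1.0):
    print(f"a = {a}: P_err = {helstrom_coherent(a, -a):.3e}")
```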
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
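The first statistic can be demonstrated in a few lines: take the SVD of a (here invented) 6-observation by 4-parameter sensitivity matrix, truncate at an assumed singular-value threshold to define the solution space, and read each parameter's identifiability off the projection of its unit vector.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(6, 4))          # made-up weighted sensitivity matrix
J[:, 3] = 1e-6 * J[:, 2]             # parameter 4 is nearly unconstrained

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-3))     # solution-space dimension (threshold assumed)
V_sol = Vt[:k].T                     # solution-space basis in parameter space

# Identifiability of parameter i = |projection of e_i onto the solution space|,
# i.e. the direction cosine, which lies in [0, 1].
identifiability = np.sqrt((V_sol**2).sum(axis=1))
print(np.round(identifiability, 3))  # last entry near 0: non-identifiable
```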
Semi-classical Reissner-Nordstrom model for the structure of charged leptons
NASA Technical Reports Server (NTRS)
Rosen, G.
1980-01-01
The lepton self-mass problem is examined within the framework of the quantum theory of electromagnetism and gravity. Consideration is given to the Reissner-Nordstrom solution to the Einstein-Maxwell classical field equations for an electrically charged mass point, and the WKB theory for a semiclassical system with total energy zero is used to obtain an expression for the Einstein-Maxwell action factor. The condition obtained is found to account for the observed mass values of the three charged leptons, and to be in agreement with the correspondence principle.
Managing the spatial properties and photon correlations in squeezed non-classical twisted light
NASA Astrophysics Data System (ADS)
Zakharov, R. V.; Tikhonova, O. V.
2018-05-01
Spatial photon correlations and mode content of the squeezed vacuum light generated in a system of two separated nonlinear crystals is investigated. The contribution of both the polar and azimuthal modes with non-zero orbital angular momentum is analyzed. The control and engineering of the spatial properties and degree of entanglement of the non-classical squeezed light by changing the distance between crystals and pump parameters is demonstrated. Methods for amplification of certain spatial modes and managing the output mode content and intensity profile of quantum twisted light are suggested.
Error Characterization of Flight Trajectories Reconstructed Using Structure from Motion
2015-03-27
adjustment using IMU rotation information, the accuracy of the yaw, pitch and roll is limited and numerical errors can be as high as 1e-4 depending on...due to either zero mean, Gaussian noise and/or bias in the IMU measured yaw, pitch and roll angles. It is possible that when errors in these...requires both the information on how the camera is mounted to the IMU/aircraft and the measured yaw, pitch and roll at the time of the first image
Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-01-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for later use in breeding programs. The number of ticks per animal is a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized or simple ZIP model for analysis. On the other hand, when working with data that contain zeros but are not zero-inflated, the Poisson model or a data transformation, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
Evolutionary dynamics of a smoothed war of attrition game.
Iyer, Swami; Killingback, Timothy
2016-05-07
In evolutionary game theory the War of Attrition game is intended to model animal contests which are decided by non-aggressive behavior, such as the length of time that a participant will persist in the contest. The classical War of Attrition game assumes that no errors are made in the implementation of an animal's strategy. However, it is inevitable in reality that such errors must sometimes occur. Here we introduce an extension of the classical War of Attrition game which includes the effect of errors in the implementation of an individual's strategy. This extension of the classical game has the important feature that the payoff is continuous, and as a consequence admits evolutionary behavior that is fundamentally different from that possible in the original game. We study the evolutionary dynamics of this new game in well-mixed populations both analytically using adaptive dynamics and through individual-based simulations, and show that there are a variety of possible outcomes, including simple monomorphic or dimorphic configurations which are evolutionarily stable and cannot occur in the classical War of Attrition game. In addition, we study the evolutionary dynamics of this extended game in a variety of spatially and socially structured populations, as represented by different complex network topologies, and show that similar outcomes can also occur in these situations. Copyright © 2016 Elsevier Ltd. All rights reserved.
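The smoothing can be made concrete with a Monte Carlo payoff: each contestant intends to persist for its strategy time but plays a noisy version of it, so the expected payoff becomes a continuous function of the strategy pair. The lognormal noise model, prize, and strategy values are our assumptions, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
V, sigma, n = 1.0, 0.2, 200_000      # prize, implementation noise, sample size

def expected_payoff(x, y):
    """Mean payoff to a player intending x against an opponent intending y."""
    tx = x * rng.lognormal(0, sigma, n)     # actually-played persistence times
    ty = y * rng.lognormal(0, sigma, n)
    win = tx > ty
    # The winner pays the loser's quitting time; the loser pays its own.
    return float(np.mean(np.where(win, V - ty, -tx)))

for x in (0.3, 0.5, 0.7):
    print(f"E[payoff | x={x}, y=0.5] = {expected_payoff(x, 0.5):+.3f}")
```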
Differential sensitivity to human communication in dogs, wolves, and human infants.
Topál, József; Gergely, György; Erdohegyi, Agnes; Csibra, Gergely; Miklósi, Adám
2009-09-04
Ten-month-old infants persistently search for a hidden object at its initial hiding place even after observing it being hidden at another location. Recent evidence suggests that communicative cues from the experimenter contribute to the emergence of this perseverative search error. We replicated these results with dogs (Canis familiaris), who also commit more search errors in ostensive-communicative (in 75% of the total trials) than in noncommunicative (39%) or nonsocial (17%) hiding contexts. However, comparative investigations suggest that communicative signals serve different functions for dogs and infants, whereas human-reared wolves (Canis lupus) do not show doglike context-dependent differences of search errors. We propose that shared sensitivity to human communicative signals stems from convergent social evolution of the Homo and the Canis genera.
Arneson, Michael R [Chippewa Falls, WI; Bowman, Terrance L [Sumner, WA; Cornett, Frank N [Chippewa Falls, WI; DeRyckere, John F [Eau Claire, WI; Hillert, Brian T [Chippewa Falls, WI; Jenkins, Philip N [Eau Claire, WI; Ma, Nan [Chippewa Falls, WI; Placek, Joseph M [Chippewa Falls, WI; Ruesch, Rodney [Eau Claire, WI; Thorson, Gregory M [Altoona, WI
2007-07-24
The present invention is directed toward a communications channel comprising a link level protocol, a driver, a receiver, and a canceller/equalizer. The link level protocol provides logic for DC-free signal encoding and recovery and supports features including CRC error detection and message resend to accommodate infrequent bit errors across the medium. The canceller/equalizer provides equalization for destabilized data signals and also provides simultaneous bi-directional data transfer. The receiver provides bit deskewing by removing synchronization error, or skewing, between data signals. The driver provides impedance control by monitoring the characteristics of the communications medium, such as voltage or temperature, and providing a matching output impedance in the signal driver so that fewer distortions occur while the data travels across the communications medium.
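The CRC detect-and-resend idea the protocol relies on is standard; here is a generic CRC-8 sketch (polynomial and parameters assumed, not taken from the patent):

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8, MSB first, over the given bytes."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

frame = b"link-level frame payload"
tag = crc8(frame)
assert crc8(frame) == tag                         # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert crc8(corrupted) != tag                     # single-bit error detected -> resend
print(f"CRC-8 tag: 0x{tag:02X}")
```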
Identification method of laser gyro error model under changing physical field
NASA Astrophysics Data System (ADS)
Wang, Qingqing; Niu, Zhenzhong
2018-04-01
In this paper, the mechanisms by which temperature, temperature rate of change, and temperature gradient influence inertial devices are studied. A second-order model of the zero bias and a third-order model of the scale factor of the laser gyro under temperature variation are derived. A calibration scheme for the temperature error is designed, and the experiment is carried out. Two methods, stepwise regression analysis and a BP neural network, are used to identify the parameters of the temperature error model, and the effectiveness of both methods is demonstrated by the temperature error compensation.
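The second-order zero-bias model amounts to a small polynomial least-squares problem, bias(T) ≈ c0 + c1·T + c2·T², sketched below on simulated data (all coefficients and the noise level are assumptions; the BP-neural-network alternative is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(-20, 60, 200)                 # calibration temperatures, deg C
c0, c1, c2 = 0.015, -4e-4, 6e-6               # assumed bias coefficients, deg/h
bias = c0 + c1 * T + c2 * T**2 + rng.normal(0, 1e-3, T.size)

A = np.column_stack([np.ones_like(T), T, T**2])
c_hat, *_ = np.linalg.lstsq(A, bias, rcond=None)
print("estimated [c0, c1, c2]:", np.round(c_hat, 6))

residual = bias - A @ c_hat                   # bias after temperature compensation
print(f"rms bias before: {bias.std():.4f}, after: {residual.std():.4f} deg/h")
```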
Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas
2011-01-01
Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
NASA Astrophysics Data System (ADS)
Park, Seong Chan; Shin, Chang Sub
2018-01-01
We propose new mechanisms for small neutrino masses based on the clockwork mechanism. The Standard Model neutrinos and lepton-number-violating operators communicate through the zero mode of the clockwork gears; one of the two couplings of the zero mode is exponentially suppressed by the clockwork mechanism. Including all known examples of the clockwork realization of neutrino masses, different types of models are realized depending on the profile and chirality of the zero-mode fermion. Each type of realization would have phenomenologically distinctive features associated with the accompanying heavy neutrinos.
Precise Determination of the Zero-Gravity Surface Figure of a Mirror without Gravity-Sag Modeling
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.; Lam, Jonathan C.; Feria, V. Alfonso; Chang, Zensheu
2007-01-01
The zero-gravity surface figure of optics used in spaceborne astronomical instruments must be known to high accuracy, but earthbound metrology is typically corrupted by gravity sag. Generally, inference of the zero-gravity surface figure from a measurement made under normal gravity requires finite-element analysis (FEA), and for accurate results the mount forces must be well characterized. We describe how to infer the zero-gravity surface figure very precisely using the alternative classical technique of averaging pairs of measurements made with the direction of gravity reversed. We show that mount forces as well as gravity must be reversed between the two measurements and discuss how the St. Venant principle determines when a reversed mount force may be considered to be applied at the same place in the two orientations. Our approach requires no finite-element modeling and no detailed knowledge of mount forces other than the fact that they reverse and are applied at the same point in each orientation. If mount schemes are suitably chosen, zero-gravity optical surfaces may be inferred much more simply and more accurately than with FEA.
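The arithmetic behind the technique is simple enough to verify directly: if the gravity and mount deflections flip sign when the optic (and its mount forces) are inverted, the pairwise average recovers the zero-gravity figure and the half-difference isolates the sag. The 1-D figure and sag profiles below are toy assumptions.

```python
import numpy as np

x = np.linspace(-1, 1, 11)
figure_0g = 0.05 * x**2                  # true zero-gravity figure (assumed)
sag = 0.20 * (1 - x**2)                  # gravity + mount deflection (assumed)

meas_up = figure_0g + sag                # optical axis up
meas_down = figure_0g - sag              # optic and mount forces inverted

print(np.allclose(0.5 * (meas_up + meas_down), figure_0g))  # True: sag cancels
print(np.allclose(0.5 * (meas_up - meas_down), sag))        # True: sag isolated
```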
The motion near L4 equilibrium point under non-point mass primaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huda, I. N., E-mail: ibnu.nurul@students.itb.ac.id; Utama, J. A.; Madley, D.
2015-09-30
The Circular Restricted Three-Body Problem (CRTBP) possesses five equilibrium points, comprising three collinear points (L1, L2, and L3) and two triangular points (L4 and L5). The classical study (with point-mass primaries) shows that at the equilibrium points the velocity of an infinitesimal object relative to the primaries becomes zero, which defines the zero-velocity curve. We study the motion of an infinitesimal object near the triangular equilibrium point L4 and determine its zero-velocity curve. We extend the study by taking into account the effects of radiation of the bigger primary (q1 ≠ 1, q2 = 1) and oblateness of the smaller primary (A1 = 0, A2 ≠ 0). The location of L4 is derived analytically, and then the stability of L4 and its zero-velocity curves are studied numerically. Our study suggests that the oblateness and the radiation of the primaries may affect the stability and the zero-velocity curve around L4.
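For reference, the classical point-mass zero-velocity condition near L4 can be checked numerically from the Jacobi constant, C = 2Ω(x, y) − v²; the sketch below uses an Earth-Moon-like mass ratio and omits the paper's radiation (q1) and oblateness (A2) corrections, which modify Ω and shift L4.

```python
import numpy as np

mu = 0.01215                               # m2/(m1+m2), Earth-Moon-like (assumed)

def omega(x, y):
    r1 = np.hypot(x + mu, y)               # distance to the bigger primary
    r2 = np.hypot(x - 1 + mu, y)           # distance to the smaller primary
    return 0.5 * (x**2 + y**2) + (1 - mu) / r1 + mu / r2

x4, y4 = 0.5 - mu, np.sqrt(3) / 2          # classical L4 (equilateral point)
C4 = 2 * omega(x4, y4)                     # Jacobi constant of the L4 curve
print(f"C(L4) = {C4:.5f}")

# On the zero-velocity curve 2*omega = C; elsewhere v^2 = 2*omega - C >= 0.
print("v^2 just off L4:", 2 * omega(x4 + 0.05, y4) - C4)
```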
Testing Pattern Hypotheses for Correlation Matrices
ERIC Educational Resources Information Center
McDonald, Roderick P.
1975-01-01
The treatment of covariance matrices given by McDonald (1974) can be readily modified to cover hypotheses prescribing zeros and equalities in the correlation matrix rather than the covariance matrix, still with the convenience of the closed-form Least Squares solution and the classical Newton method. (Author/RC)
Ebola, team communication, and shame: but shame on whom?
Shannon, Sarah E
2015-01-01
Examined as an isolated situation, and through the lens of a rare and feared disease, Mr. Duncan's case seems ripe for second-guessing the physicians and nurses who cared for him. But viewed from the perspective of what we know about errors and team communication, his case is all too common. Nearly 440,000 patient deaths in the U.S. each year may be attributable to medical errors. Breakdowns in communication among health care teams contribute in the majority of these errors. The culture of health care does not seem to foster functional, effective communication between and among professionals. Why? And more importantly, why do we not do something about it?
Context-sensitivity of the feedback-related negativity for zero-value feedback outcomes.
Pfabigan, Daniela M; Seidel, Eva-Maria; Paul, Katharina; Grahl, Arvina; Sailer, Uta; Lanzenberger, Rupert; Windischberger, Christian; Lamm, Claus
2015-01-01
The present study investigated whether the same visual stimulus indicating zero-value feedback (€0) elicits feedback-related negativity (FRN) variation, depending on whether the outcomes correspond with expectations or not. Thirty-one volunteers performed a monetary incentive delay (MID) task while EEG was recorded. FRN amplitudes were comparable and more negative when zero-value outcome deviated from expectations than with expected gain or loss, supporting theories emphasising the impact of unexpectedness and salience on FRN amplitudes. Surprisingly, expected zero-value outcomes elicited the most negative FRNs. However, source localisation showed that such outcomes evoked less activation in cingulate areas than unexpected zero-value outcomes. Our study illustrates the context dependency of identical zero-value feedback stimuli. Moreover, the results indicate that the incentive cues in the MID task evoke different reward prediction error signals. These prediction signals differ in FRN amplitude and neuronal sources, and have to be considered in the design and interpretation of future studies.
Interprofessional communication and medical error: a reframing of research questions and approaches.
Varpio, Lara; Hall, Pippa; Lingard, Lorelei; Schryer, Catherine F
2008-10-01
Progress toward understanding the links between interprofessional communication and issues of medical error has been slow. Recent research proposes that this delay may result from overlooking the complexities involved in interprofessional care. Medical education initiatives in this domain tend to simplify the complexities of team membership fluidity, rotation, and use of communication tools. A new theoretically informed research approach is required to take into account these complexities. To generate such an approach, we review two theories from the social sciences: Activity Theory and Knotworking. Using these perspectives, we propose that research into interprofessional communication and medical error can develop better understandings of (1) how and why medical errors are generated and (2) how and why gaps in team defenses occur. Such complexities will have to be investigated if students and practicing clinicians are to be adequately prepared to work safely in interprofessional teams.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhou, Shihong; Ma, Jing; Tan, Liying; Shen, Tao
2013-08-01
CMOS is a good candidate tracking detector for satellite optical communication systems, owing to the sub-windowing capability provided by active pixel sensor (APS) technology. For inter-satellite optical communications, it is critical to estimate the direction of the incident laser beam precisely by measuring the centroid position of the incident beam spot. The presence of detector noise results in measurement error, which degrades the tracking performance of the system. In this research, the measurement error of a CMOS detector is derived taking detector noise into consideration. It is shown that the measurement error depends on the pixel noise, the size of the tracking sub-window (number of pixels), the intensity of the incident laser beam, and the relative size of the beam spot. The influence of these factors is analyzed by numerical simulation. We hope the results obtained in this research will be helpful in the design of CMOS-based satellite optical communication systems.
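To make the centroid-based direction estimate concrete, here is a hedged sketch (not the paper's derivation): a Gaussian beam spot on a hypothetical 16x16 sub-window with additive pixel noise, and a Monte Carlo estimate of the resulting centroid scatter. All sizes and noise levels are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 16                                   # tracking sub-window: n x n pixels
    x = np.arange(n) - (n - 1) / 2
    X, Y = np.meshgrid(x, x)
    spot = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))   # Gaussian beam spot, sigma = 2 px

    def centroid(img):
        # Center-of-mass estimate of the spot position on the sub-window.
        s = img.sum()
        return (X * img).sum() / s, (Y * img).sum() / s

    # Monte Carlo: repeat the measurement with fresh pixel noise each time.
    errs = []
    for _ in range(2000):
        noisy = spot + rng.normal(scale=0.01, size=spot.shape)
        cx, cy = centroid(noisy)
        errs.append(cx)                       # true centroid is at 0
    print("rms centroid error (pixels):", np.std(errs))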
Locking classical correlations in quantum States.
DiVincenzo, David P; Horodecki, Michał; Leung, Debbie W; Smolin, John A; Terhal, Barbara M
2004-02-13
We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n+1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n/2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.
A framework for software fault tolerance in real-time systems
NASA Technical Reports Server (NTRS)
Anderson, T.; Knight, J. C.
1983-01-01
A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which the processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.
NASA Astrophysics Data System (ADS)
Wan, S.; He, W.
2016-12-01
The inverse problem of using historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality, used to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamical information contained in the historical data can be identified and extracted automatically. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In effect, it combines statistics and dynamics to a certain extent.
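A minimal twin-experiment sketch of the setup described above, assuming the periodic evolutionary term enters the Lorenz parameters as a sinusoidal perturbation (the abstract does not specify the form, so this is purely illustrative):

    import numpy as np

    def lorenz(state, t, eps=0.0):
        # Lorenz (1963) right-hand side; eps*sin(0.5*t) is an assumed
        # periodic "model error" term standing in for reality.
        x, y, z = state
        forcing = eps * np.sin(0.5 * t)
        return np.array([10.0 * (y - x),
                         x * (28.0 + forcing) - y - x * z,
                         x * y - (8.0 / 3.0) * z])

    def integrate(state, t0, t1, dt, eps=0.0):
        # Fourth-order Runge-Kutta time marching.
        t = t0
        while t < t1:
            k1 = lorenz(state, t, eps)
            k2 = lorenz(state + 0.5 * dt * k1, t + 0.5 * dt, eps)
            k3 = lorenz(state + 0.5 * dt * k2, t + 0.5 * dt, eps)
            k4 = lorenz(state + dt * k3, t + dt, eps)
            state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
            t += dt
        return state

    s0 = np.array([1.0, 1.0, 1.0])
    truth = integrate(s0, 0.0, 2.0, 0.001, eps=0.5)   # "observations"
    model = integrate(s0, 0.0, 2.0, 0.001, eps=0.0)   # imperfect model forecast
    print("forecast error:", np.linalg.norm(truth - model))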
Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H
2017-11-08
Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion over the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system, termed the Simplified Qualitative Classification System (Q-Simple), was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 were invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were that of Heaton and colleagues (Comprehensive norms for an extended Halstead-Reitan battery: demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more standardized practice among neuropsychologists.
Information transmission in microbial and fungal communication: from classical to quantum.
Majumdar, Sarangam; Pal, Sukla
2018-06-01
Microbes have their own communication systems. Secretion and reception of chemical signaling molecules and ion-channel-mediated electrical signaling are two special modes of information transmission observed so far in microbial communities. In this article, we address various crucial machineries that form the backbone of microbial cell-to-cell communication, such as the quorum sensing mechanism (bacterial and fungal), quorum-sensing-regulated biofilm formation, gene expression, virulence, swarming, quorum quenching, the role of noise in quorum sensing, mathematical models (therapy models, evolutionary models, molecular mechanism models, and many more), synthetic bacterial communication, bacterial ion channels, bacterial nanowires, and electrical communication. In particular, we highlight bacterial collective behavior with classical and quantum mechanical approaches (including quantum information). Moreover, we shed new light on the concept of quantum synthetic biology and a possible cellular quantum Turing test.
A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors
Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun
2015-01-01
This paper proposes a robust zero velocity (ZV) detector algorithm to accurately identify stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model, based on the measurements of inertial sensors and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of false ZV detections by the proposed method increases by 80% compared with the traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude aspect. PMID:25831086
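For orientation, the sketch below shows the kind of simple threshold-based ZV detector that the proposed BN method improves upon; note that instantaneous tests can misfire at swing-phase zero crossings, which motivates more robust inference. The synthetic IMU signal and thresholds are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    fs = 100                                   # sample rate (Hz)
    g = 9.81
    t = np.arange(0, 4, 1 / fs)
    # Synthetic foot-mounted IMU: swing phases every other second, stance between.
    swing = (t % 2) < 1
    acc = g + swing * 4.0 * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 0.05, t.size)
    gyro = swing * 2.0 * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 0.01, t.size)

    def zero_velocity(acc, gyro, acc_tol=0.3, gyro_tol=0.2):
        # Flag samples where the accelerometer magnitude is near gravity
        # and the angular rate is near zero.
        return (np.abs(acc - g) < acc_tol) & (np.abs(gyro) < gyro_tol)

    zv = zero_velocity(acc, gyro)
    print(f"fraction of samples flagged as stance: {zv.mean():.2f}")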
Model reference adaptive control of flexible robots in the presence of sudden load changes
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory
1991-01-01
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
Towards secure quantum key distribution protocol for wireless LANs: a hybrid approach
NASA Astrophysics Data System (ADS)
Naik, R. Lalu; Reddy, P. Chenna
2015-12-01
The primary goals of security such as authentication, confidentiality, integrity and non-repudiation in communication networks can be achieved with secure key distribution. Quantum mechanisms are highly secure means of distributing secret keys as they are unconditionally secure. Quantum key distribution protocols can effectively prevent various attacks in the quantum channel, while classical cryptography is efficient in authentication and verification of secret keys. By combining both quantum cryptography and classical cryptography, the security of communications over networks can be leveraged. Hwang, Lee and Li exploited the merits of both cryptographic paradigms for provably secure communications to prevent replay, man-in-the-middle, and passive attacks. In this paper, we propose a new scheme combining quantum cryptography and classical cryptography for 802.11i wireless LANs. Since quantum cryptography is still immature in wireless networks, our work is a significant step forward toward securing communications in wireless networks. Our scheme is known as the hybrid quantum key distribution protocol. Our analytical results reveal that the proposed scheme is provably secure for wireless networks.
Clean Quantum and Classical Communication Protocols.
Buhrman, Harry; Christandl, Matthias; Perry, Christopher; Zuiddam, Jeroen
2016-12-02
By how much must the communication complexity of a function increase if we demand that the parties not only correctly compute the function but also return all registers (other than the one containing the answer) to their initial states at the end of the communication protocol? Protocols that achieve this are referred to as clean and the associated cost as the clean communication complexity. Here we present clean protocols for calculating the inner product of two n-bit strings, showing that (in the absence of preshared entanglement) at most n+3 qubits or n+O(√n) bits of communication are required. The quantum protocol provides inspiration for obtaining the optimal method to implement distributed CNOT gates in parallel while minimizing the amount of quantum communication. For more general functions, we show that nearly all Boolean functions require close to 2n bits of classical communication to compute and close to n qubits if the parties have access to preshared entanglement. Both of these values are maximal for their respective paradigms.
NASA Astrophysics Data System (ADS)
Korotey, E. V.; Sinyavskii, N. Ya.
2007-07-01
A new method for determining the rheological parameters of liquid crystals with zero anisotropy of diamagnetic susceptibility is proposed, based on the measurement of the quadrupole splitting line of the 2H NMR spectrum. The method provides higher information content of the experiments, with the shear flow discarded from consideration, compared to that obtained by the classical Leslie-Ericksen theory. A comparison with experiment is performed, the coefficients of anisotropic viscosity of lecithin/D2O/cyclohexane are determined, and a conclusion is drawn concerning the domain shapes.
Color-motion feature-binding errors are mediated by a higher-order chromatic representation
Shevell, Steven K.; Wang, Wei
2017-01-01
Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004)]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014)]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism. PMID:26974945
Classical and quantum filaments in the ground state of trapped dipolar Bose gases
NASA Astrophysics Data System (ADS)
Cinti, Fabio; Boninsegni, Massimo
2017-07-01
We study, by quantum Monte Carlo simulations, the ground state of a harmonically confined dipolar Bose gas with aligned dipole moments and with the inclusion of a repulsive two-body potential of varying range. Two different limits can clearly be identified, namely, a classical one in which the attractive part of the dipolar interaction dominates and the system forms an ordered array of parallel filaments and a quantum-mechanical one, wherein filaments are destabilized by zero-point motion, and eventually the ground state becomes a uniform cloud. The physical character of the system smoothly evolves from classical to quantum mechanical as the range of the repulsive two-body potential increases. An intermediate regime is observed in which ordered filaments are still present, albeit forming different structures from the ones predicted classically; quantum-mechanical exchanges of indistinguishable particles across different filaments allow phase coherence to be established, underlying a global superfluid response.
Analysis of synchronous digital-modulation schemes for satellite communication
NASA Technical Reports Server (NTRS)
Takhar, G. S.; Gupta, S. C.
1975-01-01
The multipath communication channel for space communications is modeled as a multiplicative channel. This paper discusses the effects of multiplicative channel processes on the symbol error rate for quadrature modulation (QM) digital modulation schemes. An expression for the upper bound on the probability of error is derived and numerically evaluated. The results are compared with those obtained for additive channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei
In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e., the ratio of total energy use over total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (product type, feedstock type, weather, etc.). Furthermore, the energy intensity method assumes that a facility's base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulty understanding why regression models are statistically better than the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper explains why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper presents quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper also documents scenarios where regression models do not have significant relevance over the energy intensity method.
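The paper's core point can be demonstrated in a few lines. In the hypothetical example below, a plant has a 400-unit base load; the intensity ratio absorbs it into an inflated per-unit figure, while a regression recovers both the base load (intercept) and the marginal energy per unit of production (slope). All numbers are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    production = rng.uniform(50, 150, size=24)            # units/month
    base_load = 400.0                                     # energy at zero production
    energy = base_load + 3.0 * production + rng.normal(0, 20, size=24)

    intensity = energy.sum() / production.sum()           # classic metric
    slope, intercept = np.polyfit(production, energy, 1)  # regression model

    print(f"energy intensity: {intensity:.2f} (biased upward by the base load)")
    print(f"regression: E = {intercept:.0f} + {slope:.2f} * production")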
Exploring the importance of quantum effects in nucleation: The archetypical Nen case
NASA Astrophysics Data System (ADS)
Unn-Toc, Wesley; Halberstadt, Nadine; Meier, Christoph; Mella, Massimo
2012-07-01
The effect of quantum mechanics (QM) on the details of the nucleation process is explored employing Ne clusters as test cases due to their semi-quantal nature. In particular, we investigate the impact of quantum mechanics on both condensation and dissociation rates in the framework of the microcanonical ensemble. Using both classical trajectories and two semi-quantal approaches (zero point averaged dynamics, ZPAD, and Gaussian-based time dependent Hartree, G-TDH) to model cluster and collision dynamics, we simulate the dissociation and monomer capture for Ne8 as a function of the cluster internal energy, impact parameter and collision speed. The results for the capture probability Ps(b) as a function of the impact parameter suggest that classical trajectories always underestimate capture probabilities with respect to ZPAD, albeit at most by 15%-20% in the cases we studied. They also do so in some important situations when using G-TDH. More interestingly, dissociation rates kdiss are grossly overestimated by classical mechanics, at least by one order of magnitude. We interpret both behaviours as mainly due to the reduced amount of kinetic energy available to a quantum cluster for a chosen total internal energy. We also find that the decrease in monomer dissociation energy due to zero point energy effects plays a key role in defining dissociation rates. In fact, semi-quantal and classical results for kdiss seem to follow a common "corresponding states" behaviour when the proper definition of internal and dissociation energies are used in a transition state model estimation of the evaporation rate constants.
Systematic Error Study for ALICE charged-jet v2 Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, M.; Soltz, R.
We study the treatment of systematic errors in the determination of v{sub 2} for charged jets in √s{sub NN} = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ{sup 2} according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ{sup 2} and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
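The re-casting step described above is standard and easy to reproduce in miniature. The sketch below builds a covariance matrix from hypothetical statistical and fully correlated systematic errors and evaluates the chi-square against the null (zero) hypothesis; the numbers are invented, not ALICE data.

    import numpy as np

    v2 = np.array([0.040, 0.055, 0.048])        # measured values (hypothetical)
    stat = np.array([0.010, 0.012, 0.011])      # statistical errors
    sys_corr = np.array([0.008, 0.009, 0.008])  # fully correlated systematic

    # Covariance: diagonal statistical part plus rank-one correlated part.
    cov = np.diag(stat**2) + np.outer(sys_corr, sys_corr)

    # Chi-square of the data relative to a null (zero) hypothesis.
    resid = v2 - 0.0
    chi2 = resid @ np.linalg.inv(cov) @ resid
    print(f"chi2 = {chi2:.2f} for {len(v2)} points")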
Determination of time zero from a charged particle detector
Green, Jesse Andrew [Los Alamos, NM
2011-03-15
A method, system, and computer program are used to determine a linear track having a good fit to the most likely or expected path of a charged particle passing through a charged particle detector having a plurality of drift cells. Hit signals from the charged particle detector are associated with a particular charged particle track. An initial estimate of time zero is made from these hit signals, and linear tracks are then fit to drift radii for each particular time-zero estimate. The linear track having the best fit is then identified and selected, and errors in fit and tracking parameters are computed. The use of large and expensive fast detectors otherwise needed to determine time zero in charged particle detectors can be avoided by adopting this method and system.
Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
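As a worked illustration of how such scale factors are applied (not taken from the paper's data), the sketch below computes a scaled zero-point energy, ZPE = (hc/2)·Σν̃_i, from hypothetical harmonic frequencies, using the B3LYP ZPE factor of 0.986 quoted above.

    H = 6.62607015e-34          # Planck constant (J s)
    C = 2.99792458e10           # speed of light (cm/s)
    NA = 6.02214076e23          # Avogadro constant (1/mol)

    harmonic_cm = [1600.0, 3650.0, 3750.0]   # hypothetical water-like modes, cm^-1
    scale = 0.986                            # ZPE scale factor for B3LYP (quoted above)

    zpe_j = 0.5 * H * C * sum(scale * nu for nu in harmonic_cm)  # per molecule
    print(f"scaled ZPE: {zpe_j * NA / 1000:.1f} kJ/mol")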
Some Properties of Estimated Scale Invariant Covariance Structures.
ERIC Educational Resources Information Center
Dijkstra, T. K.
1990-01-01
An example of scale invariance is provided via the LISREL model that is subject only to classical normalizations and zero constraints on the parameters. Scale invariance implies that the estimated covariance matrix must satisfy certain equations, and the nature of these equations depends on the fitting function used. (TJH)
Experimental realization of the analogy of quantum dense coding in classical optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhenwei; Sun, Yifan; Li, Pengyun
2016-06-15
We report on the experimental realization of an analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding, which uses pairs of photons entangled in polarization, the proposed design exhibits many advantages and is convenient to realize in optical communication. The attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication
NASA Astrophysics Data System (ADS)
Niaz, M. T.; Imdad, F.; Kim, H. S.
2017-01-01
An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed, which generate codes in which the number of bits representing one equals the number of bits representing zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing the flickering in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
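The balanced-code idea is easy to illustrate. The Manchester-style mapping below is a stand-in for the paper's encoder (whose exact code table is not given here): each data bit becomes one "on" and one "off" slot, so any bit stream yields exactly 50% dimming.

    def encode(bits):
        # 1 -> (1, 0), 0 -> (0, 1): ones and zeros always balance.
        out = []
        for b in bits:
            out.extend((1, 0) if b else (0, 1))
        return out

    def decode(symbols):
        # The first slot of each pair carries the data bit.
        return [1 if a == 1 else 0 for a, _ in zip(symbols[::2], symbols[1::2])]

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    tx = encode(data)
    assert sum(tx) * 2 == len(tx)        # exactly 50% of slots are "on"
    assert decode(tx) == data
    print("encoded:", tx)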
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generations of scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, however, the number of Newton iterations needed per step to solve the discretized system of equations can vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm, the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
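For concreteness, a minimal dense-matrix GMRES cycle is sketched below: build N orthogonal Krylov vectors by Arnoldi iteration and minimize the L2 residual norm over their span. This is a generic illustration with a random test matrix, not the flow solver described above.

    import numpy as np

    def gmres(A, b, n_vectors=10):
        # One GMRES cycle with n_vectors Arnoldi vectors (no restarts, x0 = 0).
        m = n_vectors
        Q = np.zeros((b.size, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(b)
        Q[:, 0] = b / beta
        for j in range(m):                      # Arnoldi: orthogonal Krylov basis
            w = A @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
        # Least squares: minimize ||beta*e1 - H y|| over the Krylov basis.
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H, e1, rcond=None)
        return Q[:, :m] @ y

    rng = np.random.default_rng(5)
    A = np.eye(100) + 0.1 * rng.normal(size=(100, 100))   # well-conditioned test
    b = rng.normal(size=100)
    x = gmres(A, b, n_vectors=20)
    print("residual norm:", np.linalg.norm(b - A @ x))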
Single-shot secure quantum network coding on butterfly network with free public communication
NASA Astrophysics Data System (ADS)
Owari, Masaki; Kato, Go; Hayashi, Masahito
2018-01-01
Quantum network coding on the butterfly network has been studied as a typical example of a quantum multiple-cast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple unicast setting, under restricted eavesdropping power. This protocol certainly transmits quantum states when there is no attack. We also show secrecy, with shared randomness as an additional resource, when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through the public classical communication. Our protocol does not require a verification process, which ensures single-shot security.
Ideas of Flat and Curved Space in History of Physics
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
2006-04-01
Since ``everything which is not prohibited is compulsory'' (attributed to Gell-Mann), we can postulate an infinite flat Cartesian N-dimensional (N: any integer) space-time (ST) as an embedding for any curved ST. Ergodicity raises the question of whether the total number of inflationary and/or Everett bubbles (mini-verses) is finite, countably infinite (aleph-zero), or uncountably infinite (aleph-one). Do these bubbles form a Gaussian distribution, or some non-random subset? Perhaps communication between mini-verses (an idea of D. Deutsch) can be facilitated by a kind of minimax non-local dynamics akin to the Fermat principle (a minimax principle in bubble cosmology)? Even such classical effects as magnetism and polarization have some non-local features. Can we go below the Planck length, to perhaps the Compton wavelength of our ``Hubble's bubble'' (h/Mc = 10 to minus 95 m, if M = 10 to 54 kg)? When talking about time loops and ergodicity (the eternal-return paradigm), is there some hysteresis in the way quantum states are accessed in the ``forward'' or ``reverse'' direction? (The reverse direction implies the backward causality of J. Wheeler and/or Aristotelian final causation.)
Smith, Kenneth J; Handler, Steven M; Kapoor, Wishwa N; Martich, G Daniel; Reddy, Vivek K; Clark, Sunday
2016-07-01
This study sought to determine the effects of automated primary care physician (PCP) communication and patient safety tools, including computerized discharge medication reconciliation, on discharge medication errors and posthospitalization patient outcomes, using a pre-post quasi-experimental study design, in hospitalized medical patients with ≥2 comorbidities and ≥5 chronic medications, at a single center. The primary outcome was discharge medication errors, compared before and after rollout of these tools. Secondary outcomes were 30-day rehospitalization, emergency department visit, and PCP follow-up visit rates. This study found that discharge medication errors were lower post intervention (odds ratio = 0.57; 95% confidence interval = 0.44-0.74; P < .001). Clinically important errors, with the potential for serious or life-threatening harm, and 30-day patient outcomes were not significantly different between study periods. Thus, automated health system-based communication and patient safety tools, including computerized discharge medication reconciliation, decreased hospital discharge medication errors in medically complex patients.
Quadriphase DS-CDMA wireless communication systems employing the generalized detector
NASA Astrophysics Data System (ADS)
Tuzlukov, Vyacheslav
2012-05-01
The bit-error probability (Per) performance of asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless communication systems employing the generalized detector (GD), constructed based on the generalized approach to signal processing in noise, is analyzed. The effects of pulse shaping, quadriphase or direct-sequence quadriphase-shift-keying (DS-QPSK) spreading, and aperiodic spreading sequences are considered in DS-CDMA based on the GD and compared with the coherent Neyman-Pearson receiver. An exact Per expression and several approximations are derived: one using the characteristic function method, a simplified expression for the improved Gaussian approximation (IGA), and the simplified improved Gaussian approximation. Under conditions typically satisfied in practice, and even with a small number of interferers, the standard Gaussian approximation (SGA) for the multiple-access interference component of the GD statistic and the Per performance is shown to be accurate. Moreover, the IGA is shown to reduce to the SGA for pulses with zero excess bandwidth. Second, the GD Per performance of quadriphase DS-CDMA is shown to be superior to that of bi-phase DS-CDMA. Numerical examples by Monte Carlo simulation are presented to illustrate the GD Per performance for square-root raised-cosine pulses and spreading factors of moderate to large values. A superiority of GD employment in CDMA systems over the Neyman-Pearson receiver is also demonstrated.
Quantum error-correcting code for ternary logic
NASA Astrophysics Data System (ADS)
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information; the ternary unit is known as a qutrit. A qutrit has three basis states, so a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
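The (3×3)-dimensional shift and phase errors referred to above are the generalized Pauli operators X and Z with ω = exp(2πi/3). The sketch below constructs them and verifies that the nine products X^a Z^b form an orthogonal operator basis, which is what makes the linear-combination expansion of arbitrary qutrit errors possible; the code is a generic illustration, not the paper's construction.

    import numpy as np

    w = np.exp(2j * np.pi / 3)
    X = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=complex)     # shift error: |j> -> |j+1 mod 3>
    Z = np.diag([1, w, w**2])                    # phase error: |j> -> w^j |j>

    # The nine operators X^a Z^b (a, b = 0, 1, 2) span all 3x3 matrices.
    basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
             for a in range(3) for b in range(3)]
    for i, P in enumerate(basis):
        for j, Q in enumerate(basis):
            hs = np.trace(P.conj().T @ Q)        # Hilbert-Schmidt inner product
            assert np.isclose(abs(hs), 3.0 if i == j else 0.0)
    print("9 generalized Pauli operators form an orthogonal basis")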
Deterministic quantum teleportation of photonic quantum bits by a hybrid technique.
Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; van Loock, Peter; Furusawa, Akira
2013-08-15
Quantum teleportation allows for the transfer of arbitrary unknown quantum states from a sender to a spatially distant receiver, provided that the two parties share an entangled state and can communicate classically. It is the essence of many sophisticated protocols for quantum communication and computation. Photons are an optimal choice for carrying information in the form of 'flying qubits', but the teleportation of photonic quantum bits (qubits) has been limited by experimental inefficiencies and restrictions. Main disadvantages include the fundamentally probabilistic nature of linear-optics Bell measurements, as well as the need either to destroy the teleported qubit or attenuate the input qubit when the detectors do not resolve photon numbers. Here we experimentally realize fully deterministic quantum teleportation of photonic qubits without post-selection. The key step is to make use of a hybrid technique involving continuous-variable teleportation of a discrete-variable, photonic qubit. When the receiver's feedforward gain is optimally tuned, the continuous-variable teleporter acts as a pure loss channel, and the input dual-rail-encoded qubit, based on a single photon, represents a quantum error detection code against photon loss and hence remains completely intact for most teleportation events. This allows for a faithful qubit transfer even with imperfect continuous-variable entangled states: for four qubits the overall transfer fidelities range from 0.79 to 0.82 and all of them exceed the classical limit of teleportation. Furthermore, even for a relatively low level of the entanglement, qubits are teleported much more efficiently than in previous experiments, albeit post-selectively (taking into account only the qubit subspaces), and with a fidelity comparable to the previously reported values.
Asymptotics of quantum weighted Hurwitz numbers
NASA Astrophysics Data System (ADS)
Harnad, J.; Ortmann, Janosch
2018-06-01
This work concerns both the semiclassical and zero temperature asymptotics of quantum weighted double Hurwitz numbers. The partition function for quantum weighted double Hurwitz numbers can be interpreted in terms of the energy distribution of a quantum Bose gas with vanishing fugacity. We compute the leading semiclassical term of the partition function for three versions of the quantum weighted Hurwitz numbers, as well as lower order semiclassical corrections. The classical limit is shown to reproduce the simple single and double Hurwitz numbers studied by Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74). The KP-Toda τ-function that serves as generating function for the quantum Hurwitz numbers is shown to have the τ-function of Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74) as its leading term in the classical limit, and, with suitable scaling, the same holds for the partition function, the weights and expectations of Hurwitz numbers. We also compute the zero temperature limit of the partition function and quantum weighted Hurwitz numbers. The KP or Toda τ-function serving as generating function for the quantum Hurwitz numbers are shown to give the one for Belyi curves in the zero temperature limit and, with suitable scaling, the same holds true for the partition function, the weights and the expectations of Hurwitz numbers.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system via an optical ranging or combined ranging-communication link. A complete bench-top optical communication system was built, including a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration, with a 622 Mb/s data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10-second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; neither is sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10-second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance in both operating modes.
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.
2007-01-01
During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared complexity of the control logic, risk of not reenabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculated a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and looked to see if that torque would cause actuator saturation. If so, only the PD torque is used. If not, the integral torque is added. Finally, the third scheme compares the attitude and rate errors to limits and disables the integral torque if either of the errors is greater than the limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable the integral torque and whether or not to reset the integrator once the integral torque was reenabled. Three ways to disable the integral torque were investigated: zero the input into the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis looked at complexity of the control logic, slew time plus settling time between each calibration maneuver step, and ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input into the integrator without resetting it. Throughout the analysis, a high fidelity simulation was used to test the various implementation methods.
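A bare-bones sketch of the selected scheme follows (illustrative gains and limits, not SDO flight values): the integrator input is zeroed, without resetting the accumulated state, whenever the attitude or rate error exceeds its limit, so the integral torque is simply held constant during large transients.

    class ConditionalPID:
        def __init__(self, kp, ki, kd, att_limit, rate_limit):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.att_limit, self.rate_limit = att_limit, rate_limit
            self.integral = 0.0                  # persists across enable/disable

        def step(self, att_err, rate_err, dt):
            small = abs(att_err) < self.att_limit and abs(rate_err) < self.rate_limit
            if small:
                self.integral += att_err * dt    # normal integration
            # else: integrator input zeroed -> integral torque held constant
            return self.kp * att_err + self.ki * self.integral + self.kd * rate_err

    pid = ConditionalPID(kp=50.0, ki=2.0, kd=20.0, att_limit=0.01, rate_limit=0.005)
    print(pid.step(att_err=0.05, rate_err=0.002, dt=0.1))   # large error: I frozen
    print(pid.step(att_err=0.005, rate_err=0.001, dt=0.1))  # small error: I active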
A Framework for Bounding Nonlocality of State Discrimination
NASA Astrophysics Data System (ADS)
Childs, Andrew M.; Leung, Debbie; Mančinska, Laura; Ozols, Maris
2013-11-01
We consider the class of protocols that can be implemented by local quantum operations and classical communication (LOCC) between two parties. In particular, we focus on the task of discriminating a known set of quantum states by LOCC. Building on the work in the paper Quantum nonlocality without entanglement (Bennett et al., Phys Rev A 59:1070-1091, 1999), we provide a framework for bounding the amount of nonlocality in a given set of bipartite quantum states in terms of a lower bound on the probability of error in any LOCC discrimination protocol. We apply our framework to an orthonormal product basis known as the domino states and obtain an alternative and simplified proof that quantifies its nonlocality. We generalize this result for similar bases in larger dimensions, as well as the “rotated” domino states, resolving a long-standing open question (Bennett et al., Phys Rev A 59:1070-1091, 1999).
Network of time-multiplexed optical parametric oscillators as a coherent Ising machine
NASA Astrophysics Data System (ADS)
Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa
2014-12-01
Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems—Ising machines—capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented with above-threshold binary phases of the OPOs and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small non-deterministic polynomial time-hard problem on a 4-OPO Ising machine and in 1,000 runs no computational error was detected.
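For scale, the 4-spin problem class mentioned above is small enough to solve by exhaustion. The sketch below defines the Ising energy E(s) = -(1/2) Σ J_ij s_i s_j for an illustrative frustrated all-to-all coupling (not the paper's instance) and finds its ground state by brute force; an Ising machine searches the same energy landscape physically.

    import itertools
    import numpy as np

    J = np.array([[0, -1, -1, -1],
                  [-1, 0, -1, -1],
                  [-1, -1, 0, -1],
                  [-1, -1, -1, 0]], dtype=float)   # frustrated all-to-all graph

    def energy(spins):
        # Ising energy; the 1/2 avoids double-counting each pair.
        s = np.array(spins, dtype=float)
        return -0.5 * s @ J @ s

    best = min(itertools.product([-1, 1], repeat=4), key=energy)
    print("ground state:", best, "energy:", energy(best))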
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
Radio propagation through solar and other extraterrestrial ionized media
NASA Technical Reports Server (NTRS)
Smith, E. K.; Edelson, R. E.
1980-01-01
The present S- and X-band communications needs in deep space are addressed to illustrate the aspects which are affected by propagation through extraterrestrial plasmas. The magnitude, critical threshold, and frequency dependence of some eight propagation effects for an S-band propagation path passing within 4 solar radii of the Sun are described. The theory and observation of propagation in extraterrestrial plasmas are discussed and the various plasma states along a near-solar propagation path are illustrated. Classical magnetoionic theory (cold anisotropic plasma) is examined for its applicability to the path in question. The characteristics of the plasma states found along the path are summarized and the errors in some of the standard approximations are indicated. Models of extraterrestrial plasmas are included. Emphasis is on modeling the electron density in the solar corona and solar wind, but some cursory information on the terrestrial planets plus Jupiter is included.
Consistency and Change: The (R)Evolution of the Basic Communication Course
ERIC Educational Resources Information Center
Valenzano, Joseph M., III; Wallace, Samuel P.; Morreale, Sherwyn P.
2014-01-01
The basic communication course, with its roots in classical Greece and Rome, is frequently a required course in general education. The course often serves as our "front porch," welcoming new students to the Communication discipline. This essay first outlines early traditions in oral communication instruction and their influence on future…
Professional liability insurance and medical error disclosure.
McLennan, Stuart; Shaw, David; Leu, Agnes; Elger, Bernice
2015-01-01
To examine medicolegal stakeholders' views about the impact of professional liability insurance in Switzerland on medical error disclosure. Purposive sample of 23 key medicolegal stakeholders in Switzerland from a range of fields between October 2012 and February 2013. Data were collected via individual, face-to-face interviews using a researcher-developed semi-structured interview guide. Interviews were transcribed and analysed using conventional content analysis. Participants, particularly those with a legal or quality background, reported that concerns relating to professional liability insurance often inhibited communication with patients after a medical error. Healthcare providers were reported to be particularly concerned about losing their liability insurance cover for apologising to harmed patients. It was reported that the attempt to limit the exchange of information and communication could lead to a conflict with patient rights law. Participants reported that hospitals could, and in some cases are, moving towards self-insurance approaches, which could increase flexibility regarding error communication. The reported current practice of at least some liability insurance companies in Switzerland of inhibiting communication with harmed patients after an error is concerning and requires further investigation. With a new ethic of transparency regarding medical errors now prevailing internationally, this approach is increasingly being perceived to be misguided. A move away from hospitals relying solely on liability insurance may allow greater transparency after errors. Legislation preventing the loss of liability insurance coverage for apologising to harmed patients should also be considered.
Driven topological systems in the classical limit
NASA Astrophysics Data System (ADS)
Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel
2017-03-01
Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this we simulate the dynamics of the classical system, which show that the interactions play the role of Markovian, or time-dependent disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.
EPR Steering inequalities with Communication Assistance
Nagy, Sándor; Vértesi, Tamás
2016-01-01
In this paper, we investigate the communication cost of reproducing Einstein-Podolsky-Rosen (EPR) steering correlations arising from bipartite quantum systems. We characterize the set of bipartite quantum states which admits a local hidden state model augmented with c bits of classical communication from an untrusted party (Alice) to a trusted party (Bob). In the case of one bit of information (c = 1), we show that this set has a nontrivial intersection with the sets admitting a local hidden state and a local hidden variables model for projective measurements. On the other hand, we find that an infinite amount of classical communication is required from an untrusted Alice to a trusted Bob to simulate the EPR steering correlations produced by a two-qubit maximally entangled state. It is conjectured that a state-of-the-art quantum experiment would be able to falsify a model with two bits of communication in this way. PMID:26880376
2015-01-01
Zero-knowledge proof protocol testing on AFRL's small unmanned aerial vehicle (UAV) test bed. Version 1.1.3: successful zero-knowledge authentication between two networked machines; fixed a bug that caused intermittent bignum errors; fixed a network hang bug and now allows continual authentication at the Verifier.
Does the A-not-B error in adult pet dogs indicate sensitivity to human communication?
Kis, Anna; Topál, József; Gácsi, Márta; Range, Friederike; Huber, Ludwig; Miklósi, Adám; Virányi, Zsófia
2012-07-01
Recent dog-infant comparisons have indicated that the experimenter's communicative signals in object hide-and-search tasks increase the probability of perseverative (A-not-B) errors in both species (Topál et al. 2009). These behaviourally similar results, however, might reflect different mechanisms in dogs and in children. Similar errors may occur if the motor response of retrieving the object during the A trials cannot be inhibited in the B trials or if the experimenter's movements and signals toward the A hiding place in the B trials ('sham-baiting') distract the dogs' attention. In order to test these hypotheses, we tested dogs similarly to Topál et al. (2009) but eliminated the motor search in the A trials and 'sham-baiting' in the B trials. We found that neither an inability to inhibit previously rewarded motor response nor insufficiencies in their working memory and/or attention skills can explain dogs' erroneous choices. Further, we replicated the finding that dogs have a strong tendency to commit the A-not-B error after ostensive-communicative hiding and demonstrated the crucial effect of socio-communicative cues as the A-not-B error diminishes when location B is ostensively enhanced. These findings further support the hypothesis that the dogs' A-not-B error may reflect a special sensitivity to human communicative cues. Such object-hiding and search tasks provide a typical case for how susceptibility to human social signals could (mis)lead domestic dogs.
Lee, Norman; Ward, Jessica L; Vélez, Alejandro; Micheyl, Christophe; Bee, Mark A
2017-03-06
Noise is a ubiquitous source of errors in all forms of communication [1]. Noise-induced errors in speech communication, for example, make it difficult for humans to converse in noisy social settings, a challenge aptly named the "cocktail party problem" [2]. Many nonhuman animals also communicate acoustically in noisy social groups and thus face biologically analogous problems [3]. However, we know little about how the perceptual systems of receivers are evolutionarily adapted to avoid the costs of noise-induced errors in communication. In this study of Cope's gray treefrog (Hyla chrysoscelis; Hylidae), we investigated whether receivers exploit a potential statistical regularity present in noisy acoustic scenes to reduce errors in signal recognition and discrimination. We developed an anatomical/physiological model of the peripheral auditory system to show that temporal correlation in amplitude fluctuations across the frequency spectrum ("comodulation") [4-6] is a feature of the noise generated by large breeding choruses of sexually advertising males. In four psychophysical experiments, we investigated whether females exploit comodulation in background noise to mitigate noise-induced errors in evolutionarily critical mate-choice decisions. Subjects experienced fewer errors in recognizing conspecific calls and in selecting the calls of high-quality mates in the presence of simulated chorus noise that was comodulated. These data show unequivocally, and for the first time, that exploiting statistical regularities present in noisy acoustic scenes is an important biological strategy for solving cocktail-party-like problems in nonhuman animal communication.
Long distance quantum communication using quantum error correction
NASA Technical Reports Server (NTRS)
Gingrich, R. M.; Lee, H.; Dowling, J. P.
2004-01-01
We describe a quantum error correction scheme that can increase the effective absorption length of the communication channel. This device can play the role of a quantum transponder when placed in series, or a cyclic quantum memory when inserted in an optical loop.
Solar Tracking Error Analysis of Fresnel Reflector
Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie
2014-01-01
Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was derived under eccentric conditions. By analyzing how the main factors contribute to the sun-tracking rotation-angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by misalignment between the rotation axis and the true north meridian is, under certain conditions, largest at noon and decreases gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664
Forward scattering in two-beam laser interferometry
NASA Astrophysics Data System (ADS)
Mana, G.; Massa, E.; Sasso, C. P.
2018-04-01
A fractional error as large as 25 pm/mm at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal; Lewenstein, Maciej; Institut fuer Theoretische Physik, Universitaet Hannover, D-30167 Hannover
We solve the problem of the optimal cloning of pure entangled two-qubit states with a fixed degree of entanglement using local operations and classical communication. We show that, amazingly, classical communication between the parties can improve the fidelity of local cloning if and only if the initial entanglement is higher than a certain critical value. It is completely useless for weakly entangled states. We also show that bound entangled states with positive partial transpose are not useful as a resource to improve the best local cloning fidelity.
Asymmetric information capacities of reciprocal pairs of quantum channels
NASA Astrophysics Data System (ADS)
Rosati, Matteo; Giovannetti, Vittorio
2018-05-01
Reciprocal pairs of quantum channels are defined as completely positive transformations which admit a rigid, distance-preserving, yet not completely positive transformation that allows one to reproduce the outcome of one from the corresponding outcome of the other. From a classical perspective these transmission lines should exhibit the same communication efficiency. This is no longer the case in the quantum setting: explicit asymmetric behaviors are reported studying the classical communication capacities of reciprocal pairs of depolarizing and Weyl-covariant channels.
Practical cryptographic strategies in the post-quantum era
NASA Astrophysics Data System (ADS)
Kabanov, I. S.; Yunusov, R. R.; Kurochkin, Y. V.; Fedorov, A. K.
2018-02-01
Quantum key distribution technologies promise information-theoretic security and are currently being deployed in commercial applications. We review new frontiers in information security technologies in communications and distributed storage applications with the use of classical, quantum, hybrid classical-quantum, and post-quantum cryptography. We analyze the current state-of-the-art, critical characteristics, development trends, and limitations of these techniques for application in enterprise information protection systems. An approach concerning the selection of practical encryption technologies for enterprises with branched communication networks is discussed.
Direct and reverse secret-key capacities of a quantum channel.
Pirandola, Stefano; García-Patrón, Raul; Braunstein, Samuel L; Lloyd, Seth
2009-02-06
We define the direct and reverse secret-key capacities of a memoryless quantum channel as the optimal rates that entanglement-based quantum-key-distribution protocols can reach by using a single forward classical communication (direct reconciliation) or a single feedback classical communication (reverse reconciliation). In particular, the reverse secret-key capacity can be positive for antidegradable channels, where no forward strategy is known to be secure. This property is explicitly shown in the continuous variable framework by considering arbitrary one-mode Gaussian channels.
Fractional order implementation of Integral Resonant Control - A nanopositioning application.
San-Millan, Andres; Feliu-Batlle, Vicente; Aphale, Sumeet S
2017-10-04
By exploiting the co-located sensor-actuator arrangement in typical flexure-based piezoelectric stack actuated nanopositioners, the pole-zero interlacing exhibited by their axial frequency response can be transformed to a zero-pole interlacing by adding a constant feed-through term. The Integral Resonant Control (IRC) utilizes this unique property to add substantial damping to the dominant resonant mode by the use of a simple integrator implemented in closed loop. IRC used in conjunction with an integral tracking scheme effectively reduces positioning errors introduced by modelling inaccuracies or parameter uncertainties. Over the past few years, successful application of the IRC control technique to nanopositioning systems has demonstrated performance robustness, easy tunability and versatility. The main drawback has been the relatively small positioning bandwidth achievable. This paper proposes a fractional-order implementation of the classical integral tracking scheme employed in tandem with the IRC scheme to deliver damping and tracking. The fractional-order integrator introduces an additional design parameter which allows desired pole placement, resulting in superior closed-loop bandwidth. Simulations and experimental results are presented to validate the theory. A 250% improvement in the achievable positioning bandwidth is observed with the proposed fractional-order scheme. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Mukherjee, Sudip; Rajak, Atanu; Chakrabarti, Bikas K.
2018-02-01
We explore the behavior of the order parameter distribution of the quantum Sherrington-Kirkpatrick model in the spin glass phase using Monte Carlo technique for the effective Suzuki-Trotter Hamiltonian at finite temperatures and that at zero temperature obtained using the exact diagonalization method. Our numerical results indicate the existence of a low- but finite-temperature quantum-fluctuation-dominated ergodic region along with the classical fluctuation-dominated high-temperature nonergodic region in the spin glass phase of the model. In the ergodic region, the order parameter distribution gets narrower around the most probable value of the order parameter as the system size increases. In the other region, the Parisi order distribution function has nonvanishing value everywhere in the thermodynamic limit, indicating nonergodicity. We also show that the average annealing time for convergence (to a low-energy level of the model, within a small error range) becomes system size independent for annealing down through the (quantum-fluctuation-dominated) ergodic region. It becomes strongly system size dependent for annealing through the nonergodic region. Possible finite-size scaling-type behavior for the extent of the ergodic region is also addressed.
Errors in veterinary practice: preliminary lessons for building better veterinary teams.
Kinnison, T; Guile, D; May, S A
2015-11-14
Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.
ERIC Educational Resources Information Center
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
Communications system for zero-g simulation tests in water
NASA Technical Reports Server (NTRS)
Smith, H. E.
1971-01-01
System connects seven observers, diver, and spare station, and utilizes public address system with underwater speakers to provide two-way communications between test subject and personnel in control of life support, so that test personnel are warned immediately of malfunction in pressure suit or equipment.
Fast Clock Recovery for Digital Communications
NASA Technical Reports Server (NTRS)
Tell, R. G.
1985-01-01
Circuit extracts clock signal from random non-return-to-zero data stream, locking onto clock within one bit period at 1-gigabit-per-second data rate. Circuit used for synchronization in optical-fiber communications. Derives speed from very short response time of gallium arsenide metal/semiconductor field-effect transistors (MESFET's).
An attack aimed at active phase compensation in one-way phase-encoded QKD systems
NASA Astrophysics Data System (ADS)
Dong, Zhao-Yue; Yu, Ning-Na; Wei, Zheng-Jun; Wang, Jin-Dong; Zhang, Zhi-Ming
2014-08-01
Phase drift is an inherent problem in one-way phase-encoded quantum key distribution (QKD) systems. Although combining passive with active phase compensation (APC) processes can effectively compensate for the phase drift, the security problems brought about by these processes are rarely considered. In this paper, we point out a security hole in the APC process and put forward a corresponding attack scheme. Under our proposed attack, the quantum bit error rate (QBER) of the QKD can be close to zero under some conditions. However, under the same conditions, the ratio r of key "0" to key "1" that Bob (one of the legitimate communicators, Alice and Bob) obtains is no longer 1:1 but 2:1, which may expose Eve (the eavesdropper). In order to solve this problem, we modify the resend strategy of the attack scheme, which can force r to reach 1 and the QBER to be lower than the tolerable QBER.
Spin qubit transport in a double quantum dot
NASA Astrophysics Data System (ADS)
Zhao, Xinyu; Hu, Xuedong
Long distance spin communication is a crucial ingredient to scalable quantum computer architectures based on electron spin qubits. One way to transfer spin information over a long distance on chip is via electron transport. Here we study the transport of an electron spin qubit in a double quantum dot by tuning the interdot detuning voltage. We identify a parameter regime where spin relaxation hot-spots can be avoided and high-fidelity spin transport is possible. Within this parameter space, the spin transfer fidelity is determined by the operation speed and the applied magnetic field. In particular, near zero detuning, a proper choice of operation speed is essential to high fidelity. In addition, we also investigate the modification of the effective g-factor by the interdot detuning, which could lead to a phase error between spin up and down states. The results presented in this work could provide useful guidance for experimentally achieving high-fidelity spin qubit transport. We acknowledge financial support from the US ARO via Grant W911NF1210609.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
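The copy-and-permute ("lifting") construction behind protograph LDPC codes is easy to sketch. The following minimal example uses a hypothetical 2×4 base matrix and random circulant liftings, not the specific protographs of the patent, and indicates where a rate-dependent designation of variable nodes as non-transmitted would enter:

```python
# Illustrative protograph "lifting": each base-matrix entry (number of
# parallel edges) is replaced by a sum of Z x Z circulant permutations.
import numpy as np

rng = np.random.default_rng(0)

def lift_protograph(B, Z):
    """Expand base matrix B into a binary parity-check matrix of size (m*Z, n*Z)."""
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            # distinct shifts so parallel edges do not cancel mod 2
            for s in rng.choice(Z, size=B[i, j], replace=False):
                P = np.roll(np.eye(Z, dtype=np.uint8), s, axis=1)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= P
    return H

# Hypothetical base matrix; in a rate-compatible family, some columns
# (variable nodes) would be designated non-transmitted or set to zero.
B = np.array([[1, 2, 1, 1],
              [1, 1, 2, 1]])
H = lift_protograph(B, Z=8)
print(H.shape, "design rate =", 1 - H.shape[0] / H.shape[1])
```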
De Sitter space and perpetuum mobile
NASA Astrophysics Data System (ADS)
Akhmedov, Emil T.; Buividovich, P. V.; Singleton, Douglas A.
2012-04-01
General arguments are given that any interacting nonconformal classical field theory in de Sitter space leads to the possibility of constructing a perpetuum mobile. The arguments are based on the observation that massive free-falling particles can radiate other massive particles at the classical level, as seen by the free-falling observer. The intensity of the radiation process is nonzero even for particles of any finite mass, i.e., with a wavelength within the causal domain. Hence, we conclude that either de Sitter space cannot exist eternally or that one can build a perpetuum mobile.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^{-2}) to O(1) in practice for an [[n, k, d = 2t+1]] code.
Results from the DOLCE (Deep Space Optical Link Communications Experiment) project
NASA Astrophysics Data System (ADS)
Baister, Guy; Kudielka, Klaus; Dreischer, Thomas; Tüchler, Michael
2009-02-01
Oerlikon Space AG has since 1995 been developing the OPTEL family of optical communications terminals. The optical terminals within the OPTEL family have been designed so as to position Oerlikon Space for future opportunities open to this technology. These opportunities range from commercial optical satellite crosslinks between geostationary (GEO) satellites, to deep space optical links between planetary probes and the Earth, as well as optical links between airborne platforms (either between the airborne platforms or between a platform and a GEO satellite). The OPTEL terminal for deep space applications has been designed as an integrated RF-optical terminal for telemetry links between the science probe and Earth. The integrated architecture provides increased TM link capacities through the use of an optical link, while spacecraft navigation and telecommand are ensured by the classical RF link. The optical TM link employs pulsed laser communications operating at 1058 nm to transmit data using PPM modulation, achieving a link robust against atmospheric degradation at the optical ground station. For deep-space links from the Lagrange points (L1/L2), data rates of 10-20 Mbps can be achieved for the same spacecraft budgets (mass and power) as an RF high-gain antenna. Results of an inter-island test campaign to demonstrate the performance of the pulsed laser communications subsystem employing 32-PPM for links through the atmosphere over a distance of 142 km are presented. The transmitter of the communications subsystem is a master oscillator power amplifier (MOPA) employing a 1 W (average power) amplifier, and the receiver is a Si APD with a measured sensitivity of -70.9 dBm for the 32-PPM modulation format at a user data rate of 10 Mbps and a bit error rate (BER) of 10^{-6}.
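As a rough sanity check on the quoted receiver sensitivity, one can convert -70.9 dBm at the 10 Mbps user data rate and 1058 nm wavelength into photons per bit (back-of-envelope arithmetic only, not a figure from the paper):

```python
# Convert the reported sensitivity (-70.9 dBm at 10 Mbps, 1058 nm)
# into an approximate photon count per bit.
h, c = 6.626e-34, 2.998e8          # Planck constant [J s], speed of light [m/s]
wavelength = 1058e-9               # m
sensitivity_dbm = -70.9
bit_rate = 10e6                    # bit/s

power_w = 10 ** (sensitivity_dbm / 10) * 1e-3   # dBm -> W, ~8.1e-11 W
energy_per_bit = power_w / bit_rate             # ~8.1e-18 J
photon_energy = h * c / wavelength              # ~1.9e-19 J
print(f"~{energy_per_bit / photon_energy:.0f} photons per bit")  # roughly 43
```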
Mahfuz, Mohammad Upal; Makrakis, Dimitrios; Mouftah, Hussein T
2016-09-01
Unlike normal diffusion, in anomalous diffusion the movement of a molecule is described by a correlated random walk model in which the mean square displacement of a molecule scales as a power law in time. In molecular communication (MC), there are many scenarios in which the propagation of molecules cannot be described by a normal diffusion process and anomalous diffusion is a better fit. In this paper, the effects of anomalous subdiffusion on concentration-encoded molecular communication (CEMC) are investigated. Although classical (i.e., normal) diffusion is a widely used diffusion model in MC research, anomalous subdiffusion is quite common in biological media involving bio-nanomachines, yet it has been inadequately addressed as a research issue so far. Using the fractional diffusion approach, the molecular propagation effects in the case of pure subdiffusion occurring in an unbounded three-dimensional propagation medium are shown in detail in terms of the temporal dispersion parameters of the impulse response of the subdiffusive channel. Correspondingly, the bit error rate (BER) performance of a CEMC system is investigated with sampling-based (SD) and strength (i.e., energy)-based (ED) signal detection methods. It is found that anomalous subdiffusion has distinctive time-dispersive properties that play a vital role in accurately designing a subdiffusive CEMC system. Unlike normal diffusion, to detect information symbols in subdiffusive CEMC, a receiver requires a larger memory size to operate correctly and hence a more complex structure. An in-depth analysis has been made of the performance of SD and ED optimum receiver models under diffusion noise and intersymbol interference (ISI) scenarios when communication range, transmission data rate, and memory size vary. In subdiffusive CEMC, the SD method.
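The defining property mentioned above, a mean square displacement growing as a power law in time, can be reproduced with a few lines of simulation. The sketch below (parameters are illustrative, not from the paper) uses a continuous-time random walk with heavy-tailed waiting times, a standard route to subdiffusion:

```python
# CTRW with Pareto waiting times: MSD(t) grows roughly like t**alpha
# with alpha < 1 (subdiffusion), versus t**1 for normal diffusion.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.6                                  # subdiffusion exponent
n_walkers, n_jumps = 2000, 400

waits = rng.pareto(alpha, size=(n_walkers, n_jumps)) + 1.0   # heavy-tailed
times = np.cumsum(waits, axis=1)
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_jumps))
positions = np.cumsum(steps, axis=1)

for t in (1e1, 1e2, 1e3):
    counts = (times <= t).sum(axis=1)        # jumps completed by clock time t
    last = np.clip(counts - 1, 0, n_jumps - 1)
    pos_t = np.where(counts > 0, positions[np.arange(n_walkers), last], 0.0)
    print(f"t={t:>6.0f}  MSD={np.mean(pos_t**2):8.2f}")   # ~ t**alpha growth
```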
Developing a model for the adequate description of electronic communication in hospitals.
Saboor, Samrend; Ammenwerth, Elske
2011-01-01
Adequate information and communication technology (ICT) systems can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.
Du, Chunhua; Jiang, Chunyan; Zuo, Peng; Huang, Xin; Pu, Xiong; Zhao, Zhenfu; Zhou, Yongli; Li, Linxuan; Chen, Hong; Hu, Weiguo; Wang, Zhong Lin
2015-12-02
Visible light communication (VLC) simultaneously provides illumination and communication via light-emitting diodes (LEDs). Keeping a low bit error rate is essential to communication quality, and holding a stable brightness level is pivotal for the illumination function. For the first time, a piezo-phototronic effect controlled visible light communication (PVLC) system based on InGaN/GaN multi-quantum-well nanopillars is demonstrated, in which the information is coded by mechanical straining. This approach of force coding is also instrumental in avoiding LED blinking, which has less impact on illumination and is much safer for eyes than electrical on/off VLC. The two-channel transmission mode of the system shown here offers great superiority in error self-validation and error self-elimination in comparison to VLC. This two-channel PVLC system provides a suitable way to carry out noncontact, reliable communication under complex circumstances. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Limitation to Communication of Fermionic System in Accelerated Frame
NASA Astrophysics Data System (ADS)
Chang, Jinho; Kwon, Younghun
2015-03-01
In this article, we investigate communication between an inertial observer and an accelerated observer sharing a fermionic system, when they use classical and quantum communication with single-rail or dual-rail encoding. The purpose of this work is to understand the limit to communication between an inertial observer and an accelerated observer with single-rail or dual-rail encoding of a fermionic system. We observe that at infinite acceleration, the coherent information of the single- (or dual-) rail quantum channel vanishes, but that of the classical channels may retain finite values. In addition, we see that even when considering a method beyond the single-mode approximation, for communication between Alice and Bob the dual-rail entangled state seems to provide better information transfer than the single-rail entangled state, for a fixed choice of the Unruh mode. Moreover, we find that the single-mode approximation may not be sufficient to analyze communication of fermionic systems in an accelerated frame.
Fade-resistant forward error correction method for free-space optical communications systems
Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.
2007-10-02
Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
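The delayed-redundancy idea is simple enough to illustrate with a toy erasure model. In the sketch below (entirely illustrative numbers, not the authors' design), every symbol is duplicated on a second sub-channel delayed by D symbol periods, so a fade burst shorter than D can never erase both copies:

```python
# Toy model: fades erase contiguous bursts of symbols; a symbol is lost
# only if both its direct copy and its delayed copy fall inside fades.
import numpy as np

rng = np.random.default_rng(2)
n_symbols, delay = 100_000, 5_000            # delay D in symbol periods
fade = np.zeros(n_symbols + delay, dtype=bool)
for start in rng.integers(0, n_symbols, size=20):
    fade[start:start + rng.integers(500, 3000)] = True   # bursts shorter than D

direct_lost = fade[:n_symbols]               # copy 1 sent at time i
delayed_lost = fade[delay:delay + n_symbols] # copy 2 sent at time i + D
both_lost = direct_lost & delayed_lost       # residual erasures for the FEC

print(f"direct-channel erasures : {direct_lost.mean():.3%}")
print(f"after delayed redundancy: {both_lost.mean():.3%}")
```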
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
Purification of Logic-Qubit Entanglement.
Zhou, Lan; Sheng, Yu-Bo
2016-07-05
Recently, logic-qubit entanglement has shown its potential for application in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement is equivalent to the bit-flip error in logic-qubit entanglement, which can also be purified. This entanglement purification protocol may find applications in future quantum communication and quantum networks.
Linear quadratic stochastic control of atomic hydrogen masers.
Koppang, P; Leland, R
1999-01-01
Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. The LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
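The control structure described here, a Kalman filter feeding an LQR gain derived from a quadratic cost on time error, frequency error, and control effort, can be sketched compactly. All parameters below are illustrative placeholders, not USNO values:

```python
# Minimal LQG clock-steering sketch: state x = [time error; frequency error],
# control u = discrete frequency steer applied each interval.
import numpy as np
from scipy.linalg import solve_discrete_are

tau = 3600.0                                   # steering interval [s]
A = np.array([[1.0, tau], [0.0, 1.0]])         # clock state transition
B = np.array([[tau], [1.0]])                   # a frequency steer moves both states
C = np.array([[1.0, 0.0]])                     # only time error is measured
Qp = np.diag([1e-18, 1e-26])                   # process noise (illustrative)
Rm = np.array([[1e-18]])                       # ~1 ns time-transfer noise

# LQR gain from designer-chosen penalties on state error and control effort
Qc, Rc = np.diag([1.0, 1e6]), np.array([[1e3]])
P = solve_discrete_are(A, B, Qc, Rc)
K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)      # u = -K @ x_hat

# Steady-state Kalman gain via the dual Riccati equation
S = solve_discrete_are(A.T, C.T, Qp, Rm)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + Rm)

rng = np.random.default_rng(3)
x, x_hat = np.array([5e-9, 1e-13]), np.zeros(2)         # true / estimated state
for _ in range(24):
    u = -(K @ x_hat)                                    # steer toward zero
    x = A @ x + B @ u + rng.multivariate_normal([0, 0], Qp)
    z = C @ x + rng.normal(0.0, np.sqrt(Rm[0, 0]))      # noisy comparison vs UTC
    x_pred = A @ x_hat + B @ u
    x_hat = x_pred + L @ (z - C @ x_pred)               # Kalman update
print(f"time error after 24 steps: {x[0]:.2e} s")
```

Larger entries in Qc relative to Rc make the steering more aggressive, the trade-off the abstract describes.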
Learning to classify in large committee machines
NASA Astrophysics Data System (ADS)
O'kane, Dominic; Winther, Ole
1994-10-01
The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T-->∞ where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
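Two of the techniques named here are simple enough to sketch directly. The snippet below (hypothetical toy data, not the i2b2 corpus) shows zero-vector filtering, which drops documents whose feature vectors carry no evidence after hot-spot extraction, and inverse class frequency weighting, which keeps frequent classes from swamping rare ones:

```python
# Toy illustration of zero-vector filtering and inverse class frequency
# weighting on hypothetical data (not the i2b2 corpus).
import numpy as np

rng = np.random.default_rng(4)
X = rng.integers(0, 3, size=(200, 50))          # keyword counts near "hot spots"
X[rng.choice(200, size=40, replace=False)] = 0  # docs with no smoking evidence
y = rng.choice(["current", "past", "never", "unknown"],
               size=200, p=[0.15, 0.15, 0.2, 0.5])

# (i) zero-vector filtering: drop documents with all-zero feature vectors
keep = X.sum(axis=1) > 0
X_f, y_f = X[keep], y[keep]
print(f"kept {keep.sum()} of {len(X)} documents")

# (ii) inverse class frequency weights, usable as per-sample weights
# (e.g., the sample_weight argument of many scikit-learn classifiers)
classes, counts = np.unique(y_f, return_counts=True)
icf = {c: len(y_f) / n for c, n in zip(classes, counts)}
sample_weight = np.array([icf[label] for label in y_f])
print({c: round(w, 2) for c, w in icf.items()})
```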
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.
1993-01-01
Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have a poor communication-to-computation ratio. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + lambda × (shape). In a 2- (or 3-) dimensional problem, load is the amount of computation to be performed in a subregion, and shape could refer to the perimeter (respectively, surface) of that subregion. Shape is a measure of communication overhead, and the parameter lambda permits us to trade off load imbalance against communication overhead. When lambda is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-d. Load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and lambda the ratio of time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log^3 p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
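A minimal sketch of the cut-selection rule follows: among candidate cuts, pick the one minimizing the worse of the two halves' load + lambda × shape, with load as point count and shape as bounding-box perimeter (a simplified stand-in for the boundary measures described above; all details are illustrative):

```python
# Single PBD-style cut over a 2D point set; lam=0 recovers plain
# (pure load-balancing) binary dissection.
import numpy as np

def pbd_cut(points, box, lam):
    """Best single cut of `box`: minimize max over halves of load + lam*perimeter."""
    (x0, y0), (x1, y1) = box
    best = None
    for axis, lo, hi in [(0, x0, x1), (1, y0, y1)]:
        for c in np.linspace(lo, hi, 33)[1:-1]:            # interior candidate cuts
            left = points[points[:, axis] <= c]
            right = points[points[:, axis] > c]
            other = (y1 - y0) if axis == 0 else (x1 - x0)  # extent of uncut side
            cost = max(len(half) + lam * 2 * ((b - a) + other)
                       for half, a, b in [(left, lo, c), (right, c, hi)])
            if best is None or cost < best[0]:
                best = (cost, axis, c)
    return best                                            # (cost, axis, coordinate)

rng = np.random.default_rng(5)
pts = rng.normal([0.3, 0.7], 0.15, size=(500, 2))          # non-uniform point load
cost, axis, c = pbd_cut(pts, ((0.0, 0.0), (1.0, 1.0)), lam=20.0)
print(f"cut axis={axis} at {c:.3f}, worst-half cost={cost:.1f}")
```

Recursing on the two halves to a fixed depth yields the full dissection.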
Quantum effects in amplitude death of coupled anharmonic self-oscillators
NASA Astrophysics Data System (ADS)
Amitai, Ehud; Koppenhöfer, Martin; Lörch, Niels; Bruder, Christoph
2018-05-01
Coupling two or more self-oscillating systems may stabilize their zero-amplitude rest state, thereby quenching their oscillation. This phenomenon is termed "amplitude death." Well known and studied in classical self-oscillators, amplitude death was only recently investigated in quantum self-oscillators [Ishibashi and Kanamoto, Phys. Rev. E 96, 052210 (2017), 10.1103/PhysRevE.96.052210]. Quantitative differences between the classical and quantum descriptions were found. Here, we demonstrate that for quantum self-oscillators with anharmonicity in their energy spectrum, multiple resonances in the mean phonon number can be observed. This is a result of the discrete energy spectrum of these oscillators, and is not present in the corresponding classical model. Experiments can be realized with current technology and would demonstrate these genuine quantum effects in the amplitude death phenomenon.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2016-06-14
A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
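The core claim, that a 0D threshold applied pointwise to smooth 1D trajectories inflates the false positive rate far above alpha, is easy to reproduce numerically. The sketch below (illustrative smoothness and sample sizes, not the paper's datasets) counts how often a two-sample t test on null data crosses the 0D critical value anywhere along the curve:

```python
# Two groups of N smooth random curves with NO true difference; declare
# "significant" if |t| exceeds the 0D critical value at any of Q points.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import t as t_dist

rng = np.random.default_rng(6)
N, Q, fwhm, alpha = 10, 101, 20.0, 0.05
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))           # Gaussian kernel width
t_crit = t_dist.ppf(1 - alpha / 2, df=2 * N - 2)      # 0D two-tailed threshold

def smooth_group():
    y = gaussian_filter1d(rng.standard_normal((N, Q)), sigma, axis=1)
    return y / y.std(ddof=1)                           # rescale, keep smoothness

false_pos, n_sim = 0, 2000
for _ in range(n_sim):
    a, b = smooth_group(), smooth_group()
    se = np.sqrt(a.var(axis=0, ddof=1) / N + b.var(axis=0, ddof=1) / N)
    t_stat = (a.mean(axis=0) - b.mean(axis=0)) / se
    false_pos += np.any(np.abs(t_stat) > t_crit)       # "anywhere" decision rule
print(f"false positive rate ~ {false_pos / n_sim:.3f} (nominal alpha = {alpha})")
```

The observed rate lands well above 0.05, the inflation the authors quantify with random field theory.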
Improving Pathologists' Communication Skills.
Dintzis, Suzanne
2016-08-01
The 2015 Institute of Medicine report on diagnostic error has placed a national spotlight on the importance of improving communication among clinicians and between clinicians and patients [1]. The report emphasizes the critical role that communication plays in patient safety and outlines ways that pathologists can support this process. Despite recognition of communication as an essential element in patient care, pathologists currently undergo limited (if any) formal training in communication skills. To address this gap, we at the University of Washington Medical Center developed communication training with the goal of establishing best practice procedures for effective pathology communication. The course includes lectures, role playing, and simulated clinician-pathologist interactions for training and evaluation of pathology communication performance. Providing communication training can help create reliable communication pathways that anticipate and address potential barriers and errors before they happen. © 2016 American Medical Association. All Rights Reserved.
Central Procurement Workload Projection Model
1981-02-01
Procurement workload measures generated by the P&P Directorates, such as procurement actions (PAs), are pursued; specifically, Box-Jenkins Autoregressive Integrated Moving Average (ARIMA) models are applied, with PAs broken out into those over and under $10,000. Findings and recommendations are given. If the model predicts the actual values, the error will be zero; therefore, after forecasting 3 quarters into the future, no error
Barnhart, Amber; Lausen, Harald; Smith, Tracey; Lopp, Lauri
2010-05-01
The Patient-centered Medical Home (PCMH) relies on comprehensive, consistent, and accessible communication for the patient with all members of their health care team. "E-medicine" and health information technology (HIT) create many new possibilities in addition to standard face-to-face encounters. There is interest by both physicians and patients for enhanced access through electronic communication. However, there is little published literature regarding specific educational programs for medical professionals using electronic communication with patients. Faculty in a required 6-week family medicine clerkship developed, implemented, and evaluated an electronic health communication curriculum. This curriculum consists of a didactic session on electronic health communication including anticipated errors of communication and common clinical pitfalls. Each clerkship student receives a weekly e-mail from a standardized patient centered on a clinical question. Additionally, each e-mail contains a different communication challenge or predicted error. Students receive feedback each week on the e-mails and are evaluated with an objective structured clinical exam (OSCE) during the final week. The results of the weekly e-mails and the final OSCE show that students improve overall but continue to make predicted errors in communication despite didactic instruction and actual practice. These results reinforce the need for medical student education on electronic health communication with patients.
Communication Education and Instructional Communication: Genesis and Evolution as Fields of Inquiry
ERIC Educational Resources Information Center
Morreale, Sherwyn; Backlund, Philip; Sparks, Leyla
2014-01-01
Communication education is concerned with the communicative aspects of teaching and learning in various situations and contexts. Although the historical roots of this area of inquiry date back to the classical study of rhetoric by the Greeks and Romans, this report focuses on the field's emergence as an important area of modern scholarly…
Quantum groups, Yang-Baxter maps and quasi-determinants
NASA Astrophysics Data System (ADS)
Tsuboi, Zengo
2018-01-01
For any quasi-triangular Hopf algebra, there exists the universal R-matrix, which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero-curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero-curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra U_q(gl(n)). Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.
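For reference, the two forms of the Yang-Baxter relation invoked above can be stated compactly (standard conventions; the notation here is generic rather than taken from the paper):

```latex
% Quantum Yang-Baxter equation for the universal R-matrix
% \mathcal{R} \in A \otimes A, subscripts marking tensor factors:
\mathcal{R}_{12}\,\mathcal{R}_{13}\,\mathcal{R}_{23}
  = \mathcal{R}_{23}\,\mathcal{R}_{13}\,\mathcal{R}_{12}

% Set-theoretic Yang-Baxter equation for a map R : X \times X \to X \times X,
% where R_{ij} acts on factors i and j of X \times X \times X:
R_{12}\circ R_{13}\circ R_{23} \;=\; R_{23}\circ R_{13}\circ R_{12}
```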
Indeterminism in Classical Dynamics of Particle Motion
NASA Astrophysics Data System (ADS)
Eyink, Gregory; Vishniac, Ethan; Lalescu, Cristian; Aluie, Hussein; Kanov, Kalin; Burns, Randal; Meneveau, Charles; Szalay, Alex
2013-03-01
We show that ``God plays dice'' not only in quantum mechanics but also in the classical dynamics of particles advected by turbulent fluids. With a fixed deterministic flow velocity and an exactly known initial position, the particle motion is nevertheless completely unpredictable! In analogy with spontaneous magnetization in ferromagnets, which persists as the external field is taken to zero, the particle trajectories in turbulent flow remain random as external noise vanishes. The necessary ingredient is a rough advecting field with a power-law energy spectrum extending to smaller scales as noise is taken to zero. The physical mechanism of ``spontaneous stochasticity'' is the explosive dispersion of particle pairs proposed by L. F. Richardson in 1926, so the phenomenon should be observable in laboratory and natural turbulent flows. We present here the first empirical corroboration of these effects in high-Reynolds-number numerical simulations of hydrodynamic and magnetohydrodynamic fluid turbulence. Since power-law spectra are seen in many other systems in condensed matter, geophysics and astrophysics, the phenomenon should occur rather widely. Fast reconnection in solar flares and other astrophysical systems can be explained by spontaneous stochasticity of magnetic field-line motion.
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
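For context, a Poisson series (the representation assumed throughout this abstract) has the generic form below; the symbols are generic, not those of the paper:

```latex
% Generic Poisson series: polynomial in the variables x_1,...,x_m,
% trigonometric in the angles \theta_1,...,\theta_n.
P = \sum_{i_1,\dots,i_m \ge 0}\;\sum_{j_1,\dots,j_n \in \mathbb{Z}}
    C^{i_1\cdots i_m}_{j_1\cdots j_n}\,
    x_1^{i_1}\cdots x_m^{i_m}\,
    \begin{Bmatrix}\cos\\ \sin\end{Bmatrix}
    \!\left(j_1\theta_1+\cdots+j_n\theta_n\right)
```

Term-by-term integration is straightforward in this form, which is what makes the automatic-expansion approach workable.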
Local quantum transformations requiring infinite rounds of classical communication.
Chitambar, Eric
2011-11-04
In this Letter, we investigate the number of measurement and communication rounds needed to implement certain tasks by local quantum operations and classical communication (LOCC), a relatively unexplored topic. To demonstrate the possible strong dependence on the round number, we consider the problem of converting three-qubit entanglement into two-qubit form, specifically in the random distillation setting of [Phys. Rev. Lett. 98, 260501 (2007)]. We find that the number of LOCC rounds needed for a transformation can depend on the amount of entanglement distilled. In fact, for a wide range of transformations, the required number of rounds is infinite (unbounded). This represents the first concrete example of a task needing an infinite number of rounds to implement.
Color-motion feature-binding errors are mediated by a higher-order chromatic representation.
Shevell, Steven K; Wang, Wei
2016-03-01
Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004), 10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014), 10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.
Cruikshank, Benjamin; Jacobs, Kurt
2017-07-21
von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.
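The restorative idea at the heart of multiplexing can be illustrated with faulty majority gates acting on a redundant bundle of wires. The toy simulation below is a deliberately simplified sketch (random wiring, classical bits, invented parameters), not the ordered wiring structure or the error model of the paper:

```python
# A logical bit rides on a bundle of M wires; each restorative stage feeds
# triples of wires into 3-input majority gates that fail with probability p.
import numpy as np

rng = np.random.default_rng(7)

def restorative_stage(bundle, p):
    """One stage of faulty 3-input majority gates over a randomly wired bundle."""
    M = len(bundle)
    wiring = rng.permutation(np.tile(np.arange(M), 3)).reshape(M, 3)
    majority = bundle[wiring].sum(axis=1) >= 2
    gate_fail = rng.random(M) < p                 # each gate flips with prob. p
    return majority ^ gate_fail

M, p = 999, 0.02                                  # bundle size, gate error prob.
bundle = np.zeros(M, dtype=bool)
bundle[: int(0.15 * M)] = True                    # 15% of wires start flipped
for stage in range(6):
    bundle = restorative_stage(bundle, p)
    print(f"stage {stage}: error fraction = {bundle.mean():.4f}")
# Below threshold the error fraction contracts toward a small fixed point;
# above it, restoration fails -- the qualitative content of the threshold.
```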
Reaching Agreement in Quantum Hybrid Networks.
Shi, Guodong; Li, Bo; Miao, Zibo; Dower, Peter M; James, Matthew R
2017-07-20
We consider a basic quantum hybrid network model consisting of a number of nodes each holding a qubit, for which the aim is to drive the network to a consensus in the sense that all qubits reach a common state. Projective measurements are applied serving as control means, and the measurement results are exchanged among the nodes via classical communication channels. In this way the quantum-operation/classical-communication nature of hybrid quantum networks is captured, although coherent states and joint operations are not taken into consideration in order to facilitate a clear and explicit analysis. We show how to carry out centralized optimal path planning for this network with all-to-all classical communications, in which case the problem becomes a stochastic optimal control problem with a continuous action space. To overcome the computation and communication obstacles facing the centralized solutions, we also develop a distributed Pairwise Qubit Projection (PQP) algorithm, where pairs of nodes meet at a given time and respectively perform measurements at their geometric average. We show that the qubit states are driven to a consensus almost surely along the proposed PQP algorithm, and that the expected qubit density operators converge to the average of the network's initial values.
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
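The six steps enumerated above amount to quadrature mixing followed by a least-squares fit to the residual phase, iterated until the error signal approaches zero. Below is a minimal single-signal sketch of that loop (illustrative parameters and a simple block-average low-pass; the patented method's exact estimator and NCO details are not reproduced):

```python
# Mix the input with a reference in quadrature; a least-squares line fit to
# the unwrapped I/Q phase yields the residual frequency and phase errors,
# which are fed back to the reference.
import numpy as np

rng = np.random.default_rng(8)
fs, n = 1e4, 2000
t = np.arange(n) / fs
true_A, true_f, true_phi = 1.3, 1234.5, 0.7
x = true_A * np.cos(2 * np.pi * true_f * t + true_phi) + 0.1 * rng.standard_normal(n)

f_ref, phi_ref = 1200.0, 0.0                       # initial reference guess
blk = 50                                           # block-average low-pass length
for it in range(4):
    i_mix = x * np.cos(2 * np.pi * f_ref * t + phi_ref)
    q_mix = -x * np.sin(2 * np.pi * f_ref * t + phi_ref)
    i_lp = i_mix[: n // blk * blk].reshape(-1, blk).mean(axis=1)
    q_lp = q_mix[: n // blk * blk].reshape(-1, blk).mean(axis=1)
    tb = t[: n // blk * blk].reshape(-1, blk).mean(axis=1)
    err_phase = np.unwrap(np.arctan2(q_lp, i_lp))  # = dw * t + dphi
    slope, dphi = np.polyfit(tb, err_phase, 1)     # LS fit: slope -> freq error
    f_ref += slope / (2 * np.pi)                   # drive error toward zero
    phi_ref += dphi
    amp = 2 * np.hypot(i_lp, q_lp).mean()          # amplitude estimate ~ true_A
    print(f"iter {it}: f_ref={f_ref:.3f} Hz, amp~{amp:.3f}, residual={slope:.2e}")
```

The residual slope shrinks each pass, playing the role of the real-time confidence measure the abstract describes.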
Breakdowns in communication of radiological findings: an ethical and medico-legal conundrum
Murphy, Daniel R.; Singh, Hardeep
2016-01-01
Communication problems in diagnostic testing have increased in both number and importance in recent years. The medical and legal impact of failure of communication is dramatic. Over the past decades, the courts have expanded and strengthened the duty imposed on radiologists to timely communicate radiologic abnormalities to referring physicians and perhaps the patients themselves in certain situations. The need to communicate these findings goes beyond strict legal requirements: there is a moral imperative as well. The Code of Medical Ethics of the American Medical Association points out that “Ethical values and legal principles are usually closely related, but ethical obligations typically exceed legal duties.” Thus, from the perspective of the law, radiologists are required to communicate important unexpected findings to referring physicians in a timely fashion, or alternatively to the patients themselves. From a moral perspective, radiologists should want to effect such communications. Practice standards, moral values, and ethical statements from professional medical societies call for full disclosure of medical errors to patients affected by them. Surveys of radiologists and non-radiologic physicians reveal that only few would divulge all aspects of the error to the patient. In order to encourage physicians to disclose errors to patients and assist in protecting them in some manner if malpractice litigation follows, more than 35 states have passed laws that do not allow a physician’s admission of an error and apologetic statements to be revealed in the courtroom. Whether such disclosure increases or decreases the likelihood of a medical malpractice lawsuit is unclear, but ethical and moral considerations enjoin physicians to disclose errors and offer apologies. PMID:27006891
NASA Astrophysics Data System (ADS)
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-01
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, the exchange mechanism occurs in lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H_5^+ complex, as done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict too-high ratios because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.