Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme incorporated with a selective repeat ARQ retransmission strategy is analyzed.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
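For reference, the quantity these abstracts bound is conventionally written, for a single binary (n, k) linear block code used purely for error detection on a binary symmetric channel with crossover probability epsilon, in terms of the code's weight distribution A_i (number of codewords of Hamming weight i) and minimum distance d_min. This is the standard single-code expression, not the papers' concatenated-scheme derivation:

    P_{ud}(\varepsilon) \;=\; \sum_{i=d_{\min}}^{n} A_i \,\varepsilon^{i}\,(1-\varepsilon)^{n-i}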
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
Error-Detecting Identification Codes for Algebra Students.
ERIC Educational Resources Information Center
Sutherland, David C.
1990-01-01
Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
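As a concrete illustration of the kind of check-digit scheme the article surveys, here is a minimal Python sketch of the ISBN-10 test: the weighted digit sum 10*d1 + 9*d2 + ... + 1*d10 must vanish modulo 11, with 'X' standing for the digit 10. The helper name and input handling are illustrative only.

    def isbn10_ok(isbn):
        """Return True if the weighted ISBN-10 digit sum is divisible by 11."""
        digits = [10 if c in "Xx" else int(c) for c in isbn if c not in "- "]
        if len(digits) != 10:
            return False
        total = sum(w * d for w, d in zip(range(10, 0, -1), digits))
        return total % 11 == 0

    print(isbn10_ok("0-306-40615-2"))   # True: valid check digit
    print(isbn10_ok("0-306-40615-3"))   # False: a single-digit error is detected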
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
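A minimal sketch of how the quoted X.25 generator polynomial X^16+X^12+X^5+1 is used for error detection: the message polynomial is multiplied by X^16, divided by g(X), and the 16-bit remainder is appended; the receiver's division then yields a zero remainder when no error is detected. This shows the binary long division only; the full X.25 frame-check sequence additionally specifies register preset and inversion conventions that are omitted here.

    G = 0x11021                              # x^16 + x^12 + x^5 + 1, the quoted generator

    def crc16_remainder(bits):
        """Remainder of bits(x) * x^16 divided by g(x); bits is a 0/1 list, MSB first."""
        reg = 0
        for b in bits + [0] * 16:            # append 16 zeros, then do binary long division
            reg = (reg << 1) | b
            if reg & (1 << 16):
                reg ^= G
        return reg

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    fcs = crc16_remainder(msg)
    codeword = msg + [(fcs >> i) & 1 for i in range(15, -1, -1)]   # append check bits
    assert crc16_remainder(codeword) == 0    # receiver: zero remainder, no error detected
    codeword[2] ^= 1                         # any single-bit error ...
    assert crc16_remainder(codeword) != 0    # ... leaves a nonzero remainder and is detected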
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
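The idea of reading a single byte error's location and value directly from the syndromes can be sketched for a toy distance-4 code over GF(2^8) whose parity checks are S0 = sum(r_i) and S1 = sum(alpha^i * r_i); this is an illustrative construction, not the exact extended RS codes of the paper. A single error e at position j gives S0 = e and S1 = alpha^j * e, so j = log_alpha(S1/S0) and the value is S0 itself.

    # Build exp/log tables for GF(2^8) with primitive polynomial 0x11D (an assumed choice).
    PRIM = 0x11D
    EXP, LOG = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        EXP[i] = x
        LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= PRIM
    for i in range(255, 512):
        EXP[i] = EXP[i - 255]

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def decode_single_byte_error(received):
        """Locate and correct one byte error directly from the two syndromes."""
        s0 = s1 = 0
        for i, r in enumerate(received):
            s0 ^= r
            s1 ^= gf_mul(EXP[i], r)
        if s0 == 0 and s1 == 0:
            return received, None                # no error detected
        if s0 == 0 or s1 == 0:
            return received, "uncorrectable"      # syndromes inconsistent with one error
        loc = (LOG[s1] - LOG[s0]) % 255           # error position from the syndrome ratio
        if loc >= len(received):
            return received, "uncorrectable"
        corrected = list(received)
        corrected[loc] ^= s0                      # error value equals S0
        return corrected, loc

    # The all-zero word is a codeword; inject a single byte error and recover it.
    word = [0] * 16
    word[5] ^= 0x3A
    fixed, where = decode_single_byte_error(word)
    print(where, fixed == [0] * 16)               # -> 5 True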
Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels
NASA Technical Reports Server (NTRS)
Moher, Michael L.; Lodge, John H.
1990-01-01
A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person,Suzette; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
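A toy sketch of the row-parity-to-erasure step described above, assuming even parity on each row of a 9 x 8 bit matrix (the column BCH-Hamming structure and the Reed-Solomon erasure correction themselves are omitted): rows whose parity check fails are flagged and would be handed to the outer decoder as erasures.

    import random

    ROWS, COLS = 9, 8

    def make_matrix():
        """Random 9 x 8 bit matrix with even parity on every row (toy stand-in for a codeword)."""
        m = []
        for _ in range(ROWS):
            data = [random.randint(0, 1) for _ in range(COLS - 1)]
            data.append(sum(data) % 2)               # row parity bit
            m.append(data)
        return m

    def rows_to_erase(matrix):
        """Rows whose parity fails are treated as erasures by the outer decoder."""
        return [i for i, row in enumerate(matrix) if sum(row) % 2 != 0]

    m = make_matrix()
    m[3][2] ^= 1                                     # corrupt one bit somewhere
    print(rows_to_erase(m))                          # -> [3]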
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m_1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long Fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing shorter inner Fire code for burst error correction. The inner Fire code is an (80,64) code shortened from the (630,614) code, and it is used to correct a single burst error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single burst error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as the UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst error correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes
NASA Astrophysics Data System (ADS)
Marvian, Milad; Lidar, Daniel A.
2017-01-01
We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.
Marvian, Milad; Lidar, Daniel A
2017-01-20
We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ^(2)-interaction based quantum computation in multimode Fock bases: the χ^(2) parity-check code, the χ^(2) embedded error-correcting code, and the χ^(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ^(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ^(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ^(2) parity-check code and the χ^(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ^(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Numerical ‘health check’ for scientific codes: the CADNA approach
NASA Astrophysics Data System (ADS)
Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.
2007-04-01
Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
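As a reminder of the kind of silent damage such a numerical 'health check' is meant to expose, the snippet below shows a classic cancellation example in double precision. It is not the CADNA API (CADNA instead uses stochastic, CESTAC-style arithmetic to estimate the number of trustworthy digits); it only illustrates how round-off can quietly erode a result.

    import math

    x = 1.0e12
    naive  = math.sqrt(x + 1.0) - math.sqrt(x)            # difference of nearly equal numbers
    stable = 1.0 / (math.sqrt(x + 1.0) + math.sqrt(x))    # algebraically identical rewrite
    print(naive)    # ~5.0e-07, but only the first few digits are meaningful
    print(stable)   # the same quantity, accurate to nearly full double precision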
Error-correcting codes in computer arithmetic.
NASA Technical Reports Server (NTRS)
Massey, J. L.; Garcia, O. N.
1972-01-01
Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
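One simple way to "incorporate redundancy into the numbers themselves" is an AN code: every operand N is carried as A*N, addition and subtraction preserve divisibility by A, and any result that is not divisible by A is flagged as erroneous. The sketch below uses A = 3 purely for illustration; practical arithmetic codes choose larger, carefully selected check factors.

    A = 3                                        # assumed toy check factor

    def encode(n):  return A * n
    def decode(cw): return cw // A
    def check(cw):  return cw % A == 0           # any result not divisible by A is flagged

    a, b = encode(17), encode(25)
    s = a + b                                    # addition is performed on encoded operands
    assert check(s) and decode(s) == 42

    s_faulty = s ^ (1 << 2)                      # a single-bit fault in the adder output
    print(check(s_faulty))                       # -> False: the fault is detected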
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code for maximally entangled n-qudit case, which could both detect and automatically correct any arbitrary phase-change error, or any phase-flip error, or any bit-flip error, or combined error of all types of error.
Simultaneous message framing and error detection
NASA Technical Reports Server (NTRS)
Frey, A. H., Jr.
1968-01-01
Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
...punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data ...
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (C_o) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (C_i) characterized by (n_i, k_i). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) E_s/N_0 and a bit SNR E_b/N_0. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
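A minimal sketch of the block-interleaving step described above: a depth of I outer-code words is written row by row and transmitted column by column, so a channel burst is spread across different words. The parameters below are illustrative, not those of the (63,56)/convolutional example.

    def interleave(words, depth):
        """words: list of `depth` code words (each a list of symbols); returns columns."""
        return [[words[r][c] for r in range(depth)] for c in range(len(words[0]))]

    def deinterleave(columns, depth):
        n = len(columns)
        return [[columns[c][r] for c in range(n)] for r in range(depth)]

    cw = [[f"w{r}s{c}" for c in range(4)] for r in range(3)]   # 3 words of 4 symbols
    tx = interleave(cw, 3)
    assert deinterleave(tx, 3) == cw
    # A burst hitting consecutive transmitted symbols now lands in different words,
    # so each outer-code word sees only a few symbol errors, which it can handle.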
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst-error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
Throughput of Coded Optical CDMA Systems with AND Detectors
NASA Astrophysics Data System (ADS)
Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.
2012-09-01
Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiencies and improved error rate performance. The results show that the use of AND detectors significantly improve the performance of an optical channel.
MacWilliams Identity for M-Spotty Weight Enumerator
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Fujiwara, Eiji
M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
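For orientation, the Hamming-weight special case referred to in the last sentence (t = 1 or t = b) is the classical binary MacWilliams identity relating a linear [n, k] code C to its dual C^perp:

    W_{C}(x,y) \;=\; \sum_{i=0}^{n} A_i\, x^{\,n-i} y^{\,i},
    \qquad
    W_{C^{\perp}}(x,y) \;=\; \frac{1}{|C|}\, W_{C}(x+y,\; x-y)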
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Runtime Detection of C-Style Errors in UPC Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirkelbauer, P; Liao, C; Panas, T
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
NASA Technical Reports Server (NTRS)
Wade, Randall S.; Jones, Bailey
2009-01-01
A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").
Optimum Cyclic Redundancy Codes for Noisy Channels
NASA Technical Reports Server (NTRS)
Posner, E. C.; Merkey, P.
1986-01-01
Capabilities and limitations of cyclic redundancy codes (CRC's) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) discussed in 16-page report. Due to prevalent use of bytes (multiples of 8 bits) in data transmission, report primarily concerned with cases in which both block length and number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
Concurrent error detecting codes for arithmetic processors
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
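A minimal sketch of concurrent checking with a low-cost residue code, using l = 3 and check base m = 2^l - 1 = 7 as assumed illustrative parameters (the report treats general l and the remaining operations as well): the main adder and an independent residue adder run in parallel, and a mismatch between the result's residue and the residue-path result flags an error.

    L = 3
    M = (1 << L) - 1                      # check base 2^l - 1

    def residue(x):
        return x % M                      # in hardware: end-around-carry addition of l-bit groups

    def checked_add(a, b, ra, rb):
        """Add the operands and, concurrently, their residues; compare afterwards."""
        s = a + b                         # main adder (could be faulty)
        rs = (ra + rb) % M                # independent residue adder
        ok = residue(s) == rs             # mismatch flags an error in either unit
        return s, rs, ok

    a, b = 45, 99
    print(checked_add(a, b, residue(a), residue(b)))   # -> (144, 4, True)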
Software error data collection and categorization
NASA Technical Reports Server (NTRS)
Ostrand, T. J.; Weyuker, E. J.
1982-01-01
Software errors detected during development of an interactive special purpose editor system were studied. This product was followed during nine months of coding, unit testing, function testing, and system testing. A new error categorization scheme was developed.
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were studied to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detail review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors remaining at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for possible improvements. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness at avoiding and detecting errors.
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate-1/2 convolutional coding is given. The performance loss due to the Rician fading was shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is described. This scheme exhibits similar performance to coherent detection at high bit error rates, while it is superior at lower bit error rates.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1
Enhanced fault-tolerant quantum computing in d-level systems.
Campbell, Earl T
2014-12-05
Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.
The role of the insula in intuitive expert bug detection in computer code: an fMRI study.
Castelhano, Joao; Duarte, Isabel C; Ferreira, Carlos; Duraes, Joao; Madeira, Henrique; Castelo-Branco, Miguel
2018-05-09
Software programming is a complex and relatively recent human activity, involving the integration of mathematical, recursive thinking and language processing. The neural correlates of this recent human activity are still poorly understood. Error monitoring during this type of task, requiring the integration of language, logical symbol manipulation and other mathematical skills, is particularly challenging. We therefore aimed to investigate the neural correlates of decision-making during source code understanding and mental manipulation in professional participants with high expertise. The present fMRI study directly addressed error monitoring during source code comprehension, expert bug detection and decision-making. We used C code, which triggers the same sort of processing irrespective of the native language of the programmer. We discovered a distinct role for the insula in bug monitoring and detection and a novel connectivity pattern that goes beyond the expected activation pattern evoked by source code understanding in semantic language and mathematical processing regions. Importantly, insula activity levels were critically related to the quality of error detection, involving intuition, as signalled by reported initial bug suspicion, prior to final decision and bug detection. Activity in this salience network (SN) region evoked by bug suspicion was predictive of bug detection precision, suggesting that it encodes the quality of the behavioral evidence. Connectivity analysis provided evidence for top-down circuit "reutilization" stemming from anterior cingulate cortex (BA32), a core region in the SN that evolved for complex error monitoring such as required for this type of recent human activity. Cingulate (BA32) and anterolateral (BA10) frontal regions causally modulated decision processes in the insula, which in turn was related to activity of math processing regions in early parietal cortex. In other words, earlier brain regions used during evolution for other functions seem to be reutilized in a top-down manner for a new complex function, in an analogous manner as described for other cultural creations such as reading and literacy.
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
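The abstract does not give the exact check-bit equations of the modified Hamming code, so the sketch below is only a generic illustration of 32-bit SEC-DED (single-error-correcting, double-error-detecting) encoding and syndrome decoding; the (39,32) layout, bit ordering, and helper names are assumptions, not the EDAC unit's actual design.

```python
# Hedged sketch: a (39,32) extended Hamming SEC-DED encoder/decoder in the spirit of
# the modified-Hamming EDAC described above. Bit layout and check-bit placement are
# illustrative assumptions, not the delivered design.

def hamming_secded_encode(data, n_data=32):
    """Encode n_data bits into Hamming check bits plus an overall parity bit."""
    r = 0
    while (1 << r) < n_data + r + 1:    # number of check bits
        r += 1
    n = n_data + r                      # positions 1..n; powers of two hold check bits
    word = [0] * (n + 1)                # index 0 unused
    d = 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):             # not a power of two -> data position
            word[pos] = (data >> d) & 1
            d += 1
    for i in range(r):                  # fill each check bit from the positions it covers
        p = 1 << i
        word[p] = sum(word[pos] for pos in range(1, n + 1) if pos & p) & 1
    overall = sum(word[1:]) & 1         # extra parity bit enables double-error detection
    return word[1:], overall

def hamming_secded_decode(bits, overall):
    n = len(bits)
    word = [0] + list(bits)
    syndrome = 0
    for i in range(n.bit_length()):
        p = 1 << i
        if p > n:
            break
        if sum(word[pos] for pos in range(1, n + 1) if pos & p) & 1:
            syndrome |= p
    parity_ok = (sum(word[1:]) & 1) == overall
    if syndrome == 0 and parity_ok:
        status = "no error"
    elif syndrome != 0 and not parity_ok:
        word[syndrome] ^= 1             # single-bit error: syndrome points at it
        status = f"corrected bit {syndrome}"
    elif syndrome != 0 and parity_ok:
        status = "double error detected (uncorrectable)"
    else:
        status = "error in overall parity bit"
    data, d = 0, 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):
            data |= word[pos] << d
            d += 1
    return data, status

bits, parity = hamming_secded_encode(0xDEADBEEF)
bits[7] ^= 1                            # inject a single-bit error
data, status = hamming_secded_decode(bits, parity)
print(hex(data), status)                # 0xdeadbeef, corrected bit 8
```

Flipping any single bit of the codeword (data, check, or overall parity) is corrected, while two flipped bits are reported as an uncorrectable double error, which is the behavior the EDAC abstract describes.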
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
Influence of incident angle on the decoding in laser polarization encoding guidance
NASA Astrophysics Data System (ADS)
Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan
2009-07-01
Dynamic detection of polarization states is very important for laser polarization coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization coding guidance was designed. The detection process for normally incident polarized light is analyzed with the Jones matrix; the system can effectively detect changes in polarization. The influence of non-normal incidence on the performance of the polarization decoding and detection system is also studied. The analysis shows that changes in incident angle degrade the measurement results; the non-normal-incidence influence is mainly caused by second-order birefringence and the polarization-sensitivity effect generated in the phase-delay plate and the polarizing beam-splitter prism. Combined with the Fresnel formulas, the decoding errors of linearly polarized, elliptically polarized, and circularly polarized light are calculated for different incident angles at the detector; the results show that the decoding errors increase as the incident angle increases. The decoding errors depend on the geometry and refractive index of the wave plate and the polarizing beam-splitter prism, and can be reduced by using a thin low-order wave plate. Simulations of the detection of polarized light at different incident angles confirm these conclusions.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
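The patent summary above does not specify the asymmetric code itself. As a hedged illustration of why an asymmetric error model needs less redundancy, the sketch below uses a Berger code, a classic construction that detects all unidirectional (for example, 1-to-0) errors by appending the count of zero bits; the function names and the 8-bit example word are hypothetical.

```python
# Hedged illustration only: the patented asymmetric code is not reproduced here.
# A Berger code appends the binary count of zero bits; any purely 1->0 upset pattern
# raises the zero count in the data while it can only lower the stored check value,
# so all unidirectional errors are detected.

def berger_encode(data_bits):
    """Append the count of zeros in data_bits as a binary check field."""
    zeros = data_bits.count(0)
    k = max(1, len(data_bits)).bit_length()          # check-field width
    check = [(zeros >> i) & 1 for i in reversed(range(k))]
    return data_bits + check

def berger_check(codeword, n_data):
    data, check = codeword[:n_data], codeword[n_data:]
    return data.count(0) == int("".join(map(str, check)), 2)

word = berger_encode([1, 0, 1, 1, 0, 1, 1, 1])
assert berger_check(word, 8)
word[0] = 0                                          # radiation-induced 1->0 flip
assert not berger_check(word, 8)                     # unidirectional error is caught
```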
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
Fault-Tolerant Signal Processing Architectures with Distributed Error Control.
1985-01-01
Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications. ... However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to ... significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and ...
Disjointness of Stabilizer Codes and Limitations on Fault-Tolerant Logical Gates
NASA Astrophysics Data System (ADS)
Jochym-O'Connor, Tomas; Kubica, Aleksander; Yoder, Theodore J.
2018-04-01
Stabilizer codes are among the most successful quantum error-correcting codes, yet they have important limitations on their ability to fault tolerantly compute. Here, we introduce a new quantity, the disjointness of the stabilizer code, which, roughly speaking, is the number of mostly nonoverlapping representations of any given nontrivial logical Pauli operator. The notion of disjointness proves useful in limiting transversal gates on any error-detecting stabilizer code to a finite level of the Clifford hierarchy. For code families, we can similarly restrict logical operators implemented by constant-depth circuits. For instance, we show that it is impossible, with a constant-depth but possibly geometrically nonlocal circuit, to implement a logical non-Clifford gate on the standard two-dimensional surface code.
Integrating automated structured analysis and design with Ada programming support environments
NASA Technical Reports Server (NTRS)
Hecht, Alan; Simmons, Andy
1986-01-01
Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification, derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance; it can also lead to the creation of usable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct and aid in finding obscure coding errors; however, they do not have the capability to detect errors in specifications or to detect poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.
Reed-Solomon Codes and the Deep Hole Problem
NASA Astrophysics Data System (ADS)
Keti, Matt
In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field Fq or Fq*, as well as in new contexts, when it is based on a subgroup of Fq* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
Error probability for RFID SAW tags with pulse position coding and peak-pulse detection.
Shmaliy, Yuriy S; Plessky, Victor; Cerda-Villafaña, Gustavo; Ibarra-Manzano, Oscar
2012-11-01
This paper addresses the code reading error probability (EP) in radio-frequency identification (RFID) SAW tags with pulse position coding (PPC) and peak-pulse detection. EP is found in a most general form, assuming M groups of codes with N slots each and allowing individual SNRs in each slot. The basic case of zero signal in all off-pulses and equal signals in all on-pulses is investigated in detail. We show that if a SAW tag with PPC is designed such that the spurious responses are attenuated by more than 20 dB below on-pulses, then EP can be achieved at the level of 10^-8 (one false read per 10^8 readings) with SNR >17 dB for any reasonable M and N. The tag reader range is estimated as a function of the transmitted power and EP.
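A rough Monte Carlo cross-check of the error-probability quantity described above can be sketched as follows; the Gaussian slot statistics, the -20 dB spurious level, and the parameter values are simplifying assumptions for illustration and will not reproduce the paper's closed-form results exactly.

```python
# Hedged Monte Carlo sketch: M groups of N slots, one on-pulse per group, peak-pulse
# detection picks the largest slot in each group. Slot samples are modeled as Gaussian
# for simplicity; the paper's statistics and SNR definition may differ.
import numpy as np

def ep_monte_carlo(M=4, N=16, snr_db=17.0, spur_db=-20.0, trials=100_000, rng=None):
    """Estimate the probability that at least one group is decoded to the wrong slot."""
    rng = rng or np.random.default_rng(1)
    a_on = 10 ** (snr_db / 20)              # on-pulse amplitude relative to unit noise
    a_spur = a_on * 10 ** (spur_db / 20)    # residual amplitude in the off slots
    errors = 0
    for _ in range(trials):
        mean = np.full((M, N), a_spur)      # group x slot means; slot 0 carries the pulse
        mean[:, 0] = a_on
        samples = mean + rng.standard_normal((M, N))
        if np.any(samples.argmax(axis=1) != 0):   # peak-pulse detection per group
            errors += 1
    return errors / trials

print(ep_monte_carlo())
```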
Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria
2016-01-13
Brain monitoring of errors in one's own and other's actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first- or third-person perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and sense of embodiment. Error positivity was similar in the first-person (1PP) and third-person (3PP) perspectives, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they are one's own errors. Detecting errors in other's actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they are one's own errors. The results have important implications for understanding how the brain can control the external world and thus for creating new brain-computer interfaces. Copyright © 2016 the authors.
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
Fault-Tolerant Computing: An Overview
1991-06-01
Addison Wesley, Reading, MA, 1984. [8] J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications (Elsevier North Holland, Inc., New York). ... applicable to bit-sliced organizations of hardware. In the first time step, the normal computation is performed on the operands and the results ... for error detection and fault tolerance in parallel processor systems while performing specific computation-intensive applications [11]. Contrary to ...
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Improvements in the MGA Code Provide Flexibility and Better Error Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhter, W D; Kerr, J
2005-05-26
The Multi-Group Analysis (MGA) code is widely used to determine nondestructively the relative isotopic abundances of plutonium by gamma-ray spectrometry. MGA users have expressed concern about the lack of flexibility and transparency in the code. Users often have to ask the code developers for modifications to the code to accommodate new measurement situations, such as additional peaks being present in the plutonium spectrum or expected peaks being absent. We are testing several new improvements to a prototype, general gamma-ray isotopic analysis tool with the intent of either revising or replacing the MGA code. These improvements will give the user the ability to modify, add, or delete the gamma- and x-ray energies and branching intensities used by the code in determining a more precise gain and in the determination of the relative detection efficiency. We have also fully integrated the determination of the relative isotopic abundances with the determination of the relative detection efficiency to provide a more accurate determination of the errors in the relative isotopic abundances. We provide details in this paper on these improvements and a comparison of results obtained with current versions of the MGA code.
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Applications and error correction for adiabatic quantum optimization
NASA Astrophysics Data System (ADS)
Pudenz, Kristen
Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
Characterizing a four-qubit planar lattice for arbitrary error detection
NASA Astrophysics Data System (ADS)
Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias
2015-05-01
Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The problem of low accuracy and low efficiency in off-line detection of thermal-test low-frequency cable faults can be addressed by a cable fault detection system that uses an FPGA-generated M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread-spectrum time-domain reflectometry) reflection method and the hardware on-line monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault location along the thermal-test low-frequency cable.
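The core of the SSTDR method described above can be sketched in a few lines: a maximal-length sequence from a linear feedback shift register is used as the probe signal, and the lag of the correlation peak between the received and reference sequences gives the fault location. The LFSR taps, reflection coefficient, and delay below are illustrative assumptions, not the paper's hardware parameters.

```python
# Hedged sketch of the SSTDR idea: launch an m-sequence, correlate the return with the
# reference, and read the fault distance off the lag of the (secondary) correlation peak.
import numpy as np

def lfsr_msequence(taps=(7, 6), nbits=7, seed=1):
    """One period (2^nbits - 1 chips) of a +/-1 maximal-length sequence (x^7 + x^6 + 1)."""
    state, seq = seed, []
    for _ in range((1 << nbits) - 1):
        seq.append(1.0 if state & 1 else -1.0)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return np.array(seq)

def locate_fault(ref, delay_chips=40, reflection=-0.3, noise_std=0.1, rng=None):
    """Simulate an echo from a fault and return the lag of the correlation peak."""
    rng = rng or np.random.default_rng(0)
    echo = np.roll(ref, delay_chips) * reflection
    received = ref + echo + noise_std * rng.standard_normal(ref.size)
    # circular cross-correlation of the received signal with shifted copies of the reference
    corr = np.array([np.dot(received, np.roll(ref, k)) for k in range(ref.size)])
    corr[0] = 0.0                          # ignore the direct (incident) peak at lag 0
    return int(np.argmax(np.abs(corr)))

ref = lfsr_msequence()
print(locate_fault(ref))                   # prints the simulated delay, 40 chips
```

In a real measurement the chip rate and propagation velocity convert the peak lag into a distance; the weakening of the echo with distance is consistent with the location-dependent error the abstract reports, though the sketch does not model that attenuation.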
Implementing a strand of a scalable fault-tolerant quantum computing fabric.
Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M
2014-06-24
With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.
A fault-tolerant information processing concept for space vehicles.
NASA Technical Reports Server (NTRS)
Hopkins, A. L., Jr.
1971-01-01
A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.
Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L
2008-02-01
To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.
A Categorization of Dynamic Analyzers
NASA Technical Reports Server (NTRS)
Lujan, Michelle R.
1997-01-01
Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Validation of a general practice audit and data extraction tool.
Peiris, David; Agaliotis, Maria; Patel, Bindu; Patel, Anushka
2013-11-01
We assessed how accurately a common general practitioner (GP) audit tool extracts data from two software systems. First, pathology test codes were audited at 33 practices covering nine companies. Second, a manual audit of chronic disease data from 200 random patient records at two practices was compared with audit tool data. Pathology review: all companies assigned correct codes for cholesterol, creatinine and glycated haemoglobin; four companies assigned incorrect codes for albuminuria tests, precluding accurate detection with the audit tool. Case record review: there was strong agreement between the manual audit and the tool for all variables except chronic kidney disease diagnoses, which was due to a tool-related programming error. The audit tool accurately detected most chronic disease data in two GP record systems. The one exception, however, highlights the importance of surveillance systems to promptly identify errors. This will maximise potential for audit tools to improve healthcare quality.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
Performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correcting code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique with QAM modulation and a code rate of 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise-power-spectral-density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in the OFDM system. To achieve a BER of 10^-3, the SISO-OFDM scheme requires 10 dB SNR; the 2x2 MIMO-OFDM scheme also requires 10 dB, while the 4x4 MIMO-OFDM scheme requires 5 dB, and adding convolutional coding to the 4x4 MIMO-OFDM scheme improves performance to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the 4x4 MIMO-OFDM system without coding, of 7 dB over the 2x2 MIMO-OFDM system, and a significant power saving over the SISO-OFDM system.
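The BER-versus-Eb/No comparison described above involves many components (OFDM, QAM, convolutional coding, STBC). The sketch below isolates just the space-time diversity ingredient, comparing a single-antenna BPSK link with a 2x1 Alamouti-coded link over flat Rayleigh fading; subcarrier count, QAM order, and the convolutional code are omitted, and the per-antenna power normalization and Eb/No bookkeeping are simplifying assumptions.

```python
# Hedged BER-vs-Eb/No sketch: SISO BPSK versus 2x1 Alamouti STBC over flat Rayleigh fading.
import numpy as np

def ber_siso(ebno_db, nbits=200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    bits = rng.integers(0, 2, nbits)
    s = 2.0 * bits - 1.0                                   # BPSK symbols
    h = (rng.standard_normal(nbits) + 1j * rng.standard_normal(nbits)) / np.sqrt(2)
    n0 = 10 ** (-ebno_db / 10)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(nbits) + 1j * rng.standard_normal(nbits))
    r = h * s + noise
    bits_hat = (np.real(np.conj(h) * r) > 0).astype(int)   # coherent detection
    return np.mean(bits_hat != bits)

def ber_alamouti_2x1(ebno_db, nbits=200_000, rng=None):
    rng = rng or np.random.default_rng(1)
    bits = rng.integers(0, 2, nbits)
    s = (2.0 * bits - 1.0).reshape(-1, 2)                  # symbol pairs (s1, s2)
    npairs = s.shape[0]
    h = (rng.standard_normal((npairs, 2)) + 1j * rng.standard_normal((npairs, 2))) / np.sqrt(2)
    n0 = 10 ** (-ebno_db / 10)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal((npairs, 2)) + 1j * rng.standard_normal((npairs, 2)))
    # two slots; transmit power per slot split equally between the two antennas
    r1 = (h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1]) / np.sqrt(2) + noise[:, 0]
    r2 = (-h[:, 0] * s[:, 1] + h[:, 1] * s[:, 0]) / np.sqrt(2) + noise[:, 1]
    s1_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2) # Alamouti combining
    s2_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)
    bits_hat = (np.real(np.stack([s1_hat, s2_hat], axis=1)) > 0).astype(int).reshape(-1)
    return np.mean(bits_hat != bits)

for ebno in (0, 5, 10):
    print(ebno, ber_siso(ebno), ber_alamouti_2x1(ebno))    # diversity lowers BER at each Eb/No
```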
Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-05-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian Manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on foreground dictionary, which is extracted from the first stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods-highlight-inhibition function, context-based saliency weighting, and the graph cut-are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
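A minimal sketch of the Log-Euclidean machinery the method relies on, assuming SciPy's matrix logarithm: each region is summarized by a regularized covariance of per-pixel features, and covariances are compared through the Frobenius norm of the difference of their logarithms, or a Gaussian kernel built on it. The five-dimensional feature choice and the kernel width are illustrative assumptions.

```python
# Hedged sketch of region-covariance comparison on the SPD manifold via the
# Log-Euclidean metric; the full two-stage sparse coding pipeline is not reproduced.
import numpy as np
from scipy.linalg import logm

def region_covariance(features, eps=1e-6):
    """Covariance of per-pixel feature vectors (rows), regularized to stay SPD."""
    cov = np.cov(features, rowvar=False)
    return cov + eps * np.eye(cov.shape[0])

def log_euclidean_distance(c1, c2):
    """Frobenius norm of the difference of matrix logarithms."""
    return np.linalg.norm(logm(c1) - logm(c2), "fro")

def log_euclidean_gaussian_kernel(c1, c2, sigma=1.0):
    return np.exp(-log_euclidean_distance(c1, c2) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
fg = rng.standard_normal((500, 5))              # e.g. (x, y, L, a, b) features per pixel
bg = 2.0 * rng.standard_normal((500, 5)) + 1.0
c_fg, c_bg = region_covariance(fg), region_covariance(bg)
print(log_euclidean_distance(c_fg, c_bg), log_euclidean_gaussian_kernel(c_fg, c_bg))
```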
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
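A hedged sketch of the first algorithm follows: device reads are intercepted and ANDed with a device-specific mask so the BIT routine observes values it does not expect. The device names, masks, and toy BIT routine below are hypothetical; the real instrumentation lives inside the software under test.

```python
# Hedged sketch of error injection by AND-masking intercepted device reads.
DEVICE_ERROR_MASKS = {
    "status_register": 0xFFFE,   # force bit 0 (ready flag) low
    "bite_memory":     0x00FF,   # force the upper byte low
}

INJECT = {"status_register": True, "bite_memory": False}

def read_device(device, raw_read):
    """Wrapper placed between the BIT routine and the real device read."""
    value = raw_read(device)
    if INJECT.get(device):
        value &= DEVICE_ERROR_MASKS[device]   # simulate a stuck-low failure
    return value

def bit_check_status(read):
    """Toy BIT routine: pass only if the ready bit (bit 0) is set."""
    return (read("status_register", lambda dev: 0x0003) & 0x0001) == 0x0001

print(bit_check_status(read_device))   # False: the BIT routine flags the injected error
```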
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools put in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault, which caused the job failure. Human time and knowledge is required to manually trace back errors to the real fault underlying an error. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. Therewith, problematic grid components are located automatically and this information - expressed by association rules - is visualized in a web interface. This work achieves a decrease in time for fault recovery and yields an improvement of a grid's reliability.
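A small sketch of the rule-mining step, under the assumption that each grid job record is treated as a set of (attribute, value) items and rules of the form {site, exit_code} -> fault are ranked by support and confidence; the attribute names, thresholds, and toy records are illustrative, not those of the actual monitoring data.

```python
# Hedged sketch of association rule mining over grid job monitoring records.
from itertools import combinations
from collections import Counter

jobs = [
    {"site": "CERN", "exit_code": 1, "fault": "storage"},
    {"site": "CERN", "exit_code": 1, "fault": "storage"},
    {"site": "FNAL", "exit_code": 1, "fault": "network"},
    {"site": "CERN", "exit_code": 2, "fault": "storage"},
    {"site": "FNAL", "exit_code": 2, "fault": "network"},
]

def mine_rules(records, target="fault", min_support=0.2, min_confidence=0.8):
    """Return rules antecedent -> (target value) meeting the support/confidence thresholds."""
    n = len(records)
    items = lambda rec: [(k, v) for k, v in rec.items() if k != target]
    antecedent_counts, joint_counts = Counter(), Counter()
    for rec in records:
        for size in (1, 2):                              # antecedents of one or two items
            for ante in combinations(sorted(items(rec)), size):
                antecedent_counts[ante] += 1
                joint_counts[(ante, rec[target])] += 1
    rules = []
    for (ante, fault), joint in joint_counts.items():
        support, confidence = joint / n, joint / antecedent_counts[ante]
        if support >= min_support and confidence >= min_confidence:
            rules.append((dict(ante), fault, round(support, 2), round(confidence, 2)))
    return rules

for rule in mine_rules(jobs):
    print(rule)    # e.g. ({'site': 'CERN'}, 'storage', 0.6, 1.0)
```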
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
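A minimal sketch of the kind of model the authors advocate, assuming the simplest site-occupancy mixture with a true-positive detection probability p11 and a false-positive probability p10; the simulated data, starting values, and optimizer choice are illustrative.

```python
# Hedged sketch: maximum-likelihood fit of an occupancy model with false positives.
# Each site i has K replicate samples and y_i detections; the likelihood mixes the
# occupied (psi, p11) and unoccupied (1 - psi, p10) cases.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit keeps probabilities in (0, 1)

def neg_log_likelihood(params, detections, n_replicates):
    psi, p11, p10 = expit(params)
    k = n_replicates
    occupied   = psi * p11 ** detections * (1 - p11) ** (k - detections)
    unoccupied = (1 - psi) * p10 ** detections * (1 - p10) ** (k - detections)
    return -np.sum(np.log(occupied + unoccupied))

# simulate 200 sites with 6 PCR replicates each (illustrative truth values)
rng = np.random.default_rng(0)
psi_true, p11_true, p10_true, K = 0.6, 0.7, 0.05, 6
z = rng.random(200) < psi_true
y = rng.binomial(K, np.where(z, p11_true, p10_true))

fit = minimize(neg_log_likelihood, x0=np.zeros(3), args=(y, K), method="Nelder-Mead")
print(expit(fit.x))   # estimated (psi, p11, p10)
```

As the authors stress, such a mixture is only weakly identifiable on its own; in practice, prior information or ancillary detection data collected with a method not prone to false positives is needed to pin down the false-positive rate.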
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-08-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian Manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on foreground dictionary, which is extracted from the first stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods-highlight-inhibition function, context-based saliency weighting, and the graph cut-are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
Disruption of hierarchical predictive coding during sleep
Strauss, Melanie; Sitt, Jacobo D.; King, Jean-Remi; Elbaz, Maxime; Azizi, Leila; Buiatti, Marco; Naccache, Lionel; van Wassenhove, Virginie; Dehaene, Stanislas
2015-01-01
When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding. PMID:25737555
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, Fredric A.; Gosselin, Stephen R.; Doctor, Steven R.
2013-04-22
This document describes a new method to determine whether the flaws in a particular reactor pressure vessel are consistent with the assumptions regarding the number and sizes of flaws used in the analyses that formed the technical justification basis for the new voluntary alternative Pressurized Thermal Shock (PTS) rule (Draft 10 CFR 50.61a). The new methodology addresses concerns regarding prior methodology because ASME Code Section XI examinations do not detect all fabrication flaws, they have higher detection performance for some flaw types, and there are flaw sizing errors always present (e.g., significant oversizing of small flaws and systematic under-sizing of larger flaws). The new methodology allows direct comparison of ASME Code Section XI examination results with values in the PTS draft rule Tables 2 and 3 in order to determine if the number and sizes of flaws detected by an ASME Code Section XI examination are consistent with those assumed in the probabilistic fracture mechanics calculations performed in support of the development of 10 CFR 50.61a.
Methods, media, and systems for detecting attack on a digital processing device
Stolfo, Salvatore J.; Li, Wei-Jen; Keromytis, Angelos D.; Androulaki, Elli
2014-07-22
Methods, media, and systems for detecting attack are provided. In some embodiments, the methods include: comparing at least part of a document to a static detection model; determining whether attacking code is included in the document based on the comparison of the document to the static detection model; executing at least part of the document; determining whether attacking code is included in the document based on the execution of the at least part of the document; and if attacking code is determined to be included in the document based on at least one of the comparison of the document to the static detection model and the execution of the at least part of the document, reporting the presence of an attack. In some embodiments, the methods include: selecting a data segment in at least one portion of an electronic document; determining whether the arbitrarily selected data segment can be altered without causing the electronic document to result in an error when processed by a corresponding program; in response to determining that the arbitrarily selected data segment can be altered, arbitrarily altering the data segment in the at least one portion of the electronic document to produce an altered electronic document; and determining whether the corresponding program produces an error state when the altered electronic document is processed by the corresponding program.
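A toy sketch of the second method described above: a chosen data segment is arbitrarily altered and the document is re-processed; if the corresponding program still succeeds, that segment is a region whose alteration goes undetected. A JSON parser stands in for "the corresponding program" here, and the offsets and bit-flip choice are illustrative assumptions.

```python
# Hedged sketch: alter a byte range of a document and see whether the consuming
# program raises an error when it processes the altered copy.
import json

def segment_is_alterable(document: bytes, start: int, length: int, parser) -> bool:
    altered = bytearray(document)
    for i in range(start, min(start + length, len(altered))):
        altered[i] ^= 0x01                       # arbitrary alteration: flip the low bit
    try:
        parser(bytes(altered))
        return True                              # no error: the alteration is tolerated
    except Exception:
        return False                             # parser error: the alteration is detected

doc = json.dumps({"title": "report", "body": "x" * 32}).encode()
parse = lambda b: json.loads(b.decode("utf-8", errors="strict"))
print(segment_is_alterable(doc, 0, 4, parse))    # False: structural bytes break the parse
print(segment_is_alterable(doc, 30, 4, parse))   # True: bytes inside the string tolerate change
```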
Methods, media, and systems for detecting attack on a digital processing device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolfo, Salvatore J.; Li, Wei-Jen; Keromytis, Angelos D.
Methods, media, and systems for detecting attack are provided. In some embodiments, the methods include: comparing at least part of a document to a static detection model; determining whether attacking code is included in the document based on the comparison of the document to the static detection model; executing at least part of the document; determining whether attacking code is included in the document based on the execution of the at least part of the document; and if attacking code is determined to be included in the document based on at least one of the comparison of the document to the static detection model and the execution of the at least part of the document, reporting the presence of an attack. In some embodiments, the methods include: selecting a data segment in at least one portion of an electronic document; determining whether the arbitrarily selected data segment can be altered without causing the electronic document to result in an error when processed by a corresponding program; in response to determining that the arbitrarily selected data segment can be altered, arbitrarily altering the data segment in the at least one portion of the electronic document to produce an altered electronic document; and determining whether the corresponding program produces an error state when the altered electronic document is processed by the corresponding program.
Estimating actual evapotranspiration for forested sites: modifications to the Thornthwaite Model
Randall K. Kolka; Ann T. Wolf
1998-01-01
A previously coded version of the Thornthwaite water balance model was used to estimate annual actual evapotranspiration (AET) for 29 forested sites between 1900 and 1993 in the Upper Great Lakes area. AET was calculated in error for approximately 8 percent of the data sets. Errors were detected in months when estimated AET was greater than potential evapotranspiration. Annual...
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
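For readers who want a feel for how such a Monte Carlo check works, the sketch below estimates the probability that a square sampling grid detects a randomly placed and oriented elliptical hot spot. It is a minimal illustration only, not the ELIPGRID algorithm or the modified ELIPGRID-PC code, and the semi-axis and grid-spacing values are made up.

```python
# Minimal Monte Carlo sketch (not the ELIPGRID algorithm itself): estimate the
# probability that a square sampling grid with spacing `grid` detects an elliptical
# hot spot with semi-axes (a, b), randomly located and oriented.
import math
import random

def detected(a, b, grid):
    """One trial: drop an ellipse at a random offset/angle and test whether any
    node of an (effectively infinite) square grid falls inside it."""
    cx, cy = random.uniform(0, grid), random.uniform(0, grid)
    theta = random.uniform(0, math.pi)
    r = max(a, b)                       # only nodes within the bounding circle can hit
    lo_i, hi_i = math.floor((cx - r) / grid), math.ceil((cx + r) / grid)
    lo_j, hi_j = math.floor((cy - r) / grid), math.ceil((cy + r) / grid)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    for i in range(lo_i, hi_i + 1):
        for j in range(lo_j, hi_j + 1):
            dx, dy = i * grid - cx, j * grid - cy
            u = dx * cos_t + dy * sin_t   # rotate into the ellipse frame
            v = -dx * sin_t + dy * cos_t
            if (u / a) ** 2 + (v / b) ** 2 <= 1.0:
                return True
    return False

def detection_probability(a, b, grid, trials=100_000):
    return sum(detected(a, b, grid) for _ in range(trials)) / trials

if __name__ == "__main__":
    # Example: hot spot with semi-axes 5 m x 2 m sampled on a 10 m square grid.
    print(detection_probability(a=5.0, b=2.0, grid=10.0))
```

With 100,000 trials the Monte Carlo standard error is a few tenths of a percent, which is roughly the comparison tolerance quoted above.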
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.
NASA Astrophysics Data System (ADS)
Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf
2016-11-01
This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties, between intended and interfering subscribers, significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis by referring to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code, while using the SDD technique, provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
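As a back-of-the-envelope illustration of why proper code choices yield such reliability, the sketch below bounds the word-error probability of a t-error-correcting block code on a binary symmetric channel. The (n, t) values are illustrative and are not the specific inner or outer codes analysed in the paper.

```python
# Minimal sketch: word-error bound for a t-error-correcting length-n block code on a
# binary symmetric channel with crossover probability eps,
#   P(word error) <= sum_{i > t} C(n, i) * eps**i * (1 - eps)**(n - i),
# which shows how fast reliability improves once the code's correction power exceeds
# the typical number of channel errors.  The (n, t) pairs below are illustrative only.
from math import comb

def word_error_bound(n, t, eps):
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1, n + 1))

for eps in (0.1, 0.01):
    # e.g. a length-127 code correcting 15 errors vs. one correcting only 5
    print(eps, word_error_bound(127, 15, eps), word_error_bound(127, 5, eps))
```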
The use of self checks and voting in software error detection - An empirical study
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.
1990-01-01
The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
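The toy contrast below illustrates the distinction drawn above between voting on final outputs and self-checks that test a property of the computation itself. The three "versions" and the seeded fault are deliberately simple stand-ins, not the aerospace application or the program versions from the study.

```python
# Toy contrast between N-version voting and a self-check.  The three versions and the
# seeded fault are invented stand-ins for illustration only.
def version_a(x): return x ** 0.5
def version_b(x): return x ** 0.5 if x != 2.0 else 1.41   # seeded fault for x == 2.0
def version_c(x): return x ** 0.5

def vote(results, tol=1e-9):
    # Majority voting only compares the final outputs of the versions.
    for r in results:
        if sum(abs(r - s) <= tol for s in results) >= 2:
            return r
    raise RuntimeError("no majority")

def self_check(x, result, tol=1e-9):
    # A self-check can test a property of the computation (here: result**2 == x).
    return abs(result * result - x) <= tol

x = 2.0
results = [version_a(x), version_b(x), version_c(x)]
print("voted result:", vote(results))                                # masks version_b's fault
print("fault detected by self-check:", not self_check(x, version_b(x)))  # True
```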
Direct characterization of quantum dynamics with noisy ancilla
Dumitrescu, Eugene F.; Humble, Travis S.
2015-11-23
We present methods for the direct characterization of quantum dynamics (DCQD) in which both the principal and ancilla systems undergo noisy processes. Using a concatenated error detection code, we discriminate between located and unlocated errors on the principal system in what amounts to filtering of ancilla noise. The example of composite noise involving amplitude damping and depolarizing channels is used to demonstrate the method, while we find the rate of noise filtering is more generally dependent on code distance. Furthermore our results indicate the accuracy of quantum process characterization can be greatly improved while remaining within reach of current experimental capabilities.
NASA Technical Reports Server (NTRS)
Predina, Joseph P. (Inventor)
1989-01-01
A digital-to-synchro converter is provided where a binary input code specifies a desired shaft angle and where a resolver-type position transducer is employed with additional circuitry to generate a shaft position error signal indicative of the angular difference between the desired shaft angle and the actual shaft angle. The additional circuitry corrects for known and calculated errors in the shaft position detection process and equipment.
Modular space vehicle boards, control software, reprogramming, and failure recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin
A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.
Technology and medication errors: impact in nursing homes.
Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis
2014-01-01
The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed by total errors, medication error type, severity, and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescribers or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
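A minimal sketch of the two safeguards recommended above, two-level (nested) cross-validation and a label-permutation baseline, is given below using scikit-learn. The synthetic data, classifier, and parameter grid are placeholders rather than the datasets and models from the study.

```python
# Sketch of two-level (nested) cross-validation plus a label-permutation baseline.
# The synthetic data, classifier and parameter grid are placeholders for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=500, n_informative=10, random_state=0)

def nested_cv_error(X, y, seed=0):
    inner = StratifiedKFold(5, shuffle=True, random_state=seed)
    outer = StratifiedKFold(5, shuffle=True, random_state=seed + 1)
    # The inner loop tunes C; the outer loop estimates error on data unseen by tuning.
    model = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=inner)
    return 1.0 - cross_val_score(model, X, y, cv=outer).mean()

print("nested CV error:", nested_cv_error(X, y))

# Permutation baseline: with labels shuffled, an unbiased procedure should give an
# error close to the trivial rate (here 0.5); a markedly lower value signals bias.
rng = np.random.default_rng(0)
perm_errors = [nested_cv_error(X, rng.permutation(y), seed=s) for s in range(5)]
print("permutation-mean error:", np.mean(perm_errors))
```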
Statistical design and analysis for plant cover studies with multiple sources of observation errors
Wright, Wilson; Irvine, Kathryn M.; Warren, Jeffrey M.; Barnett, Jenny K.
2017-01-01
Effective wildlife habitat management and conservation requires understanding the factors influencing distribution and abundance of plant species. Field studies, however, have documented observation errors in visually estimated plant cover including measurements which differ from the true value (measurement error) and not observing a species that is present within a plot (detection error). Unlike the rapid expansion of occupancy and N-mixture models for analysing wildlife surveys, development of statistical models accounting for observation error in plants has not progressed quickly. Our work informs development of a monitoring protocol for managed wetlands within the National Wildlife Refuge System. Zero-augmented beta (ZAB) regression is the most suitable method for analysing areal plant cover recorded as a continuous proportion but assumes no observation errors. We present a model extension that explicitly includes the observation process thereby accounting for both measurement and detection errors. Using simulations, we compare our approach to a ZAB regression that ignores observation errors (naïve model) and an “ad hoc” approach using a composite of multiple observations per plot within the naïve model. We explore how sample size and within-season revisit design affect the ability to detect a change in mean plant cover between 2 years using our model. Explicitly modelling the observation process within our framework produced unbiased estimates and nominal coverage of model parameters. The naïve and “ad hoc” approaches resulted in underestimation of occurrence and overestimation of mean cover. The degree of bias was primarily driven by imperfect detection and its relationship with cover within a plot. Conversely, measurement error had minimal impacts on inferences. We found >30 plots with at least three within-season revisits achieved reasonable posterior probabilities for assessing change in mean plant cover. For rapid adoption and application, code for Bayesian estimation of our single-species ZAB with errors model is included. Practitioners utilizing our R-based simulation code can explore trade-offs among different survey efforts and parameter values, as we did, but tuned to their own investigation. Less abundant plant species of high ecological interest may warrant the additional cost of gathering multiple independent observations in order to guard against erroneous conclusions.
Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream
Egner, Tobias; Monti, Jim M.; Summerfield, Christopher
2014-01-01
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
Error floor behavior study of LDPC codes for concatenated codes design
NASA Astrophysics Data System (ADS)
Chen, Weigang; Yin, Liuguo; Lu, Jianhua
2007-11-01
Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the other symbols. As a result, it is desired to devise codes with multilevel error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
Research Prototype: Automated Analysis of Scientific and Engineering Semantics
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units, physical, and mathematical quantity). Also, the procedure implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state of the art scientific codes.
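The sketch below illustrates the general idea of propagating physical units through arithmetic and flagging dimensionally inconsistent operations. It is a toy stand-in, not the NASA research prototype, and the unit representation is a deliberately simplified assumption.

```python
# Toy sketch of the general idea (not the NASA prototype): attach base-unit exponents
# to quantities, propagate them through arithmetic, and flag dimensionally
# inconsistent additions such as adding a length to a velocity.
class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)           # e.g. {"m": 1, "s": -1} for a velocity

    def _same_units(self, other):
        return {k: v for k, v in self.units.items() if v} == \
               {k: v for k, v in other.units.items() if v}

    def __add__(self, other):
        if not self._same_units(other):
            raise ValueError(f"unit mismatch: {self.units} + {other.units}")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        units = dict(self.units)
        for k, v in other.units.items():
            units[k] = units.get(k, 0) + v
        return Quantity(self.value * other.value, units)

length = Quantity(3.0, {"m": 1})
time = Quantity(2.0, {"s": 1})
velocity = Quantity(1.5, {"m": 1, "s": -1})
print((velocity * time + length).value)    # consistent: both terms are lengths
try:
    velocity + length                      # dimensionally inconsistent
except ValueError as e:
    print("detected:", e)
```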
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits-per-character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters-decoded-in-error and of characters-printed-in-error-per-bit-error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular, block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most-promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved by using comma-free code word assignments based on conditional probabilities of character occurrence.
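The sketch below illustrates the error-propagation effect described above: in a variable-length prefix code, a single flipped bit can desynchronise the decoder for several characters. The four-symbol code table is invented for the demonstration and is not one of the codes from the study.

```python
# Toy illustration of error propagation in a variable-length prefix code: one flipped
# bit can desynchronise decoding for several characters.  The code table below is
# made up for the demonstration, not taken from the study.
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}

def encode(text):
    return "".join(CODE[ch] for ch in text)

def decode(bits):
    inverse, out, buf = {v: k for k, v in CODE.items()}, [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

message = "abcdabcdabcd"
bits = encode(message)
corrupted = bits[:5] + ("1" if bits[5] == "0" else "0") + bits[6:]   # flip bit 5
print(decode(bits))        # abcdabcdabcd
print(decode(corrupted))   # several characters after the flipped bit come out wrong
```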
NASA Technical Reports Server (NTRS)
1982-01-01
An effective data collection methodology for evaluating software development methodologies was applied to four different software development projects. Goals of the data collection included characterizing changes and errors, characterizing projects and programmers, identifying effective error detection and correction techniques, and investigating ripple effects. The data collected consisted of changes (including error corrections) made to the software after code was written and baselined, but before testing began. Data collection and validation were concurrent with software development. Changes reported were verified by interviews with programmers.
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performances. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECC) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECC's. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We shall demonstrate that using different combinations of BCH-based ECC's for different blocks and different recording conditions leads to different accuracy rates, and that using ECC's results in significantly improved recognition results.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1993-01-01
The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed Solomon outer code and a convolutional inner code versus a Reed Solomon only code scheme has been investigated as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
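As a rough illustration of how an interleaver disperses burst errors, the sketch below uses a simple block interleaver; the study itself used a Periodic Convolutional Interleaver, which this example does not reproduce.

```python
# Simple block-interleaver sketch showing how a burst of channel errors is spread out
# after de-interleaving.  A block interleaver is the simplest variant of the idea; it
# is not the Periodic Convolutional Interleaver studied in the report.
def interleave(symbols, rows, cols):
    # write row by row, read column by column
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # inverse permutation: write column by column, read row by row
    out = [None] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = symbols[k]
            k += 1
    return out

rows, cols = 4, 8
data = list(range(rows * cols))
sent = interleave(data, rows, cols)
# Burst of 4 consecutive channel errors (marked as None)
received = sent[:10] + [None] * 4 + sent[14:]
recovered = deinterleave(received, rows, cols)
print([i for i, s in enumerate(recovered) if s is None])   # burst lands in 4 different rows
```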
NASA Technical Reports Server (NTRS)
Presser, L.
1978-01-01
An integrated set of FORTRAN tools that are commercially available is described. The basic purpose of various tools is summarized and their economic impact highlighted. The areas addressed by these tools include: code auditing, error detection, program portability, program instrumentation, documentation, clerical aids, and quality assurance.
The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Shao, H. M.; Deutsch, L. J.
1987-01-01
The very large-scale integration (VLSI) architecture of a single chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system so that the decoder will recognize a failure to decode instead of introducing additional errors. This could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for received message is not included. This is in contrast to the older transform decoding algorithm which needs about 100,000 transistors. However, the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
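The sketch below shows direct-from-syndrome correction of a single symbol error in a (15, 13) Reed-Solomon code over GF(16), which conveys the flavour of the technique. The extended codes and the double-error case treated in the paper require additional algebra, and the field and code parameters here are chosen purely for illustration.

```python
# Sketch: single-error correction of a (15, 13) RS code over GF(2^4) directly from the
# syndromes S1 = r(alpha), S2 = r(alpha^2).  For a single error of value e at position
# k: S1 = e*alpha^k and S2 = e*alpha^(2k), so alpha^k = S2/S1 and e = S1^2/S2.
GF_EXP = [0] * 30          # exp/log tables for GF(16), primitive polynomial x^4 + x + 1
GF_LOG = [0] * 16
x = 1
for i in range(15):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
for i in range(15, 30):
    GF_EXP[i] = GF_EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else GF_EXP[GF_LOG[a] + GF_LOG[b]]

def gf_div(a, b):
    return 0 if a == 0 else GF_EXP[(GF_LOG[a] - GF_LOG[b]) % 15]

def syndromes(r):
    out = []
    for j in (1, 2):                       # S_j = r(alpha^j)
        s = 0
        for i, ri in enumerate(r):
            if ri:
                s ^= GF_EXP[(GF_LOG[ri] + j * i) % 15]
        out.append(s)
    return out

def correct_single_error(r):
    S1, S2 = syndromes(r)
    if S1 == 0 and S2 == 0:
        return r                           # no error detected
    pos = GF_LOG[gf_div(S2, S1)]           # alpha^pos = S2/S1
    val = gf_div(gf_mul(S1, S1), S2)       # e = S1^2 / S2
    r = list(r)
    r[pos] ^= val
    return r

# Demo: start from the all-zero codeword (trivially valid) and inject one symbol error.
received = [0] * 15
received[6] = 0b1011
print(correct_single_error(received))      # recovers the all-zero codeword
```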
Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design
1990-09-01
G. Robert Redinbo. I. INTRODUCTION. High-speed signal processing is an important application of ... techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all ... detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to ...
Reliable absolute analog code retrieval approach for 3D measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun
2017-11-01
The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both wrapped-phase errors and Gray-code decoding errors can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain the high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, and a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
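For orientation, the sketch below shows the conventional Gray-code-assisted unwrapping step on which such methods build: decode the Gray pattern to a fringe order k and add 2πk to the wrapped phase. The unequal-period codes and the period-jump elimination proposed in the paper are not reproduced.

```python
# Minimal sketch of conventional Gray-code-assisted phase unwrapping: the Gray pattern
# gives a fringe order k, and the absolute phase is phi + 2*pi*k.  The unequal-period
# codes and period-jump elimination of the paper are not reproduced here.
import math

def gray_to_binary(bits):
    """Decode a Gray-coded bit tuple (MSB first) to an integer fringe order."""
    value = bits[0]
    for b in bits[1:]:
        value = (value << 1) | ((value & 1) ^ b)
    return value

def absolute_phase(wrapped_phase, gray_bits):
    k = gray_to_binary(gray_bits)
    return wrapped_phase + 2 * math.pi * k

print(gray_to_binary((1, 1, 0)))        # Gray 110 -> fringe order 4
print(absolute_phase(1.2, (1, 1, 0)))   # 1.2 + 4 * 2*pi
```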
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. The net effective coding gain of 10 dB is obtained at BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Program summary: Program title: CADNA. Catalogue identifier: AEAT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 53 420. No. of bytes in distributed program, including test data, etc.: 566 495. Distribution format: tar.gz. Programming language: Fortran. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 4.14, 6.5, 20. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected. References: The CADNA library, URL address: http://www.lip6.fr/cadna. J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
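The toy sketch below conveys the random-rounding idea behind Discrete Stochastic Arithmetic in Python; CADNA itself is a Fortran library, so this illustrates the principle rather than its API, and the cancellation example is an assumption chosen for clarity.

```python
# Toy Python illustration of the random-rounding idea behind Discrete Stochastic
# Arithmetic (not the CADNA API): every arithmetic result is nudged up or down by one
# ulp at random, and repeating the computation reveals how many digits are reproducible.
import math
import random

class Stochastic:
    def __init__(self, v):
        self.v = float(v)

    def _perturb(self, x):
        direction = math.inf if random.random() < 0.5 else -math.inf
        return Stochastic(math.nextafter(x, direction))    # requires Python 3.9+

    def __add__(self, o): return self._perturb(self.v + o.v)
    def __sub__(self, o): return self._perturb(self.v - o.v)
    def __mul__(self, o): return self._perturb(self.v * o.v)
    def __truediv__(self, o): return self._perturb(self.v / o.v)

def significant_digits(samples):
    mean = sum(samples) / len(samples)
    spread = (sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
    return 15.0 if spread == 0 else max(0.0, math.log10(abs(mean) / spread))

def cancellation_example():
    # (1e8 + 0.1) - 1e8 suffers catastrophic cancellation
    a, b = Stochastic(1e8), Stochastic(0.1)
    return ((a + b) - a).v

runs = [cancellation_example() for _ in range(10)]
print(runs[0], "with about", round(significant_digits(runs)), "significant digits")
```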
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with the analysis of a Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. The number of users and the type of codes used in the optical system directly decide the performance of the system. MAI can be restricted by efficient design of optical codes and by implementing them with a unique architecture to accommodate a larger number of users. Hence, it is necessary to design a technique like spectral direct detection (SDD) with a modified double-weight code, which can provide better cardinality and good correlation properties.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
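A rough sketch of the underlying idea, smoothing and extrapolating a drifting error rate with a Gaussian process, is given below using scikit-learn. The synthetic drift, window size, and kernel are assumptions for illustration; this is not the authors' protocol or code.

```python
# Sketch: estimate a slowly drifting error rate from windowed error-correction
# statistics and let a Gaussian process smooth and extrapolate it.  The synthetic
# drift, window size and kernel are illustrative assumptions only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
rounds = np.arange(0, 200)
true_rate = 0.01 + 0.004 * np.sin(rounds / 40.0)           # slowly drifting error rate
# Each observation is the fraction of flagged errors in a window of 2000 checks.
observed = rng.binomial(2000, true_rate) / 2000.0

kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(rounds[:150, None], observed[:150])

# Predict (and extrapolate) the rate for the most recent rounds.
mean, std = gp.predict(rounds[150:, None], return_std=True)
print(np.c_[mean[:5], std[:5]])
```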
Mathematical fundamentals for the noise immunity of the genetic code.
Fimmel, Elena; Strüngmann, Lutz
2018-02-01
Symmetry is one of the essential and most visible patterns that can be seen in nature. Starting from the left-right symmetry of the human body, all types of symmetry can be found in crystals, plants, animals and nature as a whole. Similarly, principles of symmetry are also some of the fundamental and most useful tools in modern mathematical natural science that play a major role in theory and applications. As a consequence, it is not surprising that the desire to understand the origin of life, based on the genetic code, forces us to involve symmetry as a mathematical concept. The genetic code can be seen as a key to biological self-organisation. All living organisms have the same molecular bases - an alphabet consisting of four letters (nitrogenous bases): adenine, cytosine, guanine, and thymine. Linearly ordered sequences of these bases contain the genetic information for synthesis of proteins in all forms of life. Thus, one of the most fascinating riddles of nature is to explain why the genetic code is as it is. Genetic coding possesses noise immunity which is the fundamental feature that allows to pass on the genetic information from parents to their descendants. Hence, since the time of the discovery of the genetic code, scientists have tried to explain the noise immunity of the genetic information. In this chapter we will discuss recent results in mathematical modelling of the genetic code with respect to noise immunity, in particular error-detection and error-correction. We will focus on two central properties: Degeneracy and frameshift correction. Different amino acids are encoded by different quantities of codons and a connection between this degeneracy and the noise immunity of genetic information is a long standing hypothesis. Biological implications of the degeneracy have been intensively studied and whether the natural code is a frozen accident or a highly optimised product of evolution is still controversially discussed. Symmetries in the structure of degeneracy of the genetic code are essential and give evidence of substantial advantages of the natural code over other possible ones. In the present chapter we will present a recent approach to explain the degeneracy of the genetic code by algorithmic methods from bioinformatics, and discuss its biological consequences. The biologists recognised this problem immediately after the detection of the non-overlapping structure of the genetic code, i.e., coding sequences are to be read in a unique way determined by their reading frame. But how does the reading head of the ribosome recognise an error in the grouping of codons, caused by e.g. insertion or deletion of a base, that can be fatal during the translation process and may result in nonfunctional proteins? In this chapter we will discuss possible solutions to the frameshift problem with a focus on the theory of so-called circular codes that were discovered in large gene populations of prokaryotes and eukaryotes in the early 90s. Circular codes allow to detect a frameshift of one or two positions and recently a beautiful theory of such codes has been developed using statistics, group theory and graph theory. Copyright © 2017 Elsevier B.V. All rights reserved.
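A sketch of the graph-based circularity test mentioned above is given below. The criterion is reconstructed here from the circular-code literature and should be treated as an assumption: for each codon b1b2b3 one adds directed edges b1 → b2b3 and b1b2 → b3, and the code is circular exactly when the resulting graph has no directed cycle. The small example codes are illustrative only.

```python
# Sketch of the graph-theoretic circularity test (criterion reconstructed from the
# circular-code literature, treat as an assumption): for each codon b1 b2 b3 add the
# directed edges b1 -> b2b3 and b1b2 -> b3; the code is circular exactly when this
# graph contains no directed cycle.  The example codes below are illustrative.
def codon_graph(codons):
    edges = {}
    for c in codons:
        edges.setdefault(c[0], set()).add(c[1:])      # b1 -> b2b3
        edges.setdefault(c[:2], set()).add(c[2])      # b1b2 -> b3
    return edges

def has_cycle(edges):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}

    def visit(v):
        colour[v] = GREY
        for w in edges.get(v, ()):
            state = colour.get(w, WHITE)
            if state == GREY or (state == WHITE and visit(w)):
                return True
        colour[v] = BLACK
        return False

    return any(colour.get(v, WHITE) == WHITE and visit(v) for v in list(edges))

def is_circular(codons):
    return not has_cycle(codon_graph(codons))

print(is_circular({"AAC", "GTC", "TAC"}))   # acyclic graph: circular under the criterion
print(is_circular({"AAG", "AGA", "GAA"}))   # cyclic shifts of one another: not circular
```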
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee
2016-11-20
Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulences and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principle Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang; Hsu, Yi-Kai
2017-03-01
A three-arm dual-balanced detection scheme is studied in an optical code division multiple access system. As MAI and beat noise are the main deleterious sources of system performance degradation, we utilize optical hard-limiters to alleviate such channel impairment. In addition, once the channel condition is effectively improved, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and the decision circuitry are fixed, and they will not change with other system parameters. Our proposed scheme can accommodate a large number of users simultaneously and is suitable for burst traffic with asynchronous transmission. Therefore, it is highly recommended as the platform for broadband optical access networks.
Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)
2014-10-30
to single photon detection, at comparable detection efficiencies. On the other hand, error-correction codes are better developed for small-alphabet ... protocol is several orders of magnitude better than the Shapiro protocol, which needs entangled states. The bits/mode performance achieved by our ... putting together a software tool implemented in MATLAB, which talks to the MODTRAN database via an intermediate numerical dump of transmission data
1982-10-01
e.g., providing voters in TMR systems and detection-switching requirements in standby-sparing systems. The application of mathematical theory of...and time redundancy required for error detection and correction, are interrelated. Mathematical modeling, when applied to fault tolerant systems, can...
Judging the judges' performance in rhythmic gymnastics.
Flessas, Konstantinos; Mylonas, Dimitris; Panagiotaropoulou, Georgia; Tsopani, Despina; Korda, Alexandrea; Siettos, Constantinos; Di Cagno, Alessandra; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2015-03-01
Rhythmic gymnastics (RG) is an aesthetic event balancing between art and sport that also has a performance rating system (Code of Points) given by the International Gymnastics Federation. It is one of the sports in which competition results greatly depend on the judges' evaluation. In the current study, we explored the judges' performance in a five-gymnast ensemble routine. An expert-novice paradigm (10 international-level, 10 national-level, and 10 novice-level judges) was implemented under a fully simulated procedure of judgment in a five-gymnast ensemble routine of RG using two videos of routines performed by the Greek national team of RG. Simultaneous recordings of two-dimensional eye movements were taken during the judgment procedure to assess the percentage of time spent by each judge viewing the videos and fixation performance of each judge when an error in gymnast performance had occurred. All judge level groups had very modest performance of error recognition on gymnasts' routines, and the best international judges reported approximately 40% of true errors. Novice judges spent significantly more time viewing the videos compared with national and international judges and spent significantly more time fixating detected errors than the other two groups. National judges were the only group that made efficient use of fixation to detect errors. The fact that international-level judges outperformed both other groups, while not relying on visual fixation to detect errors, suggests that these experienced judges probably make use of other cognitive strategies, increasing their overall error detection efficiency, which was, however, still far below optimum.
Time trend of injection drug errors before and after implementation of bar-code verification system.
Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki
2015-01-01
Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p< 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^{-(d-1)} error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
NASA Electronic Library System (NELS) optimization
NASA Technical Reports Server (NTRS)
Pribyl, William L.
1993-01-01
This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly in the memory, disk contention, and CPU, to discover bottlenecks. Methods to increase the speed of NELS code were investigated. The tasks included restructuring the existing code to interact with others more effectively. An error reporting code to help detect and remove bugs in the NELS was added. Report writing tools were recommended to integrate with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.
Fault tolerant computing: A preamble for assuring viability of large computer systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1977-01-01
The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
Implementation of an experimental fault-tolerant memory system
NASA Technical Reports Server (NTRS)
Carter, W. C.; Mccarthy, C. E.
1976-01-01
The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which, (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.
Reed Solomon codes for error control in byte organized computer memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single-and-double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
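To illustrate the kind of direct-from-syndrome computation the abstract refers to, the following is a minimal sketch for a single-error-correcting RS code whose generator polynomial has roots α and α²; the notation (S_1, S_2 for the two syndromes, e for the error value, i for the error location) is ours and is not taken from the report. For a single symbol error,

S_1 = e\,\alpha^{i}, \qquad S_2 = e\,\alpha^{2i},
\qquad\text{so}\qquad
\alpha^{i} = S_2 / S_1, \qquad e = S_1^{2} / S_2 .

The error location i is read off as the discrete logarithm of S_2/S_1 and the error value as S_1²/S_2, so no iterative error-locator computation is needed, which is what makes such special-case decoders fast enough for memory applications.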
47 CFR 101.521 - Spectrum utilization.
Code of Federal Regulations, 2010 CFR
2010-10-01
... applicants for DEMS frequencies in the 10.6 GHz band must submit as part of the original application a... contain detailed descriptions of the modulation method, the channel time sharing method, any error detecting and/or correcting codes, any spatial frequency reuse system and the total data throughput capacity...
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
Step-by-step magic state encoding for efficient fault-tolerant quantum computation.
Goto, Hayato
2014-12-16
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.
Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gorjala, Bhargavi
1991-01-01
Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preservation Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time; the remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and of the image coding algorithms is studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MAGEE,GLEN I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
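The report does not list its specific optimizations, but one standard technique for speeding up Reed-Solomon encoding is replacing each Galois-field multiplication with log/antilog table lookups. The sketch below, in Python and using the common primitive polynomial 0x11D for GF(2^8), is only an illustration of that general idea, not the AURA project's code.

# Build exponential and logarithm tables for GF(2^8) with primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11D).
GF_EXP = [0] * 512          # doubled so GF_EXP[i] is valid for i up to 2*254 without a modulo
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:           # reduce modulo the primitive polynomial
        x ^= 0x11D
for i in range(255, 512):   # duplicate the table to avoid a modulo in multiplication
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^8) elements via table lookups instead of bitwise reduction."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

# In an LFSR-style RS encoder, gf_mul with fixed generator-polynomial coefficients is
# the inner-loop cost, so turning it into two lookups and an add is a large saving.
assert gf_mul(0x53, 0xCA) == gf_mul(0xCA, 0x53)   # commutativity sanity check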
Yuan, Mingquan; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu
2016-10-01
This paper extends our previous work on silver-enhancement based self-assembling structures for designing reliable, self-powered biosensors with forward error correcting (FEC) capability. At the core of the proposed approach is the integration of paper-based microfluidics with quick response (QR) codes that can be optically scanned using a smart-phone. The scanned information is first decoded to obtain the location of a web-server which further processes the self-assembled QR image to determine the concentration of target analytes. The integration substrate for the proposed FEC biosensor is polyethylene and the patterning of the QR code on the substrate has been achieved using a combination of low-cost ink-jet printing and a regular ballpoint dispensing pen. A paper-based microfluidics channel has been integrated underneath the substrate for acquiring, mixing and flowing the sample to areas on the substrate where different parts of the code can self-assemble in presence of immobilized gold nanorods. In this paper we demonstrate the proof-of-concept detection using prototypes of QR encoded FEC biosensors.
High-density digital recording
NASA Technical Reports Server (NTRS)
Kalil, F. (Editor); Buschman, A. (Editor)
1985-01-01
The problems associated with high-density digital recording (HDDR) are discussed. Five independent users of HDDR systems and their problems, solutions, and insights are provided as guidance for other users of HDDR systems. Various pulse code modulation coding techniques are reviewed. An introduction to error detection and correction head optimization theory and perpendicular recording are provided. Competitive tape recorder manufacturers apply all of the above theories and techniques and present their offerings. The methodology used by the HDDR Users Subcommittee of THIC to evaluate parallel HDDR systems is presented.
Modulation and coding for throughput-efficient optical free-space links
NASA Technical Reports Server (NTRS)
Georghiades, Costas N.
1993-01-01
Optical direct-detection systems are currently being considered for some high-speed inter-satellite links, where data-rates of a few hundred megabits per second are evisioned under power and pulsewidth constraints. In this paper we investigate the capacity, cutoff-rate and error-probability performance of uncoded and trellis-coded systems for various modulation schemes and under various throughput and power constraints. Modulation schemes considered are on-off keying (OOK), pulse-position modulation (PPM), overlapping PPM (OPPM) and multi-pulse (combinatorial) PPM (MPPM).
Analysis of quantum error correction with symmetric hypergraph states
NASA Astrophysics Data System (ADS)
Wagner, T.; Kampermann, H.; Bruß, D.
2018-03-01
Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.
Cracking the code: the accuracy of coding shoulder procedures and the repercussions.
Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M
2013-05-01
Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address reasons behind coding errors of shoulder diagnosis and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that the correct primary diagnosis was assigned in 54 patients (54%), and only 7 (7%) patients had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated from the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar
2010-04-01
Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors, and in 40 cases (10.4%) the initial HRG changed. Financially, this translated to a £111 revenue loss per patient episode and a projected £171,452 annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous, and with the coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as the potentially error-corrupted description, as side information in the decoding, which formulates MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single-bitstream wavelet zero tree coding is very vulnerable to channel errors. The first bit error may cause the decoder to discard all subsequent bits, whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet loss rate.
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
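As a rough illustration of the period-jump correction step described above (not the authors' exact formulation), the sketch below assumes a high-frequency absolute phase phi_high recovered by Gray code plus phase shifting, a low-frequency absolute phase phi_low whose fringe period is an odd multiple m of the high-frequency period, and removes jumps that are integer multiples of 2π; all names and the toy values are ours.

import numpy as np

def correct_period_jumps(phi_high, phi_low, m):
    """Correct period-jump errors in the high-frequency absolute phase.

    phi_high : absolute phase from Gray code + phase shifting (may contain jumps of 2*pi*k)
    phi_low  : absolute phase from the added low-frequency fringes (jump-free but noisier)
    m        : odd integer ratio of fringe periods (low-frequency period = m * high-frequency period)
    """
    phi_high = np.asarray(phi_high, dtype=float)
    phi_low = np.asarray(phi_low, dtype=float)
    # The low-frequency phase, scaled by m, predicts the high-frequency phase; any residual
    # close to an integer multiple of 2*pi is treated as a period jump and removed.
    k = np.round((m * phi_low - phi_high) / (2.0 * np.pi))
    return phi_high + 2.0 * np.pi * k

# Toy usage: a single pixel whose high-frequency phase jumped by exactly one period.
true_phase = 7.3
corrected = correct_period_jumps([true_phase - 2 * np.pi], [true_phase / 3.0], m=3)
print(corrected)   # close to 7.3 again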
NASA Astrophysics Data System (ADS)
Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.
2010-04-01
A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates, of around 10^{-18}. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
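For reference, eight-bit Walsh-Hadamard codes like those used here can be generated from the Sylvester construction, and matched versus mismatched codes are separated by their correlation. The short sketch below is our own illustration (not the experimental system): identical rows correlate to the code length, while orthogonal rows correlate to zero.

import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)                 # rows are the eight-bit Walsh-Hadamard codes (+1/-1 chips)
code_a, code_b = H[3], H[5]
print(int(code_a @ code_a))     # 8  -> matched codes give full correlation
print(int(code_a @ code_b))     # 0  -> mismatched (orthogonal) codes cancel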
The decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.
1988-01-01
Reed-Solomon (RS) codes form an important part of the high-rate downlink telemetry system for the Magellan mission, and the RS decoding function for this project will be done by DSN. Although the basic idea behind all Reed-Solomon decoding algorithms was developed by Berlekamp in 1968, there are dozens of variants of Berlekamp's algorithm in current use. An attempt to restore order is made by presenting a mathematical theory which explains the working of almost all known RS decoding algorithms. The key innovation that makes this possible is the unified approach to the solution of the key equation, which simultaneously describes the Berlekamp, Berlekamp-Massey, Euclid, and continued fractions approaches. Additionally, a detailed analysis is made of what can happen to a generic RS decoding algorithm when the number of errors and erasures exceeds the code's designed correction capability, and it is shown that while most published algorithms do not detect as many of these error-erasure patterns as possible, by making a small change in the algorithms, this problem can be overcome.
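The "key equation" referred to above, in the form common to the Berlekamp, Berlekamp-Massey, Euclidean and continued-fraction decoders, can be written as follows (standard textbook notation, not quoted from the report), where S(x) is the syndrome polynomial, σ(x) the error-locator polynomial, ω(x) the error-evaluator polynomial, and t the designed correction capability:

\sigma(x)\, S(x) \equiv \omega(x) \pmod{x^{2t}}, \qquad \deg \omega(x) < \deg \sigma(x) \le t .

Each named algorithm is a different way of solving this single congruence for σ(x) and ω(x), which is what allows the unified treatment the abstract describes.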
NASA Astrophysics Data System (ADS)
Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.
2018-01-01
The genetic code is degenerate and it is assumed that redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents a Genetic Code Analysis Toolkit (GCAT) which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and others. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min
2018-03-01
Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.
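As a rough sketch of the leveling idea only: the code below implements classical carrier-to-code leveling with variable names of our own, assuming the geometry-free carrier L_gf and code P_gf are formed with the same sign convention so that they differ only by an ambiguity/bias term plus noise; the paper's Modified CCL additionally estimates an rDCB offset per epoch rather than assuming a single arc constant.

import numpy as np

def carrier_to_code_leveling(L_gf, P_gf):
    """Level the geometry-free carrier phase to the geometry-free code over one arc.

    Assumes a single continuous arc without cycle slips. The returned observable keeps the
    low noise of the carrier while inheriting the absolute level (hardware biases included)
    of the code, which is the ionospheric observable classical CCL retrieves.
    """
    L_gf = np.asarray(L_gf, dtype=float)
    P_gf = np.asarray(P_gf, dtype=float)
    level = np.mean(L_gf - P_gf)   # one leveling constant per arc in classical CCL
    return L_gf - level            # leveled ionospheric observable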
NASA Astrophysics Data System (ADS)
Han, Xifeng; Zhou, Wen
2018-03-01
Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency-doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary at the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal decision threshold matched to this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter wave signal at 36 GHz is improved from 1 × 10^{-3} to 1 × 10^{-4} at -4.6 dBm input power into the photodiode.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732
Fault Tolerance Middleware for a Multi-Core System
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.
2012-01-01
Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
A New Internet Tool for Automatic Evaluation in Control Systems and Programming
ERIC Educational Resources Information Center
Munoz de la Pena, D.; Gomez-Estern, F.; Dormido, S.
2012-01-01
In this paper we present a web-based innovative education tool designed for automating the collection, evaluation and error detection in practical exercises assigned to computer programming and control engineering students. By using a student/instructor code-fusion architecture, the conceptual limits of multiple-choice tests are overcome by far.…
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ~ k^{-γ} with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^{-γ}, with k ≥ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
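To make the quoted degree distribution concrete, the following sketch (our own, with illustrative parameter values, not those of the paper) samples variable-node degrees from p(k) = C(k+α)^{-γ} for k ≥ 2, as one would when building an irregular LDPC-style code on a scale-free graph.

import numpy as np

def sample_degrees(n_nodes, gamma=2.1, alpha=1.0, k_min=2, k_max=200, seed=0):
    """Draw node degrees from the truncated power law p(k) = C * (k + alpha)**(-gamma), k >= k_min."""
    rng = np.random.default_rng(seed)
    k = np.arange(k_min, k_max + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()                       # normalize so the probabilities sum to one
    return rng.choice(k, size=n_nodes, p=p)

degrees = sample_degrees(10_000)
print(degrees.min(), degrees.mean())   # minimum degree 2, heavy-tailed mean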
Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N
2012-01-01
Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
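For reference, the random coding bound discussed here is usually written in Gallager's form (standard notation, not reproduced from the abstract): the ensemble-average block error probability at rate R and block length n satisfies

\bar{P}_e \le \exp\!\big(-n\,E_r(R)\big), \qquad
E_r(R) = \max_{0 \le \rho \le 1} \big[\, E_0(\rho, Q) - \rho R \,\big],

where E_0(ρ, Q) is the Gallager function for input distribution Q. The exponent E_r(R) is tight above the critical rate but loose at low rates, which is exactly the weakness of the ensemble-average argument that the abstract attributes to the best codes being much better than the average there.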
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31,15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived, and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
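The quantity Q above, the probability that a completely random word falls inside some decoding sphere, can be computed directly. The short sketch below is our own, using the standard formula Q = q^{-(n-k)} Σ_{i=0}^{t} C(n,i)(q-1)^i, and evaluates it for the two codes mentioned in the abstract.

from math import comb

def rs_random_decode_prob(n, k, q):
    """Probability Q that a uniformly random word of length n over GF(q) is decodable,
    i.e. falls within Hamming distance t = (n - k) // 2 of some codeword."""
    t = (n - k) // 2
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))   # volume of a radius-t Hamming ball
    return ball / q ** (n - k)

print(rs_random_decode_prob(255, 223, 256))   # Q for the NASA (255,223) Reed-Solomon code
print(rs_random_decode_prob(31, 15, 32))      # Q for the (31,15) JTIDS Reed-Solomon code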
Error suppression via complementary gauge choices in Reed-Muller codes
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Jochym-O'Connor, Tomas
2017-09-01
Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.
Decodoku: Quantum error correction as a simple puzzle game
NASA Astrophysics Data System (ADS)
Wootton, James
To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction. At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qubit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.
Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.
Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E
2012-10-01
The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%) (p < 0.0001). The predominant error type detected before the intervention was the incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.
NASA Astrophysics Data System (ADS)
Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.
2011-06-01
In order to compare the PIN photodetector and the avalanche photodiode (APD) in a system using the double weight (DW) code, the performance of optical spectrum CDMA in an FTTH network with a point-to-multipoint (P2MP) application is evaluated. The performance of the PIN detector against the APD is compared through simulation using OptiSystem software version 7. In this paper two networks are designed: the first uses a PIN photodetector and the second uses an APD photodiode, both with and without an erbium-doped fiber amplifier (EDFA). It is found that the APD photodiode in this system is better than the PIN photodetector for all simulation results. The conversion uses a Mach-Zehnder interferometer (MZI) wavelength converter. We also study a proposed detection scheme, known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs are used to encode and decode the spectral amplitude coding, namely the double weight (DW) code, in Optical Code Division Multiple Access (OCDMA). Performance is characterized through the bit error rate (BER) and bit rate (BR), as well as the received power at various bit rates.
Comparison of methods for accurate end-point detection of potentiometric titrations
NASA Astrophysics Data System (ADS)
Villela, R. L. A.; Borges, P. P.; Vyskočil, L.
2015-01-01
Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper.
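A minimal version of the traditional second-derivative end-point detection mentioned above (our own sketch over synthetic data, not the authors' simulation code) locates the inflection of the titration curve by finding where the numerically estimated second derivative of potential with respect to titrant volume crosses zero.

import numpy as np

def second_derivative_endpoint(volume, potential):
    """Estimate the titration end point as the volume where d2E/dV2 changes sign."""
    d1 = np.gradient(potential, volume)
    d2 = np.gradient(d1, volume)
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
    i = crossings[np.argmax(np.abs(d1[crossings]))]   # keep the crossing at the steepest slope
    # Linear interpolation of the zero crossing between samples i and i+1.
    return volume[i] - d2[i] * (volume[i + 1] - volume[i]) / (d2[i + 1] - d2[i])

# Synthetic sigmoidal titration curve with its inflection (end point) at V = 25.00 mL.
V = np.linspace(20.0, 30.0, 201)
E = 200.0 * np.tanh((V - 25.0) / 0.3)
print(second_derivative_endpoint(V, E))   # close to 25.0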
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges about Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently due to energy limitation of sensor nodes. Network coding will increase network throughput of WSN dramatically due to the broadcast nature of WSN. However, the network coding usually propagates a single original error over the whole network. Due to the special property of error propagation in network coding, most of error correction methods cannot correct more than C/2 corrupted errors where C is the max flow min cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if the error occurs on all the links of the network, our scheme also can correct errors successfully. With introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi’s model so that it can correct the propagated errors in network coding which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. With referred theory of social networks, the informative relay nodes are selected and marked with high trust value. The two methods of L1 optimization and utilizing social characteristic coordinate with each other, and can correct the propagated error whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
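The role of L1 optimization in correcting gross corruptions can be illustrated with a toy real-valued decoding problem solved as a linear program. This is only a sketch of the general L1-decoding idea, not the WSN scheme of the paper; the code dimensions, corruption pattern, and random encoding matrix are assumptions.

```python
# Minimal sketch: recover a message from a corrupted codeword by minimizing the
# L1 norm of the residual, posed as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, k = 60, 15                                  # codeword length, message length (assumed)
A = rng.normal(size=(m, k))                    # random encoding matrix
x_true = rng.normal(size=k)

y = A @ x_true
corrupt = rng.choice(m, size=8, replace=False)
y[corrupt] += rng.normal(scale=10.0, size=8)   # sparse gross errors on the channel

# minimize sum(t) subject to -t <= y - A x <= t   (variables: x, then t)
c = np.concatenate([np.zeros(k), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * k + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

x_hat = res.x[:k]
# Should be near zero when the gross errors are sparse enough to be rejected.
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```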
The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.
Hutton, Kevin; Ding, Qian; Wellman, Gregory
2017-02-24
The adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed its impact with strong prospective methodologies, and the research that has been conducted covers both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors and identifies the types of medication errors that may be prevented in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies, which used prospective before-and-after study designs, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining study was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly in preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.
Novel space-time trellis codes for free-space optical communications using transmit laser selection.
García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2015-09-21
In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
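A minimal sketch of the syndrome-source-coding idea, using the (7,4) Hamming code; the parity-check matrix and the sparse example block are standard illustrations rather than the report's construction.

```python
# Syndrome source coding sketch: a sparse source block is treated as an "error
# pattern"; its 3-bit syndrome is the compressed word, and decompression looks
# up the minimum-weight coset leader. Lossless here only for blocks of weight <= 1.
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # parity-check matrix of Hamming (7,4)

def compress(block):
    return tuple(H.dot(block) % 2)             # 7 source bits -> 3 syndrome bits

# Coset-leader table: minimum-weight pattern for every syndrome value.
leaders = {}
for e in product((0, 1), repeat=7):
    s = tuple(H.dot(np.array(e)) % 2)
    if s not in leaders or sum(e) < sum(leaders[s]):
        leaders[s] = e

def decompress(syndrome):
    return np.array(leaders[syndrome])

source_block = np.array([0, 0, 0, 0, 1, 0, 0])  # sparse source block (weight 1)
syn = compress(source_block)
print("syndrome:", syn, "recovered:", decompress(syn),
      "lossless:", np.array_equal(decompress(syn), source_block))
```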
Error threshold for color codes and random three-body Ising models.
Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A
2009-08-28
We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p(c) = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
Using mobile devices to improve the safety of medication administration processes.
Navas, H; Graffi Moltrasio, L; Ares, F; Strumia, G; Dourado, E; Alvarez, M
2015-01-01
Among preventable medical errors, those related to medications are frequent in every stage of the prescribing cycle. Nursing is responsible for maintaining each patient's safety and care quality. Moreover, nurses are the last people who can detect a medication error before administration. Medication administration is one of the riskiest tasks in nursing. The use of information and communication technologies is associated with a decrease in these errors. Incorporating mobile devices that read 2D codes on patients and medications will decrease the possibility of error when nurses prepare and administer medication. A cross-platform application (iOS and Android) was developed to ensure the five rights of the medication administration process (patient, medication, dose, route and schedule). Deployment in November showed 39% use.
Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team
We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15, 1, 3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.
Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.
2007-01-01
Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
Haghighi, Mohammad Hosein Hayavi; Dehghani, Mohammad; Teshnizi, Saeid Hoseini; Mahmoodi, Hamid
2014-01-01
Accurate cause of death coding leads to organised and usable death information but there are some factors that influence documentation on death certificates and therefore affect the coding. We reviewed the role of documentation errors on the accuracy of death coding at Shahid Mohammadi Hospital (SMH), Bandar Abbas, Iran. We studied the death certificates of all deceased patients in SMH from October 2010 to March 2011. Researchers determined and coded the underlying cause of death on the death certificates according to the guidelines issued by the World Health Organization in Volume 2 of the International Statistical Classification of Diseases and Health Related Problems-10th revision (ICD-10). Necessary ICD coding rules (such as the General Principle, Rules 1-3, the modification rules and other instructions about death coding) were applied to select the underlying cause of death on each certificate. Demographic details and documentation errors were then extracted. Data were analysed with descriptive statistics and chi square tests. The accuracy rate of causes of death coding was 51.7%, demonstrating a statistically significant relationship (p=.001) with major errors but not such a relationship with minor errors. Factors that result in poor quality of Cause of Death coding in SMH are lack of coder training, documentation errors and the undesirable structure of death certificates.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver
NASA Astrophysics Data System (ADS)
Nazrul Islam, A. K. M.; Majumder, S. P.
2017-12-01
An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel considering the effect of pointing error between the transmitter and the receiver. The analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) in the presence of pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER, and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error, with significant power penalty at BERs of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB for a normalized pointing error of 1.4 with 16 users and a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
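The benefit of interleaving FEC codewords against burst errors can be sketched with a simple block interleaver; the codeword length, interleaving depth, and burst position below are illustrative assumptions, not parameters from the study.

```python
# Minimal sketch: a block interleaver spreads a channel burst across several
# FEC codewords so each codeword sees only a few errors.
import numpy as np

depth, n = 4, 8                                     # 4 codewords of length 8 (assumed)
codewords = np.arange(depth * n).reshape(depth, n)  # stand-ins for FEC codewords

tx = codewords.T.flatten()                          # interleave: read column-wise
burst = set(range(10, 14))                          # 4-symbol burst on the channel
rx = np.array([-1 if i in burst else s for i, s in enumerate(tx)])

deinterleaved = rx.reshape(n, depth).T              # invert the interleaver
errors_per_codeword = (deinterleaved == -1).sum(axis=1)
print("corrupted symbols per codeword:", errors_per_codeword)  # burst is dispersed
```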
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu
2015-01-01
To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented in the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme works effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10^-3, the receiver sensitivity is improved by about 4 dB when the LDPC code rate is 75%.
NASA Technical Reports Server (NTRS)
Degaudenzi, R.; Elia, C.; Viola, R.
1990-01-01
Discussed here is a new approach to code division multiple access applied to a mobile system for voice (and data) services based on Band Limited Quasi Synchronous Code Division Multiple Access (BLQS-CDMA). The system requires users to be chip synchronized to reduce the contribution of self-interference and to make use of voice activation in order to increase the satellite power efficiency. In order to achieve spectral efficiency, Nyquist chip pulse shaping is used with no detection performance impairment. The synchronization problems are solved in the forward link by distributing a master code, whereas carrier forced activation and closed loop control techniques have been adopted in the return link. System performance sensitivity to nonlinear amplification and timing/frequency synchronization errors are analyzed.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal-to-noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
(Quickly) Testing the Tester via Path Coverage
NASA Technical Reports Server (NTRS)
Groce, Alex
2009-01-01
The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.
Program Instrumentation and Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2002-01-01
Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This can be seen as part of a current trend to analyze real software systems instead of just their designs. This includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is in the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically and to large full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not towards proving correctness. One core element in JPaX is an instrumentation package that allows one to instrument Java byte code files to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing aspect-oriented programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting implementation is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in other executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a potential deadlock or data race does not need to actually occur in order to be detected by these algorithms; this is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied to a code base for controlling a spacecraft, by the developers of that software, in order to evaluate its applicability.
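A minimal sketch of the Eraser-style lockset check mentioned above, adapted to a synthetic event trace; the trace format, variable names, and lock names are invented for illustration and do not reflect the JPaX implementation.

```python
# Lockset sketch: each shared variable keeps the intersection of lock sets held
# at its accesses; an empty intersection flags a potential data race.
candidate_locks = {}                    # variable -> set of locks protecting it

def on_access(var, held_locks):
    held = set(held_locks)
    if var not in candidate_locks:
        candidate_locks[var] = held
    else:
        candidate_locks[var] &= held    # refine: only locks held on every access
    if not candidate_locks[var]:
        print(f"potential data race on '{var}'")

# A synthetic event trace: (thread, variable, locks held at the access)
trace = [
    ("T1", "balance", {"L_account"}),
    ("T2", "balance", {"L_account"}),   # consistently protected: no warning
    ("T1", "counter", {"L_a"}),
    ("T2", "counter", {"L_b"}),         # lock sets intersect to {}: warning
]
for thread, var, locks in trace:
    on_access(var, locks)
```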
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
EAC: A program for the error analysis of STAGS results for plates
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.
1989-01-01
A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented and instructions on its usage on the Cyber and the VAX machines have been provided.
Is a Genome a Codeword of an Error-Correcting Code?
Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo
2012-01-01
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.
Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur
2016-07-25
In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.
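The gamma-gamma fading model used above can be sampled as the product of two unit-mean gamma variates, one for large-scale and one for small-scale irradiance fluctuations. The sketch below (with assumed turbulence parameters and the uncorrelated case only) checks the empirical scintillation index against its closed-form value.

```python
# Minimal sketch: gamma-gamma irradiance samples and the scintillation index
# 1/alpha + 1/beta + 1/(alpha*beta).
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n = 4.0, 1.9, 1_000_000          # assumed turbulence parameters

x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)   # large-scale fluctuations
y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)     # small-scale fluctuations
irradiance = x * y                                      # gamma-gamma fading samples

si_mc = irradiance.var() / irradiance.mean() ** 2
si_theory = 1 / alpha + 1 / beta + 1 / (alpha * beta)
print(f"scintillation index: Monte Carlo {si_mc:.4f} vs theory {si_theory:.4f}")
```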
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
Quantum error-correcting codes from algebraic geometry codes of Castle type
NASA Astrophysics Data System (ADS)
Munuera, Carlos; Tenório, Wanderson; Torres, Fernando
2016-10-01
We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
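A quick Monte Carlo check of the high-SNR approximation discussed above can be run with the systematic (7,4) Hamming code under maximum likelihood (minimum-distance) decoding on a binary symmetric channel; the crossover probability and trial count below are assumptions for illustration.

```python
# Minimal sketch (not from the paper): check P_b ~ (d_H/N) * P_s for the
# systematic (7,4) Hamming code with ML decoding over a BSC.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])   # parity part of G = [I | P]
info_words = np.array([[(i >> j) & 1 for j in range(4)] for i in range(16)])
codebook = np.hstack([info_words, info_words.dot(P) % 2])    # all 16 codewords

p, trials = 0.02, 100_000
block_err = bit_err = 0
for _ in range(trials):
    u = rng.integers(0, 2, 4)
    c = np.hstack([u, u.dot(P) % 2])
    r = (c + (rng.random(7) < p)) % 2                              # BSC errors
    decoded = codebook[np.argmin(((r + codebook) % 2).sum(axis=1))]  # ML = min distance
    if not np.array_equal(decoded, c):
        block_err += 1
        bit_err += int((decoded[:4] != u).sum())                   # information-bit errors

Ps, Pb = block_err / trials, bit_err / (4 * trials)
print(f"Ps={Ps:.4g}  Pb={Pb:.4g}  Pb/Ps={Pb/Ps:.3f}  d_H/N={3/7:.3f}")
```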
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
Short-Block Protograph-Based LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
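For reference, a Huffman code, one of the candidate compression codes named above, can be built with a few lines using a binary heap; the symbol probabilities are illustrative assumptions.

```python
# Minimal Huffman code construction sketch.
import heapq

def huffman(freqs):
    # Each heap entry: (weight, tie-breaker, {symbol: codeword-so-far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w0, _, c0 = heapq.heappop(heap)
        w1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (w0 + w1, count, merged))
        count += 1
    return heap[0][2]

codebook = huffman({"A": 0.45, "B": 0.25, "C": 0.15, "D": 0.10, "E": 0.05})
print(codebook)   # shorter codewords for more probable symbols
```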
A decoding procedure for the Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lim, R. S.
1978-01-01
A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
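Two of the decoder building blocks described above, Galois-field arithmetic and the Chien search, can be sketched for GF(2^4) as follows; the primitive polynomial is the common choice x^4 + x + 1, and the example error-locator polynomial is an assumption chosen so that its roots are known in advance. This is an illustration, not the microprogrammed implementation discussed in the abstract.

```python
# GF(2^4) arithmetic via log/antilog tables and a Chien search over all field
# elements to find the roots of an error-locator polynomial.
exp = [0] * 30
log = [0] * 16
x = 1
for i in range(15):
    exp[i] = x
    log[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13          # reduce modulo x^4 + x + 1
for i in range(15, 30):
    exp[i] = exp[i - 15]   # duplicate so sums of logs never overflow the table

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def chien_search(sigma):
    # sigma: locator coefficients [sigma_0, sigma_1, ...]; evaluate sigma at
    # every field element alpha^i and report the exponents of the roots.
    roots = []
    for i in range(15):
        val = 0
        for k, c in enumerate(sigma):
            val ^= gf_mul(c, exp[(i * k) % 15])
        if val == 0:
            roots.append(i)            # sigma(alpha^i) == 0
    return roots

# sigma(x) = (1 + alpha^2 x)(1 + alpha^7 x): errors at positions 2 and 7,
# so the roots are the inverse locators alpha^13 and alpha^8.
a2, a7 = exp[2], exp[7]
sigma = [1, a2 ^ a7, gf_mul(a2, a7)]
print("roots alpha^i for i =", chien_search(sigma))   # expect i = 8 and i = 13
```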
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
NASA Astrophysics Data System (ADS)
Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi
This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC) that does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase in overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signal is included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multi-point relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
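The piggy-back recovery idea can be reduced to a toy XOR example: a node that heard two native packets broadcasts their XOR, and a neighbor that missed one of them recovers it. Packet contents and node names below are invented; the actual scheme uses per-node coding vectors negotiated in advance.

```python
# Minimal XOR network-coding sketch for piggy-back recovery.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pkt_A = b"node-A-frame-0007"
pkt_B = b"node-B-frame-0007"

# Node C heard both A and B last sequence; its piggy-back is their combination.
piggyback = xor_bytes(pkt_A, pkt_B)

# Node D missed pkt_B on the direct broadcast, but heard pkt_A and the piggy-back:
recovered_B = xor_bytes(piggyback, pkt_A)
print(recovered_B == pkt_B)    # True: the missing native packet is recovered
```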
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Writing executable assertions to test flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement results in a true value. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
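A small illustration of an executable assertion in the sense described above (hypothetical variables and limits, not the flight software under study):

```python
# Executable assertion sketch: a logical statement about the variables that,
# when violated at run time, detects the error and can be routed to a handler.
def check_pitch_command(pitch_cmd_deg, airspeed_kts):
    # The commanded pitch must stay inside the envelope valid for the airspeed.
    max_pitch = 15.0 if airspeed_kts < 250 else 10.0
    assert -10.0 <= pitch_cmd_deg <= max_pitch, (
        f"pitch command {pitch_cmd_deg} deg outside envelope at {airspeed_kts} kts")
    return pitch_cmd_deg

check_pitch_command(8.0, 300)       # passes silently: assertion evaluates to true
try:
    check_pitch_command(14.0, 300)  # violates the envelope: error is detected
except AssertionError as err:
    print("assertion failed:", err)
```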
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
Redwan, R M; Saidin, A; Kumar, S V
2015-08-12
Pineapple (Ananas comosus var. comosus) is known as the king of fruits for its crown and is the third most important tropical fruit after banana and citrus. The plant, which is indigenous to South America, is the most important species in the Bromeliaceae family and is largely traded for fresh fruit consumption. Here, we report the complete chloroplast sequence of the MD-2 pineapple that was sequenced using the PacBio sequencing technology. In this study, the high error rate of PacBio long sequence reads of A. comosus total genomic DNA was reduced by leveraging the high-accuracy but short Illumina reads for error correction via the latest error-correction module from Novocraft. Error-corrected long PacBio reads were assembled using a single tool to produce a contig representing the pineapple chloroplast genome. The genome, 159,636 bp in length, features the conserved quadripartite chloroplast structure, containing a large single-copy region (LSC) of 87,482 bp, a small single-copy region (SSC) of 18,622 bp, and two inverted repeat regions (IRA and IRB) of 26,766 bp each. Overall, the genome contains 117 unique coding regions, 30 of which are repeated in the IR regions; its gene content, structure, and arrangement are similar to those of its sister taxon, Typha latifolia. A total of 35 repeat structures were detected in both coding and non-coding regions, the majority being tandem repeats. In addition, 205 SSRs were detected in the genome, with six protein-coding genes containing more than two SSRs. Comparison with chloroplast genomes from the subclass Commelinidae revealed conserved protein-coding genes, albeit located in highly divergent regions. Analysis of selection pressure on protein-coding genes using the Ka/Ks ratio showed significant positive selection exerted on the rps7 gene of the pineapple chloroplast (P less than 0.05). Phylogenetic analysis confirmed the recent taxonomic relations among the commelinids, supporting the monophyletic relationship between Arecales and Dasypogonaceae and the grouping of Zingiberales with Poales, which includes A. comosus. The complete sequence of the pineapple chloroplast provides insights into the divergence of genic chloroplast sequences among members of the subclass Commelinidae. It will serve as a reference for in-depth taxonomic studies in the Bromeliaceae family as more species in the family are sequenced, and the sequence information will also enable other molecular applications of the pineapple chloroplast for plant genetic improvement.
Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations
Friston, Karl J.; Kleinschmidt, Andreas
2010-01-01
Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se. PMID:20369004
Hétu, Sébastien; Luo, Yi; D’Ardenne, Kimberlee; Lohrenz, Terry
2017-01-01
As models of shared expectations, social norms play an essential role in our societies. Since our social environment is changing constantly, our internal models of it also need to change. In humans, there is mounting evidence that neural structures such as the insula and the ventral striatum are involved in detecting norm violation and updating internal models. However, because of methodological challenges, little is known about the possible involvement of midbrain structures in detecting norm violation and updating internal models of our norms. Here, we used high-resolution cardiac-gated functional magnetic resonance imaging and a norm adaptation paradigm in healthy adults to investigate the role of the substantia nigra/ventral tegmental area (SN/VTA) complex in tracking signals related to norm violation that can be used to update internal norms. We show that the SN/VTA codes for the norm’s variance prediction error (PE) and norm PE with spatially distinct regions coding for negative and positive norm PE. These results point to a common role played by the SN/VTA complex in supporting both simple reward-based and social decision making. PMID:28981876
Improving accuracy of clinical coding in surgery: collaboration is key.
Heywood, Nick A; Gill, Michael D; Charlwood, Natasha; Brindle, Rachel; Kirwan, Cliona C
2016-08-01
Clinical coding data provide the basis for Hospital Episode Statistics and Healthcare Resource Group codes. High accuracy of this information is required for payment by results, allocation of health and research resources, and public health data and planning. We sought to identify the level of accuracy of clinical coding in general surgical admissions across hospitals in the Northwest of England. Clinical coding departments identified a total of 208 emergency general surgical patients discharged between 1st March and 15th August 2013 from seven hospital trusts (median = 20, range = 16-60). Blinded re-coding was performed by a senior clinical coder and clinician, with results compared with the original coding outcome. Recorded codes were generated from OPCS-4 & ICD-10. Of all cases, 194 of 208 (93.3%) had at least one coding error and 9 of 208 (4.3%) had errors in both primary diagnosis and primary procedure. Errors were found in 64 of 208 (30.8%) primary diagnoses and 30 of 137 (21.9%) primary procedure codes. Median tariff using original codes was £1411.50 (range, £409-9138). Re-calculation using updated clinical codes showed a median tariff of £1387.50, P = 0.997 (range, £406-10,102). The most frequent reasons for incorrect coding were "coder error" and a requirement for "clinical interpretation of notes". Errors in clinical coding are multifactorial and have significant impact on primary diagnosis, potentially affecting the accuracy of Hospital Episode Statistics data and in turn the allocation of health care resources and public health planning. As we move toward surgeon-specific outcomes, surgeons should increase collaboration with coding departments to ensure the system is robust. Copyright © 2016 Elsevier Inc. All rights reserved.
Tutorial on Reed-Solomon error correction coding
NASA Technical Reports Server (NTRS)
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding, both with and without erasing code symbols, are emphasized. Rigorous proofs and extreme mathematical detail are avoided; rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are introduced as a technical introduction to coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner that can be easily understood.
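To make the arithmetic concrete, the sketch below builds GF(16) with the primitive polynomial x^4 + x + 1 and systematically encodes a 9-symbol message into a (15, 9) RS codeword, in the spirit of the primitive example mentioned above. The choice of generator-polynomial roots (alpha^1 through alpha^6) and the symbol ordering are illustrative assumptions and may differ from the tutorial's exact conventions.

```python
# A minimal sketch of systematic Reed-Solomon (15, 9) encoding over GF(16).
PRIM_POLY = 0b10011          # x^4 + x + 1, a primitive polynomial for GF(16)
EXP = [0] * 30               # antilog table, doubled to simplify indexing
LOG = [0] * 16

x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= PRIM_POLY
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    """Multiply two GF(16) elements via log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    """Multiply two polynomials with GF(16) coefficients (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

# Generator polynomial g(x) = (x - alpha^1)(x - alpha^2)...(x - alpha^6).
g = [1]
for i in range(1, 7):
    g = poly_mul(g, [EXP[i], 1])   # (x + alpha^i); + and - coincide in GF(2^m)

def rs_encode(msg):
    """Systematically encode 9 GF(16) symbols into a 15-symbol codeword."""
    assert len(msg) == 9
    # Divide msg(x) * x^6 by g(x); the remainder gives the 6 parity symbols.
    buf = [0] * 6 + list(msg)               # lowest degree first
    for i in range(len(msg) - 1, -1, -1):
        coef = buf[i + 6]
        if coef:
            for j, gj in enumerate(g):
                buf[i + j] ^= gf_mul(coef, gj)
    parity = buf[:6]
    return parity + list(msg)

print(rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9]))
```

The six parity symbols are simply the remainder of the shifted message polynomial divided by g(x), so the message symbols appear unchanged in the codeword (systematic form).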
Tailored Codes for Small Quantum Memories
NASA Astrophysics Data System (ADS)
Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.
2017-12-01
We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison with the observed error model in a recent seven-qubit trapped-ion experiment.
Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed
2018-01-25
The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Groups (DRGs) codes, especially if the hospital is using Casemix System as a tool for resource allocations and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to an error in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG ® ) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC). This ISC re-examined and re-coded the error code that was originally entered by the hospital coders. The pre- and post-coding results were compared, and if there was any disagreement, the codes by the ISC were considered the accurate codes. The cases were then re-grouped using a MY-DRG ® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the highest, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464), respectively. The coding errors resulted in the assignment of different MY-DRG ® codes in 74.0% (307/415) of the cases. From this result, 52.1% (160/307) of the cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of the MY-DRG ® code was RM654,303.91. The quality of coding is a crucial aspect in implementing casemix systems. Intensive re-training and the close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum code is degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprints with two-layer error correction codes is proposed. First, randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation, and the resulting information is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the original keys can be recovered completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
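The bind/release structure described above resembles a fuzzy-commitment construction; the sketch below illustrates that structure under stated assumptions. A trivial repetition code stands in for the paper's convolutional + cyclic two-layer code, and the feature vectors are made-up placeholders rather than real palmprint features.

```python
# A minimal sketch of XOR binding of an ECC-encoded key with biometric bits.
import os

def ecc_encode(key_bits):
    # Placeholder: a 3x repetition code standing in for the two-layer encoder.
    return [b for b in key_bits for _ in range(3)]

def ecc_decode(noisy_bits):
    # Placeholder majority-vote decoder for the repetition code.
    return [1 if sum(noisy_bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(noisy_bits), 3)]

def bind(key_bits, palmprint_bits):
    """Store only codeword XOR biometric; neither is recoverable alone."""
    codeword = ecc_encode(key_bits)
    return [c ^ p for c, p in zip(codeword, palmprint_bits)]

def release(stored, fresh_palmprint_bits):
    """XOR with a fresh (noisy) palmprint sample, then ECC-decode the key."""
    noisy_codeword = [s ^ p for s, p in zip(stored, fresh_palmprint_bits)]
    return ecc_decode(noisy_codeword)

key = [int(b) for b in bin(int.from_bytes(os.urandom(2), "big"))[2:].zfill(16)]
palm = ([1, 0, 1, 1, 0] * 10)[:48]     # stand-in for extracted palmprint feature bits
stored = bind(key, palm)

noisy_palm = palm.copy()
noisy_palm[7] ^= 1                     # one noisy feature bit is tolerated by the ECC
assert release(stored, noisy_palm) == key
```

The design point is that the smart card never stores the key or the palmprint directly; only their XOR, whose errors are absorbed by the error-correcting layer at release time.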
2001-09-01
In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation are analyzed. The work cites "Rate-compatible punctured convolutional codes (RCPC codes) and their applications" (IEEE).
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
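As one concrete instance of the exact intervals reviewed above, the sketch below computes a Clopper-Pearson confidence interval for the CWER and the classic zero-error run-length rule; the example counts are illustrative, not taken from the article.

```python
# A minimal sketch of an exact (Clopper-Pearson) confidence interval for the
# codeword error rate, plus the run-length needed for an error-free simulation.
import math
from scipy.stats import beta   # assumes SciPy is available

def clopper_pearson(errors, trials, confidence=0.95):
    """Exact two-sided confidence interval for an error rate."""
    alpha = 1.0 - confidence
    lo = 0.0 if errors == 0 else beta.ppf(alpha / 2, errors, trials - errors + 1)
    hi = 1.0 if errors == trials else beta.ppf(1 - alpha / 2, errors + 1, trials - errors)
    return lo, hi

# e.g. 2 codeword errors observed in 10^7 simulated codewords (illustrative numbers)
print(clopper_pearson(2, 10**7))

# How long must an error-free simulation run to certify CWER <= p_max with 95%
# confidence?  Require (1 - p_max)^N <= 0.05, i.e. N >= ln(0.05)/ln(1 - p_max),
# which is roughly 3/p_max for small p_max.
p_max = 1e-6
print(math.ceil(math.log(0.05) / math.log(1 - p_max)))
```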
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given, and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results
NASA Astrophysics Data System (ADS)
Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.
2005-12-01
We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and nonbinary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.
Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization
NASA Astrophysics Data System (ADS)
Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin
2005-02-01
This paper proposes a novel scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detecting and error-correcting bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme adds a coding process module. The SDH/SONET frames in the transmit direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels, all of which are data channels; during this process, the parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the coding process module. (3) The coding process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmit process. By applying this scheme to the 10 Gb/s VSR system, the frame size is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is clearly reduced.
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of patterns and estimate the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe that commonly appear when using intensity-discrete patterns, and it provides robustness in the case of severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications such as the monitoring of deformations of components in nuclear power plants, where high reliability must be ensured even in the case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
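As one concrete (and deliberately simple) instance of the per-pixel correction idea, the sketch below protects a 4-bit fringe-order index with a Hamming (7,4) code projected as a 7-pattern temporal sequence and corrects a single flipped bit at a pixel. The specific code and pattern count are assumptions for illustration; the paper's actual code construction may differ.

```python
# A minimal sketch of pixel-wise single-error correction with Hamming (7,4).
import numpy as np

G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])          # generator matrix (systematic form)
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])          # parity-check matrix, H @ G.T = 0 (mod 2)

def encode(fringe_order):
    """Map a 4-bit fringe order to the 7-bit temporal pattern sequence."""
    msg = np.array([(fringe_order >> i) & 1 for i in range(4)])
    return msg @ G % 2

def decode(received):
    """Correct at most one flipped bit in a pixel's temporal sequence."""
    syndrome = H @ received % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        for col in range(7):
            if np.array_equal(H[:, col], syndrome):
                received = received.copy()
                received[col] ^= 1
                break
    return int(sum(int(b) << i for i, b in enumerate(received[:4])))

word = encode(9)
word_noisy = word.copy()
word_noisy[5] ^= 1                      # one corrupted measurement at this pixel
assert decode(word_noisy) == 9          # the fringe order is still recovered
```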
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors; a similar scheme can be built for two of the other codes. We show that, in the large-code-size limit, the fidelity of the protected qubit in the noisy scenario is a function of p, where p is the probability of error on a single qubit per time step. Regarding the Haah code, we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.
Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark
2018-07-01
The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge ® ), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety, and it increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana
The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in Static RAM (SRAM), functional data are stored in dedicated Block RAM (BRAM) and functional state logic in Flip-Flops. Single Event Upsets (SEU) due to the ionizing radiation of neutrons cause soft errors, unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and soft-error repairing, when possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEU can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of the tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) when irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
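The "copy the protograph N times, then permute the edges" step can be made concrete with a small lifting routine. The sketch below is a generic illustration with a made-up 4x8 base matrix (rate 1/2), not the protograph of the family described above.

```python
# A minimal sketch of protograph lifting (copy-and-permute) for an LDPC code.
import numpy as np

rng = np.random.default_rng(0)

def lift(base, N):
    """Replace each '1' in the base matrix by a random NxN permutation matrix.

    Parallel edges (base entries > 1) would be handled by summing distinct
    permutations; this sketch keeps the base matrix binary for simplicity.
    """
    rows, cols = base.shape
    H = np.zeros((rows * N, cols * N), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            if base[r, c]:
                perm = np.eye(N, dtype=np.uint8)[rng.permutation(N)]
                H[r*N:(r+1)*N, c*N:(c+1)*N] = perm
    return H

base = np.array([[1, 1, 0, 1, 1, 0, 1, 0],      # 4x8 base graph -> rate 1/2
                 [0, 1, 1, 1, 0, 1, 0, 1],
                 [1, 0, 1, 0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 1, 0, 1, 1]])
H = lift(base, N=64)
print(H.shape)        # (256, 512): one lifted parity-check matrix instance
```

Changing the lifting size N scales the block length while preserving the degree structure of the protograph.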
Baum, John M; Monhaut, Nanette M; Parker, Donald R; Price, Christopher P
2006-06-01
Two independent studies reported that 16% of people who self-monitor blood glucose used incorrectly coded meters. The degree of analytical error, however, was not characterized. Our study objectives were to demonstrate that miscoding can cause analytical errors and to characterize the potential amount of bias that can occur. The impact of calibration error with three blood glucose monitoring systems (BGMSs), one of which has an autocoding feature, is reported. Fresh capillary fingerstick blood from 50 subjects, 18 men and 32 women ranging in age from 23 to 82 years, was used to measure glucose with three BGMSs. Two BGMSs required manual coding and were purposely miscoded using numbers different from the one recommended for the reagent lot used. Two properly coded meters of each BGMS were included to assess within-system variability. Different reagent lots were used to challenge a third system that had autocoding capability and could not be miscoded. Some within-system comparisons showed deviations of greater than +/-30% when results obtained with miscoded meters were compared with data obtained with ones programmed using the correct code number. Similar erroneous results were found when the miscoded meter results were compared with those obtained with a glucose analyzer. For some miscoded meter and test strip combinations, error grid analysis showed that 90% of results fell into zones indicating altered clinical action. Such inaccuracies were not found with the BGMS having the autocoding feature. When certain meter code number settings of two BGMSs were used in conjunction with test strips having code numbers that did not match, statistically and clinically inaccurate results were obtained. Coding errors resulted in analytical errors of greater than +/-30% (-31.6 to +60.9%). These results confirm the value of a BGMS with an automatic coding feature.
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive, energy-efficient hybrid coded-modulation scheme which, in addition to amplitude, phase, and polarization state, employs the spatial modes as additional basis functions for multidimensional coded modulation.
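The rate-adaptation rule described above can be sketched very simply: given the OSNR reported by the monitoring channel, pick the (constellation size, code rate) pair whose spectral efficiency (bits per symbol times code rate) comes closest to an estimated channel capacity without exceeding it. The capacity model and the menu of rates and constellations below are illustrative assumptions, not the paper's values.

```python
# A minimal sketch of joint code-rate and constellation-size selection.
import math

CODE_RATES = [0.8, 0.85, 0.9]                  # fixed codeword length, varying rate
CONSTELLATIONS = {4: 2, 16: 4, 64: 6}          # constellation size -> bits per symbol

def awgn_capacity_bits_per_symbol(snr_db):
    """Shannon capacity of a complex AWGN channel, a rough stand-in for OSNR."""
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr)

def select_mode(snr_db):
    cap = awgn_capacity_bits_per_symbol(snr_db)
    best = None
    for size, bps in CONSTELLATIONS.items():
        for rate in CODE_RATES:
            eff = bps * rate                    # net information bits per symbol
            if eff <= cap and (best is None or eff > best[0]):
                best = (eff, size, rate)
    return best

print(select_mode(8.0))    # low-OSNR channel -> small constellation, modest rate
print(select_mode(18.0))   # high-OSNR channel -> larger constellation, higher rate
```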
Error-Rate Bounds for Coded PPM on a Poisson Channel
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
RACER: Effective Race Detection Using AspectJ
NASA Technical Reports Server (NTRS)
Bodden, Eric; Havelund, Klaus
2008-01-01
The limits of coding with joint constraints on detected and undetected error rates
Programming errors occur frequently in large software systems, and even more so if these systems are concurrent. In the past, researchers have developed specialized programs to aid programmers in detecting concurrent programming errors such as deadlocks, livelocks, starvation and data races. In this work we propose a language extension to the aspect-oriented programming language AspectJ, in the form of three new built-in pointcuts, lock(), unlock() and maybeShared(), which allow programmers to monitor program events where locks are granted or handed back, and where values are accessed that may be shared amongst multiple Java threads. We decide thread-locality using a static thread-local-objects analysis developed by others. Using the three new primitive pointcuts, researchers can directly implement efficient monitoring algorithms to detect concurrent programming errors online. As an example, we present a new algorithm which we call RACER, an adaptation of the well-known ERASER algorithm to the memory model of Java. We implemented the new pointcuts as an extension to the AspectBench Compiler, implemented the RACER algorithm using this language extension and then applied the algorithm to the NASA K9 Rover Executive. Our experiments proved our implementation very effective: in the Rover Executive, RACER finds 70 data races, only one of which was previously known. We further applied the algorithm to two other multi-threaded programs written by computer science researchers, in which we found races as well.
Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin
2012-01-01
The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error corrective (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
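The throughput advantage can be seen with a quick count, under the usual assumption that QPPM places one pulse per 4-slot symbol while 16C4PPM places 4 pulses anywhere in a 16-slot frame, so both have the same pulse duty cycle. The combinatorial symbol then carries floor(log2 C(16,4)) = 10 usable bits versus 8 bits for four QPPM symbols in the same 16 slots.

```python
# A small worked comparison of QPPM and 16C4PPM throughput per 16 slots.
from math import comb, floor, log2

slots, pulses = 16, 4

qppm_bits = (slots // 4) * 2                     # 2 bits per 4-slot QPPM symbol
cppm_bits = floor(log2(comb(slots, pulses)))     # usable bits per 16C4PPM symbol

print(qppm_bits, cppm_bits)                      # 8 vs 10 bits per 16 slots
print(cppm_bits / qppm_bits)                     # 1.25x raw throughput
```

The extra 25% of raw throughput is what leaves room for FEC redundancy without lowering the net data rate relative to uncoded QPPM.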
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS. However, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008) which covered a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors resulted in a reduced difference between the potential and real ICU financial supplements obtained in the respective years (€194,139 loss in 2007 vs. a €1628 loss in 2008). Using specific computer programs improves the intensive process of manual coding by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Measuring and Specifying Combinatorial Coverage of Test Input Configurations
Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu
2015-01-01
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
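A minimal sketch of the coverage measurement itself: for each t-way combination of parameters, count how many of the possible value combinations appear in at least one test. The tiny parameter domains and test set below are invented examples, not data from the paper.

```python
# A minimal sketch of measuring t-way combinatorial coverage of a test set.
from itertools import combinations, product

def t_way_coverage(tests, domains, t):
    """tests: list of tuples; domains: list of value lists, one per parameter."""
    covered, total = 0, 0
    for params in combinations(range(len(domains)), t):
        all_combos = set(product(*(domains[p] for p in params)))
        seen = {tuple(test[p] for p in params) for test in tests}
        covered += len(all_combos & seen)
        total += len(all_combos)
    return covered / total

domains = [[0, 1], [0, 1], ['a', 'b', 'c']]
tests = [(0, 0, 'a'), (1, 1, 'b'), (0, 1, 'c'), (1, 0, 'a')]
print(t_way_coverage(tests, domains, t=2))   # pairwise coverage of this test set
```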
Seligman, Sarah C; Giovannetti, Tania; Sestito, John; Libon, David J
2014-01-01
Mild functional difficulties have been associated with early cognitive decline in older adults and increased risk for conversion to dementia in mild cognitive impairment, but our understanding of this decline has been limited by a dearth of objective methods. This study evaluated the reliability and validity of a new system to code subtle errors on an established performance-based measure of everyday action and described preliminary findings within the context of a theoretical model of action disruption. Here 45 older adults completed the Naturalistic Action Test (NAT) and neuropsychological measures. NAT performance was coded for overt errors, and subtle action difficulties were scored using a novel coding system. An inter-rater reliability coefficient was calculated. Validity of the coding system was assessed using a repeated-measures ANOVA with NAT task (simple versus complex) and error type (overt versus subtle) as within-group factors. Correlation/regression analyses were conducted among overt NAT errors, subtle NAT errors, and neuropsychological variables. The coding of subtle action errors was reliable and valid, and episodic memory breakdown predicted subtle action disruption. Results suggest that the NAT can be useful in objectively assessing subtle functional decline. Treatments targeting episodic memory may be most effective in addressing early functional impairment in older age.
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
Dopamine reward prediction error coding
Schultz, Wolfram
2016-01-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
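The core quantity in both records above is the prediction error, the difference between the received and the predicted reward, which then updates the prediction. A minimal Rescorla-Wagner-style sketch follows; the learning rate and reward schedule are illustrative assumptions.

```python
# A minimal sketch of reward prediction error driving a value update.
learning_rate = 0.2
value = 0.0                      # current reward prediction for a cue

for trial, reward in enumerate([1, 1, 1, 0, 1, 1, 1, 1], start=1):
    prediction_error = reward - value          # positive, zero, or negative
    value += learning_rate * prediction_error  # prediction moves toward the reward
    print(f"trial {trial}: delta = {prediction_error:+.3f}, value = {value:.3f}")
```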
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder, G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_(k-1)), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
Error-correcting pairs for a public-key cryptosystem
NASA Astrophysics Data System (ADS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-06-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS codes, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to efficiently retrieve an error-correcting pair. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Analysis of error-correction constraints in an optical disk
NASA Astrophysics Data System (ADS)
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S
2015-01-01
The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk priority numbers (RPNs). (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
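For readers unfamiliar with FMEA scoring, each failure mode is rated for severity, occurrence, and detectability, and the product of the three (the risk priority number) ranks which process nodes to address first. A minimal sketch with invented failure modes and scores follows; these are placeholders, not the study's data.

```python
# A minimal sketch of ranking FMEA failure modes by risk priority number (RPN).
failure_modes = [
    # (description,                                    severity, occurrence, detectability)
    ("wrong applicator size selected",                        8,          3,             4),
    ("prescription entered for wrong fraction dose",          9,          2,             3),
    ("plan parameters mistyped at treatment console",         7,          4,             2),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {s * o * d:3d}  {desc}")
```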
Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.
We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.
SSL/TLS Vulnerability Detection Using Black Box Approach
NASA Astrophysics Data System (ADS)
Gunawan, D.; Sitorus, E. H.; Rahmat, R. F.; Hizriadi, A.
2018-03-01
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that provide data encryption to secure communication over a network. However, in some cases, vulnerabilities are found in implementations of SSL/TLS because of weak cipher keys, certificate validation errors, or session handling errors. One of the most serious SSL/TLS vulnerabilities is Heartbleed. As security is essential in data communication, this research aims to build a scanner that detects SSL/TLS vulnerabilities by using a black box approach, focusing on the Heartbleed case. In addition, this research also gathers information about the existing SSL/TLS configuration on the server. The black box approach is used to test the output of a system without knowing the process inside the system itself. For testing purposes, this research scanned websites and found that some of them still have SSL/TLS vulnerabilities. Thus, the black box approach can be used to detect the vulnerability without considering the source code and the process inside the application.
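The black box idea above can be illustrated with a short probe that gathers a server's negotiated TLS parameters from the outside, using only Python's standard ssl and socket modules; the host name is a placeholder, and the sketch deliberately stops short of an actual Heartbleed test, which would require crafting a malformed heartbeat request.

```python
import socket, ssl

def probe_tls(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Black-box probe: connect to the server and report the negotiated
    protocol version, cipher suite, and peer certificate subject."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return {
                "protocol": tls.version(),        # e.g. 'TLSv1.3'
                "cipher": tls.cipher(),           # (name, protocol, secret_bits)
                "subject": tls.getpeercert().get("subject"),
            }

if __name__ == "__main__":
    print(probe_tls("example.org"))   # placeholder host
```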
Array coding for large data memories
NASA Technical Reports Server (NTRS)
Tranter, W. H.
1982-01-01
It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words having M symbols in each word. The probability of undetected error is considered, taking into account three symbol error probabilities which are of interest, and a formula for determining the probability of undetected error is presented. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p. Two different schemes are found to be of interest. The conducted analysis of array coding shows that the probability of undetected error is very small even for relatively large arrays.
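As an illustrative sketch only (not the formula derived in the report), the undetected-error probability can be computed for one simple assumed case: each M-symbol word carries a single parity check, so an error pattern escapes detection only if it flips an even, nonzero number of symbols; an array of N independent words then follows by a one-line complement.

```python
from math import comb

def p_undetected_word(M: int, p: float) -> float:
    """Undetected-error probability for one M-symbol word protected by a single
    parity check (illustrative assumption, not the report's scheme): an error
    pattern goes undetected iff an even, nonzero number of symbols are in error."""
    return sum(comb(M, k) * p**k * (1 - p)**(M - k) for k in range(2, M + 1, 2))

def p_undetected_array(N: int, M: int, p: float) -> float:
    """Probability that at least one of N independent words has an undetected error."""
    return 1.0 - (1.0 - p_undetected_word(M, p))**N

print(p_undetected_array(N=1000, M=32, p=1e-4))   # remains small even for large arrays
```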
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability; therefore, MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multistage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
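The distinction between MLD and bitwise MAP decoding can be illustrated by brute force on a small code; the sketch below enumerates all codewords of a (7,4) Hamming code, computes per-bit log-likelihood ratios over a BPSK/AWGN channel, and returns both the ML codeword and the MAP bit decisions with their soft outputs. The code, noise level, and generator matrix are arbitrary illustrative choices, not the chapter's construction.

```python
import itertools
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])   # (7,4) Hamming generator matrix

codebook = np.array([(np.array(u) @ G) % 2 for u in itertools.product([0, 1], repeat=4)])

def decode(r, sigma):
    """r: received real vector (BPSK 0 -> +1, 1 -> -1, plus AWGN); sigma: noise std."""
    s = 1.0 - 2.0 * codebook                                        # BPSK image of every codeword
    metrics = np.exp(-np.sum((r - s) ** 2, axis=1) / (2 * sigma**2))  # proportional to p(r|c)
    ml_codeword = codebook[np.argmax(metrics)]                      # MLD: most likely codeword
    # bitwise MAP: LLR_i = log P(c_i = 0 | r) / P(c_i = 1 | r); the sign gives the bit decision
    llr = np.array([np.log(metrics[codebook[:, i] == 0].sum() /
                           metrics[codebook[:, i] == 1].sum()) for i in range(7)])
    map_bits = (llr < 0).astype(int)
    return ml_codeword, map_bits, llr

rng = np.random.default_rng(0)
c = codebook[5]
r = (1.0 - 2.0 * c) + 0.8 * rng.standard_normal(7)
print(decode(r, sigma=0.8))
```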
Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.
Waheed, Khuram; Salem, Fathi M
2005-07-01
Code division multiple access (CDMA) is based on the spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a lot of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling the highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA) that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding the linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
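A minimal sketch of the symbol interleaver/deinterleaver idea, assuming a plain block interleaver of depth 4: four consecutive RS codewords are written row by row into a 4 x 255 array and read out column by column, so a burst at the inner decoder output is spread across the four outer codewords. The depth-4 diagram referred to in the abstract is not reproduced here; this is only a generic illustration.

```python
DEPTH, N = 4, 255   # interleaving depth and (255,223) RS codeword length

def interleave(codewords):
    """codewords: list of DEPTH lists of N symbols each; returns one stream of DEPTH*N symbols."""
    assert len(codewords) == DEPTH and all(len(cw) == N for cw in codewords)
    return [codewords[row][col] for col in range(N) for row in range(DEPTH)]

def deinterleave(stream):
    """Inverse operation: rebuild the DEPTH codewords from the interleaved stream."""
    assert len(stream) == DEPTH * N
    codewords = [[0] * N for _ in range(DEPTH)]
    for idx, sym in enumerate(stream):
        row, col = idx % DEPTH, idx // DEPTH
        codewords[row][col] = sym
    return codewords

# a burst of b consecutive channel symbols touches at most ceil(b / DEPTH) symbols per codeword
cws = [[(r * N + c) % 256 for c in range(N)] for r in range(DEPTH)]
assert deinterleave(interleave(cws)) == cws
```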
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
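A toy sketch of the syndrome source coding idea, using the parity-check matrix of a (7,4) Hamming code as an assumed example: a sparse 7-bit source block is compressed to its 3-bit syndrome, and the decoder reconstructs it as the minimum-weight pattern (coset leader) with that syndrome, which is exact whenever the block contains at most one 1.

```python
import numpy as np

H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])   # parity-check matrix of a (7,4) Hamming code

def compress(source_block):
    """Treat the 7-bit source block as an 'error pattern'; its 3-bit syndrome is the compressed data."""
    return tuple((H @ np.array(source_block)) % 2)

def decompress(syndrome):
    """Return the coset leader: the all-zero block if the syndrome is zero, otherwise the
    weight-1 block whose single 1 sits at the position whose column of H equals the syndrome."""
    block = np.zeros(7, dtype=int)
    if any(syndrome):
        pos = next(j for j in range(7) if tuple(H[:, j]) == tuple(syndrome))
        block[pos] = 1
    return block.tolist()

src = [0, 0, 0, 0, 0, 1, 0]          # sparse source block (weight <= 1, so reconstruction is lossless)
assert decompress(compress(src)) == src
```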
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
The BCH (Bose-Chaudhuri-Hocquenghem) code, a class of cyclic block codes, has very strong error-correcting ability, which is vital for protecting a memory system against errors. BCH codes admit many decoding algorithms; among them, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous because it corrects errors through simple calculations for a designed error-correcting capability t. However, it becomes problematic (division by zero) when the actual number of errors ν differs from t. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles the cases ν = 0 to 3. First, production cost is lower thanks to the smaller number of gates. Second, reduced power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (error correction code/circuit) in small-footprint SoC (system-on-chip) memory systems.
Mull, Hillary J; Borzecki, Ann M; Loveland, Susan; Hickson, Kathleen; Chen, Qi; MacDonald, Sally; Shin, Marlena H; Cevasco, Marisa; Itani, Kamal M F; Rosen, Amy K
2014-04-01
The Patient Safety Indicators (PSIs) use administrative data to screen for select adverse events (AEs). In this study, VA Surgical Quality Improvement Program (VASQIP) chart review data were used as the gold standard to measure the criterion validity of 5 surgical PSIs. Independent chart review was also used to determine reasons for PSI errors. The sensitivity, specificity, and positive predictive value of PSI software version 4.1a were calculated among Veterans Health Administration hospitalizations (2003-2007) reviewed by VASQIP (n = 268,771). Nurses re-reviewed a sample of hospitalizations for which PSI and VASQIP AE detection disagreed. Sensitivities ranged from 31% to 68%, specificities from 99.1% to 99.8%, and positive predictive values from 31% to 72%. Reviewers found that coding errors accounted for some PSI-VASQIP disagreement; some disagreement was also the result of differences in AE definitions. These results suggest that the PSIs have moderate criterion validity; however, some surgical PSIs detect different AEs than VASQIP. Future research should explore using both methods to evaluate surgical quality. Published by Elsevier Inc.
Multiuser receiver for DS-CDMA signals in multipath channels: an enhanced multisurface method.
Mahendra, Chetan; Puthusserypady, Sadasivan
2006-11-01
This paper deals with the problem of multiuser detection in direct-sequence code-division multiple-access (DS-CDMA) systems in multipath environments. The existing multiuser detectors can be divided into two categories: (1) low-complexity poor-performance linear detectors and (2) high-complexity good-performance nonlinear detectors. In particular, in channels where the orthogonality of the code sequences is destroyed by multipath, detectors with linear complexity perform much worse than the nonlinear detectors. In this paper, we propose an enhanced multisurface method (EMSM) for multiuser detection in multipath channels. EMSM is an intermediate piecewise linear detection scheme with a run-time complexity linear in the number of users. Its bit error rate performance is compared with existing linear detectors, a nonlinear radial basis function detector trained by the new support vector learning algorithm, and Verdu's optimal detector. Simulations in multipath channels, for both synchronous and asynchronous cases, indicate that it always outperforms all other linear detectors, performing nearly as well as nonlinear detectors.
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
Concatenated coding for low data rate space communications.
NASA Technical Reports Server (NTRS)
Chen, C. H.
1972-01-01
In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate also at a very low level, it is necessary to use a sophisticated coding system (longer code) without excessive decoding complexity. The concatenated coding has been shown to meet such requirements in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. Performance comparison of the three concatenated codes is made.
Chen, Meili; Hu, Yibo; Liu, Jingxing; Wu, Qi; Zhang, Chenglin; Yu, Jun; Xiao, Jingfa; Wei, Fuwen; Wu, Jiayan
2015-12-11
High-quality and complete gene models are the basis of whole genome analyses. The giant panda (Ailuropoda melanoleuca) genome was the first genome sequenced on the basis of solely short reads, but the genome annotation had lacked the support of transcriptomic evidence. In this study, we applied RNA-seq to globally improve the genome assembly completeness and to detect novel expressed transcripts in 12 tissues from giant pandas, by using a transcriptome reconstruction strategy that combined reference-based and de novo methods. Several aspects of genome assembly completeness in the transcribed regions were effectively improved by the de novo assembled transcripts, including genome scaffolding, the detection of small-size assembly errors, the extension of scaffold/contig boundaries, and gap closure. Through expression and homology validation, we detected three groups of novel full-length protein-coding genes. A total of 12.62% of the novel protein-coding genes were validated by proteomic data. GO annotation analysis showed that some of the novel protein-coding genes were involved in pigmentation, anatomical structure formation and reproduction, which might be related to the development and evolution of the black-white pelage, pseudo-thumb and delayed embryonic implantation of giant pandas. The updated genome annotation will help further giant panda studies from both structural and functional perspectives.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 10^-6. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
Accumulate-Repeat-Accumulate-Accumulate Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy
2007-01-01
Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and low maximum variable node degree.
Ultrahigh Error Threshold for Surface Codes with Biased Noise
NASA Astrophysics Data System (ADS)
Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.
2018-02-01
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
Death Certification Errors and the Effect on Mortality Statistics.
McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth
Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors in place of death ( P < .001). Certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.
Controlling qubit drift by recycling error correction syndromes
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
Digital Systems Validation Handbook. Volume 2. Chapter 18. Avionic Data Bus Integration Technology
1993-11-01
…interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion software, which make up digital… 1984, the Sperry Corporation developed a fault-tolerant system which employed multiversion programming, voting, and monitoring for error detection and… formulate all the significant behavior of a system. MULTIVERSION PROGRAMMING: see N-version programming. N-VERSION PROGRAMMING: the independent coding of a…
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions, namely the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against the fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
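A toy sketch, under simplifying assumptions, of how a chaotic map can act as a pure analog encoder: the source sample is taken as the initial condition, the transmitted codeword is the sequence of iterates of the expanding (baker-type) map 2x mod 1, and ML decoding is a brute-force search over candidate initial conditions. The mirrored and single-input baker's codes of the paper differ in detail from this sketch.

```python
import numpy as np

def baker_stretch(x):
    """Expanding component of the baker's map on [0, 1)."""
    return (2.0 * x) % 1.0

def encode(s, n_out):
    """Analog codeword: n_out successive iterates of the map started at source sample s,
    centered around zero for transmission (illustrative bandwidth expansion of n_out)."""
    x, out = s, []
    for _ in range(n_out):
        out.append(x - 0.5)
        x = baker_stretch(x)
    return np.array(out)

def ml_decode(r, n_out, grid=4096):
    """Brute-force ML decoding: the candidate initial condition whose codeword is closest to r."""
    cands = np.arange(grid) / grid
    return min(cands, key=lambda s: float(np.sum((encode(s, n_out) - r) ** 2)))

rng = np.random.default_rng(1)
s = 0.3711
r = encode(s, 4) + 0.05 * rng.standard_normal(4)   # AWGN channel
print(ml_decode(r, 4))                             # close to 0.3711
```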
Moments of inclination error distribution computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1992-01-01
The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that those errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
Polar codes are a new type of forward error correction (FEC) codes, introduced by Arikan in [1], in which he… error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are… good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector…
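For reference, a minimal non-systematic polar encoder matching the generic description above (a row vector u of length N) can be written as a recursive butterfly. The sketch below computes x = u·F^(kron n) over GF(2) with F = [[1,0],[1,1]]; the bit-reversal permutation is omitted, and the choice of information positions (the subvector uA) is treated as an assumption about the surrounding system rather than something specified by the report.

```python
import numpy as np

def polar_encode(u):
    """Non-systematic polar encoding x = u * F^(kron n) over GF(2), F = [[1,0],[1,1]].
    Bit-reversal permutation omitted; frozen positions are assumed already set to 0 in u."""
    x = np.array(u, dtype=int) % 2
    N = len(x)
    assert N > 0 and N & (N - 1) == 0, "length must be a power of two"
    step = 1
    while step < N:
        for i in range(0, N, 2 * step):
            # butterfly: the upper half absorbs the XOR of both halves
            x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
        step *= 2
    return x

u = np.zeros(8, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]   # information bits placed on (assumed) reliable positions
print(polar_encode(u))
```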
Fault-tolerance thresholds for the surface code with fabrication errors
NASA Astrophysics Data System (ADS)
Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.
2017-10-01
The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a nontrivial topology such that the quantum information is encoded in the global degrees of freedom (i.e., the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors—permanent faulty components such as missing physical qubits or failed entangling gates—introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the test bed. A known approach to mitigate defective lattices involves the use of primitive swap gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or use of swap gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect-matching decoder. Our numerical analysis is most applicable to two-dimensional chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8 % qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1 % .
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1995-01-01
This report focuses on the results obtained during the PI's recent sabbatical leave at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from January 1, 1995 through June 30, 1995. Two projects investigated various properties of TURBO codes, a new form of concatenated coding that achieves near channel capacity performance at moderate bit error rates. The performance of TURBO codes is explained in terms of the code's distance spectrum. These results explain both the near capacity performance of the TURBO codes and the observed 'error floor' for moderate and high signal-to-noise ratios (SNR's). A semester project, entitled 'The Realization of the Turbo-Coding System,' involved a thorough simulation study of the performance of TURBO codes and verified the results claimed by previous authors. A copy of the final report for this project is included as Appendix A. A diploma project, entitled 'On the Free Distance of Turbo Codes and Related Product Codes,' includes an analysis of TURBO codes and an explanation for their remarkable performance. A copy of the final report for this project is included as Appendix B.
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
NASA Astrophysics Data System (ADS)
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increase the memory density, with the problem of crosstalk to be dealt with. We apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage based on the investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation is useful to improve Bit Error Rate (BER) performance. At the same time we can conclude that LDPC codes outperform RS codes in the crosstalk channel.
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
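The connectivity measure named in the abstract can be computed directly with standard graph tooling. The sketch below uses networkx (with scipy installed) on two toy graphs that merely stand in for phenotypic graphs, showing that a sparser graph has a smaller algebraic connectivity, the quantity the abstract associates with better error tolerance; the actual SGC graphs of the paper are not reconstructed here.

```python
import networkx as nx   # algebraic_connectivity requires scipy

# Toy stand-ins for phenotypic graphs (NOT the SGC graphs of the paper):
# nodes would be phenotypes, edges single-base-change reachability.
sparse_graph = nx.path_graph(8)       # weakly connected phenotypes
dense_graph = nx.complete_graph(8)    # every phenotype adjacent to every other

for name, g in [("path graph", sparse_graph), ("complete graph", dense_graph)]:
    lam2 = nx.algebraic_connectivity(g)   # second-smallest Laplacian eigenvalue
    print(f"{name}: algebraic connectivity = {lam2:.4f}")
```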
New coding advances for deep space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.
1987-01-01
Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10^-6 at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity. Choosing a video encoding method with an optimal quality/volume ratio is one of the most pressing problems, given the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used to represent the video stream. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties introduced by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to research the influence of video compression on the measurement error in television systems. Measurement error of the object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system can be identified as one source of error in television system measurements; the method of processing the received video signal is another. The presence of errors leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other if a corresponding orthogonal transformation can be found. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images; excluding these zero coefficients further reduces the digital stream. The discrete cosine transformation is the most widely used of the possible orthogonal transformations. Errors of television measuring systems and data compression protocols are analyzed in this paper. The main characteristics of measuring systems are described and the sources of their errors are identified. The most effective methods of video compression are determined. The influence of video compression error on television measuring systems is researched; the obtained results will increase the accuracy of measuring systems. In a television image quality measuring system, the distortions comprise those identical to distortions in analog systems and specific distortions resulting from the process of coding/decoding the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, mosaic effect, the "mosquito" effect, edging on sharp brightness drops, blurred colors, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The process of encoding/decoding an image is non-linear in space and in time, because the quality of playback at the receiver depends randomly on the pre- and post-history, that is, on the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.
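The intra-coding argument above (decorrelating transform, then discarding near-zero coefficients) can be illustrated with a small 8x8 block and the discrete cosine transform; the block content and threshold are arbitrary, and the sketch only shows that zeroing small DCT coefficients shrinks the data while introducing a bounded reconstruction error of the kind that propagates into a television measuring system.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT of an image block (orthonormal)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
block = 0.5 + 0.4 * np.outer(np.sin(np.pi * x), np.cos(np.pi * x)) + 0.02 * rng.standard_normal((8, 8))

coeffs = dct2(block)
kept = np.where(np.abs(coeffs) > 0.05, coeffs, 0.0)   # drop near-zero coefficients
recon = idct2(kept)

print("coefficients kept:", int(np.count_nonzero(kept)), "of 64")
print("max reconstruction error:", float(np.abs(recon - block).max()))
```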
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy--specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%) with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
Experimental implementation of the Bacon-Shor code with 10 entangled photons
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Sanders, Barry C.
The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center(NSF Grant PHY-1125565) with support of the Moore Foundation(GBMF-2644).
A Framework for Identifying and Classifying Undergraduate Student Proof Errors
ERIC Educational Resources Information Center
Strickland, S.; Rand, B.
2016-01-01
This paper describes a framework for identifying, classifying, and coding student proofs, modified from existing proof-grading rubrics. The framework includes 20 common errors, as well as categories for interpreting the severity of the error. The coding scheme is intended for use in a classroom context, for providing effective student feedback. In…
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-11-18
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by traditional discriminator, or Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration-which are the basis of tracking error estimation-are analyzed in detail. After that, the probability distribution of estimation noise of four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of observation noise of non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of discriminator, coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through the carefully designed experiment scenario. The pre-filter outperforms traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
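As a point of reference for the discriminator discussed above, the four-quadrant arctangent (ATAN2) carrier phase discriminator maps the prompt correlator outputs to a phase error estimate whose unambiguous range is (-0.5, 0.5] cycle, compared with (-0.25, 0.25) cycle for the two-quadrant Costas form; the numbers below are illustrative only and do not reproduce the paper's pre-filter.

```python
import math

def atan2_phase_error_cycles(i_prompt, q_prompt):
    """Four-quadrant arctangent (ATAN2) carrier phase discriminator.
    Returns the estimated carrier phase error in cycles, in (-0.5, 0.5]."""
    return math.atan2(q_prompt, i_prompt) / (2.0 * math.pi)

def costas_phase_error_cycles(i_prompt, q_prompt):
    """Two-quadrant (Costas) arctangent discriminator, insensitive to the data-bit sign,
    with the narrower unambiguous range (-0.25, 0.25) cycle."""
    if i_prompt == 0.0:
        return math.copysign(0.25, q_prompt)
    return math.atan(q_prompt / i_prompt) / (2.0 * math.pi)

# example: a true phase error of 0.4 cycle is recovered by ATAN2 but aliased by the Costas form
phi = 0.4 * 2.0 * math.pi
i_p, q_p = math.cos(phi), math.sin(phi)
print(atan2_phase_error_cycles(i_p, q_p), costas_phase_error_cycles(i_p, q_p))
```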
Self-recovery fragile watermarking algorithm based on SPIHT
NASA Astrophysics Data System (ADS)
Xin, Li Ping
2015-12-01
A fragile watermarking algorithm based on SPIHT coding is proposed which can recover the original image itself. The novelty of the algorithm is that it can both locate tampering and perform self-restoration, with very good recovery results. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of embedded watermark data. The watermark data are then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit planes of the gray-level image, the watermarked image retains good visual quality. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover the tampered image and realize blind detection. The peak signal-to-noise ratio of the watermarked image was higher than that of other similar algorithms. The attack resistance of the algorithm was enhanced.
Robust pattern decoding in shape-coded structured light
NASA Astrophysics Data System (ADS)
Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai
2017-09-01
Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for the shape-coded structured light in which the pattern is designed as a grid shape with embedded geometrical shapes. In our decoding method, advancements are made at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature point, which is the intersection of two orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and the deep neural network technique is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a mass of pattern elements with various blurring and distortions. Third, an error correction mechanism based on epipolar constraint, coplanarity constraint and topological constraint is presented to reduce the false matches. In the experiments, several complex objects including a human hand are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
Data acquisition and path selection decision making for an autonomous roving vehicle
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Shen, C. N.; Yerazunis, S. W.
1976-01-01
Problems related to the guidance of an autonomous rover for unmanned planetary exploration were investigated. Topics included in these studies were: simulation on an interactive graphics computer system of the Rapid Estimation Technique for detection of discrete obstacles; incorporation of a simultaneous Bayesian estimate of states and inputs in the Rapid Estimation Scheme; development of methods for estimating actual laser rangefinder errors and their application to date provided by Jet Propulsion Laboratory; and modification of a path selection system simulation computer code for evaluation of a hazard detection system based on laser rangefinder data.
Measurement-free implementations of small-scale surface codes for quantum-dot qubits
NASA Astrophysics Data System (ADS)
Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.
2018-01-01
The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Error rates and resource overheads of encoded three-qubit gates
NASA Astrophysics Data System (ADS)
Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.
2017-10-01
A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.
Reinventing radiology reimbursement.
Marshall, John; Adema, Denise
2005-01-01
Lee Memorial Health System (LMHS), located in southwest Florida, consists of 5 hospitals, a home health agency, a skilled nursing facility, multiple outpatient centers, walk-in medical centers, and primary care physician offices. LMHS annually performs more than 300,000 imaging procedures with gross imaging revenues exceeding $350 million. In fall 2002, LMHS received the results of an independent audit of its IR coding. The overall IR coding error rate was determined to be 84.5%. The projected net financial impact of these errors was an annual reimbursement loss of $182,000. To address the issues of coding errors and reimbursement loss, LMHS implemented its clinical reimbursement specialist (CRS) system in October 2003, as an extension of financial services' reimbursement division. LMHS began with CRSs in 3 service lines: emergency department, cardiac catheterization, and radiology. These 3 CRSs coordinate all facets of their respective areas' chargemaster, patient charges, coding, and reimbursement functions while serving as a resident coding expert within their clinical areas. The radiology reimbursement specialist (RRS) combines an experienced radiologic technologist, interventional technologist, medical records coder, financial auditor, reimbursement specialist, and biller into a single position. The RRS's radiology experience and technologist knowledge are key assets to resolving coding conflicts and handling complex interventional coding. In addition, performing a daily charge audit and an active code review are essential if an organization is to eliminate coding errors. One of the inherent effects of eliminating coding errors is the capturing of additional RVUs and units of service. During its first year, based on account level detail, the RRS system increased radiology productivity through the additional capture of just more than 3,000 RVUs and 1,000 additional units of service. In addition, the physicians appreciate having someone who "keeps up with all the coding changes" and looks out for the charges. By assisting a few physicians' staff with coding questions, providing coding updates, and allowing them to sit in on educational sessions, at least 2 physicians have transferred some of their volume to LMHS from a competitor. The provision of a "clean account," without coding errors, allows the biller to avoid the rework and billing delays caused by coding issues. During the first quarter of the RRS system, the billers referred an average of 9 accounts per day for coding resolution. During the fourth quarter of the system, these referrals were reduced to less than one per day. Prior to the RRS system, resolving these issues took an average of 4 business days. Now the conflicts are resolved within 24 hours.
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and an error rate of over 10^-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.
Reed-Solomon error-correction as a software patch mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pendley, Kevin D.
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase allows that data to be applied to an existing installation to both validate it and introduce the changes or updates from the upstream source.
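As a rough illustration of the idea in this report, the sketch below (not the report's implementation) treats an installed file as a corrupted copy of the upstream file: Reed-Solomon parity computed per 223-byte block of the new version is shipped on its own, and the client decodes its old block plus that parity to recover the new block. It assumes the third-party Python package reedsolo and equal-length old/new blocks; a real patch mechanism would also handle insertions, deletions, and block framing.

```python
# Sketch: Reed-Solomon parity as a "patch", assuming the third-party
# `reedsolo` package (pip install reedsolo). Hypothetical example, not the
# implementation described in the report.
from reedsolo import RSCodec

NSYM = 32                 # parity bytes per block -> corrects up to 16 byte errors
BLOCK = 255 - NSYM        # 223 data bytes per RS(255, 223) block
rsc = RSCodec(NSYM)

def make_patch(new_data: bytes) -> list[bytes]:
    """Parity-only 'patch': NSYM parity bytes for each block of the new file."""
    patch = []
    for i in range(0, len(new_data), BLOCK):
        block = new_data[i:i + BLOCK]
        patch.append(bytes(rsc.encode(block))[-NSYM:])   # keep only the parity
    return patch

def apply_patch(old_data: bytes, patch: list[bytes]) -> bytes:
    """Recover the new file from the old file plus parity, block by block."""
    out = bytearray()
    for i, parity in enumerate(patch):
        old_block = old_data[i * BLOCK:(i + 1) * BLOCK]
        result = rsc.decode(bytes(old_block) + parity)
        # Newer reedsolo versions return (msg, msg+ecc, errata_pos); older return msg.
        decoded = result[0] if isinstance(result, tuple) else result
        out.extend(decoded)
    return bytes(out)

if __name__ == "__main__":
    new = bytes(range(200)) * 3                               # "upstream" codebase
    old = bytearray(new); old[10] ^= 0xFF; old[300] ^= 0x0F   # installed copy, slightly different
    assert apply_patch(bytes(old), make_patch(new)) == new
```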
Colour coding for blood collection tube closures - a call for harmonisation.
Simundic, Ana-Maria; Cornes, Michael P; Grankvist, Kjell; Lippi, Giuseppe; Nybo, Mads; Ceriotti, Ferruccio; Theodorsson, Elvar; Panteghini, Mauro
2015-02-01
At least one in 10 patients experience adverse events while receiving hospital care. Many of the errors are related to laboratory diagnostics. Efforts to reduce laboratory errors over recent decades have primarily focused on the measurement process while pre- and post-analytical errors including errors in sampling, reporting and decision-making have received much less attention. Proper sampling and additives to the samples are essential. Tubes and additives are identified not only in writing on the tubes but also by the colour of the tube closures. Unfortunately these colours have not been standardised, running the risk of error when tubes from one manufacturer are replaced by the tubes from another manufacturer that use different colour coding. EFLM therefore supports the worldwide harmonisation of the colour coding for blood collection tube closures and labels in order to reduce the risk of pre-analytical errors and improve the patient safety.
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
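For orientation, the sketch below evaluates the standard bounded-distance expression for the RS word error probability given an inner-decoder symbol error probability p, assuming independent symbol errors; the paper's functional model for the input symbol error probability is more refined than this.

```python
# Bounded-distance RS word-error probability, assuming independent symbol
# errors at probability p (a simplification; the paper models p more carefully).
from math import comb

def rs_word_error(p: float, n: int = 255, k: int = 223) -> float:
    t = (n - k) // 2                       # symbol-error correction capability
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

if __name__ == "__main__":
    for p in (1e-3, 5e-3, 1e-2):
        print(f"p = {p:.0e}  ->  P(word error) = {rs_word_error(p):.3e}")
```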
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Good agreement with the experimental results is shown by numerical simulation. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
The Impact of Bar Code Medication Administration Technology on Reported Medication Errors
ERIC Educational Resources Information Center
Holecek, Andrea
2011-01-01
The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…
RELAP5-3D Resolution of Known Restart/Backup Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesina, George L.; Anderson, Nolan A.
2014-12-01
The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks coding against code specifications only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a “verification file” that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
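The "verification file" idea, recording double-precision sums of key variables and comparing them between code versions, can be mimicked in a few lines; the sketch below is a hypothetical stand-in, not the RELAP5-3D testing system, and the variable names are invented.

```python
# Toy regression check in the spirit of a "verification file": record
# double-precision sums of key solution arrays and require exact agreement
# between consecutive code versions. Variable names are hypothetical.
import json

def verification_record(solution: dict[str, list[float]]) -> dict[str, float]:
    return {name: sum(values) for name, values in solution.items()}

def compare(old: dict[str, float], new: dict[str, float]) -> list[str]:
    """Return the names of variables whose sums changed (exact comparison)."""
    return [name for name in old if old[name] != new.get(name)]

if __name__ == "__main__":
    base = {"pressure": [1.0e5, 1.2e5], "void_fraction": [0.1, 0.3]}
    update = {"pressure": [1.0e5, 1.2e5], "void_fraction": [0.1, 0.300000001]}
    old_rec, new_rec = verification_record(base), verification_record(update)
    print(json.dumps(new_rec, indent=2))
    print("changed:", compare(old_rec, new_rec))
```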
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand, the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
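As a concrete reference point for the Viterbi baseline the authors compare against, the sketch below gives a minimal hard-decision Viterbi decoder for the common rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal; the syndrome-based decoder of the paper is not reproduced here.

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, K=3 convolutional
# code with generator polynomials (7, 5) in octal. Illustrative only; not the
# adaptive syndrome-based decoder described above.
G = (0b111, 0b101)   # generators, constraint length K = 3
NSTATES = 4          # state = two previous input bits

def encode(bits):
    state, out = 0, []
    for u in bits:
        reg = (u << 2) | state                    # register holds [u, u-1, u-2]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = (reg >> 1) & 0b11                 # shift: new state = (u, u-1)
    return out

def branch(state, u):
    """Return (next_state, two output bits) for input bit u from `state`."""
    reg = (u << 2) | state
    return (reg >> 1) & 0b11, [bin(reg & g).count("1") & 1 for g in G]

def viterbi_decode(received):
    INF = float("inf")
    metrics = [0] + [INF] * (NSTATES - 1)         # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metrics[s] == INF:
                continue
            for u in (0, 1):
                ns, out = branch(s, u)
                m = metrics[s] + sum(a != b for a, b in zip(out, r))
                if m < new_metrics[ns]:
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metrics, paths = new_metrics, new_paths
    return paths[metrics.index(min(metrics))]     # survivor with the best metric

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = encode(msg)
    coded[3] ^= 1                                  # inject one channel error
    assert viterbi_decode(coded) == msg
```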
MS-READ: Quantitative measurement of amino acid incorporation.
Mohler, Kyle; Aerni, Hans-Rudolf; Gassaway, Brandon; Ling, Jiqiang; Ibba, Michael; Rinehart, Jesse
2017-11-01
Ribosomal protein synthesis results in the genetically programmed incorporation of amino acids into a growing polypeptide chain. Faithful amino acid incorporation that accurately reflects the genetic code is critical to the structure and function of proteins as well as overall proteome integrity. Errors in protein synthesis are generally detrimental to cellular processes, yet emerging evidence suggests that proteome diversity generated through mistranslation may be beneficial under certain conditions. Cumulative translational error rates have been determined at the organismal level; however, codon-specific error rates and the spectrum of misincorporation errors from system to system remain largely unexplored. In particular, until recently technical challenges have limited the ability to detect and quantify comparatively rare amino acid misincorporation events, which occur orders of magnitude less frequently than canonical amino acid incorporation events. We now describe a technique for the quantitative analysis of amino acid incorporation that provides the sensitivity necessary to detect mistranslation events during translation of a single codon at frequencies as low as 1 in 10,000 for all 20 proteinogenic amino acids, as well as non-proteinogenic and modified amino acids. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments," Guest Editors: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.
Microcomputer Applications in Power and Propulsion Systems.
1981-03-01
Cites Wakerly, John, "Error Detecting Codes, Self-Checking Circuits and Applications," The Computer Science Library, North-Holland, NY. AGARD (Advisory Group for Aerospace Research and Development) Series No. 113, Microcomputer Applications in Power and Propulsion Systems. This document has been approved for public release and sale.
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
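For a sense of scale, the snippet below counts constant-weight words (K "on" bits in a block of N channel bits) and the corresponding maximum code rate; the actual sparse permutation code construction and its Monte Carlo test program are not reproduced here, and the N, K values shown are hypothetical.

```python
# Rate ceiling for words with exactly K "on" bits in a block of N channel bits.
# Illustrative numbers only; N, K here are hypothetical, not from the report.
from math import comb, log2

def constant_weight_rate(n: int, k: int) -> float:
    return log2(comb(n, k)) / n        # information bits per channel bit

if __name__ == "__main__":
    for n, k in ((64, 4), (64, 8), (256, 16)):
        print(f"N={n:3d} K={k:2d}  words={comb(n, k):.3e}  max rate={constant_weight_rate(n, k):.3f}")
```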
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and physical layers that use automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, the end-to-end distortion of the utility functions can be significantly reduced with an optimal number of slices per picture, especially at low signal-to-noise ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzpatrick, Richard
2007-09-24
Dr. Fitzpatrick has written an MHD code in order to investigate the interaction of tearing modes with flow and external magnetic perturbations, which has been successfully benchmarked against both linear and nonlinear theory and used to investigate error-field penetration in flowing plasmas. The same code was used to investigate the so-called Taylor problem. He employed the University of Chicago's FLASH code to further investigate the Taylor problem, discovering a new aspect of the problem. Dr. Fitzpatrick has written a 2-D Hall MHD code and used it to investigate the collisionless Taylor problem. Dr. Waelbroeck has performed an investigation of the scaling of the error-field penetration threshold in collisionless plasmas. Paul Watson and Dr. Fitzpatrick have written a fully-implicit extended-MHD code using the PETSC framework. Five publications have resulted from this grant work.
Moreira, Maria E; Hernandez, Caleb; Stevens, Allen D; Jones, Seth; Sande, Margaret; Blumen, Jason R; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S
2015-08-01
The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak
2014-11-01
Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present a MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals (2) standard deviation of peak-to-peak amplitudes (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded by simulation, laboratory-controlled and walking/stair-climbing experiments, respectively, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory controlled finger, forehead recorded PPG data and daily-activity movement data, our proposed algorithm gives 94.4, 93.4, and 93.7% accuracies, respectively. Significant reductions in HR and SpO2 errors (2.3 bpm and 2.7%) were noted when the artifacts that were identified by SVM-MNA were removed from the original signal than without (17.3 bpm and 5.4%). The accuracy and error values of our proposed method were significantly higher and lower, respectively, than all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, our MNA detection algorithm is real-time realizable as the computational speed on the 7-s PPG data segment was found to be only 7 ms with a Matlab code.
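A minimal sketch of the classification step, assuming scikit-learn and NumPy; the four features paraphrase the time-domain parameters listed in the abstract, and the training data here are synthetic placeholders rather than the authors' PPG recordings.

```python
# Toy clean-vs-corrupted PPG segment classifier in the spirit of the paper,
# assuming numpy and scikit-learn. Feature values below are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def segment_features(peak_times, peak_amps, sys_dia_ratios, pulse_shape_std):
    """Four time-domain features per PPG segment (per the abstract)."""
    return [
        np.std(np.diff(peak_times)),     # std of peak-to-peak intervals
        np.std(peak_amps),               # std of peak-to-peak amplitudes
        np.std(sys_dia_ratios),          # std of systolic/diastolic interval ratios
        float(pulse_shape_std),          # mean std of pulse shape
    ]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "clean" (label 0) and "corrupted" (label 1) segments.
    X, y = [], []
    for label, jitter in ((0, 0.01), (1, 0.2)):
        for _ in range(50):
            peaks = np.cumsum(1.0 + jitter * rng.standard_normal(20))
            amps = 1.0 + jitter * rng.standard_normal(20)
            ratios = 0.4 + jitter * rng.standard_normal(20)
            X.append(segment_features(peaks, amps, ratios, jitter))
            y.append(label)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    print("predicted:", clf.predict(X[:3] + X[-3:]))
```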
Analysis on the optical aberration effect on spectral resolution of coded aperture spectroscopy
NASA Astrophysics Data System (ADS)
Hao, Peng; Chi, Mingbo; Wu, Yihui
2017-10-01
The coded aperture spectrometer can achieve high throughput and high spectral resolution by replacing the traditional single slit with two-dimensional array slits manufactured by MEMS technology. However, the sampling accuracy of the coded spectrum image will be distorted by system aberrations, machining errors, fixture errors, and so on, resulting in degraded spectral resolution. The factors that influence the spectral resolution are the decoding error, the spectral resolution of each column, and the column spectrum offset correction. For the Czerny-Turner spectrometer, the spectral resolution of each column depends mostly on astigmatism; in this coded aperture spectrometer, uncorrected astigmatism does result in degraded performance. Some method must be used to reduce or remove the limiting astigmatism. Field curvature and spectral curvature can also result in spectrum correction errors.
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low BER error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.
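For readers unfamiliar with LDPC decoding, the sketch below runs a toy hard-decision bit-flipping decoder on a small parity-check matrix; it is only a conceptual illustration, not the Euclidean-geometry codes or the flight decoder described here.

```python
# Toy hard-decision bit-flipping decoder on a small parity-check matrix.
# Conceptual only; the flight codes described above are far larger.
import numpy as np

# (7,4) Hamming parity-check matrix used as a stand-in for a sparse H.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=10):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                      # all parity checks satisfied
        # Count, for each bit, how many unsatisfied checks it participates in.
        unsatisfied = syndrome @ H
        r[np.argmax(unsatisfied)] ^= 1    # flip the most suspicious bit
    return r

if __name__ == "__main__":
    codeword = np.zeros(7, dtype=int)     # the all-zero word is always a codeword
    received = codeword.copy()
    received[2] ^= 1                      # single channel error
    print(bit_flip_decode(received, H))   # -> all zeros again
```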
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because the high efficiency that the OCDMA technique can achieve means the fibre bandwidth is fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN), and orthogonality between users in the system. The SeQ code shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes. Thus, the SeQ code enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ achieves a good BER of 10^-25 at 155 Mbps in comparison with the 622 Mbps, 1 Gbps, and 2 Gbps bit rates. From the plotted results, the 155 Mbps bit rate is fast enough for FTTH and LAN networks. Based on the superior performance of the SeQ code, these codes offer an opportunity for better quality of service in OCDMA-based optical access networks for future use.
Taylor, Jennifer A; Gerwin, Daniel; Morlock, Laura; Miller, Marlene R
2011-12-01
To evaluate the need for triangulating case-finding tools in patient safety surveillance. This study applied four case-finding tools to error-associated patient safety events to identify and characterise the spectrum of events captured by these tools, using puncture or laceration as an example for in-depth analysis. Retrospective hospital discharge data were collected for calendar year 2005 (n=48,418) from a large, urban medical centre in the USA. The study design was cross-sectional and used data linkage to identify the cases captured by each of four case-finding tools. Three case-finding tools (International Classification of Diseases external (E) and nature (N) of injury codes, Patient Safety Indicators (PSI)) were applied to the administrative discharge data to identify potential patient safety events. The fourth tool was Patient Safety Net, a web-based voluntary patient safety event reporting system. The degree of mutual exclusion among detection methods was substantial. For example, when linking puncture or laceration on unique identifiers, out of 447 potential events, 118 were identical between PSI and E-codes, 152 were identical between N-codes and E-codes and 188 were identical between PSI and N-codes. Only 100 events that were identified by PSI, E-codes and N-codes were identical. Triangulation of multiple tools through data linkage captures potential patient safety events most comprehensively. Existing detection tools target patient safety domains differently, and consequently capture different occurrences, necessitating the integration of data from a combination of tools to fully estimate the total burden.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
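The errors-and-erasures condition used in Scheme 2 is easy to state: an (n, k) RS code can correct e symbol errors and f erasures whenever 2e + f <= n - k. A small check, as a sketch:

```python
# Errors-and-erasures decoding condition for an (n, k) Reed-Solomon code:
# decodable whenever 2*errors + erasures <= n - k.
def rs_ee_decodable(n: int, k: int, errors: int, erasures: int) -> bool:
    return 2 * errors + erasures <= n - k

if __name__ == "__main__":
    # e.g. RS(255, 223): 16 errors, or 32 erasures, or any mix in between.
    print(rs_ee_decodable(255, 223, errors=16, erasures=0))   # True
    print(rs_ee_decodable(255, 223, errors=10, erasures=12))  # True
    print(rs_ee_decodable(255, 223, errors=10, erasures=13))  # False
```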
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which leads to degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, thus they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve a more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. A construction of robustly good trellis codes for use with sequential decoding was developed; the robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications is investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Kim, Woojin; Boonn, William; Iyoob, Christopher; Maston, Keith; Coleman, Beverly G.
2011-03-01
When the first quarter of 2010 Department of Radiology statistics were provided to the Section Chiefs, the authors (SH, BC) were alarmed to discover that Ultrasound showed a decrease of 2.5 percent in billed examinations. This seemed to be in direct contradistinction to the experience of the ultrasound faculty members and sonographers. Their experience was that they were far busier than during the same quarter of 2009. The one exception that all acknowledged was the month of February, 2010 when several major winter storms resulted in a much decreased Hospital admission and Emergency Department visit rate. Since these statistics in part help establish priorities for capital budget items, professional and technical staffing levels, and levels of incentive salary, they are taken very seriously. The availability of a desktop, Web-based RIS database search tool developed by two of the authors (WK, WB) and built-in database functions of the ultrasound miniPACS, made it possible for us very rapidly to develop and test hypotheses for why the number of billable examinations was declining in the face of what experience told the authors was an increasing number of examinations being performed. Within a short time, we identified the major cause as errors on the part of the company retained to verify billable Current Procedural Terminology (CPT) codes against ultrasound reports. This information is being used going forward to recover unbilled examinations and take measures to reduce or eliminate the types of coding errors that resulted in the problem.
Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong
2010-12-20
In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence degrades the system performance seriously. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of the channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. However, channel coding alone cannot cope with the burst errors caused by channel fading, so interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths, and the optimum interleaving depth for TPC is then determined. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
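The interleaving idea is simple to sketch: write coded bits into a matrix row by row and read them out column by column, so a burst of faded symbols is spread across many TPC codewords. A minimal block interleaver, with a hypothetical depth, is shown below.

```python
# Minimal block interleaver/deinterleaver: write row-wise, read column-wise.
# The depth here is hypothetical; the paper studies the optimum depth for TPC.
def interleave(bits, depth):
    rows = [bits[i:i + depth] for i in range(0, len(bits), depth)]
    return [row[c] for c in range(depth) for row in rows if c < len(row)]

def deinterleave(bits, depth, original_len):
    out = [None] * original_len
    it = iter(bits)
    rows = (original_len + depth - 1) // depth
    for c in range(depth):
        for r in range(rows):
            idx = r * depth + c
            if idx < original_len:
                out[idx] = next(it)
    return out

if __name__ == "__main__":
    data = list(range(10))
    tx = interleave(data, depth=4)          # burst errors in tx hit distant positions in data
    assert deinterleave(tx, depth=4, original_len=len(data)) == data
```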
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2001-10-01
A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility. The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded thus increasing the efficiency of experimental time. An example of a consistency check is comparing the stored energy from integrating the measured kinetic profiles to that calculated from magnetic measurements by EFIT. This new system also tracks the progress of MDSplus dispatching of software for data analysis and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, Clips to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse. A demonstration of this system including a simulated DIII-D pulse cycle will be presented.
Application of human reliability analysis to nursing errors in hospitals.
Inoue, Kayoko; Koizumi, Akio
2004-12-01
Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order 10^-4 in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 × 10^-4, "failure of labor management" with a weight of 661 × 10^-4, and "defects in the standardization of nursing practices" with a weight of 495 × 10^-4.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
Permutation coding technique for image recognition systems.
Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel
2006-11-01
A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which makes it possible to take into account not only the detected features but also the position of each feature on the image, and to make the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for error patterns consisting of a long burst plus some random errors.
Errors from approximation of ODE systems with reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
2016-12-30
This is a code to calculate the error from approximation of systems of ordinary differential equations (ODEs) by using Proper Orthogonal Decomposition (POD) Reduced Order Models (ROM) methods and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM, the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
Global navigation satellite system receiver for weak signals under all dynamic conditions
NASA Astrophysics Data System (ADS)
Ziedan, Nesreen Ibrahim
The ability of the Global Navigation Satellite System (GNSS) receiver to work under weak signal and various dynamic conditions is required in some applications, for example, to provide a positioning capability in wireless devices, or for orbit determination of geostationary and high Earth orbit satellites. This dissertation develops Global Positioning System (GPS) receiver algorithms for such applications. Fifteen algorithms are developed for the GPS C/A signal. They cover all the receiver main functions, which include acquisition, fine acquisition, bit synchronization, code and carrier tracking, and navigation message decoding. They are integrated together, and they can be used in any software GPS receiver. They also can be modified to fit any other GPS or GNSS signals. The algorithms have new capabilities. The processing and memory requirements are considered in the design to allow the algorithms to fit the limited resources of some applications; they do not require any assisting information. Weak signals can be acquired in the presence of strong interfering signals and under high dynamic conditions. The fine acquisition, bit synchronization, and tracking algorithms are based on the Viterbi algorithm and Extended Kalman filter approaches. The tracking algorithms' capabilities increase the time to lose lock. They have the ability to adaptively change the integration length and the code delay separation. More than one code delay separation can be used at the same time. Large tracking errors can be detected and then corrected by re-initialization and acquisition-like algorithms. Detecting the navigation message is needed to increase the coherent integration; decoding it is needed to calculate the navigation solution. The decoding algorithm utilizes the message structure to enable decoding of signals with a high Bit Error Rate. The algorithms are demonstrated using simulated GPS C/A code signals and TCXO clocks. The results have shown the algorithms' ability to reliably work with 15 dB-Hz signals and acceleration over 6 g.
Performance analysis of optical wireless communication system based on two-fold turbo code
NASA Astrophysics Data System (ADS)
Chen, Jun; Huang, Dexiu; Yuan, Xiuhua
2005-11-01
Optical wireless communication (OWC) is beginning to emerge in the telecommunications market as a strategy to meet last-mile demand, owing to its unique combination of features. Turbo codes have an impressive near-Shannon-limit error correcting performance. Twofold turbo codes have recently been introduced as the least complex member of the multifold turbo code family. In this paper, we first present the mathematical model of the signal and of the optical wireless channel with fading, and the bit error rate model with scintillation; then we provide a new turbo code method for use in an OWC system, and we obtain a better BER curve for the OWC system with the twofold turbo code than with the common turbo code.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There were two major reasons for the long period of inactivity in research on the trellis structure of block codes. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder.
The decoding algorithms presented are the MAP (maximum a posteriori probability) algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
Predicted blood glucose from insulin administration based on values from miscoded glucose meters.
Raine, Charles H; Pardo, Scott; Parkes, Joan Lee
2008-07-01
The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205-210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost®; Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150-400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability to produce a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances. Autocoded meters produced zero (0) outcomes less than or equal to 50 mg/dl out of 279 instances, and correctly coded manual meters produced 1 of 416. Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding.
Direct-detection Free-space Laser Transceiver Test-bed
NASA Technical Reports Server (NTRS)
Krainak, Michael A.; Chen, Jeffrey R.; Dabney, Philip W.; Ferrara, Jeffrey F.; Fong, Wai H.; Martino, Anthony J.; McGarry, Jan F.; Merkowitz, Stephen M.; Principe, Caleb M.; Sun, Xiaoli;
2008-01-01
NASA Goddard Space Flight Center is developing a direct-detection free-space laser communications transceiver test bed. The laser transmitter is a master-oscillator power amplifier (MOPA) configuration using a 1060 nm wavelength laser-diode with a two-stage multi-watt Ytterbium fiber amplifier. Dual Mach-Zehnder electro-optic modulators provide an extinction ratio greater than 40 dB. The MOPA design delivered 10-W average power with low-duty-cycle PPM waveforms and achieved 1.7 kW peak power. We use pulse-position modulation format with a pseudo-noise code header to assist clock recovery and frame boundary identification. We are examining the use of low-density-parity-check (LDPC) codes for forward error correction. Our receiver uses an InGaAsP 1 mm diameter photocathode hybrid photomultiplier tube (HPMT) cooled with a thermo-electric cooler. The HPMT has 25% single-photon detection efficiency at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius and a single-photon impulse response of 0.9 ns. We report on progress toward demonstrating a combined laser communications and ranging field experiment.
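Pulse-position modulation itself is straightforward to illustrate: each group of log2(M) bits selects which of M slots in a symbol carries the single laser pulse. A short sketch, with a hypothetical slot count, follows; the test bed's actual PPM format and pseudo-noise header are not reproduced here.

```python
# Toy M-ary pulse-position modulation (PPM) mapper/demapper.
# The slot count M below is hypothetical, not the test bed's actual format.
def ppm_modulate(bits, M=16):
    k = M.bit_length() - 1                       # bits per PPM symbol (M must be a power of 2)
    symbols = []
    for i in range(0, len(bits), k):
        slot = int("".join(map(str, bits[i:i + k])), 2)
        symbols.append([1 if s == slot else 0 for s in range(M)])
    return symbols

def ppm_demodulate(symbols, M=16):
    k = M.bit_length() - 1
    bits = []
    for sym in symbols:
        slot = sym.index(max(sym))               # brightest slot wins (direct detection)
        bits += [int(b) for b in format(slot, f"0{k}b")]
    return bits

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 0, 1]
    assert ppm_demodulate(ppm_modulate(msg)) == msg
```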
RNAcode: Robust discrimination of coding and noncoding regions in comparative sequence data
Washietl, Stefan; Findeiß, Sven; Müller, Stephan A.; Kalkhof, Stefan; von Bergen, Martin; Hofacker, Ivo L.; Stadler, Peter F.; Goldman, Nick
2011-01-01
With the availability of genome-wide transcription data and massive comparative sequencing, the discrimination of coding from noncoding RNAs and the assessment of coding potential in evolutionarily conserved regions arose as a core analysis task. Here we present RNAcode, a program to detect coding regions in multiple sequence alignments that is optimized for emerging applications not covered by current protein gene-finding software. Our algorithm combines information from nucleotide substitution and gap patterns in a unified framework and also deals with real-life issues such as alignment and sequencing errors. It uses an explicit statistical model with no machine learning component and can therefore be applied “out of the box,” without any training, to data from all domains of life. We describe the RNAcode method and apply it in combination with mass spectrometry experiments to predict and confirm seven novel short peptides in Escherichia coli and to analyze the coding potential of RNAs previously annotated as “noncoding.” RNAcode is open source software and available for all major platforms at http://wash.github.com/rnacode. PMID:21357752
Transmission and storage of medical images with patient information.
Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo
2003-07-01
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The text data is encrypted before interleaving with images to ensure greater security. The graphical signals are interleaved with the image. Two types of error control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error control coding, and a qualitative as well as quantitative interpretation of the reliability enhancement resulting from commonly used error control codes, such as repetition codes and the (7,4) Hamming code, is provided.
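For readers unfamiliar with the codes mentioned, the following minimal sketch shows (7,4) Hamming encoding and single-error correction; the generator and parity-check matrices follow one common textbook convention and are not taken from the paper.

import numpy as np

# Minimal (7,4) Hamming encode/decode sketch (single-error-correcting).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])      # generator in systematic form [I | P]
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])      # parity-check matrix [P^T | I]

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(word7):
    word7 = list(word7)
    syndrome = tuple((H @ np.array(word7)) % 2)
    if any(syndrome):                       # syndrome equals the column of the flipped bit
        col = [tuple(H[:, j]) for j in range(7)].index(syndrome)
        word7[col] ^= 1
    return [int(b) for b in word7[:4]]      # strip the three parity bits

codeword = encode([1, 0, 1, 1])
codeword[2] ^= 1                            # inject a single bit error
print(decode(codeword))                     # -> [1, 0, 1, 1]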
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even-weight words congruent to zero modulo four. Decoding for hard and soft decision remains more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. Examples given show that a significant reduction in complexity can be achieved when increasing the number of decoding stages. The resulting performance degradation varies with the decomposition. A guideline is given for finding good m-level decompositions.
Neural Decoder for Topological Codes
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Melko, Roger G.
2017-07-01
We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through its throughput improvement. In this work, an algebraic model is given to analyze the performance. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
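To make the rank-deficiency issue concrete, here is a toy sketch of random linear network coding over GF(2): a receiver can decode a generation only if the coefficient vectors it collects have full rank. The generation size, packet length, and field are simplifying assumptions (practical systems often use GF(2^8)).

import numpy as np

rng = np.random.default_rng(0)
K = 4                                            # source packets per generation
packets = rng.integers(0, 2, size=(K, 16))       # K packets of 16 bits each

def gf2_rank(m):
    """Rank of a binary matrix via Gaussian elimination over GF(2)."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        rows = np.nonzero(m[rank:, col])[0]
        if rows.size == 0:
            continue
        pivot = rank + rows[0]
        m[[rank, pivot]] = m[[pivot, rank]]      # move the pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                  # eliminate the column elsewhere
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

# Receiver collects K coded packets, each carrying its random coefficient vector.
coeffs = rng.integers(0, 2, size=(K, K))
coded = coeffs @ packets % 2                     # the linear combinations actually sent

if gf2_rank(coeffs) == K:
    print("full rank: the generation is decodable by Gaussian elimination")
else:
    print("rank deficient: some source packets cannot be recovered")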
Trace-shortened Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Solomon, G.
1994-01-01
Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerically simulated optical cryptosystems.
Confusion—specimen mix-up in dermatopathology and measures to prevent and detect it
Weyers, Wolfgang
2014-01-01
Maintaining patient identity throughout the biopsy pathway is critical for the practice of dermatology and dermatopathology. From the biopsy procedure to the acquisition of the pathology report, a specimen may pass through the hands of more than twenty individuals in several workplaces. The risk of a mix-up is considerable and may account for more serious mistakes than diagnostic errors. To prevent specimen mix-up, work processes should be standardized and automated wherever possible, e.g., by strict order in the operating room and in the laboratory and by adoption of a bar code system to identify specimens and corresponding request forms. Mutual control of clinicians, technicians, histopathologists, and secretaries, both simultaneously and downstream, is essential to detect errors. The most vulnerable steps of the biopsy pathway, namely, labeling of specimens and request forms and accessioning of biopsy specimens in the laboratory, should be carried out by two persons simultaneously. In preceding work steps, clues must be provided that allow a mix-up to be detected later on, such as information about clinical diagnosis, biopsy technique, and biopsy site by the clinician, and a sketch of the specimen by the technician grossing it. Awareness of the danger of specimen mix-up is essential for preventing and detecting it. The awareness can be heightened by documentation of any error in the biopsy pathway. In case of suspicion, a mix-up of specimens from different patients can be confirmed by DNA analysis. PMID:24520511
COMPOSE-HPC: A Transformational Approach to Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernholdt, David E; Allan, Benjamin A.; Armstrong, Robert C.
2012-04-01
The goal of the COMPOSE-HPC project is to 'democratize' tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge, the creation of the KNOT tool chain, which includes tools for the creation of annotation languages to control the transformations (PAUL), to perform the transformations (ROTE), and optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, as well as composition of software written in different programming languages, or with different threading patterns.
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method uses, for example, 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and transmission of 32 bits/cycle. In the new method, at every instant of 1/16 of a cycle, one of 124 wavelengths is transmitted, each with a duration of one cycle and a capacity of 4 bits (at any time instant, no more than 16 different wavelengths are present), giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing deletions and errors simultaneously.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems
NASA Astrophysics Data System (ADS)
Wang, Shaowei; Ji, Xiaoyong
Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits problem-specific characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the genetic local search outperforms that of other multiuser detectors in all cases discussed. The computation time is of polynomial complexity in the number of users.
The application of LDPC code in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication, which can overcome the frequency-selective fading of wireless channels, increase the system capacity, and improve frequency utilization. Error correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is a kind of error correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in MIMO-OFDM systems.
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
Forward error correction (FEC) is one of the key technologies enabling the next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for the next-generation high-speed fiber optical communications.
"ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"
DOE Office of Scientific and Technical Information (OSTI.GOV)
SANTHI, NANDAKISHORE
We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓq^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given in. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of the channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
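A small sketch of the block-adaptive quantization idea described above: one base quantization matrix for the channel plus a scalar multiplier per 8x8 block. The flat matrix and multiplier values are illustrative stand-ins, not the perceptually optimized ones computed in the paper.

import numpy as np
from scipy.fft import dctn, idctn

Q = np.full((8, 8), 16.0)                  # stand-in base quantization matrix

def quantize_block(block, multiplier):
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / (Q * multiplier))

def dequantize_block(qcoeffs, multiplier):
    return idctn(qcoeffs * (Q * multiplier), norm="ortho")

rng = np.random.default_rng(1)
image = rng.uniform(0, 255, size=(16, 16))
multipliers = [[0.5, 1.0], [1.0, 2.0]]     # finer quantization where the multiplier is small

recon = np.zeros_like(image)
for by in range(2):
    for bx in range(2):
        blk = image[by*8:(by+1)*8, bx*8:(bx+1)*8]
        q = quantize_block(blk, multipliers[by][bx])
        recon[by*8:(by+1)*8, bx*8:(bx+1)*8] = dequantize_block(q, multipliers[by][bx])

print(np.abs(image - recon).mean())        # distortion grows with the multiplier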
Error suppression for Hamiltonian quantum computing in Markovian environments
NASA Astrophysics Data System (ADS)
Marvian, Milad; Lidar, Daniel A.
2017-03-01
Hamiltonian quantum computing, such as the adiabatic and holonomic models, can be protected against decoherence using an encoding into stabilizer subspace codes for error detection and the addition of energy penalty terms. This method has been widely studied since it was first introduced by Jordan, Farhi, and Shor (JFS) in the context of adiabatic quantum computing. Here, we extend the original result to general Markovian environments, not necessarily in Lindblad form. We show that the main conclusion of the original JFS study holds under these general circumstances: Assuming a physically reasonable bath model, it is possible to suppress the initial decay out of the encoded ground state with an energy penalty strength that grows only logarithmically in the system size, at a fixed temperature.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Joint Dictionary Learning for Multispectral Change Detection.
Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao
2017-04-01
Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of the spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data have been conducted, and comparisons with state-of-the-art methods demonstrate the superiority of the proposed method. Contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection, so that change detection can be transformed into a sparse representation problem; to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without any prior assumption on the spectral signature, so the threshold value provided by the proposed method can adapt to different data owing to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption on the modeling and handling of the spectral signature, and can therefore be adapted to different data.
Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M
2009-01-01
Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique is becoming more widespread, the scope of applications in which it is being applied is expanding. An active source of new applications is the field of homeland security, particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10^4) speedup and, for a detailed gamma spectrum tally, an average O(10^2) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10^4) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.
ERIC Educational Resources Information Center
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree
NASA Astrophysics Data System (ADS)
Zheng, Amin; Cheung, Gene; Florencio, Dinei
2018-07-01
With the advent of depth sensing technologies, the extraction of object contours in images, a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition, has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out the aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called the total suffix tree (TST) that can reduce the complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme noticeably outperformed a competing separate scheme in rate-distortion performance.
PDB file parser and structure class implemented in Python.
Hamelryck, Thomas; Manderick, Bernard
2003-11-22
The Biopython project provides a set of bioinformatics tools implemented in Python. Recently, Biopython was extended with a set of modules that deal with macromolecular structure. Biopython now contains a parser for PDB files that makes the atomic information available in an easy-to-use but powerful data structure. The parser and data structure deal with features that are often left out or handled inadequately by other packages, e.g., atom and residue disorder (if point mutants are present in the crystal), anisotropic B factors, multiple models and insertion codes. In addition, the parser performs some sanity checking to detect obvious errors. The Biopython distribution (including source code and documentation) is freely available (under the Biopython license) from http://www.biopython.org
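A brief usage sketch of the parser described above (the structure id and file name are placeholders); it walks the structure/model/chain/residue/atom hierarchy and touches fields the abstract mentions, such as insertion codes and B factors.

from Bio.PDB import PDBParser

parser = PDBParser(PERMISSIVE=True)           # tolerate and report minor problems
structure = parser.get_structure("example", "example.pdb")

for model in structure:                       # structure > model > chain > residue > atom
    for chain in model:
        for residue in chain:
            het_flag, seq_number, insertion_code = residue.id
            for atom in residue:
                print(chain.id, seq_number, insertion_code,
                      atom.get_name(), atom.get_bfactor())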
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.
An Experiment in Scientific Code Semantic Analysis
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
1998-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
The use of automatic programming techniques for fault tolerant computing systems
NASA Technical Reports Server (NTRS)
Wild, C.
1985-01-01
It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except for the 1% random-error channel, where the Viterbi decoder made one fewer bit decoding error.
A (31,15) Reed-Solomon Code for large memory systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
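As a sketch of decoding step (1), the fragment below computes syndromes for a (31,15) RS code over GF(2^5) in software rather than in the shift-register hardware described; the primitive polynomial and the use of 16 syndromes (t = 8) are common textbook choices and may differ from the paper's exact construction.

PRIM_POLY = 0b100101                     # x^5 + x^2 + 1, a primitive polynomial for GF(32)

# Build exponential/log tables for GF(32).
exp_table, log_table = [0] * 62, [0] * 32
x = 1
for i in range(31):
    exp_table[i] = exp_table[i + 31] = x
    log_table[x] = i
    x <<= 1
    if x & 0b100000:                     # reduce modulo the primitive polynomial
        x ^= PRIM_POLY

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp_table[log_table[a] + log_table[b]]

def syndromes(received, t=8):
    """S_j = r(alpha^j) for j = 1..2t; all zero means no error is detected."""
    syn = []
    for j in range(1, 2 * t + 1):
        s = 0
        for i, r_i in enumerate(received):
            s ^= gf_mul(r_i, exp_table[(i * j) % 31])
        syn.append(s)
    return syn

received = [0] * 31                      # all-zero codeword ...
received[7] = 3                          # ... corrupted by a single symbol error
print(any(syndromes(received)))          # True: the corruption is detected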
Entanglement renormalization, quantum error correction, and bulk causality
NASA Astrophysics Data System (ADS)
Kim, Isaac H.; Kastoryano, Michael J.
2017-04-01
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scale behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a motion vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.
NASA Astrophysics Data System (ADS)
Jos, Sujit; Kumar, Preetam; Chakrabarti, Saswat
Orthogonal and quasi-orthogonal codes are an integral part of any DS-CDMA based cellular system. Orthogonal codes are ideal for use in perfectly synchronous scenarios like downlink cellular communication. Quasi-orthogonal codes are preferred over orthogonal codes in uplink communication, where perfect synchronization cannot be achieved. In this paper, we attempt to compare orthogonal and quasi-orthogonal codes in the presence of timing synchronization error. This gives insight into the synchronization demands in DS-CDMA systems employing the two classes of sequences. The synchronization error considered is smaller than the chip duration. Monte-Carlo simulations have been carried out to verify the analytical and numerical results.
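To illustrate the timing-sensitivity question studied here, the toy sketch below correlates two length-8 Walsh (orthogonal) codes with and without a fractional-chip misalignment; the oversampling factor and offset are arbitrary illustrative choices, not the paper's simulation parameters.

import numpy as np

def walsh(n):
    """Recursive Hadamard construction; rows are mutually orthogonal codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh(8)
c1, c2 = H[1], H[2]

oversample = 16
c1_wave = np.repeat(c1, oversample).astype(float)
c2_wave = np.repeat(c2, oversample).astype(float)

aligned = np.dot(c1_wave, c2_wave)                         # ~0: orthogonal when synchronous
offset = 5                                                 # a 5/16-chip timing error
misaligned = np.dot(c1_wave[offset:], c2_wave[:-offset])   # generally nonzero

print(aligned, misaligned)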
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summaryProgram title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. 
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
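The toy Python fragment below is not the CADNA API (which provides the C++ stochastic types described above); it only illustrates the underlying idea of Discrete Stochastic Arithmetic: run the computation a few times with a small random perturbation standing in for the random rounding mode and compare the samples to estimate how many digits can be trusted.

import math
import random

def perturb(x, ulps=1):
    """Nudge x by up to one unit in the last place, mimicking random rounding."""
    return x + random.uniform(-ulps, ulps) * math.ulp(x)

def cancellation_prone():
    a, b = perturb(1.0e16), perturb(1.0)
    return (a + b) - a                    # catastrophic cancellation

samples = [cancellation_prone() for _ in range(3)]
spread = max(samples) - min(samples)
print(samples)
if spread == 0:
    print("samples agree exactly")
else:
    mean = sum(samples) / len(samples)
    digits = max(0.0, math.log10(abs(mean) / spread)) if mean else 0.0
    print("estimated significant digits:", digits)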
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
NASA Astrophysics Data System (ADS)
Liebovitch, Larry
1998-03-01
The longest-term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes a wrong base is inserted that sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, which slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols, and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: how does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: what is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test whether some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
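A toy version of the linearity test described above (brute force over a handful of positions rather than Gaussian elimination over whole genes, and with tiny hand-made "windows"): it asks whether one base position is a fixed linear combination mod 4 of the other positions across all windows.

from itertools import product

BASE = {"A": 0, "T": 1, "G": 2, "C": 3}   # the (0,1,2,3) coding used in the abstract

def is_linear_combination(windows, target_index):
    """True if one coefficient vector (plus a constant) mod 4 maps the other
    positions onto the target position in every window."""
    others = [j for j in range(len(windows[0])) if j != target_index]
    for coeffs in product(range(4), repeat=len(others) + 1):   # last entry is the constant
        if all(
            (sum(c * BASE[w[j]] for c, j in zip(coeffs, others)) + coeffs[-1]) % 4
            == BASE[w[target_index]]
            for w in windows
        ):
            return True
    return False

# Third base constructed to equal (first + second) mod 4 in every window:
print(is_linear_combination(["ATT", "TGC", "GGA", "CAC"], target_index=2))   # True
# "AAA" and "AAT" share the same predictors but different targets, so no rule fits:
print(is_linear_combination(["AAA", "AAT", "GGA", "CAC"], target_index=2))   # False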
Reardon, Joseph M; Harmon, Katherine J; Schult, Genevieve C; Staton, Catherine A; Waller, Anna E
2016-02-08
Although fatal opioid poisonings tripled from 1999 to 2008, data describing nonfatal poisonings are rare. Public health authorities are in need of tools to track opioid poisonings in near real time. We determined the utility of ICD-9-CM diagnosis codes for identifying clinically significant opioid poisonings in a state-wide emergency department (ED) surveillance system. We sampled visits from four hospitals from July 2009 to June 2012 with diagnosis codes of 965.00, 965.01, 965.02 and 965.09 (poisoning by opiates and related narcotics) and/or an external cause of injury code of E850.0-E850.2 (accidental poisoning by opiates and related narcotics), and developed a novel case definition to determine in which cases opioid poisoning prompted the ED visit. We calculated the percentage of visits coded for opioid poisoning that were clinically significant and compared it to the percentage of visits coded for poisoning by non-opioid agents in which there was actually poisoning by an opioid agent. We created a multivariate regression model to determine if other collected triage data can improve the positive predictive value of diagnosis codes alone for detecting clinically significant opioid poisoning. 70.1 % of visits (Standard Error 2.4 %) coded for opioid poisoning were primarily prompted by opioid poisoning. The remainder of visits represented opioid exposure in the setting of other primary diseases. Among non-opioid poisoning codes reviewed, up to 36 % were reclassified as an opioid poisoning. In multivariate analysis, only naloxone use improved the positive predictive value of ICD-9-CM codes for identifying clinically significant opioid poisoning, but was associated with a high false negative rate. This surveillance mechanism identifies many clinically significant opioid overdoses with a high positive predictive value. With further validation, it may help target control measures such as prescriber education and pharmacy monitoring.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
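A toy sketch of the hard-decision bit-flipping decoding mentioned above, run on a tiny parity-check matrix; real finite-geometry LDPC codes are far longer, and this H is only an illustrative example.

import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(word, max_iters=10):
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                      # all parity checks satisfied
        # Count, for each bit, how many unsatisfied checks it participates in.
        unsatisfied = syndrome @ H
        worst = np.flatnonzero(unsatisfied == unsatisfied.max())
        word[worst] ^= 1                     # flip the most suspicious bit(s)
    return word

codeword = np.zeros(6, dtype=int)            # the all-zero word is always a codeword
received = codeword.copy()
received[3] ^= 1                             # single bit error
print(bit_flip_decode(received))             # -> [0 0 0 0 0 0]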
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook
2012-11-20
A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug.
Integrated source and channel encoded digital communication system design study. [for space shuttles
NASA Technical Reports Server (NTRS)
Huth, G. K.
1976-01-01
The results of several studies of the Space Shuttle communication system are summarized. These tasks can be divided into the following categories: (1) phase multiplexing for two- and three-channel data transmission, (2) effects of phase noise on the performance of coherent communication links, (3) analysis of command system performance, (4) error correcting code tradeoffs, (5) signal detection and angular search procedure for the shuttle Ku-band communication system, and (6) false lock performance of Costas loop receivers.
Design of a Microprogram Control Unit with Concurrent Error Detection.
1984-08-01
However, the CED concept has mainly been applied to various codes for data transmission and to simple functional units, such as arithmetic units. Little work has been done in the control unit area. Previous work is primarily in the use of classical self-checking circuits, using bit slicing, parity, and m-out-of-n codes.
Combined methods of tolerance increasing for embedded SRAM
NASA Astrophysics Data System (ADS)
Shchigorev, L. A.; Shagurin, I. I.
2016-10-01
The possibilities of combining different methods of increasing fault tolerance for SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28 nm manufacturing process. Evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant "systems on chip".
Visual saliency detection based on in-depth analysis of sparse representation
NASA Astrophysics Data System (ADS)
Wang, Xin; Shen, Siqiu; Ning, Chen
2018-03-01
Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions within which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize partial characteristics of sparse representation, lacking of in-depth analysis. Thus, they may have limited detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on the in-depth analysis of sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to be from the most salient region, are screened out and taken away from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
Quantum error-correcting code for ternary logic
NASA Astrophysics Data System (ADS)
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide a larger computational state space per unit of information, the qutrit. A qutrit has three basis states, so a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2 × 2)-dimensional and (3 × 3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 × 2)-dimensional as well as (3 × 3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
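As a small illustration of the (3 × 3)-dimensional errors discussed above, the generalized Pauli (shift and phase) operators on a qutrit can be written down directly; the sketch below (not from the paper) builds them in numpy and checks the standard Weyl commutation relation ZX = wXZ.

```python
import numpy as np

# Generalized Pauli errors on a single qutrit (d = 3): the shift operator X
# maps |j> -> |j+1 mod 3>, and the phase operator Z maps |j> -> w^j |j>,
# with w = exp(2*pi*i/3).  Arbitrary single-qutrit Pauli errors are
# products X^a Z^b for a, b in {0, 1, 2}.
d = 3
w = np.exp(2j * np.pi / d)

X = np.roll(np.eye(d), 1, axis=0)              # shift ("bit-flip-like") error
Z = np.diag([w ** j for j in range(d)])        # phase error

# Weyl commutation relation: Z X = w * X Z
assert np.allclose(Z @ X, w * (X @ Z))

# A pairwise basis-swap error, e.g. |1> <-> |2>, written as a matrix; the
# paper shows such swaps are linear combinations of shift and phase errors.
swap_12 = np.array([[1, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]], dtype=complex)
print(np.round(swap_12, 3))
```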
DNA barcode goes two-dimensions: DNA QR code web server.
Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
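A minimal sketch of the general idea follows, turning a barcode-marker sequence into a QR code image with the widely used Python qrcode package; this is not the web server's own code, and the sequence and record format shown are made-up placeholders rather than a real ITS2 entry.

```python
# Sketch: encode a DNA barcode sequence as a QR code image.
# Requires: pip install qrcode[pil].  The sequence and label format below are
# invented placeholders, not a real ITS2/rbcL/matK record.
import qrcode

sequence = "ATGCGTACCTGAGGATCCTTACGATCGATCGGTACCAGT"
label = "SpeciesX|ITS2|" + sequence          # hypothetical record format

# QR codes carry built-in Reed-Solomon error correction; level M recovers
# roughly 15% of damaged modules.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
qr.add_data(label)
qr.make(fit=True)
img = qr.make_image()
img.save("dna_barcode_qr.png")
print("wrote dna_barcode_qr.png,", len(label), "characters encoded")
```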
[Professional error and nursing ethics: from past consideration to future strategy].
Germini, Francesco; Lattarulo, Pio
2008-01-01
In 1960, the National Federation IPASVI issued its first ethical code, which does not deal at all with the prevention of error or how to behave in case one does happen, with the exception of point 6, which recommends scrupulously respecting the prescribed therapy and encouraging patients to trust the physicians and the other health workers. The second ethical code dates from 1977. In the eighteen-year interval the hospital organization had been deeply modified, and this new layout of the Code reflected some remarkable changes of thought but made no precise reference to the matter of error management. The 1999 version of the code reflects the radical changes in the profession, formally recognized by law (42/1999) and by society, acting as a reference for the regulation of the nursing profession and referring to one of the most ancient principles of medicine, "primum non nocere". It is important to remember that an ethical code derives from professional considerations applied to the context of "here and now". Some strategic considerations for the future regarding the important role of risk prevention and the management of errors (which do, unfortunately, occur) are therefore expressed.
Lamb, Mary K; Innes, Kerry; Saad, Patricia; Rust, Julie; Dimitropoulos, Vera; Cumerlato, Megan
The Performance Indicators for Coding Quality (PICQ) is a data quality assessment tool developed by Australia's National Centre for Classification in Health (NCCH). PICQ consists of a number of indicators covering all ICD-10-AM disease chapters, some procedure chapters from the Australian Classification of Health Intervention (ACHI) and some Australian Coding Standards (ACS). The indicators can be used to assess the coding quality of hospital morbidity data by monitoring compliance with coding conventions and ACS; this enables the identification of particular records that may be incorrectly coded, thus providing a measure of data quality. There are 31 obstetric indicators available for the ICD-10-AM Fourth Edition. Twenty of these 31 indicators were classified as Fatal, nine as Warning and two as Relative. These indicators were used to examine coding quality of obstetric records in the 2004-2005 financial year Australian national hospital morbidity dataset. Records with obstetric disease or procedure codes listed anywhere in the code string were extracted and exported from the SPSS source file. Data were then imported into a Microsoft Access database table as per PICQ instructions, and run against all Fatal, Warning and Relative (N=31) obstetric PICQ 2006 Fourth Edition Indicators v.5 for the ICD-10-AM Fourth Edition. There were 689,905 gynaecological and obstetric records in the 2004-2005 financial year, of which 1.14% were found to have triggered Fatal degree errors, 3.78% Warning degree errors and 8.35% Relative degree errors. The types of errors include completeness, redundancy, specificity and sequencing problems. It was found that PICQ is a useful initial screening tool for the assessment of ICD-10-AM/ACHI coding quality. The overall quality of codes assigned to obstetric records in the 2004-2005 Australian national morbidity dataset is fair.
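PICQ works by running each record's code string against indicator rules that flag combinations violating coding conventions. The sketch below shows only the general shape of such a screening pass; the indicator logic and the two codes used are illustrative inventions, not actual PICQ indicators or ICD-10-AM/ACS rules.

```python
# Hypothetical illustration of an indicator-style screening pass over records.
# The rule below is invented for illustration; it is NOT a real PICQ indicator.
records = [
    {"id": 1, "codes": ["O80", "Z37.0"]},
    {"id": 2, "codes": ["O80"]},             # delivery coded, no outcome code
    {"id": 3, "codes": ["Z37.0"]},           # outcome coded, no delivery code
]

def fatal_missing_outcome(rec):
    """Hypothetical 'Fatal' rule: delivery coded but no outcome-of-delivery code."""
    has_delivery = any(c.startswith("O8") for c in rec["codes"])
    has_outcome = any(c.startswith("Z37") for c in rec["codes"])
    return has_delivery and not has_outcome

indicators = [("FATAL-01 delivery without outcome", fatal_missing_outcome)]

for rec in records:
    for name, rule in indicators:
        if rule(rec):
            print(f"record {rec['id']}: triggered {name}")
```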
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Predicted Blood Glucose from Insulin Administration Based on Values from Miscoded Glucose Meters
Raine, Charles H.; Pardo, Scott; Parkes, Joan Lee
2008-01-01
Objectives The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205–210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Methods Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost®; Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Results Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150–400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability to produce a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances. Autocoded meters produced zero (0) outcomes less than or equal to 50 mg/dl out of 279 instances, and correctly coded manual meters produced 1 of 416. Conclusions Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding. PMID:19885229
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
Stevens, Allen D.; Hernandez, Caleb; Jones, Seth; Moreira, Maria E.; Blumen, Jason R.; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S.
2016-01-01
Background Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. Methods We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded-syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28–39) seconds and 42 (95% CI: 36–51) seconds, respectively (difference = 9 [95% CI: 4–14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference = 39%, 95% CI: 13–61%). Conclusions A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. PMID:26247145
Stevens, Allen D; Hernandez, Caleb; Jones, Seth; Moreira, Maria E; Blumen, Jason R; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S
2015-11-01
Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28-39) seconds and 42 (95% CI: 36-51) seconds, respectively (difference=9 [95% CI: 4-14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference=39%, 95% CI: 13-61%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Introduction to Forward-Error-Correcting Coding
NASA Technical Reports Server (NTRS)
Freeman, Jon C.
1996-01-01
This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
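Since the publication devotes an appendix to the Q function, a short sketch of how it is typically evaluated and used for the uncoded BPSK bit-error probability may help fix ideas; these are standard textbook relations, not material reproduced from the publication itself.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Uncoded coherent BPSK over AWGN: Pb = Q(sqrt(2 * Eb/N0)).
for ebn0_db in (0, 2, 4, 6, 8, 10):
    ebn0 = 10 ** (ebn0_db / 10.0)
    pb = qfunc(math.sqrt(2.0 * ebn0))
    print(f"Eb/N0 = {ebn0_db:2d} dB  ->  Pb = {pb:.3e}")
```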
Multichannel error correction code decoder
NASA Technical Reports Server (NTRS)
Wagner, Paul K.; Ivancic, William D.
1993-01-01
A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.
Smart photodetector arrays for error control in page-oriented optical memory
NASA Astrophysics Data System (ADS)
Schaffer, Maureen Elizabeth
1998-12-01
Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-µm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 x 2-bit cluster errors in an 8 x 8 bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology, increasing to 1.88 Tbps in 0.1-µm CMOS.
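The dissertation's array codes are decoded in parallel on the SPA; as a much simpler cousin of its 2 x 2 cluster-correcting code, the sketch below shows a classic row/column parity array code on an 8 x 8 data block that locates and corrects a single flipped bit. This is illustrative only and is not the dissertation's code.

```python
import numpy as np

# Row/column parity array code: one parity bit per row and per column of an
# 8x8 data block.  A single bit flip is located by the intersection of the
# failing row parity and failing column parity.  (A far simpler relative of
# the 2x2 cluster-correcting array code described in the dissertation.)
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(8, 8))

row_par = data.sum(axis=1) % 2
col_par = data.sum(axis=0) % 2

received = data.copy()
received[3, 5] ^= 1                      # single-bit page error

bad_rows = np.where(received.sum(axis=1) % 2 != row_par)[0]
bad_cols = np.where(received.sum(axis=0) % 2 != col_par)[0]
if bad_rows.size == 1 and bad_cols.size == 1:
    received[bad_rows[0], bad_cols[0]] ^= 1   # correct the located bit
assert np.array_equal(received, data)
print("corrected error at", (int(bad_rows[0]), int(bad_cols[0])))
```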
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization. PMID:26343660
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-08-27
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization.
NASA Technical Reports Server (NTRS)
Radhadrishnan, Krishnan
1993-01-01
A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
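LSODE itself is a Fortran package, but a related solver from the same ODEPACK family, LSODA, is exposed through SciPy; the sketch below (not the study's combustion problems) integrates a classic stiff kinetics test case, the Robertson system, and reports a mean RMS relative error against a tighter-tolerance reference run, loosely in the spirit of the error measure described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson's stiff chemical kinetics test problem (three species), used here
# as a stand-in for the combustion-kinetics systems discussed in the abstract.
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 * y2,
             3.0e7 * y2 * y2]

y0 = [1.0, 0.0, 0.0]
t_eval = np.logspace(-5, 5, 50)

# "Reference" run at tight tolerances, and a cheaper run to compare against it.
ref = solve_ivp(robertson, (0, 1e5), y0, method="LSODA",
                rtol=1e-10, atol=1e-12, t_eval=t_eval)
approx = solve_ivp(robertson, (0, 1e5), y0, method="LSODA",
                   rtol=1e-4, atol=1e-8, t_eval=t_eval)

# Simple mean root-mean-square relative error over all species and times.
rel_err = (approx.y - ref.y) / np.maximum(np.abs(ref.y), 1e-30)
print("mean RMS relative error:", np.sqrt(np.mean(rel_err ** 2)))
```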
The neutral emergence of error minimized genetic codes superior to the standard genetic code.
Massey, Steven E
2016-11-07
The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how the searching of alternative codes for an error-minimized code can occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM has arisen via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if during code expansion the most similar amino acid to the parent amino acid, out of the set of unassigned amino acids, is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM has arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life. Copyright © 2016 Elsevier Ltd. All rights reserved.
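To make "error minimization" concrete, the sketch below scores a genetic code by averaging the squared change in an amino acid property over all single-nucleotide substitutions between sense codons. The property values are approximate Kyte-Doolittle hydropathies used purely for illustration; this is not the paper's own cost function or its code-expansion simulation.

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ('*' = stop).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {''.join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

# Approximate Kyte-Doolittle hydropathy values (illustrative property only).
HYDRO = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
         'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
         'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
         'Y': -1.3, 'V': 4.2}

def em_cost(code):
    """Mean squared hydropathy change over all single-point substitutions
    between sense codons (substitutions to or from stop codons are skipped)."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        if aa == '*':
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if mut == '*':
                    continue
                total += (HYDRO[aa] - HYDRO[mut]) ** 2
                count += 1
    return total / count

print("EM cost of the standard genetic code:", round(em_cost(CODE), 3))
```

Swapping amino acid assignments in CODE and rescoring gives a quick feel for how a random code typically does worse on this measure than the standard code.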
The algebraic decoding of the (41, 21, 9) quadratic residue code
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Truong, T. K.; Chen, Xuemin; Yin, Xiaowei
1992-01-01
A new algebraic approach for decoding the quadratic residue (QR) codes, in particular the (41, 21, 9) QR code is presented. The key ideas behind this decoding technique are a systematic application of the Sylvester resultant method to the Newton identities associated with the code syndromes to find the error-locator polynomial, and next a method for determining error locations by solving certain quadratic, cubic and quartic equations over GF(2 exp m) in a new way which uses Zech's logarithms for the arithmetic. The algorithms developed here are suitable for implementation in a programmable microprocessor or special-purpose VLSI chip. It is expected that the algebraic methods developed here can apply generally to other codes such as the BCH and Reed-Solomon codes.
Kotchenova, Svetlana Y; Vermote, Eric F
2007-07-10
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
NASA Astrophysics Data System (ADS)
Kotchenova, Svetlana Y.; Vermote, Eric F.
2007-07-01
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
Santos, José; Monteagudo, Ángel
2017-03-27
The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the non-presence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, G. M.
2002-01-01
We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where the truncation error is being created due to an insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.
Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y
2012-12-01
Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually take place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Label consistent K-SVD: learning a discriminative dictionary for recognition.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2013-11-01
A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
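The unified objective mentioned above combines three Frobenius-norm terms. The sketch below simply evaluates that combined cost for random matrices so the structure of the formulation is visible; the dimensions and weights are made up, the sparsity constraint is ignored, and this is not the authors' K-SVD training code.

```python
import numpy as np

# Shapes (illustrative): n-dim features, N samples, K dictionary atoms, C classes.
n, N, K, C = 64, 200, 96, 10
rng = np.random.default_rng(1)

Y = rng.standard_normal((n, N))     # training signals
D = rng.standard_normal((n, K))     # dictionary
X = rng.standard_normal((K, N))     # sparse codes (dense here, for illustration)
Q = rng.standard_normal((K, N))     # "discriminative" target sparse codes
A = rng.standard_normal((K, K))     # linear transform enforcing label consistency
H = rng.standard_normal((C, N))     # class-label matrix
W = rng.standard_normal((C, K))     # linear classifier
alpha, beta = 4.0, 2.0              # made-up weights

def lc_ksvd_objective(Y, D, X, Q, A, H, W, alpha, beta):
    """||Y - DX||_F^2 + alpha*||Q - AX||_F^2 + beta*||H - WX||_F^2,
    i.e. reconstruction + discriminative sparse-code + classification error."""
    recon = np.linalg.norm(Y - D @ X, 'fro') ** 2
    discr = np.linalg.norm(Q - A @ X, 'fro') ** 2
    label = np.linalg.norm(H - W @ X, 'fro') ** 2
    return recon + alpha * discr + beta * label

print("objective value:", lc_ksvd_objective(Y, D, X, Q, A, H, W, alpha, beta))
```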
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high-energy component from 10 MeV up to several hundred MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Efficient RPG detection in noisy 3D image data
NASA Astrophysics Data System (ADS)
Pipitone, Frank
2011-06-01
We address the automatic detection of ambush weapons such as rocket propelled grenades (RPGs) from range data which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.
Lasaygues, Philippe; Arciniegas, Andres; Espinosa, Luis; Prieto, Flavio; Brancheriau, Loïc
2018-05-26
Ultrasound computed tomography (USCT) using the transmission mode is a way to detect and assess the extent of decay in wood structures. The resolution of the ultrasonic image is closely related to the different anatomical features of wood. The complexity of the wave propagation process generates complex signals consisting of several wave packets with different signatures. Wave paths, depth dependencies, wave velocities or attenuations are often difficult to interpret. For this kind of assessment, the focus is generally on signal pre-processing. Several approaches have been used so far including filtering, spectrum analysis and a method involving deconvolution using a characteristic transfer function of the experimental device. However, all these approaches may be too sophisticated and/or unstable. The alternative methods proposed in this work are based on coded excitation, which makes it possible to process both local and general information available such as frequency and time parameters. Coded excitation is based on the filtering of the transmitted signal using a suitable electric input signal. The aim of the present study was to compare two coded-excitation methods, a chirp- and a wavelet-coded excitation method, to determine the time of flight of the ultrasonic wave, and to investigate the feasibility, the robustness and the precision of the measurement of geometrical and acoustical properties in laboratory conditions. To obtain control experimental data, the two methods were compared with the conventional ultrasonic pulse method. Experiments were conducted on a polyurethane resin sample and two samples of different wood species using two 500 kHz-transducers. The relative errors in the measurement of thickness compared with the results of caliper measurements ranged from 0.13% minimum for the wavelet-coded excitation method to 2.3% maximum for the chirp-coded excitation method. For the relative errors in the measurement of ultrasonic wave velocity, the coded excitation methods showed differences ranging from 0.24% minimum for the wavelet-coded excitation method to 2.62% maximum for the chirp-coded excitation method. Methods based on coded excitation algorithms thus enable accurate measurements of thickness and ultrasonic wave velocity in samples of wood species. Copyright © 2018 Elsevier B.V. All rights reserved.
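As a rough sketch of the chirp-coded excitation idea, and not the authors' experimental processing chain, the snippet below correlates a received, delayed, noisy chirp against the transmitted code and reads the time of flight off the correlation peak; the sampling rate, chirp band, attenuation and delay are made-up values.

```python
import numpy as np
from scipy.signal import chirp, correlate

# Illustrative chirp-coded excitation: made-up sampling rate, band and delay.
fs = 10e6                                   # 10 MHz sampling
t = np.arange(0, 50e-6, 1 / fs)             # 50 us excitation window
tx = chirp(t, f0=200e3, t1=t[-1], f1=800e3) # linear chirp around 500 kHz

true_tof = 12.3e-6                          # "true" time of flight (made up)
delay = int(round(true_tof * fs))
rx = np.zeros(len(t) + delay)
rx[delay:delay + len(tx)] += 0.3 * tx       # attenuated, delayed echo
rx += 0.05 * np.random.default_rng(2).standard_normal(len(rx))

# Pulse compression: cross-correlate the received trace with the coded pulse.
corr = correlate(rx, tx, mode="full")
lag = np.argmax(corr) - (len(tx) - 1)       # sample lag of the correlation peak
print(f"estimated TOF = {lag / fs * 1e6:.2f} us (true {true_tof * 1e6:.2f} us)")
```

A wavelet-coded excitation would follow the same correlate-and-pick-the-peak pattern with a different transmitted waveform.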
NASA Astrophysics Data System (ADS)
Chung, Hye Won; Guha, Saikat; Zheng, Lizhong
2017-07-01
We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
Phase ambiguity resolution for offset QPSK modulation systems
NASA Technical Reports Server (NTRS)
Nguyen, Tien M. (Inventor)
1991-01-01
A demodulator for Offset Quaternary Phase Shift Keyed (OQPSK) signals modulated with two words resolves eight possible combinations of phase ambiguity which may produce data error by first processing received I(sub R) and Q(sub R) data in an integrated carrier loop/symbol synchronizer using a digital Costas loop with matched filters for correcting four of eight possible phase lock errors, and then the remaining four using a phase ambiguity resolver which detects the words to not only reverse the received I(sub R) and Q(sub R) data channels, but to also invert (complement) the I(sub R) and/or Q(sub R) data, or to at least complement the I(sub R) and Q(sub R) data for systems using nontransparent codes that do not have rotation direction ambiguity.
A novel scheme to aid coherent detection of GMSK signals in fast Rayleigh fading channels
NASA Technical Reports Server (NTRS)
Leung, Patrick S. K.; Feher, Kamilo
1990-01-01
A novel scheme to insert a carrier pilot into a Gaussian Minimum Shift Keying (GMSK) signal using a Binary Block Code (BBC) and a highpass filter in baseband is proposed. This allows the signal to be coherently demodulated even in a fast Rayleigh fading environment. As an illustrative example, the scheme is applied to a 16 kb/s GMSK signal, and its performance over a fast Rayleigh fading channel is investigated using computer simulation. This modem's 'irreducible error rate' is found to be Pe = 5.5 x 10^-5, which is more than that of differential detection. The modem's performance in a Rician fading channel is currently under investigation.
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
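A toy version of the idea, not the authors' recurrent architecture, can be shown on the three-qubit bit-flip code: sample syndrome/error pairs and fit a small scikit-learn classifier that maps each syndrome to the most likely correction, reproducing the standard lookup-table decoder.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy neural decoder for the 3-qubit bit-flip code (illustration only; the
# paper uses a recurrent network over repeated stabilizer measurements).
# Stabilizers Z1Z2 and Z2Z3 give a 2-bit syndrome for each bit-flip pattern.
rng = np.random.default_rng(0)
p = 0.1                                      # per-qubit flip probability

def syndrome(err):
    return np.array([err[0] ^ err[1], err[1] ^ err[2]])

# Training data: sampled error patterns and their syndromes; the label is the
# full error pattern encoded as an integer 0..7.
errors = (rng.random((20000, 3)) < p).astype(int)
X = np.array([syndrome(e) for e in errors])
y = errors @ np.array([4, 2, 1])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

# The learned decoder should map each syndrome to the most likely (lowest
# weight) error, matching the conventional lookup-table decoder.
for s in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    predicted = clf.predict([list(s)])[0]
    print(f"syndrome {s} -> correction pattern {predicted:03b}")
```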
DOE Office of Scientific and Technical Information (OSTI.GOV)
FLANAGAN,A; SCHACHTER,J.M; SCHISSEL,D.P
2003-02-01
A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This new system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
System to monitor data analyses and results of physics data validation between pulses at DIII-D
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2004-06-01
A data analysis monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, C Language Integrated Production System to implement expert system logic, and displays its results to multiple web clients via Hypertext Markup Language. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
Error-Transparent Quantum Gates for Small Logical Qubit Architectures
NASA Astrophysics Data System (ADS)
Kapit, Eliot
2018-02-01
One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system, using the specific PN code recommended, is included.
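One common way to guarantee bit transitions is to add (XOR) a PN sequence from a linear-feedback shift register onto the data; the sketch below uses a generic 7-bit LFSR whose polynomial is an illustrative choice, not the specific PN code recommended in the report, and compares transition counts before and after scrambling. The receiver removes the scrambling by XORing the same synchronized PN sequence.

```python
# Generic additive PN scrambler sketch: XOR the data with an LFSR sequence to
# raise the bit-transition density.  The 7-bit polynomial x^7 + x^6 + 1 is an
# illustrative choice, not the specific PN code recommended in the report.
def lfsr_bits(seed=0x7F, taps=(7, 6), nbits=0):
    state = seed
    for _ in range(nbits):
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1     # XOR the tapped stages
        state = (state >> 1) | (fb << 6)     # shift and feed back into bit 7
        yield out

def transitions(bits):
    return sum(a != b for a, b in zip(bits, bits[1:]))

data = [0] * 40 + [1] * 24                   # long runs: almost no transitions
pn = list(lfsr_bits(nbits=len(data)))
scrambled = [d ^ p for d, p in zip(data, pn)]

print("transitions before:", transitions(data))
print("transitions after :", transitions(scrambled))
```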
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
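The prediction-error signal described above is commonly formalized with a Rescorla-Wagner style update in which the error is the difference between the obtained reward and the learned cue value; the sketch below simulates such learning for two probabilistic cues (the probabilities and learning rate are illustrative, not the study's parameters). As learning proceeds, outcome prediction errors shrink for the better-predicted cue, consistent with the reported decrease in outcome-locked amplitude at higher reward probability.

```python
import random

# Rescorla-Wagner style learning of cue value V with prediction error
# delta = r - V; illustrative reward probabilities and learning rate only.
random.seed(0)
alpha = 0.2                                  # learning rate
cue_prob = {"cue_25%": 0.25, "cue_75%": 0.75}
V = {cue: 0.0 for cue in cue_prob}

for trial in range(1, 201):
    cue = random.choice(list(cue_prob))
    reward = 1.0 if random.random() < cue_prob[cue] else 0.0
    delta = reward - V[cue]                  # reward prediction error
    V[cue] += alpha * delta
    if trial % 50 == 0:
        print(f"trial {trial:3d}  {cue}: V={V[cue]:.2f}  last delta={delta:+.2f}")
```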
Holonomic surface codes for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco
2018-02-01
Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint-based description of the required performance for the system.
Olive, F; Gomez, F; Schott, A-M; Remontet, L; Bossard, N; Mitton, N; Polazzi, S; Colonna, M; Trombert-Paviot, B
2011-02-01
Use of French Diagnosis Related Groups (DRG) program databases for purposes other than financing has recently been improved, since a unique anonymous patient identification number has been created for each inpatient in the administrative case-mix database. Based on the work of the group for cancer epidemiological observation in the Rhône-Alpes area (ONC-EPI group), we review the remaining difficulties in the use of DRG data for epidemiological purposes and we consider a longitudinal approach based on analysis of the database over several years. We also discuss the limitations of this approach. The main problems are related to the lack of quality of administrative data, especially the coding of diagnoses. These errors come from missing or inappropriate codes, or codes not in accordance with prioritization rules (causing over- or under-reporting or inconsistencies in coding over time). One difficulty, partly due to the hierarchy of coding and the type of cancer, is the choice of an extraction algorithm. In two studies designed to estimate the incidence of cancers treated in hospitals (breast, colon-rectum, kidney, ovary), a first algorithm, selecting a cancer code as principal diagnosis together with a selection of surgical procedures, performed less well than a second one selecting a cancer code as principal diagnosis only, for which the ratio of hospitalizations per patient was stable across time and space. Chaining records over several years allows, by tracing the trajectory of the patient, the detection and correction of inaccuracies, errors and missing values and, for incidence studies, the correction of incident cases by removing prevalent cases. However, linkage, complete only since 2007, does not correct data in all cases. Ways of future improvement certainly pass through improved algorithms for case identification and especially through linking DRG data with other databases. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
Final Report: Correctness Tools for Petascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
2014-10-27
In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, the University of Maryland, the University of Wisconsin-Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.
Encoder fault analysis system based on Moire fringe error signal
NASA Astrophysics Data System (ADS)
Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu
2018-02-01
To address faults and erroneous codes arising in the practical application of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed from the perspective of Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, a high-speed serial A/D converter acquisition card is used, and a temperature measuring circuit based on the AD7420 is designed. Discrete Moire fringe error signal data are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose erroneous code states through the human-machine interface.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
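The quoted spectral efficiency is simple arithmetic: bits per 8PSK symbol times the ensemble code rate. A one-line check in Python (numbers taken from the abstract):

```python
from math import log2

bits_per_symbol = log2(8)        # 8PSK carries 3 bits per channel symbol
ensemble_code_rate = 8 / 9       # ensemble rate of the LRB RS component codes
spectral_efficiency = bits_per_symbol * ensemble_code_rate
print(f"{spectral_efficiency:.2f} information bits/s/Hz")  # -> 2.67
```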
Ancient DNA sequence revealed by error-correcting codes.
Brandão, Marcelo M; Spoladore, Larissa; Faria, Luzinete C B; Rocha, Andréa S L; Silva-Filho, Marcio C; Palazzo, Reginaldo
2015-07-10
A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code.
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
Hakala, John L; Hung, Joseph C; Mosman, Elton A
2012-09-01
The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.
Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing.
Hargreaves, Adam D; Mulley, John F
2015-01-01
Portable DNA sequencers such as the Oxford Nanopore MinION device have the potential to be truly disruptive technologies, facilitating new approaches and analyses and, in some cases, taking sequencing out of the lab and into the field. However, the capabilities of these technologies are still being revealed. Here we show that single-molecule cDNA sequencing using the MinION accurately characterises venom toxin-encoding genes in the painted saw-scaled viper, Echis coloratus. We find the raw sequencing error rate to be around 12%, improved to 0-2% with hybrid error correction and 3% with de novo error correction. Our corrected data provides full coding sequences and 5' and 3' UTRs for 29 of 33 candidate venom toxins detected, far superior to Illumina data (13/40 complete) and Sanger-based ESTs (15/29). We suggest that, should the current pace of improvement continue, the MinION will become the default approach for cDNA sequencing in a variety of species.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered as one of the promising candidates due to their large coding gain performance and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve 11.8 dB net coding gain with no error floor at a BER of 10^-15 without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400Gb/s optical communication systems and beyond.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
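The abstract does not spell out the reshuffle scheme, so the sketch below shows only one simple way soft-decision data might be conditioned ahead of LDPC decoding: erasing (zeroing) the log-likelihood ratios of code bits received during a known RFI pulse, so the decoder treats them as unknown rather than as confidently wrong. The function name and the pulse mask are illustrative assumptions, not the study's method.

```python
import numpy as np

def blank_llrs_during_pulse(llrs, pulse_mask):
    """Zero the LLRs of code bits hit by an RFI pulse (a pure erasure), so the
    LDPC decoder relies on the parity constraints to recover those bits."""
    out = np.asarray(llrs, dtype=float).copy()
    out[np.asarray(pulse_mask, dtype=bool)] = 0.0
    return out

# Example: 8 code-bit LLRs, with bits 2-4 corrupted by a radar pulse.
llrs = np.array([+4.1, -3.8, -9.5, +7.2, -6.6, +3.9, -4.4, +4.0])
mask = np.array([0, 0, 1, 1, 1, 0, 0, 0])
print(blank_llrs_during_pulse(llrs, mask))
```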
NASA Astrophysics Data System (ADS)
Burdo, James S.
This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies can be detected by a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term after the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼ model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and a more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the difference was less than 1.00, allowing the difference to be distinguished from zero within the margin of error. The second was whether the difference between the two values was big enough to prevent their error bars from overlapping. Error analysis was performed using both a one-second count and pseudo-Maxwell statistics for a projected 60-second count, giving four criteria for detection. The number of guide tubes meeting these criteria was compared and graphed for each case. Further analysis at extremes of high and low enrichment and long and short burnup times was done using data from assemblies at the Beaver Valley 1 and 2 PWRs. In all neutron flux cases, at least two guide tube locations meet all the criteria for detection of pin diversion. At least one location in almost all of the gamma flux cases does. These results show that placing detectors in the empty guide tubes of spent fuel bundles to identify possible pin diversion is feasible.
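The two per-guide-tube detection criteria can be written compactly. The sketch below is an interpretation of the abstract: it assumes independent one-sigma uncertainties combined in quadrature for the flux difference (the thesis may handle the error propagation differently), and the numerical values are hypothetical.

```python
def detects_diversion(flux_full, err_full, flux_missing, err_missing):
    """Criterion 1: the relative error of the flux difference is below 1.0,
    i.e. the difference exceeds its own one-sigma uncertainty.
    Criterion 2: the one-sigma error bars of the two fluxes do not overlap."""
    diff = abs(flux_full - flux_missing)
    err_diff = (err_full**2 + err_missing**2) ** 0.5   # assumes independent errors
    criterion1 = diff > 0 and err_diff / diff < 1.0
    criterion2 = diff > (err_full + err_missing)
    return criterion1, criterion2

# Hypothetical guide-tube fluxes (arbitrary units) with one-sigma uncertainties.
print(detects_diversion(1.00e6, 2.0e4, 0.93e6, 2.0e4))   # -> (True, True)
```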
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
NASA Astrophysics Data System (ADS)
Simley, Eric; Y Pao, Lucy; Gebraad, Pieter; Churchfield, Matthew
2014-06-01
Several sources of error exist in lidar measurements for feedforward control of wind turbines including the ability to detect only radial velocities, spatial averaging, and wind evolution. This paper investigates another potential source of error: the upstream induction zone. The induction zone can directly affect lidar measurements and presents an opportunity for further decorrelation between upstream wind and the wind that interacts with the rotor. The impact of the induction zone is investigated using the combined CFD and aeroelastic code SOWFA. Lidar measurements are simulated upstream of a 5 MW turbine rotor and the true wind disturbances are found using a wind speed estimator and turbine outputs. Lidar performance in the absence of an induction zone is determined by simulating lidar measurements and the turbine response using the aeroelastic code FAST with wind inputs taken far upstream of the original turbine location in the SOWFA wind field. Results indicate that while measurement quality strongly depends on the amount of wind evolution, the induction zone has little effect. However, the optimal lidar preview distance and circular scan radius change slightly due to the presence of the induction zone.
Cloud detection algorithm comparison and validation for operational Landsat data products
Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady
2017-01-01
Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM +) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate nonthermal-based algorithm. We give preference to CFMask for operational cloud and cloud shadow detection, as it is derived from a priori knowledge of physical phenomena and is operable without geographic restriction, making it useful for current and future land imaging missions without having to be retrained in a machine-learning environment.
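For reference, the accuracy metrics named above have the usual confusion-matrix definitions; the short sketch below uses those generic definitions (assumed, not quoted from the study) for a binary cloud mask.

```python
import numpy as np

def accuracy_metrics(truth, predicted):
    """Overall accuracy plus cloud omission and commission error for
    binary masks (1 = cloud, 0 = clear)."""
    truth = np.asarray(truth, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    tp = np.sum(truth & predicted)        # clouds correctly flagged
    fn = np.sum(truth & ~predicted)       # clouds the algorithm missed
    fp = np.sum(~truth & predicted)       # clear pixels flagged as cloud
    tn = np.sum(~truth & ~predicted)
    overall = (tp + tn) / truth.size
    omission = fn / (tp + fn) if (tp + fn) else 0.0     # missed fraction of true clouds
    commission = fp / (tp + fp) if (tp + fp) else 0.0   # false-alarm fraction of detections
    return overall, omission, commission

truth     = [1, 1, 0, 0, 1, 0, 0, 1]
predicted = [1, 0, 0, 1, 1, 0, 0, 1]
print(accuracy_metrics(truth, predicted))   # -> (0.75, 0.25, 0.25)
```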
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
LDPC Codes--Structural Analysis and Decoding Techniques
ERIC Educational Resources Information Center
Zhang, Xiaojie
2012-01-01
Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
NASA Astrophysics Data System (ADS)
Kim, Isaac H.
2011-05-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.
DNA Barcode Goes Two-Dimensions: DNA QR Code Web Server
Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, “DNA barcode” actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interests, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications. PMID:22574113
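Encoding a barcode marker sequence as a QR image is straightforward with off-the-shelf libraries. The sketch below uses the third-party Python qrcode package (pip install qrcode[pil]) rather than the web server described above, and the ITS2-like fragment is an arbitrary placeholder, not a real voucher sequence.

```python
import qrcode

# Placeholder DNA barcode fragment (not a real marker sequence).
its2_fragment = "ATGCGATACTTGGTGTGAATTGCAGAATCCCGTGAACCATCGAGTCTTTGAACGCAAGTTGCGCCC"

qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
qr.add_data(its2_fragment)
qr.make(fit=True)                 # pick the smallest QR version that fits
qr.make_image().save("dna_barcode_qr.png")
```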
Health Monitor for Multitasking, Safety-Critical, Real-Time Software
NASA Technical Reports Server (NTRS)
Zoerner, Roger
2011-01-01
The Health Manager can detect bad health prior to a failure occurring by periodically monitoring the application software, looking for code corruption errors, and sanity-checking each critical data value prior to use. A processor's memory can fail and corrupt the software, or the software can accidentally write to the wrong address and overwrite the executing software. This innovation continuously calculates a checksum of the software load to detect corrupted code. This allows a system to detect a failure before it happens. This innovation monitors each software task (thread) so that if any task reports "bad health," or does not report to the Health Manager, the system is declared bad. The Health Manager reports overall system health to the outside world by outputting a square wave signal. If the square wave stops, this indicates that system health is bad or hung and cannot report. Either way, "bad health" can be detected, whether caused by an error, corrupted data, or a hung processor. A separate Health Monitor Task is started and run periodically in a loop that starts and stops pending on a semaphore. Each monitored task registers with the Health Manager, which maintains a count for the task. The registering task must indicate if it will run more or less often than the Health Manager. If the task runs more often than the Health Manager, the monitored task calls a health function that increments the count and verifies it did not go over the max-count. When the periodic Health Manager runs, it verifies that the count did not go over the max-count and zeroes it. If the task runs less often than the Health Manager, the periodic Health Manager will increment the count. The monitored task zeroes the count, and both the Health Manager and monitored task verify that the count did not go over the max-count.
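The counter-and-checksum logic described above can be sketched in a few lines. This is a simplified illustration, not the flight code: the class and method names are invented, CRC-32 stands in for whatever checksum the actual monitor computes, and only the "task runs more often than the manager" case is shown.

```python
import zlib

class HealthManager:
    """Simplified sketch of per-task heartbeat counters plus a code-image checksum."""

    def __init__(self, code_image: bytes):
        self._reference_crc = zlib.crc32(code_image)
        self._counts = {}        # task name -> heartbeat count since last check
        self._max_counts = {}    # task name -> allowed count before "bad health"
        self.healthy = True

    def register(self, task_name, max_count):
        self._counts[task_name] = 0
        self._max_counts[task_name] = max_count

    def heartbeat(self, task_name):
        # Called by a monitored task that runs more often than the Health Manager.
        self._counts[task_name] += 1
        if self._counts[task_name] > self._max_counts[task_name]:
            self.healthy = False

    def periodic_check(self, code_image: bytes):
        # Called from the Health Manager's own periodic loop.
        if zlib.crc32(code_image) != self._reference_crc:
            self.healthy = False          # code corruption detected
        for name in self._counts:
            count = self._counts[name]
            if count == 0 or count > self._max_counts[name]:
                self.healthy = False      # task hung, or ran far too often
            self._counts[name] = 0
        return self.healthy

# Illustrative usage with placeholder bytes standing in for the software load.
hm = HealthManager(code_image=b"...load image bytes...")
hm.register("guidance_task", max_count=10)
hm.heartbeat("guidance_task")
print(hm.periodic_check(b"...load image bytes..."))   # -> True (still healthy)
```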
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan
2015-01-01
Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. An MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security along with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems by overcoming the effects of MAI. In this paper, a low-complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low decoding complexity by mitigating the detrimental effects of MAI. PMID:25714917
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayler, E; Harrison, A; Eldredge-Hindy, H
Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; the novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection. The clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk priority numbers (RPN): -Treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up. Peer review reduces the likelihood of an inappropriate treatment regime. -A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planning physicist, and increases the detectability of an error during the physics second check. -A screen check was implemented during the second check to increase detectability of an error. -To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display. This facilitates data entry and verification. -VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification. Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Lin, S.
1984-01-01
Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.
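For readers who want to experiment, the (255,223) Reed-Solomon code itself is easy to exercise with the third-party Python reedsolo package (pip install reedsolo); this is an independent illustration, not the TDRSS concatenated decoder or the dual-basis algorithm studied in the report, and the three-value return of decode() assumes a recent reedsolo release.

```python
from reedsolo import RSCodec

# 32 parity bytes over GF(256); with full-length blocks this is the (255, 223)
# code, able to correct up to 16 byte errors per codeword.
rsc = RSCodec(32)

message = bytes(range(223))            # one full 223-byte information block
codeword = rsc.encode(message)         # 255 bytes

corrupted = bytearray(codeword)
for pos in (3, 50, 200):               # inject a few byte errors
    corrupted[pos] ^= 0xFF

decoded, _, errata = rsc.decode(bytes(corrupted))
assert decoded == message
print(f"corrected {len(errata)} byte errors")
```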
Fast QC-LDPC code for free space optical communication
NASA Astrophysics Data System (ADS)
Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong
2017-02-01
Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence leads to multiplicative noise related to signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, QC-LDPC performance is extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications mainly focus on the Gaussian channel and the Rayleigh channel, respectively. In this study, the LDPC code is designed over an atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is therefore closer to the practical situation. Based on the characteristics of the atmospheric channel, which is modeled with log-normal and K distributions, we design a special QC-LDPC code and derive the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast encoding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes, with high efficiency at low rates, stability at high rates, and a smaller number of iterations. Belief propagation (BP) decoding results show that the bit error rate (BER) clearly decreases as the Signal-to-Noise Ratio (SNR) increases. Therefore, LDPC channel coding technology can effectively improve the performance of FSO. At the same time, the post-decoding BER keeps decreasing as the SNR increases, without exhibiting an error floor.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
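The "general formula" for code-bit LLRs referred to above is, for an AWGN channel, a sum over constellation points partitioned by each bit's value. The sketch below implements that standard exact computation for a single received symbol; the QPSK labeling is illustrative and is not taken from the article.

```python
import numpy as np

def bit_llrs(y, constellation, bit_labels, noise_var):
    """Exact per-bit LLRs (log p(bit=0)/p(bit=1)) for one received symbol y
    on an AWGN channel.

    constellation : complex array of constellation points
    bit_labels    : (num_points, bits_per_symbol) array giving the bit pattern
                    mapped to each point
    noise_var     : complex-noise variance N0
    """
    metrics = np.exp(-np.abs(y - constellation) ** 2 / noise_var)
    llrs = []
    for b in range(bit_labels.shape[1]):
        p0 = metrics[bit_labels[:, b] == 0].sum()
        p1 = metrics[bit_labels[:, b] == 1].sum()
        llrs.append(np.log(p0 / p1))
    return np.array(llrs)

# Example: Gray-labelled QPSK and one noisy received symbol.
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(bit_llrs(0.9 + 0.2j, qpsk, labels, noise_var=0.5))
```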
NASA Astrophysics Data System (ADS)
Moussa, Jonathan; Ryan-Anderson, Ciaran
The canonical modern plan for universal quantum computation is a Clifford+T gate set implemented in a topological error-correcting code. This plan has the basic disparity that logical Clifford gates are natural for codes in two spatial dimensions while logical T gates are natural in three. Recent progress has reduced this disparity by proposing logical T gates in two dimensions with doubled, stacked, or gauge color codes, but these proposals lack an error threshold. An alternative universal gate set is Clifford+F, where a fusion (F) gate converts two logical qubits into a logical qudit. We show that logical F gates can be constructed by identifying compatible pairs of qubit and qudit codes that stabilize the same logical subspace, much like the original Bravyi-Kitaev construction of magic state distillation. The simplest example of high-distance compatible codes results in a proposal that is very similar to the stacked color code with the key improvement of retaining an error threshold. Sandia National Labs is a multi-program laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove the redundancy of inter-frames along the temporal direction using motion compensated temporal filtering, so high coding performance and flexible scalability can be provided in this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams that may be more appropriate for multiple antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6 % compared to the 8.0 % obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9 % . All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.
A Semantic Analysis Method for Scientific and Engineering Code
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
1998-01-01
This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
Language and Program for Documenting Software Design
NASA Technical Reports Server (NTRS)
Kleine, H.; Zepko, T. M.
1986-01-01
Software Design and Documentation Language (SDDL) provides effective communication medium to support design and documentation of complex software applications. SDDL supports communication among all members of software design team and provides for production of informative documentation on design effort. Use of SDDL-generated document to analyze design makes it possible to eliminate many errors not detected until coding and testing attempted. SDDL processor program translates designer's creative thinking into effective document for communication. Processor performs as many automatic functions as possible, freeing designer's energy for creative effort. SDDL processor program written in PASCAL.
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
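The first two origin strategies reduce to simple coordinate averages. The sketch below gives generic unweighted and population-weighted centroid computations; the coordinates and populations are hypothetical, and the full GIS travel-time workflow is not reproduced here.

```python
def geometric_centroid(points):
    """Unweighted mean of point coordinates, a stand-in for the ZIP polygon centroid."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def population_centroid(points, populations):
    """Population-weighted centroid of census-block points within the ZIP code."""
    total = sum(populations)
    x = sum(p * pt[0] for p, pt in zip(populations, points)) / total
    y = sum(p * pt[1] for p, pt in zip(populations, points)) / total
    return x, y

# Hypothetical census-block centroids (lon, lat) and their populations.
blocks = [(-72.30, 43.65), (-72.28, 43.70), (-72.35, 43.62)]
pops = [1200, 300, 2500]
print(geometric_centroid(blocks))          # strategy 1 style origin
print(population_centroid(blocks, pops))   # strategy 2 style origin
```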
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
An Unsupervised Deep Hyperspectral Anomaly Detector
Ma, Ning; Peng, Yu; Wang, Shaojun
2018-01-01
Hyperspectral image (HSI) based detection has attracted considerable attention recently in agriculture, environmental protection and military applications, as different wavelengths of light can be advantageously used to discriminate different types of objects. Unfortunately, estimating the background distribution and detecting interesting local objects is not straightforward, and anomaly detectors may give false alarms. In this paper, a Deep Belief Network (DBN) based anomaly detector is proposed. The high-level features and reconstruction errors are learned through the network in a manner which is not affected by a previously assumed background distribution. To reduce contamination by local anomalies, adaptive weights are constructed from reconstruction errors and statistical information. By using the code image which is generated during the inference of the DBN and modified by adaptively updated weights, a local Euclidean distance between pixels under test and their neighboring pixels is used to determine the anomaly targets. Experimental results on synthetic and recorded HSI datasets show that the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), the local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD). PMID:29495410
Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills
ERIC Educational Resources Information Center
Waggoner, Dori T.
2011-01-01
This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
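Periodic puncturing itself is a simple masking operation. The sketch below illustrates it for a rate-1/2 mother code with period P = 4; the puncturing table is illustrative and is not one of the optimized tables from the paper (with 6 of 8 bits kept per period, l = 2 and the rate is P/(P + l) = 4/6 = 2/3).

```python
import numpy as np

def puncture(coded_bits, puncture_table):
    """Keep only the mother-code output bits whose table entry is 1.

    coded_bits     : time-major output of a rate-1/N mother code (N bits per step)
    puncture_table : 0/1 array of shape (N, P); the punctured rate is
                     P divided by the number of ones in the table
    """
    table = np.asarray(puncture_table, dtype=bool)        # shape (N, P)
    n, p = table.shape
    frames = np.asarray(coded_bits).reshape(-1, p, n)      # P steps of N bits per period
    return frames[:, table.T].ravel()

# Rate-compatible families keep supersets of the same positions as the rate drops.
table_2_3 = [[1, 1, 1, 1],
             [1, 0, 1, 0]]
coded = np.arange(16) % 2          # placeholder mother-code output (two periods)
print(puncture(coded, table_2_3))  # 12 of 16 bits survive -> rate 2/3
```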
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
A Theory of False Cognitive Expectancies in Airline Pilots
NASA Astrophysics Data System (ADS)
Cortes, Antonio I.
The Theory of False Cognitive Expectancies was developed by studying high reliability flight operations. Airline pilots depend extensively on cognitive expectancies to perceive, understand, and predict actions and events. Out of 1,363 incident reports submitted by airline pilots to the National Aeronautics and Space Administration Aviation Safety Reporting System over a year's time, 110 reports were found to contain evidence of 127 false cognitive expectancies in pilots. A comprehensive taxonomy was developed with six categories of interest. The dataset of 127 false expectancies was used to initially code tentative taxon values for each category. Intermediate coding through constant comparative analysis completed the taxonomy. The taxonomy was used for the advanced coding of chronological context-dependent visualizations of expectancy factors, known as strands, which depict the major factors in the creation and propagation of each expectancy. Strands were mapped into common networks to detect highly represented expectancy processes. Theoretical integration established 11 sources of false expectancies, the most common expectancy errors, and those conspicuous factors worthy of future study. The most prevalent source of false cognitive expectancies within the dataset was determined to be unconscious individual modeling based on past events. Integrative analyses also revealed relationships between expectancies and flight deck automation, unresolved discrepancies, and levels of situation awareness. Particularly noteworthy were the findings that false expectancies can combine in three possible permutations to diminish situation awareness and examples of how false expectancies can be unwittingly transmitted from one person to another. The theory resulting from this research can enhance the error coding process used during aircraft line oriented safety audits, lays the foundation for developing expectancy management training programs, and will allow researchers to proffer hypotheses for human testing using flight simulators.
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible [1]. We approach the study of the error-threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO) [2]. EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems [3].
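As a rough illustration of the Extremal Optimization heuristic mentioned above, the Python sketch below runs τ-EO on a small pairwise Ising spin glass. The ring topology, the ±1 pairwise couplings, and the τ value are illustrative assumptions; the abstract's model instead has constrained three-spin couplings, so this is a sketch of the algorithm, not of that model.

```python
import numpy as np

def tau_eo(J, tau=1.4, steps=20000, seed=None):
    """tau-Extremal Optimization for low-energy states of a pairwise Ising
    spin glass with energy E = -1/2 * s.J.s (J symmetric, zero diagonal).

    Generic pairwise EO heuristic, for illustration only.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    best_s, best_e = s.copy(), -0.5 * s @ J @ s
    for _ in range(steps):
        # Local fitness of each spin: its contribution to -E (larger = happier).
        local = s * (J @ s)
        ranks = np.argsort(local)                       # unhappiest spins first
        u = 1.0 - rng.random()                          # uniform in (0, 1]
        k = min(int(u ** (-1.0 / (tau - 1.0))), n)       # rank ~ k^(-tau), 1 = worst
        s[ranks[k - 1]] *= -1                            # unconditionally flip that spin
        e = -0.5 * s @ J @ s
        if e < best_e:
            best_s, best_e = s.copy(), e
    return best_s, best_e

# Toy instance: random +-1 couplings on a 32-spin ring.
rng = np.random.default_rng(0)
n = 32
J = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    J[i, j] = J[j, i] = rng.choice([-1, 1])
print("best energy found:", tau_eo(J, seed=1)[1])
```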
Using QR Codes to Differentiate Learning for Gifted and Talented Students
ERIC Educational Resources Information Center
Siegle, Del
2015-01-01
QR codes are two-dimensional square patterns capable of encoding information ranging from web addresses to links to YouTube videos. The codes save typing time and eliminate the errors that come from entering addresses incorrectly. These codes make learning with technology easier for students and motivationally engage them in new ways.
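As a minimal illustration of encoding a web address in a QR code, the sketch below uses the third-party Python package qrcode, which is an assumption of this example and is not mentioned in the abstract; the URL and file name are placeholders.

```python
# Encode a web address as a scannable QR code image using the third-party
# "qrcode" package (pip install qrcode[pil]).  URL and file name are placeholders.
import qrcode

img = qrcode.make("https://www.example.com/enrichment-activity")
img.save("activity_qr.png")   # students scan this image instead of typing the URL
```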
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials. (c) 2016 APA, all rights reserved.
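A minimal sketch of a weighted negative binomial regression in Python with statsmodels is given below. The paper's own syntax is provided in SPSS and R in its supplement, so this is only an analogous illustration; the variable names, simulated data, weights, offset, and dispersion parameter are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical session-level data: a count outcome (coded client change-talk
# utterances), one predictor, session length, and an illustrative per-session
# weight (e.g., coder reliability).  All names and values are made up.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "therapist_empathy": rng.normal(size=n),
    "session_minutes": rng.uniform(30, 90, size=n),
    "weight": rng.uniform(0.5, 1.0, size=n),
})
mu = np.exp(1.0 + 0.4 * df["therapist_empathy"]) * df["session_minutes"] / 60
df["change_talk"] = rng.negative_binomial(2, 2 / (2 + mu))

# Weighted negative binomial regression: session length enters as an exposure
# (offset) instead of being conflated with the outcome, and the per-session
# weights enter through var_weights.
X = sm.add_constant(df[["therapist_empathy"]])
model = sm.GLM(
    df["change_talk"], X,
    family=sm.families.NegativeBinomial(alpha=0.5),
    offset=np.log(df["session_minutes"] / 60),
    var_weights=df["weight"],
)
print(model.fit().summary())
```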
Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.
Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E
2011-01-01
Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during ribosomal decoding induced competence, while decreasing the error rate repressed competence. This pattern of regulation was promoted by the HtrA protease, which selectively repressed competence when translational fidelity was high but not when accuracy was low. Our findings demonstrate that this organism is able to monitor the accuracy of information used for protein biosynthesis and suggest that errors trigger a response addressing both the immediate challenge of misfolded proteins and, through genetic exchange, upstream coding errors that may underlie protein folding defects. This pathway may represent an evolutionary strategy for maintaining the coding integrity of the genome.
Spatial resampling of IDR frames for low bitrate video coding with HEVC
NASA Astrophysics Data System (ADS)
Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick
2015-03-01
As the demand for higher-quality and higher-resolution video increases, many applications fail to meet this demand due to low-bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featured in all video coding standards. Frequent coding of IDR frames is essential for error resilience, in order to prevent error propagation. However, as each one consumes a large portion of the available bitrate, the quality of subsequently coded frames is limited by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can improve rate-distortion performance by providing a higher and more consistent level of video quality at low bitrates.
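The sketch below illustrates only the spatial-resampling idea (downsample an IDR frame before intra coding, upsample it back after decoding), using OpenCV in Python. The scale factor, interpolation filters, and synthetic test frame are assumptions, and the HEVC encoding step itself is omitted, so this is not the SRIF algorithm as integrated into the codec.

```python
import cv2
import numpy as np

def resample_idr(frame, scale=0.5):
    """Downsample an IDR frame before intra coding and upsample it back after
    decoding.  Only the resampling step is sketched; encoding the downsampled
    frame with an HEVC encoder is outside the scope of this example.
    """
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    # ... intra-code `small` here at a much lower bit cost than the full frame ...
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)

# Synthetic 720p test frame (horizontal luminance ramp) used as a stand-in for
# a real IDR frame; measures the distortion of the down/upsampling step alone.
ramp = (np.arange(1280) % 256).astype(np.uint8)
frame = cv2.merge([np.tile(ramp, (720, 1))] * 3)
print(f"PSNR of resampling alone: {cv2.PSNR(frame, resample_idr(frame)):.1f} dB")
```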
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
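Schematically, and using symbols that are not defined in the abstract (λ for the bath coupling, Γ for the Markovian decoherence rate, R for the error-correction rate, c for a model-dependent constant), the stated scaling comparison can be summarized as:

```latex
\lambda_{\mathrm{eff}} \;\sim\; \frac{\lambda}{c\,R^{2}} \quad \text{(non-Markovian, within the Zeno regime)},
\qquad
\Gamma_{\mathrm{eff}} \;\sim\; \frac{\Gamma}{c\,R} \quad \text{(Markovian)}.
```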
NASA Astrophysics Data System (ADS)
Wootton, James R.; Loss, Daniel
2018-05-01
The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16-qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, performed using the standard technique of ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup-table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
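A minimal sketch of lookup-table decoding for a distance-d repetition code is shown below. It assumes independent bit flips on an encoded logical 0 and builds the table analytically rather than from experimental data, so it only illustrates the decoding style, not the ibmqx3 experiment; the distance, flip probability, and shot count are illustrative.

```python
import random
from itertools import product

def syndrome(bits):
    """Parities of adjacent code qubits, as read out via ancilla measurements."""
    return tuple(bits[i] ^ bits[i + 1] for i in range(len(bits) - 1))

def build_lookup(d, p):
    """Map each syndrome to its most probable bit-flip pattern, assuming
    independent flips with probability p on each of the d code qubits."""
    table = {}
    for flips in product([0, 1], repeat=d):
        prob = p ** sum(flips) * (1 - p) ** (d - sum(flips))
        s = syndrome(flips)
        if s not in table or prob > table[s][1]:
            table[s] = (flips, prob)
    return {s: flips for s, (flips, prob) in table.items()}

# Monte Carlo estimate of the logical error rate for a d = 5 repetition code
# encoding logical 0, with physical flip probability p.
d, p, shots = 5, 0.05, 100000
lookup = build_lookup(d, p)
random.seed(0)
failures = 0
for _ in range(shots):
    bits = [int(random.random() < p) for _ in range(d)]
    correction = lookup[syndrome(bits)]
    decoded = [b ^ c for b, c in zip(bits, correction)]
    failures += decoded[0]      # after correction all bits agree; 1 means a logical error
print(f"estimated logical error rate: {failures / shots:.5f}")
```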
Pillai, Anilkumar; Medford, Andrew R L
2013-01-01
Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies, especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. This was a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. Between October 2009 and March 2012, 165 consecutive patients underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses, which can be significant, especially in interventional specialties. The intervention can be as cheap, quick, and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
The systems resilience research community has developed methods to manually insert additional source-program-level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel’s AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
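As a conceptual sketch only, the Python/NumPy example below mimics the idea described above: inject single-bit faults into the result of a vectorized kernel and check an invariant-style detector. VULFI itself operates on LLVM IR, not Python, and the kernel, fault model, and residual threshold here are assumptions chosen for illustration.

```python
import numpy as np

def saxpy(a, x, y):
    """A simple vectorizable kernel: z = a * x + y."""
    return a * x + y

def flip_bit(vec, lane, bit):
    """Flip one bit of one lane of a float64 vector, mimicking a fault that
    corrupts a SIMD register during the vectorized computation."""
    raw = vec.view(np.uint64).copy()
    raw[lane] ^= np.uint64(1) << np.uint64(bit)
    return raw.view(np.float64)

rng = np.random.default_rng(7)
a = 2.0
x = rng.normal(size=1024)
y = rng.normal(size=1024)
golden = saxpy(a, x, y)

trials, detected = 1000, 0
for _ in range(trials):
    z = flip_bit(golden, rng.integers(1024), rng.integers(64))
    # Invariant-style detector (illustrative threshold): every output lane must
    # satisfy z[i] = a*x[i] + y[i] up to rounding; large residuals flag a fault.
    if not np.all(np.abs(z - (a * x + y)) < 1e-6):
        detected += 1
print(f"{detected}/{trials} injected faults flagged by the assertion-style detector")
```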
GIZMO: Multi-method magneto-hydrodynamics+gravity code
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2014-10-01
GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).