Sample records for parity check codes

  1. Protecting quantum memories using coherent parity check codes

    NASA Astrophysics Data System (ADS)

    Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv

    2018-07-01

    Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2]] code…

  2. Method of Error Floor Mitigation in Low-Density Parity-Check Codes

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor)

    2014-01-01

    A digital communication decoding method for low-density parity-check coded messages. The decoding method decodes the low-density parity-check coded messages within a bipartite graph having check nodes and variable nodes. Messages from check nodes are partially hard limited, so that every message which would otherwise have a magnitude at or above a certain level is re-assigned to a maximum magnitude.
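
    The abstract describes the clipping idea only at a high level. As a rough illustration (a sketch, not the patented algorithm itself), the fragment below clips check-to-variable messages to a maximum magnitude inside an otherwise standard min-sum LDPC decoder; the tiny parity-check matrix, the CLIP value, and the channel LLRs are all invented for the example.

      import numpy as np

      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 0, 0, 1, 1],
                    [0, 0, 1, 1, 0, 1]])
      CLIP = 8.0  # maximum allowed check-message magnitude (illustrative)

      def decode(llr_ch, H, iters=20):
          m, n = H.shape
          msg_cv = np.zeros((m, n))                  # check -> variable messages
          for _ in range(iters):
              total = llr_ch + msg_cv.sum(axis=0)
              msg_vc = (total - msg_cv) * H          # variable -> check messages
              for c in range(m):
                  idx = np.flatnonzero(H[c])
                  v = msg_vc[c, idx]
                  sign_all = np.prod(np.sign(v + 1e-12))
                  for k, j in enumerate(idx):
                      mag = np.abs(np.delete(v, k)).min()
                      s = sign_all * np.sign(v[k] + 1e-12)  # product of the other signs
                      msg_cv[c, j] = s * min(mag, CLIP)     # partial hard limiting
              hard = (llr_ch + msg_cv.sum(axis=0) < 0).astype(int)
              if not ((H @ hard) % 2).any():         # all parity checks satisfied
                  break
          return hard

      llr = np.array([4.0, -1.0, 3.5, 2.0, 5.0, 3.0])  # bit 1 is unreliable
      print(decode(llr, H))                            # -> [0 0 0 0 0 0]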

  3. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received code and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  4. Low-Density Parity-Check Code Design Techniques to Simplify Encoding

    NASA Astrophysics Data System (ADS)

    Perez, J. M.; Andrews, K.

    2007-11-01

    This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. The use of the H matrix to encode allows a significant reduction in memory consumption and provides the encoder design with great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area while also reducing the encoding delay.
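
    As a hedged sketch of the general idea of encoding from H rather than G (not the actual AR4JA encoder): if the parity-check matrix can be arranged as H = [A | T] with T square, lower triangular, and invertible, the parity bits follow from the sparse system T p = A u (mod 2) by substitution, so the dense generator matrix is never formed. The matrices below are toy values of ours.

      import numpy as np

      A = np.array([[1, 0, 1],     # H = [A | T]; toy values, not AR4JA
                    [1, 1, 0],
                    [0, 1, 1]])
      T = np.array([[1, 0, 0],     # square, lower triangular, unit diagonal
                    [1, 1, 0],
                    [0, 1, 1]])

      def encode_from_H(u):
          s = (A @ u) % 2
          p = np.zeros(T.shape[0], dtype=int)
          for i in range(T.shape[0]):              # substitution, mod 2
              p[i] = (s[i] + T[i, :i] @ p[:i]) % 2
          return np.concatenate([u, p])

      u = np.array([1, 0, 1])
      c = encode_from_H(u)
      H = np.hstack([A, T])
      print(c, (H @ c) % 2)                        # syndrome is all-zero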

  5. Concurrent error detecting codes for arithmetic processors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo-m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
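
    A minimal numeric illustration of low-cost residue checking (our example, not the paper's circuits): with check base m = 2^l - 1, the residue of a sum equals the sum of the residues mod m, so an independent low-cost checker can validate the main adder.

      l = 3
      m = (1 << l) - 1       # check base m = 2^l - 1 = 7; x % m is cheap in hardware

      def checked_add(a, b):
          s = a + b                                       # possibly faulty adder
          assert s % m == (a % m + b % m) % m, "adder fault detected"
          return s

      print(checked_add(1234, 5678))   # 6912; residues agree (3 == (2 + 1) % 7)
      # Any fault that shifts the sum by a non-multiple of m trips the check.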

  6. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with the Silvus DSP MIMO testbed. * High mobility… incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also… Channel coding: binary convolutional code or LDPC. Packet length: 0 to 2^16-1 bytes. Coding rate: 1/2, 2/3, 3/4, 5/6. MIMO channel training length: 0 to 4 symbols.

  7. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
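
    A hedged sketch of the lifting step implied here (the base matrix and shift values are illustrative, not one of the paper's protographs): each protograph edge is expanded into a Z x Z circulant permutation, so the lifted code inherits the protograph's node degrees.

      import numpy as np

      Z = 4                                # lifting (circulant) size
      base = np.array([[0,  1, -1],        # shift exponents; -1 marks a zero block
                       [2, -1,  3]])

      def lift(base, Z):
          I = np.eye(Z, dtype=int)
          blocks = [[np.roll(I, s, axis=1) if s >= 0 else np.zeros((Z, Z), dtype=int)
                     for s in row] for row in base]
          return np.block(blocks)

      H = lift(base, Z)
      print(H.shape)         # (8, 12): a (2 x 3) protograph lifted by Z = 4
      print(H.sum(axis=0))   # column weights reproduce the protograph degrees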

  8. Weight-4 Parity Checks on a Surface Code Sublattice with Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Takita, Maika; Corcoles, Antonio; Magesan, Easwar; Bronn, Nicholas; Hertzberg, Jared; Gambetta, Jay; Steffen, Matthias; Chow, Jerry

    We present a superconducting qubit quantum processor design amenable to the surface code architecture. In such an architecture, parity checks on the data qubits, performed by measuring their X- and Z-syndrome qubits, constitute a critical aspect. Here we show fidelities and outcomes of X- and Z-parity measurements done on a syndrome qubit in a full plaquette consisting of one syndrome qubit coupled via bus resonators to four code qubits. Parities are measured after the four code qubits are prepared into sixteen initial states in each basis. The results show a strong dependence on the ZZ interaction between qubits on the same bus resonators. This work is supported by IARPA under Contract W911NF-10-1-0324.

  9. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which leads to degraded error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity-check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes contain no cycles of length 4, thus they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.

  10. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing memory density, with the problem of crosstalk to be dealt with. We apply low-density parity-check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation improves the bit error rate (BER) performance. At the same time, we conclude that LDPC codes outperform RS codes in the crosstalk channel.

  11. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
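
    A hedged toy illustration of the orthogonality requirement mentioned above: in a CSS-type construction, the matrix pair must satisfy Hc Hd^T = 0 (mod 2) so that the X-type and Z-type stabilizers commute. The tiny matrices below are ours, not the quasi-cyclic pair of Hagiwara et al.

      import numpy as np

      Hc = np.array([[1, 1, 1, 1, 0, 0],      # checks for Pauli X errors
                     [0, 0, 1, 1, 1, 1]])
      Hd = np.array([[1, 1, 0, 0, 1, 1]])     # checks for Pauli Z errors

      # CSS-type condition: every row of Hc is orthogonal (mod 2) to every row of Hd
      assert not ((Hc @ Hd.T) % 2).any()
      print("stabilizers commute; deleting rows trades checks for code rate")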

  12. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
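
    A hedged sketch of the hypergraph-product construction that this ansatz recovers as a limiting case (the two small classical matrices are our toy inputs): the two quantum check matrices are built from Kronecker products and satisfy HX HZ^T = 0 (mod 2), the commutation condition.

      import numpy as np

      H1 = np.array([[1, 1, 0],      # toy classical checks (m1 x n1)
                     [0, 1, 1]])
      H2 = np.array([[1, 1]])        # (m2 x n2)
      m1, n1 = H1.shape
      m2, n2 = H2.shape

      HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                      np.kron(np.eye(m1, dtype=int), H2.T)])
      HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                      np.kron(H1.T, np.eye(m2, dtype=int))])

      assert not ((HX @ HZ.T) % 2).any()       # X and Z checks commute
      print(HX.shape, HZ.shape)                # block length n1*n2 + m1*m2 = 8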

  13. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    Forward error correction (FEC) is one of the key technologies enabling next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for next-generation high-speed fiber optical communications.

  14. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  15. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low-density parity-check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have large minimum distances, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.

  16. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured low-density parity-check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.

  17. Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.

    PubMed

    Djordjevic, Ivan B

    2010-05-01

    I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that the two basic gates needed for EA quantum error correction, namely the controlled-NOT (CNOT) and Hadamard gates, can be implemented based on the Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.

  18. Hyperbolic and semi-hyperbolic surface codes for quantum storage

    NASA Astrophysics Data System (ADS)

    Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.

    2017-09-01

    We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.

  19. Coded Modulation in C and MATLAB

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  20. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low-density parity-check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
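
    A minimal sketch of why the cyclic structure simplifies encoding (our example, using the small cyclic (7,4) Hamming code rather than an actual finite-geometry LDPC code): systematic encoding reduces to polynomial division by the generator polynomial g(x), exactly the operation a linear feedback shift register performs.

      def poly_remainder(dividend, divisor):
          r = list(dividend)                    # coefficients, highest degree first
          for i in range(len(r) - len(divisor) + 1):
              if r[i]:
                  for j, d in enumerate(divisor):
                      r[i + j] ^= d             # GF(2) subtraction is XOR
          return r[-(len(divisor) - 1):]

      def encode_cyclic(msg, g):
          # systematic: c(x) = x^(n-k) m(x) + (x^(n-k) m(x) mod g(x))
          return msg + poly_remainder(msg + [0] * (len(g) - 1), g)

      g = [1, 0, 1, 1]                          # g(x) = x^3 + x + 1
      print(encode_cyclic([1, 1, 0, 1], g))     # -> [1, 1, 0, 1, 0, 0, 1]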

  1. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate substantially. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and their regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.

  2. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.

  3. The small stellated dodecahedron code and friends.

    PubMed

    Conrad, J; Chamberland, C; Breuckmann, N P; Terhal, B M

    2018-07-13

    We explore a distance-3 homological CSS quantum code, namely the small stellated dodecahedron code, for dense storage of quantum information, and we compare its performance with the distance-3 surface code. The data and ancilla qubits of the small stellated dodecahedron code can be located on the edges and vertices, respectively, of a small stellated dodecahedron, making this code suitable for three-dimensional connectivity. This code encodes eight logical qubits into 30 physical qubits (plus 22 ancilla qubits for parity check measurements), in contrast with one logical qubit into nine physical qubits (plus eight ancilla qubits) for the surface code. We develop fault-tolerant parity check circuits and a decoder for this code, allowing us to numerically assess the circuit-based pseudo-threshold.

  4. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance with quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) formats. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  5. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.

  6. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation combines cooperation gain and channel coding gain well, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
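
    For exponent matrices whose entries are all circulant permutation matrices (type I), there is a standard algebraic test (due to Fossorier) for the girth-4 cycles discussed above; a hedged sketch follows, with an exponent matrix and lifting size of our own choosing. The paper's weight-2 circulants (type II) need an extended version of this condition.

      import numpy as np
      from itertools import combinations

      def has_girth4(E, Z):
          # 4-cycle iff E[i1,j1] - E[i1,j2] + E[i2,j2] - E[i2,j1] = 0 (mod Z)
          for i1, i2 in combinations(range(E.shape[0]), 2):
              for j1, j2 in combinations(range(E.shape[1]), 2):
                  if (E[i1, j1] - E[i1, j2] + E[i2, j2] - E[i2, j1]) % Z == 0:
                      return True
          return False

      Z = 7                                     # circulant size (illustrative)
      E_bad = np.array([[0, 0, 0], [0, 0, 1]])
      E_good = np.array([[0, 1, 2], [0, 2, 4]])
      print(has_girth4(E_bad, Z), has_girth4(E_good, Z))   # True False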

  7. Performance of Low-Density Parity-Check Coded Modulation

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.

  8. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.

  9. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
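
    As a hedged sketch of how such loop-free modules compose (the repetition factor, permutation, and input are illustrative, not one of the cited NASA codes), the fragment below chains a precoder accumulator, a repeater, an interleaver, and an outer accumulator into an ARA-style rate-1/3 encoder.

      import numpy as np

      rng = np.random.default_rng(0)

      def accumulate(bits):
          return np.cumsum(bits) % 2        # running XOR: y_i = x_0 ^ ... ^ x_i

      u = np.array([1, 0, 1, 1, 0])         # information bits (illustrative)
      pre = accumulate(u)                   # precoder: an accumulator
      rep = np.repeat(pre, 3)               # repetition module (q = 3)
      perm = rng.permutation(rep.size)      # interleaver: a fixed permutation
      x = accumulate(rep[perm])             # outer accumulator
      print(x)                              # rate-1/3 ARA-style output (sketch)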

  10. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    PubMed Central

    Revathy, M.; Saravanan, R.

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures. PMID:26065017

  11. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    Spatially coupled low-density parity-check (SC-LDPC) codes were considered for holographic data storage, and their superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates over 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment, where an error rate of 8 × 10^-2 was corrected; the code works effectively and shows good error correctability.

  12. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^-6. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
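
    A hedged sketch of the kind of general LLR formula referred to above (the Gray-labeled QPSK constellation, bit labels, and noise convention are our choices): for each bit position, the LLR is the log-ratio of summed Gaussian likelihoods over constellation points whose label has that bit equal to 0 versus 1.

      import numpy as np

      points = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
      labels = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # bit labels per point

      def llrs(y, sigma2):
          # LLR_k = ln( sum_{bit k = 0} e^{-|y-s|^2/(2 sigma2)}
          #           / sum_{bit k = 1} e^{-|y-s|^2/(2 sigma2)} )
          metric = np.exp(-np.abs(y - points) ** 2 / (2 * sigma2))
          return [np.log(metric[labels[:, k] == 0].sum()
                         / metric[labels[:, k] == 1].sum())
                  for k in range(labels.shape[1])]

      print(llrs(0.9 + 0.2j, sigma2=0.5))   # positive LLR -> bit more likely 0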

  13. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low-density parity-check (LDPC) codes built from protographs. The described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold, and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode-and-forward relay channels.

  14. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is respectively 2.08 dB, 1.25 dB, and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code, and the irregular QC-LDPC(3843, 3603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code has lower encoding/decoding complexity compared with the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code. The proposed QC-LDPC(4288, 4020) code is thus well suited to the increasing demands of high-speed optical transmission systems.

  15. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost-complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w ≡ 0 mod 4. The codes are of length 8m, with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two length-(4m-1) parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the quadratic residue (QR) codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay code. In addition, the previously found constraint length K = 9 for the QR (48, 24; 12) code is lowered here to K = 8.

  16. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information-theoretic security with practical complexity is of great interest for continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  17. Performance Analysis of a JTIDS/Link-16-type Waveform Transmitted over Slow, Flat Nakagami Fading Channels in the Presence of Narrowband Interference

    DTIC Science & Technology

    2008-12-01

    The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in… of 1,800 bits per second. With the use of FEC coding, the channel data rate is 2,250 bits per second; however, the information data rate is still the… Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps.

  18. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity-check codes. Together with a repeater, an interleaver, and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A, and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular repeat-accumulate (IRA) codes.

  19. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ^(2)-interaction-based quantum computation in multimode Fock bases: the χ^(2) parity-check code, the χ^(2) embedded error-correcting code, and the χ^(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ^(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ^(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ^(2) parity-check code and the χ^(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ^(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  20. Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication

    NASA Astrophysics Data System (ADS)

    Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn

    2014-11-01

    Wireless networking increases flexibility in the home and office environment by connecting to the internet without wires, but at the cost of risks associated with data theft or the threat of loading malicious code with the intention of harming the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). For security, spatial-diversity-based transmission using two optical transmitters is used, and reliability in the link is achieved by a newly proposed method for the construction of a structured parity-check matrix for binary low-density parity-check (LDPC) codes. Experimental results show that a successful secure and reliable link between the transmitter and the receiver can be achieved by using the proposed technique.

  1. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    ERIC Educational Resources Information Center

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  2. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder… for AWGN channels has long been studied. There are well-known soft-decision codes like turbo codes and LDPC codes that can approach capacity to… bits) low-density parity-check (LDPC) code. 2. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are…

  3. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.

  4. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper, we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of low-density parity-check (LDPC) codes combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.

  5. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.

  6. Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks

    DTIC Science & Technology

    2010-08-31

    SUMMARY OF CONTRIBUTIONS: We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted… important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We… tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6…

  7. LDPC Codes--Structural Analysis and Decoding Techniques

    ERIC Educational Resources Information Center

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near-Shannon-limit performance and their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  8. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of the high-speed development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycles of length 4, which ensures that the obtained code has good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB respectively compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher compared with those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.

  9. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

    reliability, e.g., turbo codes [2] and low-density parity-check (LDPC) codes [3]. The challenge of applying both MIMO and ECC in wireless systems is on… illustrates the performance of coded LR-aided detectors.

  10. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
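
    A hedged sketch of the geometric acceptance test described above (the threshold and vectors are illustrative; the actual decoder modification is more involved): map bits to ±1 vectors in R^n and accept the iterative decoder's output only if its angle to the received vector is within a bound, otherwise declare a detected failure.

      import numpy as np

      def angle_deg(y, bits):
          x = 1.0 - 2.0 * np.asarray(bits)            # bit b -> BPSK symbol (-1)^b
          c = np.dot(y, x) / (np.linalg.norm(y) * np.linalg.norm(x))
          return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

      y = np.array([0.9, 1.2, -0.8, 1.1, -1.3, 0.7])  # received vector (illustrative)
      decoded = [0, 0, 1, 0, 1, 0]                    # iterative decoder output
      THETA_MAX = 30.0                                # angle bound (illustrative)
      theta = angle_deg(y, decoded)
      print(theta, "accept" if theta <= THETA_MAX else "declare erasure")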

  11. PSEUDO-CODEWORD LANDSCAPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHERTKOV, MICHAEL; STEPANOV, MIKHAIL

    2007-01-10

    The authors discuss the performance of low-density parity-check (LDPC) codes decoded by linear programming (LP) decoding at moderate and large signal-to-noise ratios (SNR). The frame-error-rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks (≥ 5), they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under maximum a posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions performing over the additive white Gaussian noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) FER decays faster with SNR increase at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.

  12. A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam

    2005-01-01

    We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.

  13. Fast and Flexible Successive-Cancellation List Decoders for Polar Codes

    NASA Astrophysics Data System (ADS)

    Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.

    2017-11-01

    Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standards. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: we show that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.

  14. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics.

  15. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732

  16. Measurement Techniques for Clock Jitter

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlesinger, Adam

    2012-01-01

    NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.

  17. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  18. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulations types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.

  19. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  20. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division multiplexing.

  1. Construction of a new regular LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng

    2013-05-01

    A novel construction method of the check matrix for regular low density parity check (LDPC) codes is proposed, and a novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with a code rate of 93.7% and a redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can be improved by about 1.93 dB and 0.98 dB, respectively, at a bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in the ITU-T G.975 recommendation and the LDPC(32640,30592) code in the ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed regular SCG-LDPC(3969,3720) code has excellent performance and is well suited for high-speed long-haul optical transmission systems.
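
    Net coding gain figures of this kind are conventionally computed from the Q factors implied by the pre- and post-FEC BERs, minus a rate-overhead penalty. A minimal sketch of that bookkeeping (the input BER below is an illustrative placeholder, not a value from the paper):

```python
import numpy as np
from scipy.special import erfcinv

def q_db(ber):
    """Q factor in dB for the convention BER = 0.5 * erfc(Q / sqrt(2))."""
    return 20 * np.log10(np.sqrt(2) * erfcinv(2 * ber))

def net_coding_gain(ber_out, ber_in, rate):
    """Gross coding gain between output and input BER, minus the
    10*log10(rate) penalty for the redundancy overhead."""
    return q_db(ber_out) - q_db(ber_in) + 10 * np.log10(rate)

# Illustrative only: a rate-0.937 code taking a 2.5e-3 input BER to 1e-8.
print(f"NCG = {net_coding_gain(1e-8, 2.5e-3, 0.937):.2f} dB")
```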

  2. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
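
    As an illustration of why the block-circulant structure keeps the computations well organized, the core primitive, multiplying a length-b block by a b x b circulant over GF(2), is just an XOR of cyclic shifts. A minimal Python sketch under toy parameters (the matrices below are hypothetical, not the codes implemented in the FPGA):

```python
import numpy as np

def circulant_mul(first_row, v):
    """Multiply v by the GF(2) circulant whose first row is first_row.

    Row i of a circulant is its first row cyclically shifted by i, so
    the product is the XOR of cyclic shifts of v selected by the
    nonzero entries of the first row.
    """
    out = np.zeros_like(v)
    for shift, bit in enumerate(first_row):
        if bit:
            out ^= np.roll(v, -shift)
    return out

# Toy usage: one parity block as a sum of circulant-times-message blocks.
rng = np.random.default_rng(4)
b = 8
msg_blocks = [rng.integers(0, 2, b, dtype=np.uint8) for _ in range(2)]
gen_first_rows = [rng.integers(0, 2, b, dtype=np.uint8) for _ in range(2)]
parity = np.zeros(b, dtype=np.uint8)
for m_blk, row in zip(msg_blocks, gen_first_rows):
    parity ^= circulant_mul(row, m_blk)
print(parity)
```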

  3. Strategic and Tactical Decision-Making Under Uncertainty

    DTIC Science & Technology

    2006-01-03

    message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial-response channel. Related publication: "Joint Decoding of LDPC Codes and Partial-Response Channels," IEEE Transactions on Communications, Vol. 54, No. 7, pp. 1149-1153, 2006.

  4. Pilotless Frame Synchronization Using LDPC Code Constraints

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John

    2009-01-01

    A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.

  5. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-sideband unclipped-OFDM system with 64 subcarriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).

  6. The application of LDPC code in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

    The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication; it can overcome the frequency-selective fading of the wireless channel, increase system capacity, and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low density parity check) code is an error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to operate. This paper mainly discusses the application of LDPC codes in MIMO-OFDM systems.

  7. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.

  8. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il

    2014-08-15

    Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.

  9. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  10. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected-graph (protograph) representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable-node degree.

  11. An efficient decoding for low density parity check codes

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie

    2009-12-01

    Low density parity check (LDPC) codes are a class of forward-error-correction codes, and they are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended LDPC codes for deep-space and near-Earth communications. LDPC codes are therefore likely to be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future, so efficient hardware implementation is of great interest. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. The CNU and VNU are then rearranged and divided into several smaller parts; with the help of some auxiliary logic, these smaller parts can be grouped into CNU during check node updating and into VNU during variable node updating. The smaller parts are called node update kernel units (NKU) and the auxiliary logic is called the node update auxiliary unit (NAU). With the NAUs' help, the two steps of each iteration are completed by the NKUs, which brings a large reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize hardware overhead for parallel processing. The method applies not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC decoder is described in the Verilog hardware description language and implemented on an Altera StratixII EP2S130 field programmable gate array (FPGA). The implementation results show that over 20% of the logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. With a 100 MHz decoding clock, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
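
    For reference, the check-node and variable-node update operations that the shared CNU/VNU kernels implement are the standard sum-product rules; a scalar software sketch (illustrating the arithmetic only, not the partially parallel hardware organization described above):

```python
import numpy as np

def check_node_update(msgs_in):
    """Sum-product check-node rule (tanh form): each outgoing message
    combines the tanh-halves of all *other* incoming messages."""
    t = np.tanh(np.asarray(msgs_in, dtype=float) / 2.0)
    total = np.prod(t)
    return [2.0 * np.arctanh(total / ti) for ti in t]  # assumes no ti == 0

def variable_node_update(channel_llr, msgs_in):
    """Variable-node rule: channel LLR plus all other check messages."""
    s = channel_llr + sum(msgs_in)
    return [s - m for m in msgs_in]

print(check_node_update([1.2, -0.4, 2.0]))
print(variable_node_update(0.8, [1.1, -0.3]))
```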

  12. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed; this also supports the reliability, stability, and high transmission rates required by 5G mobile communication. The algorithm builds on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to erroneous code words are flipped multiple times, searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
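
    The flipping mechanics are easiest to see in the binary setting. The sketch below is a generic weighted bit-flipping loop over a toy parity-check matrix; it is a simplified binary analogue for illustration, not the non-binary loop-update-detection algorithm of the paper:

```python
import numpy as np

def weighted_bit_flip(H, y_llr, max_iters=50):
    """Generic binary weighted bit-flipping decoder (illustrative only).

    H: parity-check matrix (m x n, entries 0/1); y_llr: channel LLRs.
    Each iteration flips the bit with the largest flip metric, i.e. the
    bit most implicated by failed checks, with each check weighted by
    the smallest channel reliability among its participating bits.
    """
    x = (y_llr < 0).astype(np.uint8)          # hard decisions
    rel = np.abs(y_llr)                        # channel reliabilities
    w = np.array([rel[H[i] == 1].min() for i in range(H.shape[0])])
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x                           # valid codeword found
        sign = 2 * syndrome.astype(int) - 1    # +1 failed, -1 satisfied
        metric = (H * (w * sign)[:, None]).sum(axis=0)
        x[np.argmax(metric)] ^= 1              # flip most suspect bit
    return x

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]], dtype=np.uint8)
y = np.array([-1.2, 0.8, 0.9, -1.1, 0.7, 1.3])  # noisy BPSK LLRs
print(weighted_bit_flip(H, y))
```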

  13. Finite-connectivity spin-glass phase diagrams and low-density parity check codes.

    PubMed

    Migliorini, Gabriele; Saad, David

    2006-02-01

    We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one step replica symmetry breaking theory (RSB), extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate , an RS critical transition point at while the critical RSB transition point is located at , to be compared with the corresponding Shannon bound . For the binary symmetric channel we show that the low temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of the state-of-the-art error correcting codes are discussed.

  14. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  15. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  16. Artificial Intelligence in Space Platforms.

    DTIC Science & Technology

    1984-12-01

    technician would be responsible for filling the data base with DSCS-particular information concerning thrusters, 90 b...fault conditions and performing predefined self-preserving (entering a safe-hold state) switching actions. Is capable of storing contingency or...on-board for syntactical errors (parity, sign, logic, time). Uses coding or other self-checking techniques to minimize the effects of internally

  17. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on techniques for mitigating pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to space vehicles, which might suffer severe degradation from pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), which has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of the LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of the LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.

  18. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

    Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulations show good agreement with the experimental results. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density-parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UV communication systems operating at a data rate of 2 Mbps.

  19. Error-correcting codes on scale-free networks

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Hoon; Ko, Young-Jo

    2004-06-01

    We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ~ k^-γ with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^-γ, with k ≥ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
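
    The proposed degree distribution is straightforward to instantiate numerically. A small sketch that draws variable-node degrees from p(k) = C(k+α)^-γ with k ≥ 2 (the α, γ, and cutoff values are illustrative placeholders):

```python
import numpy as np

def sample_degrees(n, alpha, gamma, k_max=200, rng=None):
    """Sample n node degrees from p(k) = C*(k+alpha)**(-gamma), k >= 2."""
    rng = rng or np.random.default_rng(1)
    k = np.arange(2, k_max + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()                 # fixes the normalization constant C
    return rng.choice(k, size=n, p=p)

degrees = sample_degrees(10_000, alpha=1.0, gamma=2.1)
print(degrees.mean(), np.bincount(degrees)[:10])
```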

  20. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.

  1. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models, and a minimum-weight decoding threshold of approximately 7%.

  2. Experimental research and comparison of LDPC and RS channel coding in ultraviolet communication systems.

    PubMed

    Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin

    2014-03-10

    We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested on a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulations and suggest that, for the given power and a 10^-3 bit error rate (BER), the average communication distance increases by 32% with the RS code and by 78% with the LDPC code, in comparison with an uncoded system.

  3. PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.

    PubMed

    Djordjevic, Ivan B

    2007-04-02

    The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.

  4. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.

  5. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 10^-6 and a 36.8-81% reduction in the number of decoding iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.

  6. Performance Evaluation of LDPC Coding and Iterative Decoding System in BPM R/W Channel Affected by Head Field Gradient, Media SFD and Demagnetization Field

    NASA Astrophysics Data System (ADS)

    Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki

    We evaluate the write-margin performance of the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands, and island position deviation. It is clarified that the LDPC coding and iterative decoding system in an R/W channel using BPM at 3 Tbit/inch^2 has a write-margin of about 20%.

  7. Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.

    PubMed

    Han, Changcai; Yang, Jinsheng

    2017-10-30

    Multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in a wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices associated with the cooperation models. In the proposed schemes, each source node has quite low complexity, attributable to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance gains and that the two-round cooperation outperforms the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and different data sizes.

  9. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Vales, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine the accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset.
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations closely approached those obtained in the ideal case of perfect timing in the receiver.
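
    The first-stage search can be pictured as scoring each candidate timing offset by how many parity checks the resulting hard decisions satisfy, since that count peaks near the correct offset. A schematic sketch (matched_filter is a hypothetical routine that demodulates the received samples to LLRs at a trial offset; it is not code from the article):

```python
import numpy as np

def satisfied_checks(H, llrs):
    """Number of parity checks satisfied by hard decisions on the LLRs."""
    x = (np.asarray(llrs) < 0).astype(np.uint8)
    return int(np.sum((H @ x) % 2 == 0))

def coarse_timing_search(H, matched_filter, rx, offsets):
    """Return the offset whose demodulated LLRs satisfy the most checks;
    in the article's method this count peaks near the optimal estimate."""
    return max(offsets,
               key=lambda d: satisfied_checks(H, matched_filter(rx, d)))
```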

  10. Protograph LDPC Codes for the Erasure Channel

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.

  12. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
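
    The block-interleaved construction is concrete enough to demonstrate end to end: encode rows with a single-parity-check code, transmit column by column, and a channel burst then erases at most one symbol per codeword. A toy sketch (all sizes illustrative):

```python
import numpy as np

def spc_encode(rows):
    """Append one even-parity symbol to each row (single parity check)."""
    parity = rows.sum(axis=1, keepdims=True) % 2
    return np.hstack([rows, parity])

# Block interleaving: write codewords as rows, transmit column by column,
# so a burst of erasures spreads into at most one erasure per codeword.
data = np.random.default_rng(2).integers(0, 2, size=(8, 7))
code = spc_encode(data)                 # 8 codewords of length 8
tx = code.T.flatten()                   # column-wise transmission order
tx_erased = tx.astype(float)
tx_erased[20:28] = np.nan               # burst of 8 erased symbols
rx = tx_erased.reshape(code.T.shape).T  # de-interleave: <=1 NaN per row
for row in rx:
    miss = np.isnan(row)
    if miss.any():                      # even parity recovers the symbol
        row[miss] = np.nansum(row) % 2
print(np.array_equal(rx.astype(int), code))  # True: burst fully recovered
```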

  13. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  14. Capacity Maximizing Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.

  15. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  16. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the transmitter is treated as side information at the receiver. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery; the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity check (LDPC) coding method in the AWGN channel.
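
    One common way to realize this Wyner-Ziv-style split with an LDPC code is syndrome encoding: the transmitter sends only the syndrome s = Hx, and the receiver, which holds correlated side information, runs the full decoder. A hedged sketch of the encoder side only (the dense random H is a toy stand-in for a real sparse parity-check matrix):

```python
import numpy as np

def syndrome_encode(H, frame_bits):
    """Lightweight encoder: transmit only the syndrome s = H @ x (mod 2).

    The receiver, holding correlated side information (e.g. the previous
    image frame), runs the full LDPC decoder to find the coset member
    consistent with s -- so the heavy computation lives at the receiver,
    as the abstract describes.
    """
    return (H @ frame_bits) % 2

rng = np.random.default_rng(3)
H = rng.integers(0, 2, size=(48, 96), dtype=np.uint8)  # toy stand-in
x = rng.integers(0, 2, size=96, dtype=np.uint8)
s = syndrome_encode(H, x)
print(len(s), "syndrome bits for", len(x), "source bits (rate 1/2 here)")
```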

  17. A Direct Mapping of Max k-SAT and High Order Parity Checks to a Chimera Graph

    PubMed Central

    Chancellor, N.; Zohren, S.; Warburton, P. A.; Benjamin, S. C.; Roberts, S.

    2016-01-01

    We demonstrate a direct mapping of max k-SAT problems (and weighted max k-SAT) to a Chimera graph, which is the non-planar hardware graph of the devices built by D-Wave Systems Inc. We further show that this mapping can be used to map a similar class of maximum satisfiability problems where the clauses are replaced by parity checks over potentially large numbers of bits. The latter is of specific interest for applications in decoding for communication. We discuss an example in which the decoding of a turbo code, which has been demonstrated to perform near the Shannon limit, can be mapped to a Chimera graph. The weighted max k-SAT problem is the most general class of satisfiability problems, so our result effectively demonstrates how any satisfiability problem may be directly mapped to a Chimera graph. Our methods faithfully reproduce the low energy spectrum of the target problems, so therefore may also be used for maximum entropy inference. PMID:27857179

  18. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  19. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.

  20. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
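
    The core operation being tabulated is the log-domain box-plus, whose main term is a minimum of magnitudes and whose correction factor is a pair of logarithmic terms. A sketch of merging both into a single quantized 2D table (the step size and table dimensions are illustrative choices, not those of the paper):

```python
import numpy as np

def box_plus(a, b):
    """Exact sum-product core operation in the log domain:
    sign(a)sign(b)min(|a|,|b|) plus the two-term log correction."""
    return (np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
            + np.log1p(np.exp(-np.abs(a + b)))
            - np.log1p(np.exp(-np.abs(a - b))))

# Quantized 2D LUT over magnitudes: main term and correction factor are
# merged into one table lookup, as the abstract proposes.
STEP, LEVELS = 0.25, 64
mags = np.arange(LEVELS) * STEP
LUT = box_plus(mags[:, None], mags[None, :])   # LUT[i, j] = |a| boxplus |b|

def box_plus_lut(a, b):
    i = min(int(abs(a) / STEP), LEVELS - 1)
    j = min(int(abs(b) / STEP), LEVELS - 1)
    return np.sign(a) * np.sign(b) * LUT[i, j]

print(box_plus(1.3, -0.6), box_plus_lut(1.3, -0.6))  # exact vs. quantized
```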

  1. Future capabilities for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Berner, J. B.; Bryant, S. H.; Andrews, K. S.

    2004-01-01

    This paper will look at three new capabilities that are in different stages of development. First, turbo decoding, which provides improved telemetry performance for data rates up to about 1 Mbps, will be discussed. Next, pseudo-noise ranging will be presented. Pseudo-noise ranging has several advantages over the current sequential ranging, namely easier operations, improved performance, and the capability to be used in a regenerative implementation on a spacecraft. Finally, Low Density Parity Check decoding will be discussed. LDPC codes can provide performance that matches or slightly exceeds that of turbo codes, but they are designed for use in the 10 Mbps range.

  2. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.

  3. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  4. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)xBCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.
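
    The probabilistic-domain SPA has a compact check-node form that cascaded probabilities-multiplier stages would realize. A software sketch of that rule (illustrative of the arithmetic only, not a model of the Mach-Zehnder circuit):

```python
import numpy as np

def check_update_prob(p):
    """Probability-domain SPA check-node rule: the outgoing message for
    edge i is the probability that the XOR of the *other* bits is 1,
    built from cascaded pairwise stages of the form
    (1 - (1-2a)(1-2b)) / 2 -- the 'probabilities multiplier'.
    Assumes no input probability is exactly 0.5."""
    p = np.asarray(p, dtype=float)
    prod = np.prod(1.0 - 2.0 * p)
    return [(1.0 - prod / (1.0 - 2.0 * pi)) / 2.0 for pi in p]

print(check_update_prob([0.1, 0.3, 0.45]))
```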

  5. High performance and cost effective CO-OFDM system aided by polar code.

    PubMed

    Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng

    2017-02-06

    A novel polar coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated through experiment for the first time. The principle of a polar coded CO-OFDM signal is illustrated theoretically, and a suitable polar decoding method is discussed. Results show that the polar coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10^-3 over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Also, compared to the 25-Gb/s low-density parity-check (LDPC) coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at BER = 10^-3. Moreover, the polar code can greatly relax the laser linewidth requirement, yielding a more cost-effective CO-OFDM system.

  6. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low density parity check (SCG-LDPC)(3969,3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate of the LDPC code can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code, with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.

  7. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  8. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a pre-coder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes using very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of a precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to RA codes with puncturing.

  9. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    PubMed

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back, using control plane logic and messaging, to the transmitter side for code adaptation, where the binary data are adaptively encoded with three large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
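
    The adaptation logic itself reduces to mapping the monitored OSNR onto one of the three available code rates and reconfiguring the transmitter when the choice changes. A minimal sketch, with the switching thresholds and the control-plane API as hypothetical placeholders (the abstract does not specify these values):

```python
def pick_code_rate(osnr_db):
    """Map measured OSNR to one of the three LDPC code rates used in
    the experiment. The thresholds below are illustrative placeholders,
    not values from the paper."""
    if osnr_db >= 18.0:       # good channel: least overhead
        return 0.8
    elif osnr_db >= 15.0:     # moderate channel
        return 0.75
    return 0.7                # poor channel: strongest code

def control_loop(monitor, transmitter):
    """One pass of the monitor -> decide -> reconfigure feedback loop.
    monitor and transmitter are hypothetical control-plane objects."""
    rate = pick_code_rate(monitor.read_osnr_db())
    if rate != transmitter.current_rate:
        transmitter.set_ldpc_rate(rate)   # hypothetical API call
```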

  10. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optic communication systems, fixed forward error correction (FEC) and a fixed constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate, by FPGA-based emulation, its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code rate range, making it a viable solution for next-generation high-speed intelligent aggregation networks.

  11. Experimental demonstration of the transmission performance for LDPC-coded multiband OFDM ultra-wideband over fiber system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu

    2015-01-01

    To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme works effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10^-3, the receiver sensitivities are improved by about 4 dB when the LDPC code rate is 75%.

  12. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  13. An LDPC Decoder Architecture for Wireless Sensor Network Applications

    PubMed Central

    Giancarlo Biroli, Andrea Dario; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different low-density parity-check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%–80%, depending on the considered environment, distance and bit error rate. PMID:22438724

  14. An LDPC decoder architecture for wireless sensor network applications.

    PubMed

    Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different low-density parity-check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%-80%, depending on the considered environment, distance and bit error rate.

  15. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded, and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that the proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less of the bit budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
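
    The central observation, that a known tampering location turns an error into an erasure, can be illustrated with a deliberately minimal sketch: one XOR parity block recovers a single erased reference block exactly. The paper uses a properly designed source/channel code pair rather than this toy parity; all names here are ours.

      # Toy erasure recovery: check bits tell us WHICH block was tampered,
      # so a single XOR parity block restores it exactly.
      from functools import reduce

      def xor_blocks(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      blocks = [b"refA", b"refB", b"refC"]      # reference-bit blocks
      parity = reduce(xor_blocks, blocks)       # embedded with the watermark

      received = [blocks[0], None, blocks[2]]   # block 1 flagged as tampered
      survivors = [blk for blk in received if blk is not None]
      recovered = reduce(xor_blocks, survivors + [parity])
      assert recovered == blocks[1]             # exact recovery
      print("recovered:", recovered)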

  16. Design of a Microprogram Control Unit with Concurrent Error Detection.

    DTIC Science & Technology

    1984-08-01

    However, the CED concept has mainly been applied to various codes, data transmission, and simple functional units such as arithmetic units. Little work has been done in the control unit area. Previous work is primarily in the use of classical self-checking circuits, using bit slicing, parity, and m-out-of-n codes.

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (Eb/N0, where Eb is the energy per bit and N0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bits/symbol or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which nonlinear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.

  18. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.

  19. Frame Synchronization Without Attached Sync Markers

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2011-01-01

    We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs would also simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
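
    A minimal sketch of the idea, under the simplifying assumption that the synchronization metric is the fraction of satisfied parity checks on hard decisions (the actual synchronizer forms its metric from a few decoder iterations); the toy 8-bit code and offsets below are illustrative.

      # ASM-less frame sync sketch: slide along the hard-decision stream and,
      # at each candidate offset, score the fraction of satisfied parity
      # checks of the known code; the true boundary should score highest.
      import numpy as np

      H = np.array([[1, 1, 0, 0, 1, 0, 0, 0],   # toy parity-check matrix,
                    [0, 1, 1, 0, 0, 1, 0, 0],   # in [P | I] form
                    [0, 0, 1, 1, 0, 0, 1, 0],
                    [1, 0, 0, 1, 0, 0, 0, 1]])
      n = H.shape[1]

      def sync_metric(bits, offset):
          syndrome = H @ bits[offset:offset + n] % 2
          return 1.0 - syndrome.mean()          # fraction of satisfied checks

      rng = np.random.default_rng(0)
      msg = rng.integers(0, 2, 4)
      codeword = np.concatenate([msg, H[:, :4] @ msg % 2])  # H c = 0 (mod 2)
      stream = np.concatenate([rng.integers(0, 2, 5), codeword,
                               rng.integers(0, 2, 5)])
      scores = [sync_metric(stream, k) for k in range(len(stream) - n + 1)]
      print("true offset 5; scores:", np.round(scores, 2))
      # With realistic code lengths the correct offset stands out sharply;
      # this toy may show chance ties at other offsets.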

  20. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  1. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that short-block-length 64QAM-LDPC coding provides coding gains of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
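
    The non-periodic auto-correlation property being exploited can be checked directly: for a Golay complementary pair, the two aperiodic autocorrelations sum to a delta, which is what produces a single sharp correlation peak at the symbol start. A minimal demonstration with a length-4 pair (the experiment's TS is longer):

      # Defining property of a Golay complementary pair: aperiodic
      # autocorrelations sum to a delta, giving one sharp timing peak.
      import numpy as np

      def acorr(x):
          """Aperiodic autocorrelation of x for lags 0..len(x)-1."""
          n = len(x)
          return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

      a = np.array([1, 1, 1, -1])
      b = np.array([1, 1, -1, 1])
      print(acorr(a) + acorr(b))   # -> [8 0 0 0]: peak at lag 0 only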

  2. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes that can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-approaching property and excellent performance over noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3SD3400A device from the Spartan-3A DSP family.
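
    For reference, the min-sum approximation mentioned above is compact enough to write out in software. The sketch below is a minimal floating-point flooding-schedule decoder on a toy Hamming parity-check matrix; the article's simplified message passing algorithm and partially parallel architecture are hardware refinements of this same computation, so the matrix, schedule and precision here are illustrative only.

      # Minimal min-sum LDPC decoder (flooding schedule), floating point.
      import numpy as np

      def minsum_decode(H, llr, iters=20):
          m, n = H.shape
          msg = np.zeros((m, n))                   # check-to-variable messages
          hard = np.zeros(n, dtype=int)
          for _ in range(iters):
              total = llr + msg.sum(axis=0)        # posterior LLR per bit
              v2c = np.where(H > 0, total - msg, 0.0)  # variable-to-check
              for i in range(m):                   # check-node update
                  idx = np.flatnonzero(H[i])
                  v = v2c[i, idx]
                  sign = np.where(v < 0, -1.0, 1.0)
                  mag = np.abs(v)
                  for t, j in enumerate(idx):      # exclude own message
                      msg[i, j] = sign.prod() * sign[t] * np.delete(mag, t).min()
              hard = ((llr + msg.sum(axis=0)) < 0).astype(int)  # LLR<0 -> 1
              if not ((H @ hard) % 2).any():       # all checks satisfied
                  break
          return hard

      H = np.array([[1, 0, 1, 0, 1, 0, 1],         # (7,4) Hamming code
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      llr = np.array([2.0, 2, -1, 2, 2, 2, 2])     # all-zero codeword, bit 2 hit
      print(minsum_decode(H, llr))                 # -> [0 0 0 0 0 0 0]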

  3. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  4. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves good error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than those of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore well suited for optical communication systems.
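
    The quasi-cyclic structure underlying such constructions can be sketched generically: an exponent matrix is expanded into a binary parity-check matrix of circulant permutation blocks. The exponent values below are hand-picked placeholders, not the multiplicative-group shifts of the QC-LDPC(5334,4962) code.

      # Generic QC-LDPC assembly: expand an exponent matrix into a binary
      # parity-check matrix of p x p circulant permutation blocks.
      import numpy as np

      def circulant(p, shift):
          """p x p identity matrix cyclically shifted by `shift` columns."""
          return np.roll(np.eye(p, dtype=int), shift, axis=1)

      def expand(exponents, p):
          """Entry e >= 0 -> shifted identity; entry -1 -> zero block."""
          return np.vstack([
              np.hstack([circulant(p, e) if e >= 0 else np.zeros((p, p), int)
                         for e in row])
              for row in exponents])

      E = [[0, 1, 2, 4],
           [0, 2, 4, 3],
           [0, 3, 1, 2]]
      H = expand(E, p=5)
      print(H.shape, "column weights:", set(H.sum(axis=0)))   # (15, 20) {3}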

  5. High-Performance CCSDS AOS Protocol Implementation in FPGA

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson

    2010-01-01

    The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding, such as LDPC (low-density parity-check) codes, and higher-layer link multiplexing protocols such as the CCSDS Encapsulation Service, which is described in the following article. Recent advancements in RF modem technology have allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.

  6. SCaN Network Ground Station Receiver Performance for Future Service Support

    NASA Technical Reports Server (NTRS)

    Estabrook, Polly; Lee, Dennis; Cheng, Michael; Lau, Chi-Wung

    2012-01-01

    Objectives: Examine the impact of providing the newly standardized CCSDS Low Density Parity Check (LDPC) codes to the SCaN return data service on the SCaN SN and DSN ground station receivers: the SN's current receiver, the Integrated Receiver (IR); the DSN's current receiver, the Downlink Telemetry and Tracking (DTT) receiver; and an early commercial-off-the-shelf (COTS) prototype of the SN User Service Subsystem Component Replacement (USS CR) Narrow Band Receiver. A further objective is to motivate discussion of general issues in ground station hardware design that would enable simple and cheap modifications to support future services.

  7. Design space exploration of high throughput finite field multipliers for channel coding on Xilinx FPGAs

    NASA Astrophysics Data System (ADS)

    de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.

    2012-09-01

    Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods such as convolutional coding, turbo coding or low-density parity-check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines in detail four different multiplier architectures that offer the potential for very high throughput. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
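
    The core operation being accelerated is easy to state in software: a bit-serial shift-and-add multiply with reduction modulo the field polynomial. A sketch for GF(2^8) follows; the polynomial 0x11d is a common Reed-Solomon choice used here only as an example, and the paper's high-throughput architectures unroll and parallelize this loop in hardware.

      # Bit-serial GF(2^m) multiply: shift-and-add with modular reduction,
      # the basic operation behind BCH/Reed-Solomon arithmetic.
      def gf_mul(a: int, b: int, poly: int = 0x11d, m: int = 8) -> int:
          result = 0
          for _ in range(m):
              if b & 1:
                  result ^= a          # add (XOR) the current multiple of a
              b >>= 1
              a <<= 1                  # multiply a by x
              if a & (1 << m):
                  a ^= poly            # reduce modulo the field polynomial
          return result

      assert gf_mul(0x02, 0x80) == 0x1d   # x * x^7 = x^8 = x^4+x^3+x^2+1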

  8. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    NASA Astrophysics Data System (ADS)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system of rate-compatible low-density parity-check (RC-LDPC) codes combined with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay links and the CSI of the relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  9. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  10. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in^2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity-check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach, with extrinsic information and log-likelihood ratio values exchanged between an iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in^2, respectively, at a bit error rate of 10^-6.

  11. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.

  12. Quick-low-density parity check and dynamic threshold voltage optimization in 1X nm triple-level cell NAND flash memory with comprehensive analysis of endurance, retention-time, and temperature variation

    NASA Astrophysics Data System (ADS)

    Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2016-08-01

    NAND flash memory's reliability degrades with increasing endurance, retention time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention time. As a result, an 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention time. AVS stores the optimal V_Ref values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces the BER by 80%.

  13. RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2012-11-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC(2040,1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in a practical large-scale integrated (LSI) circuit, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  14. PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2013-09-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC(2040,1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in a practical large-scale integrated (LSI) circuit, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  15. LDPC product coding scheme with extrinsic information for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2017-05-01

    Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit-patterned media recording (BPMR) is a promising candidate for the next-generation storage system to achieve an areal density beyond 1 Tb/in^2. In BPMR, each recording bit is stored in a fabricated magnetic island, and the space between the magnetic islands is nonmagnetic. To approach recording densities of 1 Tb/in^2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur, and both degrade the performance of BPMR. In this paper, we propose a low-density parity-check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows improved bit error rate performance compared to a scheme in which a single LDPC code is used.

  16. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  17. High-throughput GPU-based LDPC decoding

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links in a wide variety of communication standards and configurations has inspired demand for high-performance and flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize a practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.

  18. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    PubMed

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250×100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single-parity-check coded modulation, and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC-based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion-uncompensated system.

  19. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for polarization-division multiplexed eight-ary quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication systems. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the post-decoding BER is reduced to 50% of that of the traditional algorithm.
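
    For context, a generic max-log LLR computation, the standard baseline that refined LLR estimation schemes such as the one above improve upon, is sketched below for an arbitrary labeled constellation under an AWGN assumption. The QPSK points and labels are illustrative (the paper treats PDM-8QAM), and all names are ours.

      # Max-log LLR per bit for a labeled constellation under AWGN.
      import numpy as np

      def maxlog_llrs(y, points, labels, n0):
          """LLRs (positive favours bit 0) for symbol y; n0 = complex noise var."""
          d2 = np.abs(y - points) ** 2 / n0        # scaled squared distances
          llrs = []
          for k in range(len(labels[0])):
              m0 = min(d for d, lab in zip(d2, labels) if lab[k] == "0")
              m1 = min(d for d, lab in zip(d2, labels) if lab[k] == "1")
              llrs.append(m1 - m0)
          return llrs

      pts = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # toy Gray QPSK
      lbl = ["00", "01", "11", "10"]
      print(maxlog_llrs(0.9 + 1.1j, pts, lbl, n0=0.5))     # both bits -> "0"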

  20. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.

  1. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    PubMed

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.

  2. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  3. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct high-rate codes. The simulation results show that, at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851,4546) code is 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1, the QC-LDPC(3664,3436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3843,3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group, respectively. Furthermore, all five of these codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851,4546) code constructed by the proposed construction method has excellent error-correction performance and is well suited for optical transmission systems.

  4. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, David D.; Napolitano, Jr., Leonard M.; Redinbo, G. Robert; Shreeve, William O.

    1994-01-01

    An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system, and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and able both to detect and to indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented.

  5. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, D.D.; Napolitano, L.M. Jr.; Redinbo, G.R.; Shreeve, W.O.

    1994-03-01

    An internally fault-tolerant data error detection and correction integrated circuit device and a method of operating same are described. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system, and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and able both to detect and to indicate an occurrence of an internal failure. A generalization of the device to protect 64-bit data with 16-bit parity against byte-wide errors is also presented. 8 figures.

  6. A Note on a Sampling Theorem for Functions over GF(q)^n Domain

    NASA Astrophysics Data System (ADS)

    Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    In digital signal processing, the sampling theorem states that any real-valued function f can be reconstructed from a sequence of values of f that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of f. This theorem can also be applied to functions over a finite domain, in which case the range of frequencies of f can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has previously been obtained. It is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code, but the relation between the parity check matrix of a linear code and distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.

  7. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

    An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity-modulation/direct-detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. This method adapts well to the three main kinds of IM/DD optical channels, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel, and the performance penalty of the channel estimation is negligible.
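
    A minimal sketch of histogram-based LLR estimation in this spirit, assuming a training phase with known bits; the chi-square-like noise model, bin count and smoothing constant are illustrative stand-ins, and the paper's weighted least-squares tail fitting is omitted.

      # Nonparametric LLR table from training data (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(1)
      bits = rng.integers(0, 2, 100_000)           # known training bits
      y = np.where(bits == 0,                      # stand-in non-Gaussian
                   rng.chisquare(4, bits.size),    # intensity channel
                   rng.chisquare(9, bits.size))

      edges = np.linspace(0, y.max(), 64)
      h0, _ = np.histogram(y[bits == 0], bins=edges, density=True)
      h1, _ = np.histogram(y[bits == 1], bins=edges, density=True)
      eps = 1e-6                                   # avoid log(0) in empty bins
      llr_table = np.log((h0 + eps) / (h1 + eps))  # LLR per histogram bin

      def llr(sample):
          i = np.clip(np.searchsorted(edges, sample) - 1, 0, len(llr_table) - 1)
          return float(llr_table[i])

      print(llr(2.0), llr(12.0))   # positive (favours "0"), negative ("1")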

  8. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    NASA Astrophysics Data System (ADS)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string that results from transmission over the quantum channel between the two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is reduced considerably. By using a graphics processing unit (GPU), our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.

  9. Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong

    2016-12-01

    In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from the Subcarrier-to-Subcarrier Intermixing Interference (SSII) induced by square-law detection, the use of the same laser frequency for upstream data from Optical Network Units (ONUs) results in ONU-to-ONU Beating Interference (OOBI) at the receiver. To mitigate these interferences, we design a Low-Density Parity-Check (LDPC)-coded and spectrum-efficient upstream transmission system. A theoretical channel model is also derived in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectrum efficiency can be improved by about 50%.

  10. MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Daoud, Omar; Alani, Omar

    This work deals with the performance of a Sniffer Mobile Robot (SNFRbot)-based spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power consumption and low cost. Experimental results are presented that show the efficiency of sniffing deadly gases, sensing high temperatures and sending live video of the monitored situation. Moreover, simulation results show the performance achieved by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology with Low-Density Parity-Check (LDPC) codes, and the effect of combating the PAPR on the bit error rate (BER) and the signal-to-noise ratio (SNR) over a Doppler-spread channel.

  11. More box codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    A new investigation shows that, starting from the BCH (21,15;3) code represented as a 7 x 3 matrix and adding a row and column to give even parity, one obtains an 8 x 4 matrix (32,15;8) code. An additional dimension is obtained by specifying odd parity on the rows and even parity on the columns, i.e., adjoining to the 8 x 4 matrix a matrix that is zero except for the fourth column (of all ones). Furthermore, any seven rows and three columns will form the BCH (21,15;3) code. This box code has the same weight structure as the quadratic residue and BCH codes of the same dimensions. Whether there exists an algebraic isomorphism to either code is as yet unknown.

  12. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of low-density parity-check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes we show that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved for rate 1/2 as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
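
    The encoder chain described above is compact enough to sketch directly: repeat, permute, accumulate, with ARA adding an accumulator as a precoder in front. The sizes, repetition factor and random permutation below are illustrative; practical designs use carefully chosen interleavers and puncturing.

      # Repeat-accumulate encoding sketch; ARA adds an accumulator precoder.
      import numpy as np

      def accumulate(bits):
          """1/(1+D) accumulator: running XOR of the input stream."""
          return np.bitwise_xor.accumulate(bits)

      def ra_encode(msg, q=3, seed=7):
          rng = np.random.default_rng(seed)
          repeated = np.repeat(msg, q)          # repeat each bit q times
          return accumulate(rng.permutation(repeated))  # permute, accumulate

      msg = np.array([1, 0, 1, 1])
      print("RA :", ra_encode(msg))
      print("ARA:", ra_encode(accumulate(msg)))  # precode, then RA chain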

  13. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to the size of barcodes (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9 at the expense of a rate of read losses just on the order of 10^-6. PMID:26492348

  14. DNA Barcoding through Quaternary LDPC Codes.

    PubMed

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to the size of barcodes (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9 at the expense of a rate of read losses just on the order of 10^-6.

  15. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be viewed as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing concatenated with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA, Irregular RA (IRA), or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show performance comparable with the best known LDPC codes but with very low error floors even at moderate block sizes.

  16. On the optimum signal constellation design for high-speed optical transport networks.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2012-08-27

    In this paper, we first describe an optimum signal constellation design algorithm, optimum in the MMSE sense, called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) to further improve the performance of MMSE-OSCD, inspired by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The coded-modulation scheme so obtained, in combination with polarization multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.

  17. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    NASA Astrophysics Data System (ADS)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
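
    Shaped input distributions of the kind studied here are commonly drawn from the Maxwell-Boltzmann family. The sketch below, a minimal illustration rather than the paper's exact setup, computes such a distribution over a 64QAM grid and reports its entropy and mean energy; the shaping parameter lam is a free assumption.

        import numpy as np

        # 64QAM constellation: an 8x8 grid of odd-integer coordinates.
        pam = np.arange(-7, 8, 2)
        X = np.array([a + 1j * b for a in pam for b in pam])

        def maxwell_boltzmann(points, lam):
            # P(x) proportional to exp(-lam * |x|^2): low-energy points more probable.
            w = np.exp(-lam * np.abs(points) ** 2)
            return w / w.sum()

        for lam in (0.0, 0.01, 0.05):   # lam = 0 recovers uniform 64QAM
            p = maxwell_boltzmann(X, lam)
            entropy = -np.sum(p * np.log2(p))     # bits per 2D symbol
            energy = np.sum(p * np.abs(X) ** 2)   # mean symbol energy
            print(f"lam={lam:.2f}  H={entropy:.3f} bit/2D  E={energy:.2f}")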

  18. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
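
    The modulation-code half of the scheme is simply a mapping of bit groups to pulse positions: each group of log2(M) coded bits selects which one of M time slots carries the pulse. A minimal sketch of that mapping (the PPM order M = 16 is an assumption for illustration):

        import numpy as np

        def bits_to_ppm(bits, M=16):
            # Map groups of log2(M) bits to M-ary PPM symbols (one pulse per M slots).
            k = int(np.log2(M))
            assert bits.size % k == 0
            symbols = bits.reshape(-1, k).dot(1 << np.arange(k)[::-1])  # bits -> slot index
            frames = np.zeros((symbols.size, M), dtype=np.uint8)
            frames[np.arange(symbols.size), symbols] = 1                # one-hot slot = pulse
            return frames

        bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
        print(bits_to_ppm(bits, M=16))   # two 16-slot PPM frames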

  19. Transmission over UWB channels with OFDM system using LDPC coding

    NASA Astrophysics Data System (ADS)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the OFDM transmission system was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity-check matrix: randomly generated, and constructed deterministically and optimized for a practical decoder architecture implemented in an FPGA device.

  20. Analysis of soft-decision FEC on non-AWGN channels.

    PubMed

    Cho, Junho; Xie, Chongjin; Winzer, Peter J

    2012-03-26

    Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise (CBGN) model to analyze the impact of correlations between the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain is attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.

  1. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  2. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  3. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  4. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.
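
    At the core of all three records above is syndrome-based reconciliation: Alice transmits the parity-check syndrome of her key over the public channel, and Bob corrects his noisy copy until the syndromes agree. The toy sketch below uses a tiny hand-made parity-check matrix and brute-force decoding in place of a long LDPC code with belief propagation; the blind protocol additionally discloses punctured/shortened bits over further rounds, which is not modeled here.

        import numpy as np
        from itertools import combinations

        H = np.array([[1, 1, 0, 1, 0, 0],    # tiny illustrative parity-check matrix,
                      [0, 1, 1, 0, 1, 0],    # not a real QKD reconciliation code
                      [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

        def syndrome(H, x):
            return H.dot(x) % 2

        def reconcile(H, key_bob, syn_alice, max_errors=2):
            # Flip up to max_errors bits of Bob's key until the syndromes match.
            # Brute force stands in for belief-propagation decoding.
            if np.array_equal(syndrome(H, key_bob), syn_alice):
                return key_bob
            for t in range(1, max_errors + 1):
                for idx in combinations(range(key_bob.size), t):
                    trial = key_bob.copy()
                    trial[list(idx)] ^= 1
                    if np.array_equal(syndrome(H, trial), syn_alice):
                        return trial
            return None  # failure: the blind protocol would disclose more bits

        key_alice = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
        key_bob = key_alice.copy(); key_bob[2] ^= 1   # one quantum-channel error
        print(reconcile(H, key_bob, syndrome(H, key_alice)))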

  5. Bilayer Protograph Codes for Half-Duplex Relay Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link together with a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made on the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted to other channel conditions without extensive re-optimization. The code presented here combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, a modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing it is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding, which removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.

  6. Simulations of linear and Hamming codes using SageMath

    NASA Astrophysics Data System (ADS)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work; the encoding algorithms are parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that we can simulate these processes using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded to a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, where the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data. A minimal Python analogue of this workflow is sketched below.
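
    The sketch assumes the standard [7,4] Hamming code in systematic form; SageMath's own coding-theory classes would replace the hand-written matrices.

        import numpy as np

        # [7,4] Hamming code: systematic generator and parity-check matrices over GF(2).
        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]], dtype=np.uint8)
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]], dtype=np.uint8)

        def encode(msg):
            return msg.dot(G) % 2

        def transmit(cw, p, rng):
            # Binary symmetric channel: flip each bit independently with probability p.
            flips = (rng.random(cw.size) < p).astype(np.uint8)
            return cw ^ flips

        def decode(rx):
            # Syndrome decoding: the syndrome equals the column of H at the error position.
            syn = H.dot(rx) % 2
            if syn.any():
                err_pos = np.flatnonzero((H.T == syn).all(axis=1))[0]
                rx = rx.copy()
                rx[err_pos] ^= 1
            return rx[:4]   # systematic code: first 4 bits are the message

        rng = np.random.default_rng(1)
        msg = np.array([1, 0, 1, 1], dtype=np.uint8)
        rx = transmit(encode(msg), p=0.05, rng=rng)
        print(msg, decode(rx), sep=" -> ")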

  7. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is given. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to rate 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high-speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is included. This study was motivated by the fact that the coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10(exp -2) to 10(exp -6), where these codes are most likely to operate in a concatenated system, must be determined by simulation.

  8. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    To meet the requirements of the continuing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids the girth-4 phenomenon and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties, and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3780,3540) code with a code rate of 93.7% constructed by the proposed method is excellent; its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5334,4962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3969,3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code in ITU-T G.975.1, and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10(-7). Therefore, the constructed QC-LDPC(3780,3540) code is well suited to optical transmission systems.
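
    The general QC-LDPC template underlying such constructions is an array of cyclically shifted identity blocks. The sketch below builds a toy parity-check matrix whose shift exponents come from powers of two elements of the multiplicative group of GF(p); the specific subgroup-based exponent table of the paper is not reproduced, so p, a, b, J, and L here are illustrative assumptions.

        import numpy as np

        def circulant(p, shift):
            # p x p identity matrix cyclically shifted by `shift` columns.
            return np.roll(np.eye(p, dtype=np.uint8), shift, axis=1)

        def qc_ldpc_H(p=13, a=3, b=2, J=3, L=6):
            # Toy QC-LDPC parity-check matrix: shift(i, j) = a^i * b^j mod p.
            # Drawing a and b from the multiplicative group of GF(p) is the
            # flavor of construction the paper builds on, not its exact design.
            blocks = [[circulant(p, pow(a, i, p) * pow(b, j, p) % p)
                       for j in range(L)] for i in range(J)]
            return np.block(blocks)

        H = qc_ldpc_H()
        print(H.shape)                                    # (39, 78): J*p rows, L*p cols
        print(H.sum(axis=0).min(), H.sum(axis=0).max())   # constant column weight J = 3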

  9. Demonstration of Weight-Four Parity Measurements in the Surface Code Architecture.

    PubMed

    Takita, Maika; Córcoles, A D; Magesan, Easwar; Abdo, Baleegh; Brink, Markus; Cross, Andrew; Chow, Jerry M; Gambetta, Jay M

    2016-11-18

    We present parity measurements on a five-qubit lattice with connectivity amenable to the surface code quantum error correction architecture. Using all-microwave controls of superconducting qubits coupled via resonators, we encode the parities of four data qubit states in either the X or the Z basis. Given the connectivity of the lattice, we perform a full characterization of the static Z interactions within the set of five qubits, as well as dynamical Z interactions brought along by single- and two-qubit microwave drives. The parity measurements are significantly improved by modifying the microwave two-qubit gates to dynamically remove nonideal Z errors.

  10. Information Theory, Inference and Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Mackay, David J. C.

    2003-10-01

    Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.

  11. Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage

    NASA Technical Reports Server (NTRS)

    Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.

    2010-01-01

    The NASA version of the low-density parity check (LDPC) 7/8-rate code, shortened to the dimensions of (8160, 7136), has been implemented as the forward error correction (FEC) schema for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz wide X band, we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10(exp -12) with less than 2 dB of implementation loss. We utilized a band-pass filter designed ostensibly to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained could be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.

  12. Performance analysis of LDPC codes on OOK terahertz wireless channels

    NASA Astrophysics Data System (ADS)

    Chun, Liu; Chang, Wang; Jun-Cheng, Cao

    2016-02-01

    Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through the atmospheric channel. The THz wave propagation characteristics and a channel model for the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their huge potential for future ultra-high-speed (beyond Gbps) THz communications.

  13. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10(-5)) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  14. The development of product parity sensitivity in children with mathematics learning disability and in typical achievers.

    PubMed

    Rotem, Avital; Henik, Avishai

    2013-02-01

    Parity helps us determine whether an arithmetic equation is true or false. The current research examines the development of sensitivity to parity cues in multiplication in typically achieving (TA) children (grades 2, 3, 4 and 6) and in children with mathematics learning disabilities (MLD, grades 6 and 8), via a verification task. In TA children the onset of parity sensitivity was observed at the beginning of 3rd grade, whereas in children with MLD it was documented only in 8th grade. These results suggest that children with MLD develop parity aspects of number sense, though later than TA children. To check the plausibility of equations, children used mainly the multiplication parity rule rather than familiarity with even products. Similar to observations in adults, parity sensitivity was largest for problems with two even operands, moderate for problems with one even and one odd operand, and smallest for problems with two odd operands.

  15. 500  Gb/s free-space optical transmission over strong atmospheric turbulence channels.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2016-07-15

    We experimentally demonstrate a high-spectral-efficiency, large-capacity free-space optical (FSO) transmission system using low-density parity-check (LDPC) coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of such an approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, inexpensive wavefront-sensorless adaptive optics (AO) is used. To deal with the remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channels is demonstrated.

  16. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low density parity check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing units in the proposed framework via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.

  17. Fast QC-LDPC code for free space optical communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence effects lead to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes perform extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications mainly focus on the Gaussian channel and the Rayleigh channel, respectively. In this study, the LDPC code is designed over the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the logarithmic-normal distribution and the K-distribution, we design a special QC-LDPC code and deduce the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes and exhibits high efficiency at low rates, stability at high rates, and a small number of iterations. The result of belief propagation (BP) decoding shows that the bit error rate (BER) is obviously reduced as the Signal-to-Noise Ratio (SNR) increases. Therefore, LDPC channel coding technology can effectively improve the performance of FSO. At the same time, the BER after decoding decreases steadily with increasing SNR, without exhibiting an error floor.
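
    For intuition, the OOK log-likelihood ratio under AWGN with a known instantaneous intensity has a simple closed form; the paper's LLRs for log-normal and K-distributed turbulence would additionally marginalize over the fading distribution. A hedged sketch:

        import numpy as np

        def ook_llr(y, A, sigma2):
            # LLR = ln p(y|bit=1) / p(y|bit=0) for OOK in AWGN:
            # y ~ N(A, sigma2) for '1', y ~ N(0, sigma2) for '0'.
            # Under turbulence, A would be weighted by the fading pdf instead.
            return (2 * A * y - A ** 2) / (2 * sigma2)

        rng = np.random.default_rng(2)
        bits = rng.integers(0, 2, 8)
        y = bits * 1.0 + rng.normal(0, 0.4, bits.size)       # A = 1, sigma = 0.4
        print(bits)
        print(np.round(ook_llr(y, A=1.0, sigma2=0.16), 2))   # positive favors '1'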

  18. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10(-7) and 10(-12), respectively, for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.

  19. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba quit the competition, there is only one blu-ray disc standard: Blu-ray Disc (BD), which satisfies the demands of high-density video programs. But almost all the patents are held by big companies such as Sony and Philips, so we must pay much for these patents when our products use BD. Our own high-density optical disk storage system, the Next-Generation Versatile Disc (NVD), proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity that meets the demands of playing high-density video programs. In this paper, we develop Low-Density Parity-Check (LDPC) codes: a new channel encoding process and application scheme using a Q-matrix based on LDPC encoding is applied in NVD's channel decoder. Combined with the embedded-system portability of the SOPC system, we have implemented all the decoding modules on an FPGA. Tests were done in the NVD experimental environment. Though there are collisions between LDPC and the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system is provided as a suitable solution. At the same time, it overcomes the instability and inextensibility that occurred in the former decoding system of NVD, which was implemented in hardware.

  20. Characterization of LDPC-coded orbital angular momentum modes transmission and multiplexing over a 50-km fiber.

    PubMed

    Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian

    2016-05-30

    Mode-division multiplexing over fibers has attracted increasing attention over the last few years as a potential solution to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) modes transmission over a 50-km few-mode fiber (FMF). By analyzing mode properties of eigen modes in an FMF, we study the inner mode group differential modal delay (DMD) in FMF, which may influence the transmission capacity in long-distance OAM modes transmission and multiplexing. To mitigate the impact of large inner mode group DMD in long-distance fiber-based OAM modes transmission, we use low-density parity-check (LDPC) codes to increase the system reliability. By evaluating the performance of LDPC-coded single OAM mode transmission over 50-km fiber, significant coding gains of >4 dB, 8 dB and 14 dB are demonstrated for 1-Gbaud, 2-Gbaud and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, in order to verify and compare the influence of DMD in long-distance fiber transmission, single OAM mode transmission over 10-km FMF is also demonstrated in the experiment. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate the influence of mode crosstalk and DMD in the 50 km FMF.

  1. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The required signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
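
    The innermost (n, n-16) code appends 16 CRC parity bits to the information bits. The sketch below implements a bitwise CRC-16 with the CCITT polynomial (poly 0x1021, init 0xFFFF) commonly cited for CCSDS framing; treat the exact parameters as an assumption to be checked against the applicable standard.

        def crc16_ccitt(data: bytes, init: int = 0xFFFF, poly: int = 0x1021) -> int:
            # Bitwise CRC-16 over the message: 16 parity bits for error detection.
            # Polynomial 0x1021 / init 0xFFFF is the CCITT variant commonly cited
            # for CCSDS transfer frames (verify against the applicable standard).
            crc = init
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
            return crc

        frame = b"CCSDS transfer frame payload"
        check = crc16_ccitt(frame)
        print(hex(check))
        # Receiver side: recompute over the same bytes and compare with the
        # transmitted parity; a mismatch flags a detected error.
        print(crc16_ccitt(frame) == check)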

  2. Integrated Performance of Next Generation High Data Rate Receiver and AR4JA LDPC Codec for Space Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Lyubarev, Mark; Nakashima, Michael A.; Andrews, Kenneth S.; Lee, Dennis

    2008-01-01

    Low-density parity-check (LDPC) codes are the state-of-the-art in forward error correction (FEC) technology that exhibits capacity-approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore lead to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and lengths 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations. Integrating a stand-alone LDPC decoder with a commercial-off-the-shelf (COTS) receiver faces additional challenges compared to building a single receiver-decoder unit from scratch. In this work, we outline the issues and show that these additional challenges can be overcome by simple solutions. To demonstrate that an LDPC decoder can be made to work seamlessly with a COTS receiver, we interface an AR4JA LDPC decoder developed on a field-programmable gate array (FPGA) with a modern high data rate receiver and measure the combined receiver-decoder performance. Through optimizations that include an improved frame synchronizer and different soft-symbol scaling algorithms, we show that a combined implementation loss of less than one dB is possible and therefore, most of the coding gain evident in theory can also be obtained in practice. Our techniques can benefit any modem that utilizes an advanced FEC code.

  3. The timing of antenatal care initiation and the content of care in Sindh, Pakistan.

    PubMed

    Agha, Sohail; Tappis, Hannah

    2016-07-27

    Policymakers and program planners consider antenatal care (ANC) coverage to be a primary measure of improvements in maternal health. Yet, evidence from multiple countries indicates that ANC coverage has no necessary relationship with the content of services provided. This study examines the relationship between the timing of the first ANC check-up, a potential predictor of the content of services, and the provision of WHO recommended services to women during their pregnancy. The study uses data from a representative household survey of Sindh with a sample comprising 4,000 women aged 15-49 who had had a live birth in the two years before the survey. The survey obtained information about the elements of care provided during pregnancy, the timing of the first ANC check-up, the number of ANC visits made during the last pregnancy and women's socio-economic and demographic characteristics. Bivariate analysis was conducted to examine the relationship between the proportion of women who receive six WHO recommended services and the timing of their first ANC check-up. Multivariate analysis was conducted to identify predictors of the number of elements of care provided. While most women in Sindh (87%) receive an ANC check-up, its timing varies by parity, education and household wealth. The median time to the first ANC check-up was 3 months for women in the richest and 7 months for women in the poorest wealth quintiles. In multivariate analysis, wealth, education, parity and age at marriage were significant predictors of the number of elements of care provided. Women who received an early ANC check-up were much more likely to receive WHO recommended services than other women, independent of a range of socio-economic and demographic variables and independent of the number of ANC visits made during pregnancy. In Sindh, the timing of the first ANC check-up has an independent effect on the content of care provided to pregnant women. While it is extremely important that providers are adequately trained and motivated to provide the WHO recommended standards of care, these findings suggest that motivating women to make an early first ANC check-up may be another mechanism through which the quality of care provided may be improved. Such a focus is most likely to benefit the poorest, least educated and highest parity women. Based on these findings, we recommend that routine data collected at health facilities in Pakistan should include the month of pregnancy at the time of the first ANC check-up.

  4. Measurement of the Parity-Violating directional Gamma-ray Asymmetry in Polarized Neutron Capture on ^35Cl

    NASA Astrophysics Data System (ADS)

    Fomin, Nadia

    2012-03-01

    The NPDGamma experiment aims to measure the parity-odd correlation between the neutron spin and the direction of the emitted photon in neutron-proton capture. A parity-violating asymmetry (to be measured to 10(-8)) from this process can be directly related to the strength of the hadronic weak interaction between nucleons. As part of the commissioning runs on the Fundamental Neutron Physics beamline at the Spallation Neutron Source at ORNL, the gamma-ray asymmetry from the parity-violating capture of cold neutrons on ^35Cl was measured, primarily to check for systematic effects and false asymmetries. The current precision from existing world measurements of this asymmetry is at the level of 10(-6), and we believe we can improve it. The analysis methodology as well as preliminary results will be presented.

  5. Direct-detection Free-space Laser Transceiver Test-bed

    NASA Technical Reports Server (NTRS)

    Krainak, Michael A.; Chen, Jeffrey R.; Dabney, Philip W.; Ferrara, Jeffrey F.; Fong, Wai H.; Martino, Anthony J.; McGarry, Jan F.; Merkowitz, Stephen M.; Principe, Caleb M.; Sun, Xiaoli

    2008-01-01

    NASA Goddard Space Flight Center is developing a direct-detection free-space laser communications transceiver test bed. The laser transmitter is a master-oscillator power amplifier (MOPA) configuration using a 1060 nm wavelength laser-diode with a two-stage multi-watt Ytterbium fiber amplifier. Dual Mach-Zehnder electro-optic modulators provide an extinction ratio greater than 40 dB. The MOPA design delivered 10-W average power with low-duty-cycle PPM waveforms and achieved 1.7 kW peak power. We use pulse-position modulation format with a pseudo-noise code header to assist clock recovery and frame boundary identification. We are examining the use of low-density-parity-check (LDPC) codes for forward error correction. Our receiver uses an InGaAsP 1 mm diameter photocathode hybrid photomultiplier tube (HPMT) cooled with a thermo-electric cooler. The HPMT has 25% single-photon detection efficiency at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius and a single-photon impulse response of 0.9 ns. We report on progress toward demonstrating a combined laser communications and ranging field experiment.

  6. Structure of the Odd-Odd Nucleus {sup 188}Re

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balodis, M.; Berzins, J.; Simonova, L.

    2009-01-28

    Thermal neutron capture gamma-ray spectra for the {sup 187}Re(n,{gamma}){sup 188}Re reaction were measured. Singles and coincidence spectra were detected in order to develop the level scheme. The evaluation is in progress; the first results are obtained from the analysis of coincidence spectra, allowing us to check the level scheme below 500 keV excitation energy. Seven low-energy negative parity bands are developed in order to find better energies for rotational levels. With good confidence, a few positive parity bands are developed as well. Rotor plus two-quasiparticle model calculations, employing the effective matrix element method, are performed for the system of six negative parity rotational bands.

  7. Two-stage cross-talk mitigation in an orbital-angular-momentum-based free-space optical communication system.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2017-08-15

    We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.

  8. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
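
    The matrix mechanics the patent describes can be sketched directly: the redundant bits are the GF(2) product of the data bits with P, whose rows are distinct odd-weight vectors, so each column of P simultaneously acts as the parity of a subgroup of the data bits. The sizes and row selection below are toy assumptions, not the patented construction.

        import numpy as np
        from itertools import combinations

        def build_P(n, m):
            # Rows of P are distinct odd-weight (>= 3) m-bit vectors, as described;
            # distinct odd-weight rows give Hsiao-style SEC-DED protection while
            # each column of P is simultaneously a subgroup parity.
            rows = []
            for w in range(3, m + 1, 2):            # odd weights: 3, 5, ...
                for ones in combinations(range(m), w):
                    v = np.zeros(m, dtype=np.uint8)
                    v[list(ones)] = 1
                    rows.append(v)
                    if len(rows) == n:
                        return np.array(rows)
            raise ValueError("m too small for n data bits")

        n, m = 8, 5
        P = build_P(n, m)
        data = np.random.default_rng(3).integers(0, 2, n, dtype=np.uint8)
        ecc = data.dot(P) % 2                        # m redundant ECC bits
        # Column j of P marks a subgroup of the n data bits; ecc[j] is its parity:
        subgroup = np.flatnonzero(P[:, 0])
        print(ecc[0] == data[subgroup].sum() % 2)    # True by construction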

  9. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up Table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are needed. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
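
    The two analytic estimators, as described above, can be sketched directly. The unit-power normalization (so that sigma^2 = 1 - A^2 during the binary search) is an assumption about the exact setup, and the BPSK pilot below stands in for a real ASM.

        import numpy as np

        def pilot_guided(rx_pilot, asm):
            # Amplitude = mean inner product with the known ASM symbols (+/-1);
            # variance = mean received power minus the squared amplitude.
            A = np.mean(rx_pilot * asm)
            var = np.mean(rx_pilot ** 2) - A ** 2
            return A, var

        def blind(rx, iters=40):
            # Solve A = mean(y * tanh(A * y / sigma^2)) by bisection on (0, 1),
            # with the sequence normalized to unit power so sigma^2 = 1 - A^2.
            y = rx / np.sqrt(np.mean(rx ** 2))
            lo, hi = 1e-6, 1 - 1e-6
            for _ in range(iters):
                A = 0.5 * (lo + hi)
                g = np.mean(y * np.tanh(A * y / (1 - A ** 2)))
                lo, hi = (A, hi) if g > A else (lo, A)
            return A   # estimated amplitude of the normalized signal

        rng = np.random.default_rng(4)
        bits = rng.choice([-1.0, 1.0], 4000)
        rx = 1.0 * bits + rng.normal(0, 0.7, bits.size)
        asm = bits[:64]                    # pretend the first 64 symbols are the ASM
        print(pilot_guided(rx[:64], asm))  # roughly (1.0, 0.49)
        print(blind(rx))                   # roughly 0.82 after unit-power normalization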

  10. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous energy and multi-group nuclear data are verified by locally developed checking codes which use basic physics knowledge and common-sense rules. A list of nuclear data problems which have been identified with help of these checking codes is also given.

  11. An improved control mode for the ping-pong protocol operation in imperfect quantum channels

    NASA Astrophysics Data System (ADS)

    Zawadzki, Piotr

    2015-07-01

    Quantum direct communication (QDC) can bring confidentiality of sensitive information without any encryption. The ping-pong protocol, a well-known example of entanglement-based QDC, offers asymptotic security in a perfect quantum channel. However, it has been shown (Wójcik in Phys Rev Lett 90(15):157901, 2003. doi:10.1103/PhysRevLett.90.157901) that it is not secure in the presence of losses. Moreover, legitimate parties cannot rely on dense information coding due to possible undetectable eavesdropping, even in the perfect setting (Pavičić in Phys Rev A 87(4):042326, 2013. doi:10.1103/PhysRevA.87.042326). We have identified the source of the above-mentioned weaknesses in the incomplete check of the EPR pair coherence. We propose an improved version of the control mode, and we discuss its relation to the already-known attacks that undermine QDC security. It follows that the new control mode detects these attacks with high probability, independently of the quantum channel type. As a result, asymptotic security of QDC communication can be maintained for imperfect quantum channels, also in the regime of dense information coding.

  12. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance, compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at the threshold value of 15, and in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10(-7) and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is well suited to optical communication systems.

  13. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Chen, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.

  14. 42 CFR 457.480 - Preexisting condition exclusions and relation to other laws.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the Internal Revenue Code of 1986. (3) Mental Health Parity Act (MHPA). Health benefits coverage under... 1996 regarding parity in the application of annual and lifetime dollar limits to mental health benefits... 42 Public Health 4 2012-10-01 2012-10-01 false Preexisting condition exclusions and relation to...

  15. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  16. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  17. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  18. Optical bias selector based on a multilayer a-SiC:H optical filter

    NASA Astrophysics Data System (ADS)

    Vieira, M.; Vieira, M. A.; Louro, P.

    2017-08-01

    In this paper we present a MUX/DEMUX device based on a multilayer a-SiC:H optical filter that requires near-ultraviolet steady-state optical switches to select desired wavelengths in the visible range. Spectral response and transmittance measurements are presented and show the feasibility of tailoring the wavelength and bandwidth of a polychromatic mixture of different wavelengths. The selector filter is realized by using a two-terminal double pi'n/pin a-SiC:H photodetector. Five visible communication channels are transmitted together, each one with a specific bit sequence. The combined optical signal is analyzed by reading out the photocurrent under a near-UV front steady-state background. Data shows that 25 current levels are detected, corresponding to the thirty-two possible on/off states. The proximity of the magnitudes of consecutive levels causes occasional errors in the decoded information. To minimize the errors, four parity bits are generated and stored along with the data word. The parity of the word is checked after reading the word to detect and correct the transmitted data. Results show that the background works as a selector in the visible range, shifting the sensor sensitivity, and together with the parity check bits allows the identification and decoding of the different input channels. A transmission capability of 60 kbps using the generated codeword was achieved. An optoelectronic model gives insight into the system physics.

  19. Description of the L1C signal

    USGS Publications Warehouse

    Betz, J.W.; Blanco, M.A.; Cahn, C.R.; Dafesh, P.A.; Hegarty, C.J.; Hudnut, K.W.; Kasemsri, V.; Keegan, R.; Kovach, K.; Lenahan, L.S.; Ma, H.H.; Rushanan, J.J.; Sklar, D.; Stansell, T.A.; Wang, C.C.; Yi, S.K.

    2006-01-01

    Detailed design of the modernized L1 civil signal (L1C) has been completed, and the resulting draft Interface Specification IS-GPS-800 was released in Spring 2006. The novel characteristics of the optimized L1C signal design provide advanced capabilities while offering receiver designers considerable flexibility in how to use these capabilities. L1C provides a number of advanced features, including: 75% of power in a pilot component for enhanced signal tracking, advanced Weil-based spreading codes, an overlay code on the pilot that provides data message synchronization, support for improved reading of clock and ephemeris by combining message symbols across messages, advanced forward error control coding, and data symbol interleaving to combat fading. The resulting design offers receiver designers the opportunity to obtain unmatched performance in many ways. This paper describes the design of L1C. A summary of L1C's background and history is provided. The signal description then proceeds with the overall signal structure consisting of a pilot component and a data component. The new L1C spreading code family is described, along with the logic used for generating these spreading codes. Overlay codes on the pilot channel are also described, as is the logic used for generating the overlay codes. Spreading modulation characteristics are summarized. The data message structure is also presented, showing the format for providing time, ephemeris, and system data to users, along with features that enable receivers to perform code combining. Encoding of rapidly changing time bits is described, as are the Low Density Parity Check codes used for forward error control of slowly changing time bits, clock, ephemeris, and system data. The structure of the interleaver is also presented. A summary of L1C's unique features and their benefits is provided, along with a discussion of the plan for L1C implementation.
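
    The Weil-based spreading codes mentioned above are built from Legendre sequences: per IS-GPS-800, each length-10223 Weil code is the XOR of a Legendre sequence with a cyclic shift of itself, later expanded by a fixed 7-chip insertion to the final 10230-chip length (per-PRN shift indices and insertion points are tabulated in the spec). A sketch of the core construction, with a generic prime p:

        def legendre(p):
            """Legendre sequence: L[t] = 1 iff t is a nonzero quadratic residue mod p."""
            residues = {(i * i) % p for i in range(1, p)}
            return [1 if t in residues else 0 for t in range(p)]

        def weil(p, k):
            """Weil code with shift index k: XOR of the Legendre sequence
            with a cyclically shifted copy of itself."""
            L = legendre(p)
            return [L[t] ^ L[(t + k) % p] for t in range(p)]

        # e.g., weil(10223, k) for the L1C family (k and the 7-chip pad are per-PRN)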

  20. Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1994-01-01

    Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
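
    The "real number parity values" of ABFT can be illustrated with the classic checksum-protected matrix multiply: a column-checksum row appended to one factor and a row-checksum column appended to the other propagate through the product, so a corrupted element shows up as a checksum mismatch. A minimal numpy sketch in the Huang-Abraham style (names and tolerance are illustrative):

        import numpy as np

        def abft_matmul(A, B, tol=1e-8):
            """Multiply with checksum rows/columns and verify them."""
            Ac = np.vstack([A, A.sum(axis=0)])                  # column checksums
            Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row checksums
            Cf = Ac @ Br                                        # full checksum product
            C = Cf[:-1, :-1]
            ok = (np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol) and
                  np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol))
            return C, ok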

  1. Bar code-based pre-transfusion check in pre-operative autologous blood donation.

    PubMed

    Ohsaka, Akimichi; Furuta, Yoshiaki; Ohsawa, Toshiya; Kobayashi, Mitsue; Abe, Katsumi; Inada, Eiichi

    2010-10-01

    The objective of this study was to demonstrate the feasibility of a bar code-based identification system for the pre-transfusion check at the bedside in the setting of pre-operative autologous blood donation (PABD). Between July 2003 and December 2008 we determined the compliance rate and causes of failure of electronic bedside checking for PABD transfusion. A total of 5627 (9% of all transfusions) PABD units were administered without a single mistransfusion. The overall rate of compliance with electronic checking was 99%. The bar code-based identification system was applicable to the pre-transfusion check for PABD transfusion. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Evaluation of H.264/AVC over IEEE 802.11p vehicular networks

    NASA Astrophysics Data System (ADS)

    Rozas-Ramallal, Ismael; Fernández-Caramés, Tiago M.; Dapena, Adriana; García-Naya, José Antonio

    2013-12-01

    The capacity of vehicular networks to offer non-safety services, like infotainment applications or the exchange of multimedia information between vehicles, has attracted a great deal of attention to the field of Intelligent Transport Systems (ITS). In particular, in this article we focus our attention on IEEE 802.11p, which defines enhancements to IEEE 802.11 required to support ITS applications. We present an FPGA-based testbed developed to evaluate H.264/AVC (Advanced Video Coding) video transmission over vehicular networks. The testbed covers some of the most common situations in vehicle-to-vehicle and roadside-to-vehicle communications, and it is highly flexible, allowing the performance evaluation of different vehicular standard configurations. We also show several experimental results to illustrate the quality obtained when H.264/AVC encoded video is transmitted over IEEE 802.11p networks. The quality is measured considering two important parameters: the percentage of recovered groups of pictures and the frame quality. In order to improve performance, we propose to replace the convolutional channel encoder used in IEEE 802.11p with a low-density parity-check (LDPC) encoder. In addition, we suggest a simple strategy to decide the optimum number of iterations needed to decode each packet received.
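
    The article's per-packet iteration strategy is not spelled out in the abstract; the standard ingredient, though, is a syndrome test that terminates decoding as soon as all parity checks are satisfied, so easy packets cost few iterations. A sketch with a hard-decision (Gallager bit-flipping) decoder over 0/1 integer arrays:

        import numpy as np

        def bitflip_decode(H, y, max_iter=50):
            """Decode received hard bits y against parity-check matrix H,
            stopping early once the syndrome is all-zero."""
            x = y.copy()
            for it in range(max_iter):
                syndrome = (H @ x) % 2
                if not syndrome.any():        # valid codeword: stop iterating
                    return x, it
                unsat = H.T @ syndrome        # unsatisfied checks touching each bit
                x[unsat == unsat.max()] ^= 1  # flip the most-suspect bits
            return x, max_iter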

  3. Complementary code and digital filtering for detection of weak VHF radar signals from the mesoscale. [SOUSY-VHF radar, Harz Mountains, Germany

    NASA Technical Reports Server (NTRS)

    Schmidt, G.; Ruster, R.; Czechowsky, P.

    1983-01-01

    The SOUSY-VHF-Radar operates at a frequency of 53.5 MHz in a valley in the Harz mountains, Germany, 90 km from Hanover. The radar controller, which is programmed by a 16-bit computer, holds 1024 program steps in core and controls the whole radar system via 8 channels: in particular the master oscillator, the transmitter, the transmit-receive switch, the receiver, the analog-to-digital converter, and the hardware adder. The high-sensitivity receiver has a dynamic range of 70 dB and a video bandwidth of 1 MHz. Phase coding schemes are applied, in particular for investigations at mesospheric heights, in order to carry out measurements with the maximum duty cycle and the maximum height resolution. The computer takes the data from the adder and stores it on magnetic tape or disc. The radar controller is programmed by the computer using simple FORTRAN IV statements. After the program has been loaded and the computer has started the radar controller, it runs automatically, stopping at the program end. If errors or failures occur during radar operation, the radar controller is shut off either by a safety circuit, by a power-failure circuit, or by a parity-check system.
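
    The complementary (Golay) phase codes used for such radars have the defining property that the autocorrelation sidelobes of the pair cancel exactly, leaving a single peak of height 2N; this is what makes them attractive for pulling weak echoes out of noise. A quick numerical check:

        import numpy as np

        a = np.array([1, 1, 1, -1])    # a Golay complementary pair of length 4
        b = np.array([1, 1, -1, 1])

        r = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
        print(r)    # [0 0 0 8 0 0 0]: zero sidelobes, peak 2N = 8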

  4. External cephalic version for singleton breech presentation: proposal of a practical check-list for obstetricians.

    PubMed

    Indraccolo, U; Graziani, C; Di Iorio, R; Corona, G; Bonito, M; Indraccolo, S R

    2015-07-01

    External cephalic version (ECV) for breech presentation is not routinely performed by obstetricians in many clinical settings. The aim of this work is to assess to what extent the factors involved in performing ECV are relevant for the success and safety of ECV, in order to propose a practical check-list for assessing the feasibility of ECV. Review of 214 references. Factors involved in the success and risks of ECV (feasibility of ECV) were extracted and were scored in a semi-quantitative way according to textual information, type of publication, year of publication, number of cases. Simple conjoint analysis was used to describe the relevance found for each factor. Parity has the pivotal role in ECV feasibility (relevance 16.6%), followed by tocolysis (10.8%), gestational age (10.6%), amniotic fluid volume (4.7%), breech variety (1.9%), and placenta location (1.7%). Other factors with estimated relevance around 0 (regional anesthesia, station, estimated fetal weight, fetal position, obesity/BMI, fetal birth weight, duration of manoeuvre/number of attempts) have some role in the feasibility of ECV. Yet other factors, with negative values of estimated relevance, have even less importance. From a logical interpretation of the relevance of each factor assessed, ECV should be proposed with utmost prudence if a stringent check-list is followed. Such a check-list should take into account: parity, tocolytic therapy, gestational age, amniotic fluid volume, breech variety, placenta location, regional anesthesia, breech engagement, fetal well-being, uterine relaxation, fetal size, fetal position, fetal head grasping capability and fetal turning capability.

  5. Safe, Multiphase Bounds Check Elimination in Java

    DTIC Science & Technology

    2010-01-28

    production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds

  6. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically with inputs through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, a comparison with in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  7. One-way quantum repeaters with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
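
    The erasure-channel behavior rests on the classical Reed-Solomon property that any k of the n code symbols determine the message; the quantum construction lifts this, but the classical property alone is easy to demonstrate over a toy prime field:

        P = 101    # small prime field, for illustration only

        def lagrange_eval(pts, x0):
            """Evaluate the unique degree-(k-1) polynomial through pts at x0 (mod P)."""
            total = 0
            for xi, yi in pts:
                num = den = 1
                for xj, _ in pts:
                    if xj != xi:
                        num = num * (x0 - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        def rs_encode(msg, n):
            """Systematic view: message symbols sit at x = 1..k, extended to x = 1..n."""
            pts = list(zip(range(1, len(msg) + 1), msg))
            return [lagrange_eval(pts, x) for x in range(1, n + 1)]

        code = rs_encode([17, 42, 99], n=6)
        survivors = [(2, code[1]), (5, code[4]), (6, code[5])]     # three erasures
        assert [lagrange_eval(survivors, x) for x in (1, 2, 3)] == [17, 42, 99]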

  8. Pedestal-to-Wall 3D Fluid Transport Simulations on DIII-D

    DOE PAGES

    Lore, Jeremy D.; Wolfmeister, Alexis Briesemeister; Ferraro, Nathaniel M.; ...

    2017-03-30

    The 3D fluid-plasma edge transport code EMC3-EIRENE is used to test several magnetic field models, with and without plasma response, against DIII-D experimental data for even- and odd-parity n=3 magnetic field perturbations. The field models include ideal and extended MHD equilibria, and the vacuum approximation. Plasma response is required to reduce the stochasticity in the pedestal region for even-parity fields; however, too much screening suppresses the measured splitting of the downstream Te profile. Odd-parity perturbations result in weak tearing and only small additional peaks in the downstream measurements. In this case plasma response is required to increase the size of the lobe structure. Finally, no single model is able to simultaneously reproduce the upstream and downstream characteristics for both odd- and even-parity perturbations.

  9. Improved classical and quantum random access codes

    NASA Astrophysics Data System (ADS)

    Liabøtrø, O.

    2017-05-01

    A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst-case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^(2m) − 1)/(d − 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measure operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m − 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)^m − 1]^(−1)} is possible, and show that it is an upper bound for the measure operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high-certainty information about several bits can be used to calculate probabilities for parities of subsets.
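
    For the d = 2, m = 1 case referred to above, the known (2, 1, p) QRAC encodes two bits as a qubit whose Bloch vector points into the quadrant selected by the bits, and measures Z or X depending on which bit is wanted; the worst-case success probability cos²(π/8) ≈ 0.854 can be verified directly:

        import numpy as np

        I = np.eye(2)
        X = np.array([[0, 1], [1, 0]])
        Z = np.array([[1, 0], [0, -1]])

        def rho(b0, b1):
            """Encoding state: Bloch vector ((-1)**b1, 0, (-1)**b0)/sqrt(2)."""
            return (I + ((-1) ** b0 * Z + (-1) ** b1 * X) / np.sqrt(2)) / 2

        def worst_case(bit):
            """Minimum success probability of recovering the requested bit."""
            M = Z if bit == 0 else X          # measure Z for bit 0, X for bit 1
            probs = [np.trace((I + (-1) ** (b if bit == 0 else c) * M) / 2 @ rho(b, c)).real
                     for b in (0, 1) for c in (0, 1)]
            return min(probs)

        print(worst_case(0), worst_case(1))   # both ~0.8536 = cos^2(pi/8)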

  10. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
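
    The check-splitting step is mechanical enough to state as code: a check node is replaced by two checks that partition its edges, tied together by one new degree-2 variable node, which lowers the design rate from (v - c)/v to (v - c)/(v + 1). An illustrative sketch on a 0/1 base (protograph) matrix:

        import numpy as np

        def split_check(B, row, mask):
            """Split check `row` of base matrix B in two, joined by a new
            degree-2 variable; `mask` selects the edges kept by the first half."""
            r1 = B[row] * mask
            r2 = B[row] * (1 - mask)
            B2 = np.vstack([np.delete(B, row, axis=0), r1, r2])
            col = np.zeros((B2.shape[0], 1), dtype=B.dtype)
            col[-2:] = 1                     # the new degree-2 node closes the split
            return np.hstack([B2, col])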

  11. Two-dimensional solitons in conservative and parity-time-symmetric triple-core waveguides with cubic-quintic nonlinearity

    NASA Astrophysics Data System (ADS)

    Feijoo, David; Zezyulin, Dmitry A.; Konotop, Vladimir V.

    2015-12-01

    We analyze a system of three two-dimensional nonlinear Schrödinger equations coupled by linear terms and with the cubic-quintic (focusing-defocusing) nonlinearity. We consider two versions of the model: conservative and parity-time (PT) symmetric. These models describe triple-core nonlinear optical waveguides, with balanced gain and losses in the PT-symmetric case. We obtain families of soliton solutions and discuss their stability. The latter study is performed using a linear stability analysis and checked with direct numerical simulations of the evolution equations. Stable solitons are found in the conservative and PT-symmetric cases. Interactions and collisions between the conservative and PT-symmetric solitons are briefly investigated as well.

  12. Nonlocal hyperconcentration on entangled photons using photonic module system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Cong; Wang, Tie-Jun; Mi, Si-Chen

    Entanglement distribution will inevitably be affected by channel and environment noise. Thus distillation of maximal entanglement nonlocally becomes a crucial goal in quantum information. Here we illustrate that maximal hyperentanglement on nonlocal photons could be distilled using the photonic module and cavity quantum electrodynamics, where the photons are simultaneously entangled in polarization and spatial-mode degrees of freedom. The construction of the photonic module in a photonic band-gap structure is presented, and the operation of the module is utilized to implement photonic nondestructive parity checks on the two degrees of freedom. We first propose a hyperconcentration protocol using two identical partially hyperentangled initial states with unknown coefficients to distill a maximally hyperentangled state probabilistically, and further propose a protocol assisted by an ancillary single photon prepared according to the known coefficients of the initial state. In the two protocols, the total success probability can be improved greatly by introducing an iteration mechanism, and only one of the remote parties is required to perform the parity checks in each round of iteration. Estimates of the system requirements and recent experimental results indicate that our proposal is realizable with existing or near-future technologies.

  13. Hybrid RAID With Dual Control Architecture for SSD Reliability

    NASA Astrophysics Data System (ADS)

    Chatterjee, Santanu

    2010-10-01

    Solid-state devices (SSDs), which are increasingly being adopted in today's data storage systems, have higher capacity and performance but lower reliability, which leads to more frequent rebuilds and to higher risk. Although an SSD is very energy efficient compared to hard disk drives, its bit error rate (BER) requires expensive erase operations between successive writes. Parity-based RAID (for example, RAID 4, 5, and 6) provides data integrity using parity information and tolerates the loss of any one (RAID 4, 5) or two drives (RAID 6), but the parity blocks are updated more often than the data blocks due to random access patterns, so SSD devices holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based hybrid disk array architecture that uses the RAID 4 (striping with dedicated parity) technique, with SSDs as data drives and fast hard disk drives of the same capacity as dedicated parity drives. This architecture opens the door to using commodity SSDs past their erasure limit, and it can also reduce the need for expensive hardware error-correction code (ECC) in the devices.
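
    The data-integrity core of the proposed RAID 4 arrangement is plain XOR parity: the dedicated parity drive stores the byte-wise XOR of the data drives, and any single lost drive is the XOR of the survivors with the parity. A minimal sketch:

        from functools import reduce

        def xor_blocks(blocks):
            """Byte-wise XOR of equal-length blocks."""
            return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

        data = [b'AAAA', b'BBBB', b'CCCC']    # SSD data drives (one block per stripe)
        parity = xor_blocks(data)             # dedicated parity drive (HDD in this proposal)

        # reconstruct a failed data drive from the survivors plus parity:
        assert xor_blocks([data[0], data[2], parity]) == data[1]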

  14. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.

  15. Implementation of mental health parity: lessons from california.

    PubMed

    Rosenbach, Margo L; Lake, Timothy K; Williams, Susan R; Buck, Jeffrey A

    2009-12-01

    This article reports the experiences of health plans, providers, and consumers with California's mental health parity law and discusses implications for implementation of the 2008 federal parity law. This study used a multimodal data collection approach to assess the first five years of California's parity implementation (from 2000 to 2005). Telephone interviews were conducted with 68 state-level stakeholders, and in-person interviews were conducted with 77 community-based stakeholders. Six focus groups included 52 providers, and six included 32 consumers. A semistructured interview protocol was used. Interview notes and transcripts were coded to facilitate analysis. Health plans eliminated differential benefit limits and cost-sharing requirements for certain mental disorders to comply with the law, and they used managed care to control costs. In response to concerns about access to and quality of care, the state expanded oversight of health plans, issuing access-to-care regulations and conducting focused studies. California's parity law applied to a limited list of psychiatric diagnoses. Health plan executives said they spent considerable resources clarifying which diagnoses were covered at parity levels and concluded that the limited diagnosis list was unnecessary with managed care. Providers indicated that the diagnosis list had unintended consequences, including incentives to assign a more severe diagnosis that would be covered at parity levels, rather than a less severe diagnosis that would not be covered at such levels. The lack of consumer knowledge about parity was widely acknowledged, and consumers in the focus groups requested additional information about parity. Experiences in California suggest that implementation of the 2008 federal parity law should include monitoring health plan performance related to access and quality, in addition to monitoring coverage and costs; examining the breadth of diagnoses covered by health plans; and mounting a campaign to educate consumers about their insurance benefits.

  16. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
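
    The patent describes a generic distance-d filter; its most familiar special case is the distance-4 (SEC-DED) Hsiao construction, where the filter reduces to "keep odd-weight columns": distinct odd-weight columns are nonzero, pairwise distinct, and no column is the XOR of two others (odd XOR odd is even weight), so every set of three columns is linearly independent. A greedy sketch of that special case:

        import itertools
        import numpy as np

        def populate_columns(m, n_cols):
            """Populate n_cols columns of an m-row SEC-DED check matrix by
            filtering candidate vectors down to odd weight."""
            keep = []
            for v in itertools.product((0, 1), repeat=m):
                if sum(v) % 2 == 1:              # the filter operation
                    keep.append(v)
                    if len(keep) == n_cols:
                        return np.array(keep, dtype=np.uint8).T
            raise ValueError("not enough odd-weight columns")

        H = populate_columns(m=8, n_cols=72)     # e.g., a (72, 64) SEC-DED geometry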

  17. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  18. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  19. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
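
    The smallest member of this family (the "kitten" code) already shows the mechanism: code words (|0> + |4>)/sqrt(2) and |2> have equal mean photon number, and one photon loss flips the generalized number parity while keeping the two error states orthogonal, so the loss is both detectable and correctable. A numerical check in a truncated Fock space:

        import numpy as np

        dim = 5                                             # Fock states |0>..|4>
        a = np.diag(np.sqrt(np.arange(1, dim)), k=1)        # annihilation operator

        w0 = np.zeros(dim); w0[0] = w0[4] = 1 / np.sqrt(2)  # (|0> + |4>)/sqrt(2)
        w1 = np.zeros(dim); w1[2] = 1.0                     # |2>

        n_op = a.T @ a
        assert np.isclose(w0 @ n_op @ w0, w1 @ n_op @ w1)   # equal mean photon number (= 2)

        e0, e1 = a @ w0, a @ w1          # states after one photon loss: odd parity,
        assert np.isclose(e0 @ e1, 0)    # and mutually orthogonal, hence correctable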

  20. Bad-good constraints on a polarity correspondence account for the spatial-numerical association of response codes (SNARC) and markedness association of response codes (MARC) effects.

    PubMed

    Leth-Steensen, Craig; Citta, Richie

    2016-01-01

    Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial-numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180-190] suggests that this effect also arises for vocal "left" and "right" responding, indicating that verbal-spatial coding has a role to play in determining it. Another presumably verbal-based, spatial-numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416-442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of - and + polarities with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even-odd parity judgements were made using either "left" and "right" or "bad" and "good" vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal-spatial coding account for this effect. However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important constraint on the presumed generality of the polarity correspondence account. On the other hand, the presence of robust MARC effects for "bad" and "good" but not "left" and "right" vocal responses is consistent with the view that such effects are due to conceptual associations between semantic codes for odd-even and bad-good (but not necessarily left-right).

  1. Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.

    PubMed

    Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C

    2013-12-30

    We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase-shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmission links was demonstrated by optimizing the NLE parameters.

  2. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
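
    The copy-and-permute recipe in the closing sentences is compact in code: each 1 of the base (protograph) matrix becomes an N x N permutation block of the lifted parity-check matrix. A sketch for a 0/1 base (protographs may also carry parallel edges, omitted here for simplicity):

        import numpy as np

        def lift(base, N, seed=0):
            """Copy the protograph N times and permute the edge bundles."""
            rng = np.random.default_rng(seed)
            rows, cols = base.shape
            H = np.zeros((rows * N, cols * N), dtype=np.uint8)
            for r in range(rows):
                for c in range(cols):
                    if base[r, c]:
                        H[r * N + np.arange(N), c * N + rng.permutation(N)] = 1
            return H

        base = np.array([[1, 1, 1, 0],
                         [0, 1, 1, 1]], dtype=np.uint8)    # toy protograph
        H = lift(base, N=8)    # 16 x 32 parity-check matrix, design rate 1/2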

  3. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  4. Ptolemy Coding Style

    DTIC Science & Technology

    2014-09-05

    shell script that checks Java code and prints out an alphabetical list of unrecognized spellings. It properly handles namesWithEmbeddedCapitalization...local/bin/ispell. To run this script, type $PTII/util/testsuite/ptspell *.java • testsuite/chkjava is a shell script for checking various other...best if the svn:native property is set. Below is how to check the values for a file named README.txt: bash-3.2$ svn proplist README.txt Properties on

  5. Code development for ships -- A demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyub, B.; Mansour, A.E.; White, G.

    1996-12-31

    A demonstration summary of a reliability-based structural design code for ships is presented for two ship types, a cruiser and a tanker. For both ship types, code requirements cover four failure modes: hull girder buckling, unstiffened plate yielding and buckling, stiffened plate buckling, and fatigue of critical details. Both serviceability and ultimate limit states are considered. Because of length limitations, only hull girder modes are presented in this paper. Code requirements for other modes will be presented in future publications. A specific provision of the code will be a safety check expression. The design variables are to be taken at their nominal values, typically values on the safe side of the respective distributions. Other safety check expressions for hull girder failure that include load combination factors, as well as consequence-of-failure factors, are considered. This paper provides a summary of safety check expressions for the hull girder modes.

  6. Advanced Dynamic Anthropomorphic Manikin (ADAM) Final Design Report

    DTIC Science & Technology

    1990-03-01

    ...an ejection sequence, the human body is subjected to numerous dynamic loadings from the catapult and sustaining rocket, as well as from wind blast. In... [The remainder of this snippet is an OCR-garbled assembly listing of a parity-prompt check routine.]

  7. Replacing the CCSDS Telecommand Protocol with the Next Generation Uplink (NGU)

    NASA Technical Reports Server (NTRS)

    Kazz, Greg J.; Greenberg, Ed; Burleigh, Scott C.

    2012-01-01

    The current CCSDS Telecommand (TC) Recommendations 1-3 have essentially been in use since the early 1990s. The purpose of this paper is to propose a successor protocol to TC. The current CCSDS recommendations can only accommodate telecommand rates up to approximately 1 Mbit/s. However, today's spacecraft are storehouses for software, including software for Field Programmable Gate Arrays (FPGAs), which are rapidly replacing unique hardware systems. Changes to flight software occasionally require uplinks to deliver very large volumes of data. In the opposite direction, high-rate downlink missions that use the acknowledged CCSDS File Delivery Protocol (CFDP)4 will increase the uplink data rate requirements. It is calculated that a 5 Mbit/s downlink could saturate a 4 kbit/s uplink with CFDP downlink responses: negative acknowledgements (NAKs), FINISHs, End-of-File (EOF), and acknowledgements (ACKs). Moreover, it is anticipated that uplink rates of 10 to 20 Mbit/s will be required to support manned missions. The current TC recommendations cannot meet these new demands. Specifically, they are very tightly coupled to the Bose-Chaudhuri-Hocquenghem (BCH) code in Ref. 2. This protocol requires that an uncorrectable BCH codeword delimit the TC frame and terminate the randomization process. This method greatly limits telecom performance since only the BCH code can support the protocol. More modern techniques such as the CCSDS Low Density Parity Check (LDPC)5 codes can provide a minimum performance gain of up to 6 times higher command data rates as long as sufficient power is available in the data. This paper will describe the proposed protocol format, trade-offs, and advantages offered, along with a discussion of how reliable communications takes place at higher nominal rates.

  8. A-7 Aloft Demonstration Flight Test Plan

    DTIC Science & Technology

    1975-09-01

    [Snippet begins with an OCR'd equipment/part-number list for the ALOFT ASCU adapter set.] ...checks will also be performed for each of the following: 3.1.2.1.1 ASCU Codes. Verification will be made that all legal ASCU codes are recognized and...invalid codes inhibit attack mode. A check will also be made to verify that the ASCU codes for pilot-option weapons enable the retarded weapons

  9. DVB-S2 Experiment over NASA's Space Network

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Evans, Michael A.; Tollis, Nicholas S.

    2017-01-01

    The commercial DVB-S2 standard was successfully demonstrated over NASA's Space Network (SN) and the Tracking and Data Relay Satellite System (TDRSS) during testing conducted September 20-22, 2016. This test was a joint effort between NASA Glenn Research Center (GRC) and Goddard Space Flight Center (GSFC) to evaluate the performance of DVB-S2 as an alternative to traditional NASA SN waveforms. Two distinct sets of tests were conducted: one was sourced from the Space Communication and Navigation (SCaN) Testbed, an external payload on the International Space Station, and the other was sourced from GRC's S-band ground station to emulate a Space Network user through TDRSS. In both cases, a commercial off-the-shelf (COTS) receiver made by Newtec was used to receive the signal at White Sands Complex. Using SCaN Testbed, peak data rates of 5.7 Mbps were demonstrated. Peak data rates of 33 Mbps were demonstrated over the GRC S-band ground station through a 10 MHz channel over TDRSS, using 32-amplitude phase shift keying (APSK) and a rate-8/9 low-density parity-check (LDPC) code. Advanced features of the DVB-S2 standard were evaluated, including variable and adaptive coding and modulation (VCM/ACM), as well as an adaptive digital pre-distortion (DPD) algorithm. These features provided additional data throughput and increased link performance reliability. This testing has shown that commercial standards are a viable, low-cost alternative for future Space Network users.

  10. Prototype data terminal: Multiplexer/demultiplexer

    NASA Technical Reports Server (NTRS)

    Leck, D. E.; Goodwin, J. E.

    1972-01-01

    The design and operation of a quad redundant data terminal and a multiplexer/demultiplexer (MDU) design are described. The most unique feature is the design of the quad redundant data terminal. This is one of the few designs where the unit is fail/op, fail/op, fail/safe. Laboratory tests confirm that the unit will operate satisfactorily with the failure of three out of four channels. Although the design utilizes state-of-the-art technology, the waveform error checks, the voting techniques, and the parity-bit checks are believed to be used in unique configurations. Correct word selection routines are also novel, if not unique. The MDU design, while not redundant, utilizes the latest state-of-the-art advantages of light couplers and integrated circuit amplifiers.
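
    The abstract leaves the exact selection configuration vague; one plausible reading of "parity-bit checks plus voting" is to screen each channel's word with its parity bit and then majority-vote the survivors. The sketch below follows that hypothetical scheme, not the documented design:

        from collections import Counter

        def select_word(channel_words, channel_parity_ok):
            """Drop channels whose parity check failed, then majority-vote."""
            survivors = [w for w, ok in zip(channel_words, channel_parity_ok) if ok]
            if not survivors:
                raise RuntimeError("no healthy channel")
            word, _ = Counter(survivors).most_common(1)[0]
            return word

        # fail/op, fail/op, fail/safe flavor: still yields a word with one bad channel
        assert select_word([b'\x2a', b'\x2a', b'\xff', b'\x2a'],
                           [True, True, False, True]) == b'\x2a'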

  11. English-Afrikaans Intrasentential Code Switching: Testing a Feature Checking Account

    ERIC Educational Resources Information Center

    van Dulm, Ondene

    2009-01-01

    The work presented here aims to account for the structure of intrasentential code switching between English and Afrikaans within the framework of feature checking theory, a theory associated with minimalist syntax. Six constructions in which verb position differs between English and Afrikaans were analysed in terms of differences in the strength…

  12. Baryon currents in QCD with compact dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucini, B.; Patella, A.; Istituto Nazionale Fisica Nucleare Sezione di Pisa, Largo Pontecorvo 3, 56126 Pisa

    2007-06-15

    On a compact space with nontrivial cycles, for sufficiently small values of the radii of the compact dimensions, SU(N) gauge theories coupled with fermions in the fundamental representation spontaneously break charge conjugation, time reversal, and parity. We show at one loop in perturbation theory that a physical signature for this phenomenon is a nonzero baryonic current wrapping around the compact directions. The persistence of this current beyond the perturbative regime is checked by lattice simulations.

  13. A high-performance Fortran code to calculate spin- and parity-dependent nuclear level densities

    NASA Astrophysics Data System (ADS)

    Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.

    2013-01-01

    A high-performance Fortran code is developed to calculate the spin- and parity-dependent shell model nuclear level densities. The algorithm is based on the extension of methods of statistical spectroscopy and implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The proton-neutron formalism is used. We have applied the method to calculating the level densities for a set of nuclei in the sd, pf, and pf+g model spaces. Examples of the calculations for 28Si (in the sd model space) and 64Ge (in the pf+g model space) are presented. To illustrate the power of the method we estimate the ground state energy of 64Ge in the larger model space pf+g, which is not accessible to direct shell model diagonalization due to the prohibitively large dimension, by comparing with the nuclear level densities at low excitation energy calculated in the smaller model space pf. Program summary: Program title: MM. Catalogue identifier: AENM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 193181. No. of bytes in distributed program, including test data, etc.: 1298585. Distribution format: tar.gz. Programming language: Fortran 90, MPI. Computer: Any architecture with a Fortran 90 compiler and MPI. Operating system: Linux. RAM: Proportional to the system size; in our examples, up to 75 MB. Classification: 17.15. External routines: MPICH2 (http://www.mcs.anl.gov/research/projects/mpich2/). Nature of problem: Calculating the spin- and parity-dependent nuclear level density. Solution method: The algorithm implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The code is parallelized using the Message Passing Interface and a master-slaves dynamical load-balancing approach. Restrictions: The program uses a two-body interaction in a restricted single-level basis, for example, GXPF1A in the pf valence space. Running time: Depends on the system size and the number of processors used (from 1 min to several hours).

  14. An Information Theory-Inspired Strategy for Design of Re-programmable Encrypted Graphene-based Coding Metasurfaces at Terahertz Frequencies.

    PubMed

    Momeni, Ali; Rouhi, Kasra; Rajabalipanah, Hamid; Abdolali, Ali

    2018-04-18

    Inspired by the information theory, a new concept of re-programmable encrypted graphene-based coding metasurfaces was investigated at terahertz frequencies. A channel-coding function was proposed to convolutionally record an arbitrary information message onto unrecognizable but recoverable parity beams generated by a phase-encrypted coding metasurface. A single graphene-based reflective cell with dual-mode biasing voltages was designed to act as "0" and "1" meta-atoms, providing broadband opposite reflection phases. By exploiting graphene tunability, the proposed scheme enabled an unprecedented degree of freedom in the real-time mapping of information messages onto multiple parity beams which could not be damaged, altered, and reverse-engineered. Various encryption types such as mirroring, anomalous reflection, multi-beam generation, and scattering diffusion can be dynamically attained via our multifunctional metasurface. Besides, contrary to conventional time-consuming and optimization-based methods, this paper convincingly offers a fast, straightforward, and efficient design of diffusion metasurfaces of arbitrarily large size. Rigorous full-wave simulations corroborated the results where the phase-encrypted metasurfaces exhibited a polarization-insensitive reflectivity less than -10 dB over a broadband frequency range from 1 THz to 1.7 THz. This work reveals new opportunities for the extension of re-programmable THz-coding metasurfaces and may be of interest for reflection-type security systems, computational imaging, and camouflage technology.

  15. Utilizing photon number parity measurements to demonstrate quantum computation with cat-states in a cavity

    NASA Astrophysics Data System (ADS)

    Petrenko, A.; Ofek, N.; Vlastakis, B.; Sun, L.; Leghtas, Z.; Heeres, R.; Sliwa, K. M.; Mirrahimi, M.; Jiang, L.; Devoret, M. H.; Schoelkopf, R. J.

    2015-03-01

    Realizing a working quantum computer requires overcoming the many challenges that come with coupling large numbers of qubits to perform logical operations. These include improving coherence times, achieving high gate fidelities, and correcting for the inevitable errors that will occur throughout the duration of an algorithm. While impressive progress has been made in all of these areas, the difficulty of combining these ingredients to demonstrate an error-protected logical qubit, comprised of many physical qubits, still remains formidable. With its large Hilbert space, superior coherence properties, and single dominant error channel (single photon loss), a superconducting 3D resonator acting as a resource for a quantum memory offers a hardware-efficient alternative to multi-qubit codes [Leghtas et al., PRL 2013]. Here we build upon recent work on cat-state encoding [Vlastakis et al., Science 2013] and photon-parity jumps [Sun et al., 2014] by exploring the effects of sequential measurements on a cavity state. Employing a transmon qubit dispersively coupled to two superconducting resonators in a cQED architecture, we explore further the application of parity measurements to characterizing such a hybrid qubit/cat-state architecture. In so doing, we demonstrate the promise of integrating cat states as central constituents of future quantum codes.

  16. Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin

    2005-02-01

    This paper proposes a novel and innovative scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detecting and error-correcting bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme has a coding process module. The SDH/SONET frames in the transmission direction are handled as follows: (1) The Framer-Serdes Interface (FSI) gets the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels; all channels are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way as in OIF-VSR4-01.0 and stored in the coding process module. (3) The coding process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After the 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmission process. By applying this scheme to a 10 Gb/s VSR system, the frame size in the VSR system is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is reduced significantly.
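
    Step (2)'s byte-wise striping is easy to picture in code: frame bytes are dealt round-robin across the 12 channels, and a running XOR parity byte per channel stands in for the scheme's parity bytes. The sketch is illustrative only and does not follow the OIF-VSR4-01.0 byte layout:

        def stripe(frame: bytes, nch: int = 12):
            """Round-robin byte striping with one XOR parity byte per channel."""
            channels = [bytearray() for _ in range(nch)]
            parity = [0] * nch
            for i, byte in enumerate(frame):
                channels[i % nch].append(byte)
                parity[i % nch] ^= byte
            return channels, parity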

  17. Miscoding and other user errors: importance of ongoing education for proper blood glucose monitoring procedures.

    PubMed

    Schrock, Linda E

    2008-07-01

    This article reviews the literature to date and reports on a new study that documented the frequency of manual code-requiring blood glucose (BG) meters that were miscoded at the time of the patient's initial appointment in a hospital-based outpatient diabetes education program. Between January 1 and May 31, 2007, the type of BG meter and the accuracy of the patient's meter code (if required) and procedure for checking BG were checked during the initial appointment with the outpatient diabetes educator. If indicated, reeducation regarding the procedure for the BG meter code entry and/or BG test was provided. Of the 65 patients who brought their meter requiring manual entry of a code number or code chip to the initial appointment, 16 (25%) were miscoded at the time of the appointment. Two additional problems, one of dead batteries and one of improperly stored test strips, were identified and corrected at the first appointment. These findings underscore the importance of checking the patient's BG meter code (if required) and procedure for testing BG at each encounter with a health care professional or providing the patient with a meter that does not require manual entry of a code number or chip to match the container of test strips (i.e., an autocode meter).

  18. An Empirical Analysis of the Cascade Secret Key Reconciliation Protocol for Quantum Key Distribution

    DTIC Science & Technology

    2011-09-01

    performance with the parity checks within each pass increasing and, as a result, the processing time is expected to increase as well. A conclusion is drawn... timely manner has driven efforts to develop new key distribution methods. The most promising method is Quantum Key Distribution (QKD) and is...
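
    The growth in parity checks per pass comes from Cascade's BINARY step: a block whose parity disagrees holds an odd number of errors, and a binary search pins one of them down at a cost of about log2(block) extra parity exchanges. A sketch of one pass (real Cascade also shuffles the string between passes and back-tracks into earlier blocks):

        def parity(bits, lo, hi):
            return sum(bits[lo:hi]) % 2

        def binary_locate(alice, bob, lo, hi):
            """Halve the block until one erroneous bit is isolated."""
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if parity(alice, lo, mid) != parity(bob, lo, mid):
                    hi = mid
                else:
                    lo = mid
            return lo

        def cascade_pass(alice, bob, block):
            """Fix one error in every block whose parities disagree."""
            for start in range(0, len(bob), block):
                end = min(start + block, len(bob))
                if parity(alice, start, end) != parity(bob, start, end):
                    bob[binary_locate(alice, bob, start, end)] ^= 1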

  19. Implicit and Explicit Number-Space Associations Differentially Relate to Interference Control in Young Adults With ADHD

    PubMed Central

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2018-01-01

    Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting in faster reaction times to small/large numbers with the left/right hand respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals’ ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control respectively indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant and potentially interferes with parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations also in young adults. It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra-individual differences in number-space associations. PMID:29881363

  20. A Semiautomated Journal Check-In and Binding System; or Variations on a Common Theme

    PubMed Central

    Livingston, Frances G.

    1967-01-01

    The journal check-in project described here, though based on a computerized system, uses only unit-record equipment and is designed for the medium-sized library. The frequency codes used are based on the date printed on the journal rather than on the expected date of receipt, which allows for more stability in the coding scheme. The journal's volume number and issue number, which in other systems are usually predetermined by a computer, are inserted at the time of check-in. Routine claiming of overdue issues and a systematic binding schedule have also been developed as by-products. PMID:6041836

  1. Evaluating Independently Licensed Counselors' Articulation of Professional Identity Using Structural Coding

    ERIC Educational Resources Information Center

    Burns, Stephanie; Cruikshanks, Daniel R.

    2017-01-01

    Inconsistent counselor professional identity contributes to issues with licensure portability, parity in hiring practices, marketplace recognition in U.S. society and third-party payments for independently licensed counselors. Counselors could benefit from enhancing the counseling profession's identity as well as individual professional identities…

  2. Efficient Type Representation in TAL

    NASA Technical Reports Server (NTRS)

    Chen, Juan

    2009-01-01

    Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs, but its size remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieve both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, which is, to the best of our knowledge, the best reported result. The type checking time is about 2% of the compilation time.

  3. Design and implementation of the next generation Landsat satellite communications system

    USGS Publications Warehouse

    Mah, Grant R.; O'Brien, Michael; Garon, Howard; Mott, Claire; Ames, Alan; Dearth, Ken

    2012-01-01

    The next generation Landsat satellite, Landsat 8 (L8), also known as the Landsat Data Continuity Mission (LDCM), uses a highly spectrally efficient modulation and data formatting approach to provide large amounts of downlink (D/L) bandwidth in a limited X-Band spectrum allocation. In addition to purely data throughput and bandwidth considerations, there were a number of additional constraints based on operational considerations for prevention of interference with the NASA Deep-Space Network (DSN) band just above the L8 D/L band, minimization of jitter contributions to prevent impacts to instrument performance, and the need to provide an interface to the Landsat International Cooperator (IC) community. A series of trade studies were conducted to consider either X- or Ka-Band, modulation type, and antenna coverage type, prior to the release of the request for proposal (RFP) for the spacecraft. Through use of the spectrally efficient rate-7/8 Low-Density Parity-Check error-correction coding and novel filtering, an X-Band frequency plan was developed that balances all the constraints and considerations, while providing world-class link performance, fitting 384 Mbits/sec of data into the 375 MHz X-Band allocation with bit-error rates better than 10^-12 using an earth-coverage antenna.
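    As an arithmetic cross-check of the quoted figures (an illustration, not a calculation from the paper): with an information rate of 384 Mbit/s and a rate-7/8 code, the coded channel rate and the information spectral efficiency in the 375 MHz allocation work out to

        R_{\mathrm{coded}} = \frac{R_{\mathrm{info}}}{r} = \frac{384\ \mathrm{Mbit/s}}{7/8} \approx 438.9\ \mathrm{Mbit/s},
        \qquad
        \eta = \frac{R_{\mathrm{info}}}{B} = \frac{384\ \mathrm{Mbit/s}}{375\ \mathrm{MHz}} \approx 1.02\ \mathrm{bit/s/Hz},

    so the modem must sustain roughly 1.17 coded bits per second per hertz, which is why the spectrally efficient modulation and filtering are essential.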

  4. The Essential Component in DNA-Based Information Storage System: Robust Error-Tolerating Module

    PubMed Central

    Yim, Aldrin Kay-Yuen; Yu, Allen Chi-Shing; Li, Jing-Woei; Wong, Ada In-Chun; Loo, Jacky F. C.; Chan, King Ming; Kong, S. K.; Yip, Kevin Y.; Chan, Ting-Fung

    2014-01-01

    The size of digital data is ever increasing and is expected to grow to 40,000 EB by 2020, yet the estimated global information storage capacity in 2011 is <300 EB, indicating that most of the data are transient. DNA, as a very stable nano-molecule, is an ideal massive storage device for long-term data archive. The two most notable illustrations are from Church et al. and Goldman et al., whose approaches are well-optimized for most sequencing platforms – short synthesized DNA fragments without homopolymers. Here, we suggested improvements on error handling methodology that could enable the integration of DNA-based computational process, e.g., algorithms based on self-assembly of DNA. As a proof of concept, a picture of size 438 bytes was encoded to DNA with low-density parity-check error-correction code. We salvaged a significant portion of sequencing reads with mutations generated during DNA synthesis and sequencing and successfully reconstructed the entire picture. A modular-based programming framework – DNAcodec with an eXtensible Markup Language-based data format was also introduced. Our experiments demonstrated the practicability of long DNA message recovery with high error tolerance, which opens the field to biocomputing and synthetic biology.  PMID:25414846
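    The homopolymer-free encoding mentioned above (the Goldman-style rotating base-3 code) is easy to sketch. The following Python fragment is a minimal illustration under conventions chosen here for clarity (fixed six trits per byte, an arbitrary starting row in the table); it is not the DNAcodec implementation:

        # Rotating base-3 code: each trit selects one of the three bases that
        # differ from the previous base, so the output never repeats a base.
        NEXT_BASE = {None: "ACG", "A": "CGT", "C": "GTA", "G": "TAC", "T": "ACG"}

        def byte_to_trits(b: int) -> list:
            # Fixed width: 6 trits per byte, since 3**6 = 729 >= 256.
            trits = []
            for _ in range(6):
                b, t = divmod(b, 3)
                trits.append(t)
            return trits[::-1]

        def encode(data: bytes) -> str:
            prev, out = None, []
            for byte in data:
                for t in byte_to_trits(byte):
                    prev = NEXT_BASE[prev][t]
                    out.append(prev)
            return "".join(out)

        # encode(b"AB") yields a 12-base string with no two adjacent bases equal.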

  5. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: S_n (discrete-ordinates), P_n (spherical harmonics), and SP_n (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard S_n discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  6. Arbitrary numbers counter fair decisions: trails of markedness in card distribution.

    PubMed

    Schroeder, Philipp A; Pfister, Roland

    2015-01-01

    Converging evidence from controlled experiments suggests that the mere processing of a number and its attributes such as value or parity might affect free choice decisions between different actions. For example, the spatial-numerical association of response codes (SNARC) effect indicates that the magnitude of a digit is associated with a spatial representation and might therefore affect spatial response choices (i.e., decisions between a "left" and a "right" option). At the same time, other (linguistic) features of a number such as parity are embedded into space and might likewise prime left or right responses through feature words [odd or even, respectively; markedness association of response codes (MARC) effect]. In this experiment we aimed at documenting such influences in a natural setting. We therefore assessed number-space and parity-space association effects by exposing participants to a fair distribution task in a card playing scenario. Participants drew cards, read out loud their number values, and announced their response choice, i.e., dealing it to a left vs. right player, indicated by Playmobil characters. Not only did participants prefer to deal more cards to the right player, the card's digits also affected response choices and led to a slightly but systematically unfair distribution, supported by a regular SNARC effect and counteracted by a reversed MARC effect. The experiment demonstrates the impact of SNARC- and MARC-like biases in free choice behavior through verbal and visual numerical information processing even in a setting with high external validity.

  7. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  8. Fault-tolerance in Two-dimensional Topological Systems

    NASA Astrophysics Data System (ADS)

    Anderson, Jonas T.

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.

  9. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    NASA Astrophysics Data System (ADS)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    In many real-time fields a sustained high-speed data recording system is required. This paper proposes a high-speed, sustained data recording system based on the complex-RAID 3+0. The system consists of an Array Controller Module (ACM), String Controller Modules (SCMs) and a Main Controller Module (MCM). The ACM, implemented in an FPGA chip, is used to split the high-speed incoming data stream into several lower-speed streams and generate one parity code stream synchronously. It can also inversely recover the original data stream while reading. SCMs record the lower-speed streams from the ACM onto the SCSI disk drives. In the SCM, dual-page buffering is adopted to implement the speed-matching function and satisfy the need for sustained recording. The MCM monitors the whole system and controls the ACM and SCMs to realize the data striping, reconstruction, and recovery functions. A method for determining the system scale is presented. Finally, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve system reliability and are compared with the Traditional Parity Group (TPG). This recording system can be used conveniently in many areas of data recording, storage, playback and remote backup, given its high reliability.
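    The RAID-3-style split performed by the ACM (one XOR-parity stream generated across the data streams, any single lost stream recoverable) can be modeled in a few lines. This Python sketch illustrates the idea only; it is not the FPGA logic:

        from functools import reduce

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def split_with_parity(block: bytes, n: int) -> list:
            # Stripe one incoming block into n lower-speed chunks plus one
            # XOR-parity chunk generated synchronously, as in RAID 3.
            size = len(block) // n
            chunks = [block[i * size:(i + 1) * size] for i in range(n)]
            return chunks + [reduce(xor_bytes, chunks)]

        def rebuild(chunks: list) -> list:
            # Any single missing chunk (data or parity) is the XOR of the rest.
            i = chunks.index(None)
            chunks[i] = reduce(xor_bytes, [c for c in chunks if c is not None])
            return chunks

    Because XOR is its own inverse, the same operation serves both the parity-generation path and the recovery path.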

  10. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequentially arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.

  11. A remote monitoring system for patients with implantable ventricular assist devices with a personal handy phone system.

    PubMed

    Okamoto, E; Shimanaka, M; Suzuki, S; Baba, K; Mitamura, Y

    1999-01-01

    The usefulness of a remote monitoring system that uses a personal handy phone for artificial heart implanted patients was investigated. The type of handy phone used in this study was a personal handy phone system (PHS), which is a system developed in Japan that uses the NTT (Nippon Telephone and Telegraph, Inc.) telephone network service. The PHS has several advantages: high-speed data transmission, low power output, little electromagnetic interference with medical devices, and easy locating of patients. In our system, patients have a mobile computer (Toshiba, Libretto 50, Kawasaki, Japan) for data transmission control between an implanted controller and a host computer (NEC, PC-9821V16) in the hospital. Information on the motor rotational angle (8 bits) and motor current (8 bits) of the implanted motor driven heart is fed into the mobile computer from the implanted controller (Hitachi, H8/532, Yokohama, Japan) according to 32-bit command codes from the host computer. Motor current and motor rotational angle data from inside the body are framed together by a control code (frame number and parity) for data error checking and correcting at the receiving site, and the data are sent through the PHS connection to the mobile computer. The host computer calculates pump outflow and arterial pressure from the motor rotational angle and motor current values and displays the data in real-time waveforms. The results of this study showed that accurate data on motor rotational angle and current could be transmitted from the subjects while they were walking or driving a car to the host computer at a data transmission rate of 9600 bps. This system is useful for remote monitoring of patients with an implanted artificial heart.
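    The framing described above wraps the two 8-bit samples with a frame number and parity for error checking at the receiving site. The exact field layout is not given in the abstract, so the 4-byte frame with a longitudinal XOR check byte in this Python sketch is an assumption for illustration:

        def make_frame(seq: int, angle: int, current: int) -> bytes:
            # Frame number, motor rotational angle, motor current: one byte each.
            payload = bytes([seq & 0xFF, angle & 0xFF, current & 0xFF])
            check = payload[0] ^ payload[1] ^ payload[2]  # XOR check byte
            return payload + bytes([check])

        def frame_ok(frame: bytes) -> bool:
            # Receiver side: recompute the XOR over the payload and compare it
            # with the transmitted check byte before accepting the samples.
            return len(frame) == 4 and (frame[0] ^ frame[1] ^ frame[2]) == frame[3]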

  12. ANSYS duplicate finite-element checker routine

    NASA Technical Reports Server (NTRS)

    Ortega, R.

    1995-01-01

    An ANSYS finite-element code routine to check for duplicated elements within the volume of a three-dimensional (3D) finite-element mesh was developed. The routine developed is used for checking floating elements within a mesh, identically duplicated elements, and intersecting elements with a common face. A space shuttle main engine alternate turbopump development high pressure oxidizer turbopump finite-element model check using the developed subroutine is discussed. Finally, recommendations are provided for duplicate element checking of 3D finite-element models.

  13. The spectrum of singly ionized tungsten

    NASA Astrophysics Data System (ADS)

    Husain, Abid; Jabeen, S.; Wajid, Abdul

    2018-05-01

    Ab initio calculations were performed using Cowan's computer code for the ground configuration 5d⁴6s, incorporating the other interacting even-parity configurations 5d³6s² and 5d⁵, and for the three lowest excited odd-parity configurations 5d⁴6p, 5d³6s6p and 5d³6s5f. The initial energy parameter scaling applied was 100% of the HFR values for Eav and ζ, 85% for F^k, and 75% for G^k and R^k. The reported level values were taken from the NIST ASD levels list and were used in a least-squares fit (LSF). This allowed adjusting the energy parameters to the real values, and hence a better prediction was achieved.

  14. Security in Active Networks

    DTIC Science & Technology

    1999-01-01

    Some means currently under investigation include domain-specific languages which are easy to check (e.g., PLAN), proof-carrying code [NL96, Nec97...domain-specific language coupled to an extension system with heavyweight checks. In this way, the frequent (per-packet) dynamic checks are inexpensive...to CISC architectures remains problematic. Typed assembly language [MWCG98] propagates type safety information to the assembly language level, so

  15. Computer codes for checking, plotting and processing of neutron cross-section covariance data and their application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sartori, E.; Roussin, R.W.

    This paper presents a brief review of computer codes concerned with checking, plotting, processing and using covariances of neutron cross-section data. It concentrates on those available from the computer code information centers of the United States and the OECD/Nuclear Energy Agency. Emphasis is placed also on codes using covariances for specific applications such as uncertainty analysis, data adjustment and data consistency analysis. Recent evaluations contain neutron cross-section covariance information for all isotopes of major importance for technological applications of nuclear energy. It is therefore important that the available software tools needed for taking advantage of this information are widely known, as they permit the determination of better safety margins and allow the optimization of more economical designs of nuclear energy systems.

  16. Performance of Compiler-Assisted Memory Safety Checking

    DTIC Science & Technology

    2014-08-01

    software developer has in mind a particular object to which the pointer should point, the intended referent. A memory access error occurs when an ac...based memory safety checking tool and the performance that can be achieved with two such tools whose source code is freely available. The note then

  17. Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis

    NASA Technical Reports Server (NTRS)

    Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.

    2012-01-01

    Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew's trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.

  18. Atomic entanglement purification and concentration using coherent state input-output process in low-Q cavity QED regime.

    PubMed

    Cao, Cong; Wang, Chuan; He, Ling-Yan; Zhang, Ru

    2013-02-25

    We investigate an atomic entanglement purification protocol based on the coherent state input-output process, working with low-Q cavities in the atom-cavity intermediate coupling region. The information of the entangled states is encoded in single atoms with a three-level configuration, confined in separate one-sided optical micro-cavities. Using the coherent state input-output process, we design a two-qubit parity check module (PCM) that allows quantum nondemolition measurement of the atomic qubits, and show its use by remote parties to distill a high-fidelity atomic entangled ensemble from an initial mixed-state ensemble nonlocally. The proposed scheme can further be used for entanglement concentration of unknown atomic states. Also by exploiting the PCM, we describe a modified scheme for atomic entanglement concentration that introduces ancillary single atoms. As the coherent state input-output process is robust and scalable in realistic applications, and the detection in the PCM is based on the intensity of the outgoing coherent state, the present protocols may be widely used in large-scale and solid-state-based quantum repeaters and quantum information processing.

  19. Incidence and factors associated with early pregnancy losses in Simmental dairy cows.

    PubMed

    Zobel, R; Tkalčić, S; Pipal, I; Buić, V

    2011-09-01

    It has been suggested that management system, milk yield, parity, body condition score and ambient temperature can significantly influence the rate of early pregnancy loss in dairy cattle. The objectives of this study were to establish the extent and patterns of early pregnancy loss from days 32 to 86 of gestation, and to examine the relationships of management system, milk yield, ambient temperature (quartile), body condition score, bull and parity with the early pregnancy loss rate for Simmental dairy cattle in Croatia. Animals were housed on two dairy farms with two different management systems (pasture-based, group A, n=435; intensive, group B, n=425), with a total of 151 heifers and 709 cows. Overall pregnancy losses were recorded in 67 (7.79%) animals, with late embryonic losses in 30 (44.77%) and early fetal losses in 37 (55.23%) animals (P>0.05). Early pregnancy losses were twofold higher in group B compared to group A (P<0.05). More than half of the pregnancy losses were recorded during the III quartile (P<0.05). There was no significant relationship between the paternal bull and the pregnancy loss rate. Low body condition score (BCS 2-3) was associated with the highest, while BCS 3.25-4 showed the lowest pregnancy loss rate (P<0.05). The pregnancy loss rate increased in parallel with increases in parity and milk yield.

  20. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry

    DTIC Science & Technology

    2014-05-29

    its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within the context...check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the

  1. Topological Qubits from Valence Bond Solids

    NASA Astrophysics Data System (ADS)

    Wang, Dong-Sheng; Affleck, Ian; Raussendorf, Robert

    2018-05-01

    Topological qubits based on SU(N)-symmetric valence-bond solid models are constructed. A logical topological qubit is the ground subspace with twofold degeneracy, which is due to the spontaneous breaking of a global parity symmetry. A logical Z rotation by an angle 2π/N, for any integer N > 2, is provided by a global twist operation, which is of a topological nature and protected by the energy gap. A general concatenation scheme with standard quantum error-correction codes is also proposed, which can lead to better codes. Generic error-correction properties of symmetry-protected topological order are also demonstrated.

  2. Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language

    NASA Astrophysics Data System (ADS)

    Madhavapeddy, Anil

    Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.

  3. Telepharmacy and bar-code technology in an i.v. chemotherapy admixture area.

    PubMed

    O'Neal, Brian C; Worden, John C; Couldry, Rick J

    2009-07-01

    A program using telepharmacy and bar-code technology to increase the presence of the pharmacist at a critical risk point during chemotherapy preparation is described. Telepharmacy hardware and software were acquired, and an inspection camera was placed in a biological safety cabinet to allow the pharmacy technician to take digital photographs at various stages of the chemotherapy preparation process. Once the pharmacist checks the medication vials' agreement with the work label, the technician takes the product into the biological safety cabinet, where the appropriate patient is selected from the pending work list, a queue of patient orders sent from the pharmacy information system. The technician then scans the bar code on the vial. Assuming the bar code matches, the technician photographs the work label, vials, diluents and fluids to be used, and the syringe (before injecting the contents into the bag) along with the vial. The pharmacist views all images as a part of the final product-checking process. This process allows the pharmacist to verify that the correct quantity of medication was transferred from the primary source to a secondary container without being physically present at the time of transfer. Telepharmacy and bar coding provide a means to improve the accuracy of chemotherapy preparation by decreasing the likelihood of using the incorrect product or quantity of drug. The system facilitates the reading of small product labels and removes the need for a pharmacist to handle contaminated syringes and vials when checking the final product.
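    At its core, the bar-code gate in such a workflow is an equality check against the pending order before preparation is allowed to proceed. A minimal Python sketch follows; the field names and NDC value are hypothetical, not taken from the described system:

        pending_order = {"patient_id": "12345", "ndc": "00000-0000-00"}  # hypothetical

        def scan_gate(scanned_ndc: str, order: dict) -> bool:
            # Block preparation unless the scanned vial matches the ordered product.
            return scanned_ndc == order["ndc"]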

  4. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

    Dynamic array bound checks are crucial elements for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustable. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with a relational abstract domain of polyhedra. The analysis has automatic inference of loop invariants and method pre-/post-conditions, and efficient checking of analysis results by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique which reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.

  5. Heave and Roll Response of Free Floating Bodies of Cylindrical Shape

    DTIC Science & Technology

    1977-02-01


  6. MINIMUM CHECK LIST FOR MECHANICAL PLANS AND SPECIFICATIONS.

    ERIC Educational Resources Information Center

    PIERCE, J.L.

    THIS BULLETIN HAS BEEN PREPARED FOR USE AS A MINIMUM CHECK LIST IN THE DEVELOPMENT AND REVIEW OF MECHANICAL AND ELECTRICAL PLANS AND SPECIFICATIONS BY ENGINEERS, ARCHITECTS, AND SUPERINTENDENTS IN PLANNING PUBLIC SCHOOL FACILITIES. THREE LEVELS OF GUIDELINES ARE MENTIONED--(1) MANDATORY BECAUSE OF LAW, CODE, OR REGULATION, (2) RECOMMENDED AS MOST…

  7. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  8. Static Verification for Code Contracts

    NASA Astrophysics Data System (ADS)

    Fähndrich, Manuel

    The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.

  9. Agricultural Baseline (BL0) scenario

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinckel, Chad M [University of Tennessee] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Langholtz, Matthew H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used. Date the data set was last modified: 02/12/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data (USDA NASS 2014)). Data generated are .txt output files by year, simulation identifier, and county code (1-3109). Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B) supplied by the University of Tennessee APAC. The quality assurance and quality control that have been applied: • Check for negative planted area, harvested area, production, yield and cost values. • Check if harvested area exceeds planted area for annuals. • Check FIPS codes.
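    The three QA/QC checks listed above translate directly into code. A minimal Python sketch over one county-year record follows; the field names are assumptions for illustration, not taken from the dataset:

        def qa_problems(row: dict) -> list:
            problems = []
            # 1. No negative planted/harvested area, production, yield, or cost.
            for field in ("planted_area", "harvested_area", "production", "yield", "cost"):
                if row[field] < 0:
                    problems.append("negative " + field)
            # 2. For annual crops, harvested area may not exceed planted area.
            if row["harvested_area"] > row["planted_area"]:
                problems.append("harvested area exceeds planted area")
            # 3. County code must fall in the range used by the dataset (1-3109).
            if not 1 <= row["county_code"] <= 3109:
                problems.append("bad county/FIPS code")
            return problems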

  10. ’Offsets’ for NATO Procurement of the Airborne Warning and Control System: Opportunities and Implications

    DTIC Science & Technology

    1976-02-01

    the Anglo-French Gazelle, the German BO-105, and the Italian A129--all appear responsive to a U.S. need for an Advanced Scout Helicopter (ASH...currencies by OECD using IMF parity rates, or average of annual range of flexible rates. code (individual chemicals, such as sulphuric acid, etc.), it is

  11. The external kink mode in diverted tokamaks

    NASA Astrophysics Data System (ADS)

    Turnbull, A. D.; Hanson, J. M.; Turco, F.; Ferraro, N. M.; Lanctot, M. J.; Lao, L. L.; Strait, E. J.; Piovesan, P.; Martin, P.

    2016-06-01

    The resistive kink behaves much like the ideal kink, with predominantly kink or interchange parity and no real sign of a tearing component. However, the growth rates scale with a fractional power of the resistivity near the surface. The results have a direct bearing on the conventional edge cutoff procedures used in most ideal MHD codes, as well as implications for ITER and for future reactor options.

  12. To amend the Internal Revenue Code of 1986 to extend the rule providing parity for exclusion from income for employer-provided mass transit and parking benefits.

    THOMAS, 113th Congress

    Rep. Norton, Eleanor Holmes [D-DC-At Large

    2013-12-12

    House - 12/12/2013 Referred to the House Committee on Ways and Means. (All Actions) Notes: For further action, see H.R.5771, which became Public Law 113-295 on 12/19/2014. Status: Introduced.

  13. Physical implementation of protected qubits

    NASA Astrophysics Data System (ADS)

    Douçot, B.; Ioffe, L. B.

    2012-07-01

    We review the general notion of topological protection of quantum states in spin models and its relation with the ideas of quantum error correction. We show that topological protection can be viewed as a Hamiltonian realization of error correction: for a quantum code for which the minimal number of errors that remain undetected is N, the corresponding Hamiltonian model of the effects of the environment noise appears only in the Nth order of the perturbation theory. We discuss the simplest model Hamiltonians that realize topological protection and their implementation in superconducting arrays. We focus on two dual realizations: in one the protected state is stored in the parity of the Cooper pair number, in the other, in the parity of the flux number. In both cases the superconducting arrays allow a number of fault-tolerant operations that should make the universal quantum computation possible.

  14. Robust Deterministic Controlled Phase-Flip Gate and Controlled-Not Gate Based on Atomic Ensembles Embedded in Double-Sided Optical Cavities

    NASA Astrophysics Data System (ADS)

    Liu, A.-Peng; Cheng, Liu-Yong; Guo, Qi; Zhang, Shou

    2018-02-01

    We first propose a scheme for controlled phase-flip gate between a flying photon qubit and the collective spin wave (magnon) of an atomic ensemble assisted by double-sided cavity quantum systems. Then we propose a deterministic controlled-not gate on magnon qubits with parity-check building blocks. Both the gates can be accomplished with 100% success probability in principle. Atomic ensemble is employed so that light-matter coupling is remarkably improved by collective enhancement. We assess the performance of the gates and the results show that they can be faithfully constituted with current experimental techniques.

  15. Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.

    1991-01-01

    The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.

  16. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  17. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.

  18. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and of programs that use features, such as pointers, structures, and object-orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  19. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. Mathematical analyses of the adequacy of the checks used in the comparisons are provided.
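    Stripped of RELAP5-3D specifics, the core of sequential verification is a quantity-by-quantity comparison of the calculations produced by consecutive code versions. A schematic Python sketch follows; the tolerance and data layout are assumptions for illustration, not the code's actual mechanism:

        import math

        def sequential_diff(prev: dict, curr: dict, rel_tol: float = 1e-12) -> list:
            # Flag every quantity whose value drifted between consecutive code
            # versions, or that disappeared from the newer version's output.
            drifted = []
            for name, old in prev.items():
                new = curr.get(name)
                if new is None or not math.isclose(old, new, rel_tol=rel_tol, abs_tol=0.0):
                    drifted.append(name)
            return drifted

        # An empty result is the evidence that no unintended changes were introduced.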

  20. Framework for Evaluating Loop Invariant Detection Games in Relation to Automated Dynamic Invariant Detectors

    DTIC Science & Technology

    2015-09-01

    ...checks each of these assertions for detectability by Daikon. The checker is an Excel Visual Basic for Applications (VBA) script that checks the

  1. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  2. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  3. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  4. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  5. 26 CFR 301.6867-1 - Presumptions where owner of large amount of cash is not identified.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Revenue Code, relating to abatements, credits, and refunds, and may not institute a suit for refund in... 6532(c), relating to the 9-month statute of limitations for suits under section 7426. In addition, the...) Postage stamps; (F) Traveler's checks in any form; (G) Negotiable instruments (including personal checks...

  6. 26 CFR 301.6867-1 - Presumptions where owner of large amount of cash is not identified.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Revenue Code, relating to abatements, credits, and refunds, and may not institute a suit for refund in... 6532(c), relating to the 9-month statute of limitations for suits under section 7426. In addition, the...) Postage stamps; (F) Traveler's checks in any form; (G) Negotiable instruments (including personal checks...

  7. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques...model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite

  8. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
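    The blackbox idea, enumerating every input a grammar admits up to a size bound, can be illustrated without JPF at all. The toy grammar in this Python sketch is invented for the illustration and is not the SCL grammar:

        GRAMMAR = {
            "SCRIPT": [["CMD"], ["CMD", ";", "SCRIPT"]],
            "CMD": [["set", "VAR", "VAL"], ["get", "VAR"]],
            "VAR": [["mode"], ["rate"]],
            "VAL": [["0"], ["1"]],
        }

        def generate(symbols, max_tokens):
            # Exhaustively expand the leftmost nonterminal; prune any sentential
            # form that already exceeds the size bound (expansions never shrink).
            if len(symbols) > max_tokens:
                return
            i = next((k for k, s in enumerate(symbols) if s in GRAMMAR), None)
            if i is None:
                yield " ".join(symbols)
                return
            for prod in GRAMMAR[symbols[i]]:
                yield from generate(symbols[:i] + prod + symbols[i + 1:], max_tokens)

        scripts = list(generate(["SCRIPT"], 7))  # every legal script of <= 7 tokens

    Feeding every generated script to the interpreter under test, as the blackbox approach does with the SCL Executive, exercises all parts of the grammar up to the prespecified limit.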

  9. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
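    For intuition on the parity-allocation step (a textbook relation stated here for illustration, not the paper's derivation): if a description is protected by an (n, k) Reed-Solomon erasure code and symbols are erased independently with probability p, the description is lost only when more than n - k symbols are erased,

        P_{\text{loss}} = \sum_{j=n-k+1}^{n} \binom{n}{j}\, p^{j} (1-p)^{n-j}.

    Increasing the share of parity symbols (n - k) trades rate for a steep drop in this loss probability, which is the knob an unequal-error-protection allocation turns.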

  10. NOAA Weather Radio

    Science.gov Websites

    SAME does not make an audible beep. However, it will display the words "CHECK RECEPTION" until it

  11. Suite of Benchmark Tests to Conduct Mesh-Convergence Analysis of Nonlinear and Non-constant Coefficient Transport Codes

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2014-12-01

    Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. When access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections in solvers for the advection-diffusion-reaction (ADR) equation, such as nonlinear advection, diffusion or source terms, as well as non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy the continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. We then use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of the tests' complexity. The test suite includes hundreds of unit tests and system tests that check the portions of the code piece by piece. Examples for checking the suite start with a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experiences in finding several errors that were not detectable with routine verification techniques.
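    The mesh-convergence portion of such a suite usually reduces to computing an observed order of accuracy from errors on successively refined grids. A minimal Python sketch under the usual assumption that the error behaves like e ≈ C·h^p:

        import math

        def observed_order(e_coarse: float, e_fine: float, refinement: float = 2.0) -> float:
            # If e ~ C * h**p, two grids related by the given refinement factor give
            # p = log(e_coarse / e_fine) / log(refinement).
            return math.log(e_coarse / e_fine) / math.log(refinement)

        # A nominally second-order scheme should report p close to 2; for example,
        # observed_order(4.0e-3, 1.0e-3) returns 2.0.

    A measured order well below the nominal one on such a study is precisely the kind of imperfection, possibly concealed by large Peclet or Damkohler numbers, that the suite is designed to expose.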

  12. Defining the Dormant Tumor Microenvironment for Breast Cancer Prevention and Treatment

    DTIC Science & Technology

    2011-09-01

    Each group will include 12 rats. b) Breed rats in the parous group; after parturition, normalize the number of pups to eight. Ten days after... solution ultrasonication-assisted tryptic digestion (UATD) method. Parity arm completed. b) Analyzed the digested mammary ECM samples from...

  13. 76 FR 71315 - Endangered and Threatened Species; Take of Anadromous Fish

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ... the fish and would then sample them for biological information (fin tissue and scale samples). They..., measured, weighed, tissue-sampled, and checked for external marks and coded-wire tags depending on the.... Then the researchers would remove and preserve fish body tissues, otoliths, and coded wire tags (from...

  14. High spin structure and intruder configurations in 31P

    NASA Astrophysics Data System (ADS)

    Ionescu-Bujor, M.; Iordachescu, A.; Napoli, D. R.; Lenzi, S. M.; Mărginean, N.; Otsuka, T.; Utsuno, Y.; Ribas, R. V.; Axiotis, M.; Bazzacco, D.; Bizzeti-Sona, A. M.; Bizzeti, P. G.; Brandolini, F.; Bucurescu, D.; Cardona, M. A.; De Angelis, G.; De Poli, M.; Della Vedova, F.; Farnea, E.; Gadea, A.; Hojman, D.; Kalfas, C. A.; Kröll, Th.; Lunardi, S.; Martínez, T.; Mason, P.; Pavan, P.; Quintana, B.; Alvarez, C. Rossi; Ur, C. A.; Vlastou, R.; Zilio, S.

    2006-02-01

    The nucleus 31P has been studied in the 24Mg(16O,2αp) reaction with a 70-MeV 16O beam. A complex level scheme, extended up to spins 17/2+ and 15/2- for positive and negative parity, respectively, has been established. Lifetimes of the new states have been investigated by the Doppler shift attenuation method. Two shell-model calculations have been performed to describe the experimental data: one using the code ANTOINE in a valence space restricted to the sd shell, and the other applying the Monte Carlo shell model in a valence space including the sd-fp shells. The latter calculation indicates that intruder excitations, involving the promotion of a T=0 proton-neutron pair to the fp shell, play a dominant role in the structure of the positive-parity high-spin states of 31P.

  15. Telemetry Standards, RCC Standard 106-17. Chapter 8. Digital Data Bus Acquisition Formatting Standard

    DTIC Science & Technology

    2017-07-01

    Table-of-contents and glossary fragments: 8.4.1 Characteristics of a Singular Composite Output Signal; 8.5 Single Bus Track Spread Recording Format; 8.5.1 Single Bus Recording Technique Characteristics. Abbreviations: FCS - frame check sequence; HDDR - high-density digital recording; MIL-STD - Military Standard; msb - most significant bit; PCM - pulse code modulation.

  16. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose

    2018-03-01

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results of each test are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.

  18. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend their capabilities to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  19. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well, but it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
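
    The workflow described (central checkout, one-Makefile build, static analysis, documentation generation, HTML reporting) can be pictured as a small nightly driver. A minimal sketch in Python; the tool names and paths are placeholders, not the actual Wettzell infrastructure:

        import datetime, pathlib, subprocess

        STEPS = [
            ("checkout", ["svn", "update"]),                      # central version control
            ("build",    ["make"]),                               # one central Makefile
            ("analysis", ["cppcheck", "--enable=all", "src/"]),   # static code inspection
            ("docs",     ["doxygen", "Doxyfile"]),                # docs from inline comments
        ]

        def nightly(workdir="project"):
            report = ["<h1>Nightly build %s</h1>" % datetime.date.today()]
            for name, cmd in STEPS:
                result = subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)
                status = "OK" if result.returncode == 0 else "FAILED"
                report.append("<h2>%s: %s</h2><pre>%s</pre>" % (name, status, result.stdout))
            pathlib.Path("report.html").write_text("\n".join(report))

        if __name__ == "__main__":
            nightly()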

  20. Majorana fermion surface code for universal quantum computation

    DOE PAGES

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-12-10

    In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  1. The use of self checks and voting in software error detection - An empirical study

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.

    1990-01-01

    The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not previously been detected by voting among 28 versions of the program on a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only the final results of computations. If internal states had to be identical across versions in an N-version voting system, there would be no reason to write multiple versions.

  2. Agricultural Baseline (BL0) scenario of the 2016 Billion-Ton Report

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinkel, Chad [University of Tennessee, APAC] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Langholtz, Matthew H [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Myers, Aaron [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000320373827)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used. Date the data set was last modified: 02/12/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data (USDA NASS 2014)). Data generated are .txt output files by year, simulation identifier, and county code (1-3109). Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B) supplied by the University of Tennessee APAC. The quality assurance and quality control that have been applied:
    • Check for negative planted area, harvested area, production, yield and cost values.
    • Check if harvested area exceeds planted area for annuals.
    • Check FIPS codes.

  3. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons.

    PubMed

    Mertz, Marcel; Sofaer, Neema; Strech, Daniel

    2014-09-27

    The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of the papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed systematic review of reasons (SRR) are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?

  4. Prototype data terminal-multiplexer/demultiplexer

    NASA Technical Reports Server (NTRS)

    Leck, D. E.; Goodwin, J. E.

    1972-01-01

    The design and operation of a quad-redundant data terminal and a multiplexer/demultiplexer (MDU) are described. The most distinctive feature is the design of the quad-redundant data terminal: this is one of the few designs where the unit is fail/op, fail/op, fail/safe. Laboratory tests confirm that the unit will operate satisfactorily with the failure of three out of four channels. Although the design utilizes state-of-the-art technology, the waveform error checks, the voting techniques, and the parity bit checks are believed to be used in unique configurations. The correct-word selection routines are also novel. The MDU design, while not redundant, utilizes the latest state-of-the-art advantages of light couplers and integrated amplifiers. Much of the technology employed was an evolution of prior NASA contracts related to the Addressable Time Division Data System; a good example of the earlier technology development was the development of a low-level analog multiplexer, a high-level analog multiplexer, and a digital multiplexer. A list of all drawings is included for reference, and all schematic, block and timing diagrams are incorporated.

  5. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.

    PubMed

    Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland

    2011-04-08

    In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and of how diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. Hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and varies greatly across hospitals as a result of five main factors.

  6. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme

    PubMed Central

    2011-01-01

    Background In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and of how diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. Methods The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Results Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. Hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Conclusions Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and varies greatly across hospitals as a result of five main factors. PMID:21477310

  7. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties, a technique known as Trellis Coded Modulation (TCM) was developed to provide spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). A performance gain can be achieved by increasing the number of signal points over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep-space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
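
    The bandwidth-expansion comparison is simple rate arithmetic: a code of overall rate R expands bandwidth by 1/R - 1. A quick check with illustrative code rates (the specific rates are not taken from the report):

        def bandwidth_expansion(rate):
            """Fractional bandwidth expansion when all redundancy
            comes from a code of the given rate."""
            return 1.0 / rate - 1.0

        # TCM absorbs its redundancy in the enlarged signal set, so only the
        # outer RS code costs bandwidth, e.g. RS(255, 223):
        print("TCM + RS(255,223): %.0f%%" % (100 * bandwidth_expansion(223 / 255)))
        # A rate-1/2 convolutional inner code alone already doubles bandwidth:
        print("rate-1/2 convolutional: %.0f%%" % (100 * bandwidth_expansion(0.5)))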

  8. DataPlus™ - a revolutionary applications generator for DOS hand-held computers

    Treesearch

    David Dean; Linda Dean

    2000-01-01

    DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...

  9. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... codes shall be consistent with SAE J2012 “Diagnostic Trouble Code Definitions—Equivalent to ISO/DIS... sent to the scan tool over a J1850 data link shall use the Cyclic Redundancy Check and the three byte..., definitions and abbreviations shall be formatted according to SAE J1930 “Electrical/Electronic Systems...

  10. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... codes shall be consistent with SAE J2012 “Diagnostic Trouble Code Definitions—Equivalent to ISO/DIS... sent to the scan tool over a J1850 data link shall use the Cyclic Redundancy Check and the three byte..., definitions and abbreviations shall be formatted according to SAE J1930 “Electrical/Electronic Systems...

  11. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... codes shall be consistent with SAE J2012 “Diagnostic Trouble Code Definitions—Equivalent to ISO/DIS... sent to the scan tool over a J1850 data link shall use the Cyclic Redundancy Check and the three byte..., definitions and abbreviations shall be formatted according to SAE J1930 “Electrical/Electronic Systems...

  12. Effects of parity and periconceptional metabolic state of Holstein-Friesian dams on the glucose metabolism and conformation in their newborn calves.

    PubMed

    Bossaert, P; Fransen, E; Langbeen, A; Stalpaert, M; Vandenbroeck, I; Bols, P E; Leroy, J L

    2014-06-01

    The metabolic state of pregnant mammals influences the offspring's development and risk of metabolic disease in postnatal life. The metabolic state in a lactating dairy cow differs immensely from that in a non-lactating heifer around the time of conception, but consequences for their calves are poorly understood. The hypothesis of this study was that differences in metabolic state between non-lactating heifers and lactating cows during early pregnancy would affect insulin-dependent glucose metabolism and development in their neonatal calves. Using a mixed linear model, concentrations of glucose, IGF-I and non-esterified fatty acids (NEFAs) were compared between 13 non-lactating heifers and 16 high-yielding dairy cows in repeated blood samples obtained during the 1st month after successful insemination. Calves born from these dams were weighed and measured at birth, and subjected to intravenous glucose and insulin challenges between 7 and 14 days of age. Eight estimators of insulin-dependent glucose metabolism were determined: glucose and insulin peak concentration, area under the curve and elimination rate after glucose challenge, glucose reduction rate after insulin challenge, and quantitative insulin sensitivity check index. Effects of dam parity and calf sex on the metabolic and developmental traits were analysed in a two-way ANOVA. Compared with heifers, cows displayed lower glucose and IGF-I and higher NEFA concentrations during the 1st month after conception. However, these differences did not affect developmental traits and glucose homeostasis in their calves: birth weight, withers height, heart girth, and responses to glucose and insulin challenges in the calves were unaffected by their dam's parity. In conclusion, differences in the metabolic state of heifers and cows during early gestation under field conditions could not be related to their offspring's development and glucose homeostasis.

  13. Three questions you need to ask about your brand.

    PubMed

    Keller, Kevin Lane; Sternthal, Brian; Tybout, Alice

    2002-09-01

    Traditionally, the people responsible for positioning brands have concentrated on the differences that set each brand apart from the competition. But emphasizing differences isn't enough to sustain a brand against competitors. Managers should also consider the frame of reference within which the brand works and the features the brand shares with other products. Asking three questions about your brand can help: HAVE WE ESTABLISHED A FRAME?: A frame of reference--for Coke, it might be as narrow as other colas or as broad as all thirst-quenching drinks--signals to consumers the goal they can expect to achieve by using a brand. Brand managers need to pay close attention to this issue, in some cases expanding their focus in order to preempt the competition. ARE WE LEVERAGING OUR POINTS OF PARITY?: Certain points of parity must be met if consumers are to perceive your product as a legitimate player within its frame of reference. For instance, consumers might not consider a bank truly a bank unless it offers checking and savings plans. ARE THE POINTS OF DIFFERENCE COMPELLING?: A distinguishing characteristic that consumers find both relevant and believable can become a strong, favorable, unique brand association, capable of distinguishing the brand from others in the same frame of reference. Frames of reference, points of parity, and points of difference are moving targets. Maytag isn't the only dependable brand of appliance, Tide isn't the only detergent with whitening power, and BMWs aren't the only cars on the road with superior handling. The key questions you need to ask about your brand may not change, but their context certainly will. The savviest brand positioners are also the most vigilant.

  14. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    PubMed

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.

  15. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight; all the other code words have weights that are multiples of four, with minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes; these may then be used in a correction algorithm for hard- or soft-decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code; the theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15, with even-weight words congruent to zero modulo four. Its decoding, for hard and soft decision, is still more complex than that of the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, all the rest having weights greater than or equal to 16.

  16. Certifying Domain-Specific Policies

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Pressburger, Thomas; Rosu, Grigore; Koga, Dennis (Technical Monitor)

    2001-01-01

    Proof-checking code for compliance with safety policies potentially enables a product-oriented approach to certain aspects of software certification. To date, previous research has focused on generic, low-level programming-language properties such as memory type safety. In this paper we consider proof-checking higher-level domain-specific properties for compliance with safety policies. The paper first describes a framework, related to abstract interpretation, in which compliance with a class of certification policies can be efficiently calculated. Membership equational logic is shown to provide a rich logic for carrying out such calculations, including partiality, for certification. The architecture for a domain-specific certifier is described, followed by an implemented case study. The case study considers consistency of abstract variable attributes in code that performs geometric calculations in aerospace systems.

  17. [Development of operation patient security detection system].

    PubMed

    Geng, Shu-Qin; Tao, Ren-Hai; Zhao, Chao; Wei, Qun

    2008-11-01

    This paper describes a patient security detection system developed with two-dimensional bar codes, wireless communication and removable-storage techniques. Based on the system, nurses and related personnel check the codes of patients awaiting operations in order to prevent errors. Tests show the system is effective; its objectivity and currency make it more scientific and sophisticated than the traditional method currently used in domestic hospitals.

  18. Nurse-Facilitated Health Checks for Persons With Severe Mental Illness: A Cluster-Randomized Controlled Trial.

    PubMed

    White, Jacquie; Lucas, Joanne; Swift, Louise; Barton, Garry R; Johnson, Harriet; Irvine, Lisa; Abotsie, Gabriel; Jones, Martin; Gray, Richard J

    2018-05-01

    This study tested the effectiveness of a nurse-delivered health check with the Health Improvement Profile (HIP), which takes approximately 1.5 hours to complete and code, for persons with severe mental illness. A single-blind, cluster-randomized controlled trial was conducted in England to test whether health checks improved the general medical well-being of persons with severe mental illness at 12-month follow-up. Sixty nurses were randomly assigned to the HIP group or the treatment-as-usual group. From their case lists, 173 patients agreed to participate. HIP group nurses completed health checks for 38 of their 90 patients (42%) at baseline and 22 (24%) at follow-up. No significant between-group differences were noted in patients' general medical well-being at follow-up. Nurses who had volunteered for a clinical trial administered health checks only to a minority of participating patients, suggesting that it may not be feasible to undertake such lengthy structured health checks in routine practice.

  19. Risk of labor dystocia increases with maternal age irrespective of parity: a population-based register study.

    PubMed

    Waldenström, Ulla; Ekéus, Cecilia

    2017-09-01

    Advanced maternal age is associated with labor dystocia (LD) in nulliparous women. This study investigates the age-related risk of LD in first, second and third births. All live singleton cephalic births at term (≥ 37 gestational weeks) recorded in the Swedish Medical Birth Register from 1999 to 2011, excluding elective cesarean sections and fourth or higher-order births, in total 998 675 pregnancies, were included in the study. LD was defined by International Classification of Diseases, version 10 codes (O620, O621, O622, O629, O630, O631 and O639). In each parity group, risks of LD at ages 25-29 years, 30-34 years, 35-39 years and ≥ 40 years compared with age < 25 years were investigated by logistic regression analyses. Analyses were adjusted for year of delivery, education, country/region of birth, smoking in early pregnancy, maternal height, body mass index, week of gestation, fetal presentation and infant birthweight. Rates of LD were 22.5%, 6.1% and 4% in first, second and third births, respectively. Adjusted odds ratios (OR) for LD increased progressively from the youngest to the oldest age group, irrespective of parity. At age 35-39 years the adjusted OR (95% CI) was approximately doubled compared with age younger than 25 years: 2.13 (2.06-2.20) in first births; 2.05 (1.91-2.19) in second births; and 1.81 (1.49-2.21) in third births. Maternal age is an independent risk factor for LD in first, second and third births. Although age-related risks by parity are relatively similar, more nulliparous than parous women will be exposed to LD due to the higher rate. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  20. The four-loop six-gluon NMHV ratio function

    DOE PAGES

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N=4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q¯ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. As a result, we also provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  2. Free-space quantum key distribution at night

    NASA Astrophysics Data System (ADS)

    Buttler, William T.; Hughes, Richard J.; Kwiat, Paul G.; Lamoreaux, Steve K.; Luther, Gabriel G.; Morgan, George L.; Nordholt, Jane E.; Peterson, C. Glen; Simmons, Charles M.

    1998-07-01

    An experimental free-space quantum key distribution (QKD) system has been tested over an outdoor optical path of approximately 1 km under nighttime conditions at Los Alamos National Laboratory. This system employs the Bennett 92 protocol; here we give a brief overview of this protocol, and describe our experimental implementation of it. An analysis of the system efficiency is presented as well as a description of our error detection protocol, which employs a 2D parity check scheme. Finally, the susceptibility of this system to eavesdropping by various techniques is determined, and the effectiveness of privacy amplification procedures is discussed. Our conclusions are that free-space QKD is both effective and secure; possible applications include the rekeying of satellites in low earth orbit.
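
    The two-dimensional parity check used for error detection has a compact illustration: arrange the sifted key bits in a matrix and exchange row and column parities; a single flipped bit is then localized at the intersection of the failing row and column. A toy sketch (not the Los Alamos implementation):

        import random

        def parities(bits, rows, cols):
            """Row and column parities of a row-major rows x cols bit grid."""
            row_par = [sum(bits[r * cols:(r + 1) * cols]) % 2 for r in range(rows)]
            col_par = [sum(bits[c::cols]) % 2 for c in range(cols)]
            return row_par, col_par

        rows, cols = 4, 8
        alice = [random.randint(0, 1) for _ in range(rows * cols)]
        bob = alice.copy()
        bob[13] ^= 1                                  # one bit corrupted in transit

        (ra, ca), (rb, cb) = parities(alice, rows, cols), parities(bob, rows, cols)
        bad_rows = [r for r in range(rows) if ra[r] != rb[r]]
        bad_cols = [c for c in range(cols) if ca[c] != cb[c]]
        if bad_rows and bad_cols:
            print("single-bit error at index", bad_rows[0] * cols + bad_cols[0])  # 13

    In the QKD setting the exchanged parities themselves leak information, which is why error detection must be followed by the privacy amplification procedures the authors discuss.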

  3. A novel comparator featured with input data characteristic

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaobo; Ye, Desheng; Xu, Xiangmin; Zheng, Shuai

    2016-03-01

    Two types of low-power asynchronous comparators that exploit the statistical characteristics of the input data are proposed in this article. The asynchronous ripple comparator stops comparing at the first unequal bit but must still propagate the result to the least significant bit. The pre-stop asynchronous comparator can stop comparing completely and obtain the result immediately. The proposed and contrastive comparators were implemented in an SMIC 0.18 μm process with different bit widths. Simulation shows that the proposed pre-stop asynchronous comparator features the lowest power consumption, shortest average propagation delay and highest area efficiency among the comparators. Data paths of a low-density parity-check decoder using the proposed pre-stop asynchronous comparators are the most power-efficient compared with other data paths using synthesised, clock-gating and bitwise-competition-logic comparators.
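
    Behaviorally, the key idea is that an unsigned comparison is decided at the first (most significant) unequal bit. A software model of the early-termination behavior (purely illustrative; the paper's contribution is the asynchronous circuit realization, not the algorithm):

        def prestop_compare(a_bits, b_bits):
            """Compare equal-width unsigned values given MSB-first bit lists.
            Scanning stops at the first unequal bit, so on random inputs the
            expected number of bit comparisons is small (about 2) regardless
            of the total bit width."""
            for bit_a, bit_b in zip(a_bits, b_bits):
                if bit_a != bit_b:
                    return -1 if bit_a < bit_b else 1   # decided: stop immediately
            return 0                                     # all bits equal

        print(prestop_compare([1, 0, 1, 1], [1, 0, 0, 1]))  # 1: decided at the third bit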

  4. Long-range multi-carrier acoustic communications in shallow water based on iterative sparse channel estimation.

    PubMed

    Kang, Taehyuk; Song, H C; Hodgkiss, W S; Soo Kim, Jea

    2010-12-01

    Long-range orthogonal frequency division multiplexing (OFDM) acoustic communications is demonstrated using data from the Kauai Acomms MURI 2008 (KAM08) experiment, carried out in about 106 m deep shallow water west of Kauai, HI, in June 2008. The source bandwidth was 8 kHz (12-20 kHz), and the data were received by a 16-element vertical array at a distance of 8 km. Iterative sparse channel estimation is applied in conjunction with low-density parity-check decoding. In addition, the impact of diversity combining in a highly inhomogeneous underwater environment is investigated. Error-free transmission using 16-quadrature amplitude modulation (16-QAM) is achieved at a data rate of 10 kb/s.

  5. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
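
    The first encoding stage mentioned, the usual linear feedback register, is equivalent to polynomial long division of the up-shifted message by the generator. A generic sketch for a binary cyclic code (shown here for the (7,4) code with generator x^3 + x + 1; the nonbinary table-lookup stage of the paper is omitted):

        def cyclic_encode(data, gen, n):
            """Systematic cyclic encoding over GF(2): append the remainder of
            data(x) * x^(n-k) divided by the generator polynomial gen(x).
            Bit lists are most-significant-coefficient first."""
            k = len(data)
            reg = data + [0] * (n - k)          # message shifted up by n - k
            for i in range(k):                  # long division over GF(2)
                if reg[i]:
                    for j, g in enumerate(gen):
                        reg[i + j] ^= g
            return data + reg[k:]               # message followed by check bits

        # (7,4) cyclic code, generator x^3 + x + 1 -> [1, 0, 1, 1]:
        print(cyclic_encode([1, 0, 0, 1], gen=[1, 0, 1, 1], n=7))  # [1, 0, 0, 1, 1, 1, 0]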

  6. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  7. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To remain at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
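
    The "verification file" mechanism (record double-precision sums of key variables, then compare them across code versions) can be sketched in a few lines. The field names and tolerance policy below are illustrative, not RELAP5-3D's:

        import json, math

        def write_verification_file(path, variables):
            """Record double-precision sums of key solution arrays."""
            with open(path, "w") as f:
                json.dump({name: math.fsum(v) for name, v in variables.items()}, f)

        def regression_check(path, variables, rtol=0.0):
            """Compare current sums against the stored reference;
            rtol = 0 demands exact agreement of the sums."""
            with open(path) as f:
                reference = json.load(f)
            for name, values in variables.items():
                s = math.fsum(values)
                if abs(s - reference[name]) > rtol * max(1.0, abs(reference[name])):
                    print("MISMATCH in %s: %r vs %r" % (name, s, reference[name]))

        fields = {"pressure": [1.0e5, 1.02e5], "void_fraction": [0.0, 0.13]}
        write_verification_file("base.json", fields)
        regression_check("base.json", fields)    # silent: nothing changed between runs

    Because a sum of doubles is sensitive to nearly any perturbation of the solution, a restart or backup run that diverges from the base case is caught immediately.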

  8. Software Security Knowledge: CWE. Knowing What Could Make Software Vulnerable to Attack

    DTIC Science & Technology

    2011-05-01

    ...Buffer
    • CWE-642: External Control of Critical State Data
    • CWE-73: External Control of File Name or Path
    • CWE-426: Untrusted Search Path
    • CWE-94: Failure to Control Generation of Code (aka 'Code Injection')
    • CWE-494: Download of Code Without Integrity Check
    • CWE-404: Improper Resource...

  9. Influence of primary fragment excitation energy and spin distributions on fission observables

    NASA Astrophysics Data System (ADS)

    Litaize, Olivier; Thulliez, Loïc; Serot, Olivier; Chebboubi, Abdelaziz; Tamagno, Pierre

    2018-03-01

    Fission observables in the case of 252Cf(sf) are investigated by exploring several models involved in the sharing of excitation energy between primary fission fragments and in their spin-parity assignment. In a first step, the parameters used in the FIFRELIN Monte Carlo code "reference route" are presented: two parameters for the mass-dependent temperature ratio law and two constant spin cut-off parameters for the light and heavy fragment groups, respectively. These parameters determine the initial fragment entry zone in excitation energy and spin-parity (E*, Jπ). They are chosen to reproduce the light and heavy average prompt neutron multiplicities; once these target observables are achieved, all other fission observables can be predicted. We show here the influence of the input parameters on the saw-tooth curve, and we discuss the influence of a mass- and energy-dependent spin cut-off model on gamma-ray-related fission observables. The part of the model involving level densities, neutron transmission coefficients and photon strength functions remains unchanged.
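
    The spin cut-off parameter enters through the standard spin distribution P(J) proportional to (2J+1) exp[-(J+1/2)^2 / (2 sigma^2)]. A hedged sketch of sampling fragment spins with a constant cut-off; this is the textbook form, not necessarily FIFRELIN's exact implementation:

        import math, random

        def spin_probs(sigma, j_max=30):
            """Discrete spin distribution P(J) ~ (2J+1) exp(-(J+1/2)^2 / (2 sigma^2))."""
            w = [(2 * j + 1) * math.exp(-(j + 0.5) ** 2 / (2.0 * sigma ** 2))
                 for j in range(j_max + 1)]
            total = sum(w)
            return [x / total for x in w]

        def sample_spin(sigma):
            p = spin_probs(sigma)
            return random.choices(range(len(p)), weights=p, k=1)[0]

        # A larger cut-off (hypothetical value) shifts the sampled spins upward:
        print([sample_spin(sigma=7.0) for _ in range(10)])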

  10. Efficient eigenvalue determination for arbitrary Pauli products based on generalized spin-spin interactions

    NASA Astrophysics Data System (ADS)

    Leibfried, D.; Wineland, D. J.

    2018-03-01

    Effective spin-spin interactions between N qubits enable the determination of the eigenvalue of an arbitrary Pauli product of dimension N with a constant, small number of multi-qubit gates that is independent of N, encoding the eigenvalue in the measurement basis states of an extra ancilla qubit. Such interactions are available whenever qubits can be coupled to a shared harmonic oscillator, a situation that can be realized in many physical qubit implementations; for example, suitable interactions have already been realized for up to 14 qubits in ion traps. It should be possible to implement stabilizer codes for quantum error correction with a constant number of multi-qubit gates, in contrast to typical constructions requiring a number of two-qubit gates that increases as a function of N. The special case of finding the parity of N qubits requires only a small number of operations that is independent of N. This compares favorably to algorithms for computing the parity on conventional machines, which implies a genuine quantum advantage.
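
    The quantity being measured, the eigenvalue of an N-fold Pauli product, is easy to make concrete numerically: a computational basis state is an eigenstate of Z x Z x ... x Z with eigenvalue (-1) raised to the bit parity. A small check with numpy (this illustrates only the eigenvalue itself, not the constant-gate-count measurement protocol):

        import numpy as np
        from functools import reduce

        Z = np.diag([1.0, -1.0])

        def pauli_z_product(n):
            """N-fold tensor (Kronecker) product Z x Z x ... x Z."""
            return reduce(np.kron, [Z] * n)

        bits = [1, 0, 1, 1]                                  # basis state |1011>
        state = reduce(np.kron, [np.eye(2)[b] for b in bits])
        eigenvalue = state @ pauli_z_product(len(bits)) @ state
        print(eigenvalue, (-1) ** sum(bits))                 # both -1: odd parity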

  11. 77 FR 33635 - Amendment to the Bank Secrecy Act Regulations-Requirement That Clerks of Court Report Certain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-07

    ... BSA and associated regulations.\\3\\ FinCEN is authorized to impose anti-money laundering (``AML... Code); (iii) Money laundering (as defined in section 1956 or 1957 of title 18 of the United States Code... money in the country in which issued; and (ii) A cashier's check (by whatever name called, including...

  12. Applying Jlint to Space Exploration Software

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus

    2004-01-01

    Java is a very successful programming language which is also becoming widespread in embedded systems, where software correctness is critical. Jlint is a simple but highly efficient static analyzer that checks a Java program for several common errors, such as null pointer exceptions and overflow errors. It also includes checks for multi-threading problems, such as deadlocks and data races. The case study described here shows the effectiveness of Jlint in finding such errors; an analysis of the false positives among the multi-threading warnings gives an insight into design patterns commonly used in multi-threaded code. The results show that a few analysis techniques are sufficient to avoid almost all false positives. These techniques include investigating all possible callers and a few code idioms. Verifying the correct application of these patterns is still crucial, because their correct usage is not trivial.

  13. Combined methods of tolerance increasing for embedded SRAM

    NASA Astrophysics Data System (ADS)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The possibilities of combined use of different methods for increasing the fault tolerance of SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28 nm manufacturing process. An evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant systems-on-chip.
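
    The error detection and correction component of such schemes can be illustrated with the classic Hamming(7,4) single-error-correcting code, the simplest relative of the SEC-DED codes used in embedded SRAM. A toy sketch, not the configuration evaluated in the paper:

        import numpy as np

        # Hamming(7,4) generator and parity-check matrices over GF(2).
        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        def encode(data4):
            return (np.array(data4) @ G) % 2

        def correct(word7):
            syndrome = (H @ word7) % 2
            if syndrome.any():                   # nonzero syndrome: find the flipped bit
                col = [tuple(H[:, i]) for i in range(7)].index(tuple(syndrome))
                word7 = word7.copy()
                word7[col] ^= 1
            return word7

        word = encode([1, 0, 1, 1])
        word[2] ^= 1                             # inject a single bit error
        print(correct(word))                     # the original codeword is restored

    Adding one overall parity bit turns this into a SEC-DED code that also detects double errors, which is the usual choice when the area budget allows it.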

  14. TU-AB-BRC-12: Optimized Parallel MonteCarlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high-performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compiling configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8-10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.

  15. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons

    PubMed Central

    2014-01-01

    Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of the papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, of which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct; for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed systematic review of reasons (SRR) are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532

  16. Data Quality Control and Maintenance for the Qweak Experiment

    NASA Astrophysics Data System (ADS)

    Heiner, Nicholas; Spayde, Damon

    2014-03-01

    The Qweak collaboration seeks to quantify the weak charge of the proton through analysis of the parity-violating electron asymmetry in elastic electron-proton scattering. The asymmetry is calculated by measuring how many electrons deflect from a hydrogen target at the chosen scattering angle for aligned and anti-aligned electron spins, then evaluating the difference between the numbers of deflections for the two polarization states. The weak charge can then be extracted from these data. Knowing the weak charge will allow us to calculate the electroweak mixing angle for the particular Q2 value of the chosen electrons, for which the Standard Model makes a firm prediction. Any significant deviation from this prediction would be a prime indicator of the existence of physics beyond what the Standard Model describes. After the experiment was conducted at Jefferson Lab, the collected data were stored in a MySQL database for further analysis. I will present an overview of the database and its functions as well as a demonstration of the quality checks and maintenance performed on the data itself. These checks include an analysis of errors occurring throughout the experiment, specifically data acquisition errors within the main detector array, and an analysis of data cuts.

  17. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.

  18. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    NASA Astrophysics Data System (ADS)

    Liebovitch, Larry

    1998-03-01

    The longest-term correlations in living systems are in the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A, T, G, C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes a wrong base is inserted; it sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, which slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols, and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error-checking codes, where some digits are functions of other digits, to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error-checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises an interesting mathematical problem: how does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: what is the function that generates the shortest algorithm for reproducing the symbol sequence? The error-checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test whether some bases are linear combinations of other bases. We used this method to analyze the base sequences in the genes from the lac operon and cytochrome C. We did not find evidence for such error-correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error-correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
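    The question raised above, whether some bases are functions of other bases, can be made concrete in a few lines of code. The sketch below is illustrative only: where the authors used Gaussian elimination modified for modulus 4, it brute-forces the coefficients of a mod-4 affine relation, which is feasible only for short windows, and the toy sequences are constructed for the demonstration rather than taken from genomic data.

        # Test whether the base at one position is a (mod-4) linear function of
        # the bases at the other positions, with bases coded as 0..3 as in the
        # abstract. Brute force replaces the paper's mod-4 Gaussian elimination.
        from itertools import product

        BASE = {"A": 0, "T": 1, "G": 2, "C": 3}

        def find_linear_relation(windows, target_col):
            """windows: equal-length base strings; look for coefficients with
            base[target_col] == c0 + sum_j c_j * base[j] (mod 4) in every window."""
            k = len(windows[0])
            cols = [j for j in range(k) if j != target_col]
            rows = [[BASE[b] for b in w] for w in windows]
            for c0, *cs in product(range(4), repeat=len(cols) + 1):
                if all((c0 + sum(c * r[j] for c, j in zip(cs, cols))) % 4
                       == r[target_col] for r in rows):
                    return c0, dict(zip(cols, cs))
            return None  # no affine mod-4 relation fits all windows

        # Toy data in which position 2 equals (position 0 + position 1) mod 4:
        windows = ["ATT", "TGC", "GGA", "CCG"]
        print(find_linear_relation(windows, 2))  # -> (0, {0: 1, 1: 1})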

  19. Toroidal modeling of the n = 1 intrinsic error field correction experiments in EAST

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Liu, Yueqiang; Sun, Youwen; Wang, Huihui; Gu, Shuai; Jia, Manni; Li, Li; Liu, Yue; Wang, Zhirui; Zhou, Lina

    2018-05-01

    The m/n = 2/1 resonant vacuum error field (EF) in the EAST tokamak experiments, inferred from a compass scan of the coil current amplitude and phase for mode locking, was found to depend on the parity between the upper and lower rows of the EF correction (EFC) coils (Wang et al 2016 Nucl. Fusion 56 066011). Here m and n are the poloidal and toroidal harmonic numbers in a torus, respectively. This experimental observation implies that the compass scan results cannot be simply interpreted as reflecting the true intrinsic EF. This work aims at understanding this puzzle, based on toroidal modeling of the EFC plasma discharge in EAST using the MARS-F code (Liu et al 2000 Phys. Plasmas 7 3681). By varying the amplitude and phase of the assumed n = 1 intrinsic vacuum EF with different poloidal spectra, and by computing the plasma response to the assumed EF, the compass-scan-predicted 2/1 EF, based on minimizing the computed resonant electromagnetic torque, can be made to match well with that of the EFC experiments using both even- and odd-parity coils. Moreover, the compass-scan-predicted vacuum EFs are found to differ significantly from the true intrinsic EF used as input to the MARS-F code. While the puzzling result remains to be fully resolved, the results from this study offer an improved understanding of the EFC experiments and of the compass scan technique for determining the intrinsic resonant EF.

  20. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
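    The coding result summarized above can be illustrated with a minimal sketch: a single XOR parity block is enough to rebuild any one data block, provided the failed disk identifies itself. The block contents below are arbitrary placeholders.

        # Minimal XOR-parity sketch of the single-failure correction described
        # above (the RAID 4/5 idea). Block contents are placeholders.
        from functools import reduce

        def xor_blocks(blocks):
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        data = [b"disk0---", b"disk1---", b"disk2---"]  # equal-sized data blocks
        parity = xor_blocks(data)                       # stored on the parity disk

        # Disk 1 fails; because the failure is self-identifying, the array knows
        # which block to rebuild: XOR of all surviving blocks plus parity.
        recovered = xor_blocks([data[0], data[2], parity])
        assert recovered == data[1]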

  1. FSCATT: Angular Dependence and Filter Options.

    DTIC Science & Technology

    The input routines to the code have been completely rewritten to allow for a free-form input format. The input routines now provide self-consistency checks and diagnostics for the user’s edification.

  2. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message within distance at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, unlike other codes, which always need to rewrite y completely as x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.

  3. Lake water quality mapping from Landsat

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.

    1977-01-01

    In the project described, remote sensing was used to check the quality of lake waters. The lakes of three Landsat scenes were mapped with the Bendix MDAS multispectral analysis system. From the MDAS color-coded maps, the lake with the worst algae problem was easily located. The lake was closely checked, and the presence of 100 cows in the springs which fed the lake could be identified as the pollution source. The laboratory and field work involved in the lake classification project is described.

  4. Specification Improvement Through Analysis of Proof Structure (SITAPS): High Assurance Software Development

    DTIC Science & Technology

    2016-02-01

    from the tools being used. For example, while Coq proves properties, it does not dump an explanation of the proofs in any currently supported form. ... Hotel room locks and card keys use a simple protocol to manage the transition of rooms from one guest to the next. The lock ... retains that guest key's code. A new guest checks in and gets a card with a new current code, and the previous code set to the previous guest's current

  5. System description: IVY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCune, W.; Shumsky, O.

    2000-02-04

    IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.

  6. Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.

    PubMed

    Henry, R; Tiselj, I; Snoj, L

    2015-03-01

    A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP code model. The same modelling assumptions were used in order to check the differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations.

  7. Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John

    2014-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.

  8. An analytical benchmark and a Mathematica program for MD codes: Testing LAMMPS on the 2nd generation Brenner potential

    NASA Astrophysics Data System (ADS)

    Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.

    2016-10-01

    An analytical benchmark and a simple, consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential, showed that this code in its current implementation produces results that are offset from those of the benchmark by a significant amount, and provided evidence of the reason.

  9. Gender bias in scholarly peer review.

    PubMed

    Helmer, Markus; Schottdorf, Manuel; Neef, Andreas; Battaglia, Demian

    2017-03-21

    Peer review is the cornerstone of scholarly publishing and it is essential that peer reviewers are appointed on the basis of their expertise alone. However, it is difficult to check for any bias in the peer-review process because the identity of peer reviewers generally remains confidential. Here, using public information about the identities of 9000 editors and 43000 reviewers from the Frontiers series of journals, we show that women are underrepresented in the peer-review process, that editors of both genders operate with substantial same-gender preference (homophily), and that the mechanisms of this homophily are gender-dependent. We also show that homophily will persist even if numerical parity between genders is reached, highlighting the need for increased efforts to combat subtler forms of gender bias in scholarly publishing.

  10. C-P-T anomaly matching in bosonic quantum field theory and spin chains

    NASA Astrophysics Data System (ADS)

    Sulejmanpasic, Tin; Tanizaki, Yuya

    2018-04-01

    We consider the O(3) nonlinear sigma model with the θ term and its linear counterpart in 1+1D. The model has discrete time-reflection and space-reflection symmetries at any θ, and enjoys periodicity in θ → θ + 2π. At θ = 0, π it also has a charge-conjugation C symmetry. Gauging the discrete space-time reflection symmetries is interpreted as putting the theory on the nonorientable RP^2 manifold, after which the 2π periodicity of θ and the C symmetry at θ = π are lost. We interpret this observation as a mixed 't Hooft anomaly among charge-conjugation C, parity P, and time-reversal T symmetries when θ = π. Anomaly matching implies that in this case the ground state cannot be trivially gapped, as long as C, P, and T are all good symmetries of the theory. We make several consistency checks with various semiclassical regimes, and with the exactly solvable XYZ model. We interpret this anomaly as an anomaly of the corresponding spin-half chains with translational symmetry, parity, and time reversal [but not involving the SO(3)-spin symmetry], requiring that the ground state is never trivially gapped, even if the SO(3) spin symmetry is explicitly and completely broken. We also consider generalizations to CP^{N-1} models and show that the C-P-T anomaly exists for even N.

  11. The Problem of Modeling the Elastomechanics in Engineering

    DTIC Science & Technology

    1990-02-01

    element method by the codes PROBE (MacNeal-Schwendler/Noetic) and STRIPE (Aeronautical Institute of Sweden). These codes have various error checks so that ... Mindlin solutions converge to the Kirchhoff solution as d → 0, see e.g. [12], [19]. For a detailed study of the asymptotic behavior of Reissner ...

  12. Automated Diversity in Computer Systems

    DTIC Science & Technology

    2005-09-01

    traces that started with trace heads, namely backwards-taken branches. These branches are indicative of loops within the program, and Dynamo assumes that ... would be the ones the program would normally take. Therefore when a trace head became hot (was visited enough times), only a single code trace would ... all encountered trace heads. When an interesting instruction is being emulated, the tracing code checks to see if it has been encountered before

  13. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can thereby be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, by neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are in construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, or as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, J; Pompos, A; Jiang, S

    Purpose: To put forth an innovative clinical paradigm for weekly chart checking so that treatment status is checked periodically, accurately, and efficiently. This study also aims to help optimize the chart-checking clinical workflow in a busy radiation therapy clinic. Methods: The Texas Administrative Code mandates that radiation therapy patient charts be checked once a week or every five fractions; however, when and how this is done varies drastically among institutions. Some check every day, but much effort is wasted on opening ineligible charts; some check on a fixed day, but the distribution of intervals between subsequent checks is then not optimal. To establish an optimal chart-checking procedure, a new paradigm was developed to achieve the following: 1) charts are checked more accurately and more efficiently; 2) charts are checked on optimal days without any miss; 3) workload is evened out throughout the week when multiple physicists are involved. All active charts are accessed by querying the R&V system. A priority is assigned to each chart based on the number of days before the next due date, followed by sorting and workload distribution steps. New charts are also taken into account when distributing the workload so it is reasonably even throughout the week. Results: Our clinical workflow became more streamlined and smooth. In addition, charts are checked in a more timely fashion, so that errors would be caught earlier should they occur. Conclusion: We developed a new weekly chart-checking paradigm. It helps physicists check charts in a timely manner, saves their time in busy clinics, and consequently reduces possible errors.

  15. Reduced circuit implementation of encoder and syndrome generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trager, Barry M; Winograd, Shmuel

    An error correction method and system includes an encoder and syndrome generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error-correcting codes. The system and method compute the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example GF(2^8) code which has 72 symbols, three blocks of addresses which differ by a cube root of unity, allowing the data symbols to be combined to reduce the size and complexity of the odd-syndrome circuits. Further, the implementation circuit for generating check symbols is derived from the syndrome circuit using the inverse of the part of the syndrome matrix for the check locations.
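    The patent's claim that even syndromes can be computed from odd ones when contributions are accumulated bit by bit follows from the Frobenius identity in characteristic 2: for a polynomial c(x) with 0/1 coefficients, c(alpha)^2 = c(alpha^2), so S_2 = S_1^2. The sketch below checks this numerically; the primitive polynomial 0x11D is one common GF(2^8) choice assumed here for illustration, not taken from the patent.

        # Numerical check of S_2 == S_1^2 over GF(2^8) for bit-valued (0/1)
        # contributions. The field polynomial 0x11D is an assumption; alpha = x
        # (i.e. 2) is primitive for it.
        import random

        def gf_mul(a, b, poly=0x11D):
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a & 0x100:
                    a ^= poly
                b >>= 1
            return r

        def gf_pow(a, n):
            r = 1
            while n:
                if n & 1:
                    r = gf_mul(r, a)
                a = gf_mul(a, a)
                n >>= 1
            return r

        alpha = 2
        bits = [random.randint(0, 1) for _ in range(72)]  # bit-serial contributions

        S1 = S2 = 0
        for i, c in enumerate(bits):
            if c:
                S1 ^= gf_pow(alpha, i)       # odd syndrome  S_1 = c(alpha)
                S2 ^= gf_pow(alpha, 2 * i)   # even syndrome S_2 = c(alpha^2)
        assert S2 == gf_mul(S1, S1)          # even syndrome is the square of the odd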

  16. Utilization of maternal healthcare among adolescent mothers in urban India: evidence from DLHS-3.

    PubMed

    Singh, Aditya; Kumar, Abhishek; Pranjali, Pragya

    2014-01-01

    Background. Low use of maternal healthcare services is one of the reasons why maternal mortality is still considerably high among adolescent mothers in India. To increase the utilization of these services, it is necessary to identify factors that affect service utilization. To our knowledge, no national-level study in India has yet examined the issue in the context of urban adolescent mothers. The present study is an attempt to fill this gap. Data and Methods. Using information from the third wave of the District Level Household Survey (2007-08), we have examined factors associated with the utilization of maternal healthcare services among urban Indian married adolescent women (aged 13-19 years) who had live/still births during the three years preceding the survey. The three outcome variables included in the analyses are 'full antenatal care (ANC)', 'safe delivery', and 'postnatal care within 42 days of delivery'. We have used the Chi-square test to determine differences in proportions and binary logistic regression to understand the net effect of predictor variables on the utilization of maternity care. Results. About 22.9% of mothers received full ANC, and 65.1% of mothers had at least one postnatal check-up within 42 days of delivery. The proportion of mothers having a safe delivery, i.e., one assisted by skilled personnel, is about 70.5%. Findings indicate that there is a considerable amount of variation in the use of maternity care by educational attainment, household wealth, religion, parity, and region of residence. Receiving full antenatal care is significantly associated with mother's education, religion, caste, household wealth, parity, exposure to healthcare messages, and region of residence. Mother's education, full antenatal care, parity, household wealth, religion, and region of residence are also statistically significant in the case of safe delivery. The use of postnatal care is associated with household wealth, woman's education, full antenatal care, safe delivery care, and region of residence. Conclusion. Several socioeconomic and demographic factors affect the utilization of maternal healthcare services among urban adolescent women in India. Promoting the use of family planning, female education, and higher age at marriage; targeting vulnerable groups such as poor, illiterate, high-parity women; involving media and grassroots-level workers; and fostering collaboration between community leaders and the healthcare system could be important policy-level interventions to address the unmet need for maternity services among urban adolescents.

  17. Joint Carrier-Phase Synchronization and LDPC Decoding

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban

    2009-01-01

    A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless, in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of the iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal (having carrier phase θ_c) plus noise would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [ω_c/(2π)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase θ_c, as well as correlated noise components. These products would be fed as inputs to a digital PLL that would include a number-controlled oscillator (NCO), which provides an estimate of the carrier phase θ_c.

  18. Design of parity generator and checker circuit using electro-optic effect of Mach-Zehnder interferometers

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh; Chanderkanta; Amphawan, Angela

    2016-04-01

    Parity is an extra bit added to digital information to detect errors at the receiver end. Parity can be even or odd: in the case of even parity, the number of ones, including the parity bit, is even, and the reverse holds for odd parity. The circuit used to generate the parity bit at the transmitter side is called the parity generator, and the circuit used to check the parity at the receiver side is called the parity checker. In this paper, even- and odd-parity generator and checker circuits are designed using the electro-optic effect inside lithium-niobate-based Mach-Zehnder interferometers (MZIs). The MZI structures collectively show a powerful capability for switching an input optical signal to a desired output port from a collection of output ports. The paper presents a mathematical description of the proposed device and a simulation using MATLAB. The study is verified using the beam propagation method (BPM).
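    As a plain digital-logic counterpart to the optical circuits described above, here is a minimal sketch of an even-parity generator and checker; the 3-bit message is arbitrary.

        # Even-parity generator (transmitter) and checker (receiver).
        def even_parity_bit(bits):
            """Parity bit chosen so the total number of 1s, parity included, is even."""
            return sum(bits) % 2

        def parity_check(word):
            """True if the received word has an even number of 1s."""
            return sum(word) % 2 == 0

        msg = [1, 0, 1]
        word = msg + [even_parity_bit(msg)]  # transmitted word: [1, 0, 1, 0]
        assert parity_check(word)            # clean channel: check passes
        word[1] ^= 1                         # a single bit flips in transit
        assert not parity_check(word)        # checker flags the error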

  19. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.

  20. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building a cadastral database. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to the cadastral change is put forward, and a coding format composed of street code, block code, father-parcel code, child-parcel code, and grandchild-parcel code is worked out within the county administrative area. The coding rules have been applied in the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies the feasibility and effectiveness of the coding rules to some extent.

  1. Within-farm variability in age structure of breeding-female pigs and reproductive performance on commercial swine breeding farms.

    PubMed

    Koketsu, Yuzo

    2005-03-15

    This study investigated relationships between herd age structure and herd productivity in breeding herds; it also investigated a pattern in parity proportions of females over 2 years and its relationship with herd productivity in commercial swine herds. This study was based on data from 148 commercial farms in North America stored in the swine database program at the University of Minnesota. The primary selection criterion was fluctuations in breeding-female pig (female) inventories over a 2-year interval. Productivity measurements and parity proportions of females were extracted from the database. A 24-month time-plot in proportions of Parity 0 and Parities 3-5 females (mid-parity) was charted for each farm. Using these charts, a change in proportions of Parity 0 and mid-parity for each farm was categorized into patterns: FLUCTUATE (Parity 0 and mid-parity proportion lines crossed) or STABLE (the two proportion lines never crossed). Higher proportions of mid-parity sows were correlated with greater pigs weaned per female per year (PWFY; P < 0.01). Farms with a FLUCTUATE (73% of the 148 farms) pattern had lower PWFY than those with a STABLE pattern (P < 0.01). The STABLE farms had higher proportions of mid-parity sows, higher parity at culling, higher frequency of gilt deliveries per year, and lower replacement rate than the FLUCTUATE farms (P < 0.01). In conclusion, maintaining stable subpopulations with mid-parity and Parity 0 are recommended to optimize herd productivity.

  2. My baby, my move: examination of perceived barriers and motivating factors related to antenatal physical activity.

    PubMed

    Leiferman, Jenn; Swibas, Tracy; Koiness, Kacey; Marshall, Julie A; Dunn, Andrea L

    2011-01-01

    Based on a socioecological model, the present study examined multilevel barriers and facilitators related to physical activity engagement during pregnancy in women of low socioeconomic status. Individual and paired interviews were conducted with 25 pregnant women (aged 18-46 years, 17-40 weeks' gestation) to ask about motivational factors and to compare differences in activity level and parity. Atlas/Ti software was used to code verbatim interview transcripts by organizing codes into categories that reflect symbolic domains of meaning, relational patterns, and overarching themes. Perceived barriers and motivating factors differed between exercisers and nonexercisers at intrapersonal, interpersonal, and environmental levels. Future interventions should take into account key motivating multilevel factors and barriers to tailor more meaningful advice for pregnant women.

  3. A new relativistic viscous hydrodynamics code and its application to the Kelvin-Helmholtz instability in high-energy heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-01

    We construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part, using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part and the Piecewise Exact Solution (PES) method is applied for the viscous part. We check the validity of our numerical calculations by comparing against analytical solutions: viscous Bjorken flow and the Israel-Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin-Helmholtz instability in high-energy heavy-ion collisions.
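    A minimal sketch of Strang splitting on a toy equation du/dt = A(u) + B(u): a half step of the first operator, a full step of the second, then another half step of the first. The linear operators here are illustrative stand-ins for the ideal and viscous parts (and they commute, so the split happens to be exact in this toy case; in the hydrodynamic setting the scheme is second-order accurate).

        # Strang splitting sketch using exact sub-flows of two toy linear operators.
        import math

        a, b = -1.0, -0.5  # stand-ins for the "ideal" and "viscous" parts

        def flow_A(u, dt):  # exact sub-flow of du/dt = a*u
            return u * math.exp(a * dt)

        def flow_B(u, dt):  # exact sub-flow of du/dt = b*u
            return u * math.exp(b * dt)

        def strang_step(u, dt):
            u = flow_A(u, dt / 2)     # half step of operator A
            u = flow_B(u, dt)         # full step of operator B
            return flow_A(u, dt / 2)  # closing half step of A

        u, dt, t_end = 1.0, 0.01, 1.0
        for _ in range(int(t_end / dt)):
            u = strang_step(u, dt)
        print(u, math.exp((a + b) * t_end))  # numerical vs. exact solution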

  4. Sierra/Aria 4.48 Verification Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal Fluid Development Team

    Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.

  5. Sensor Authentication: Embedded Processor Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svoboda, John

    2012-09-25

    Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, 000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the host command set; transfers raw data to the host; transfers the FFT result to the host; and performs communication error checking.

  6. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  7. A Discussion of Issues in Integrity Constraint Monitoring

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.

    1998-01-01

    In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check those assertions during runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through the use of some form of manually embedded assertion check. The disadvantage to this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby software system properties (called integrity constraints) from a repository are automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed. This review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
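    One way to picture the kind of mechanism the paper is after, offered as a hedged sketch rather than the authors' actual design: integrity constraints live in a central repository and are attached to functions automatically, instead of being hand-embedded as assertion checks. All names below are hypothetical.

        # Sketch: constraints kept in a repository and inserted by a decorator,
        # instead of being manually entangled with the program code.
        CONSTRAINTS = {
            "withdraw": [lambda acct, amount: amount > 0,
                         lambda acct, amount: amount <= acct["balance"]],
        }

        def monitored(fn):
            """Automatically wrap fn with the repository's integrity constraints."""
            checks = CONSTRAINTS.get(fn.__name__, [])
            def wrapper(*args, **kwargs):
                for check in checks:
                    if not check(*args, **kwargs):
                        raise AssertionError(f"constraint violated in {fn.__name__}")
                return fn(*args, **kwargs)
            return wrapper

        @monitored
        def withdraw(acct, amount):
            acct["balance"] -= amount
            return acct

        acct = {"balance": 100}
        withdraw(acct, 30)     # satisfies both constraints
        # withdraw(acct, 500)  # would raise: amount exceeds balance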

  8. Reducing software security risk through an integrated approach

    NASA Technical Reports Server (NTRS)

    Gilliam, D.; Powell, J.; Kelly, J.; Bishop, M.

    2001-01-01

    The fourth-quarter delivery (FY'01) for this RTOP is a Property-Based Testing (PBT) 'Tester's Assistant' (TA). The TA tool is to be used to check compiled and pre-compiled code for potential security weaknesses that could be exploited by hackers. The TA Instrumenter, implemented mostly in C++ (with a small part in Java), parses two types of files: Java and TASPEC. Security properties to be checked are written in TASPEC. The Instrumenter is used in conjunction with the Tester's Assistant Specification (TASPEC) execution monitor to verify the security properties of a given program.

  9. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  10. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.

  11. Concurrent hypercube system with improved message passing

    NASA Technical Reports Server (NTRS)

    Peterson, John C. (Inventor); Tuazon, Jesus O. (Inventor); Lieberman, Don (Inventor); Pniel, Moshe (Inventor)

    1989-01-01

    A network of microprocessors, or nodes, are interconnected in an n-dimensional cube having bidirectional communication links along the edges of the n-dimensional cube. Each node's processor network includes an I/O subprocessor dedicated to controlling communication of message packets along a bidirectional communication link, with each end thereof terminating at an I/O-controlled transceiver. Transmit data lines are directly connected from a local FIFO through each node's communication link transceiver. Status and control signals from the neighboring nodes are delivered over supervisory lines to inform the local node that the neighbor node's FIFO is empty and the bidirectional link between the two nodes is idle for data communication. A clocking line between neighbors clocks a message into an empty FIFO at a neighbor's node and vice versa. Either neighbor may acquire control over the bidirectional communication link at any time, and thus each node has circuitry for checking whether or not the communication link is busy or idle, and whether or not the receive FIFO is empty. Likewise, each node can empty its own FIFO and in turn deliver a status signal to a neighboring node indicating that the local FIFO is empty. The system includes features of automatic message rerouting, block message transfer, and automatic parity checking and generation.

  12. Parity & Untreated Dental Caries in US Women

    PubMed Central

    Russell, S.L.; Ickovics, J.R.; Yaffee, R.A.

    2010-01-01

    While parity (number of children) reportedly is related to tooth loss, the relationship between parity and dental caries has not been extensively investigated. We used path analysis to test a theoretical model that specified that parity influences dental caries levels through dental care, psychosocial factors, and dental-health-damaging behaviors in 2635 women selected from the NHANES III dataset. We found that while increased parity was not associated with a greater level of total caries (DFS), parity was related to untreated dental caries (DS). The mechanisms by which parity is related to caries, however, remain undefined. Further investigation is warranted to determine if disparities in dental caries among women are due to differences in parity and the likely changes that parallel these reproductive choices. PMID:20631092

  13. Parity & untreated dental caries in US women.

    PubMed

    Russell, S L; Ickovics, J R; Yaffee, R A

    2010-10-01

    While parity (number of children) reportedly is related to tooth loss, the relationship between parity and dental caries has not been extensively investigated. We used path analysis to test a theoretical model that specified that parity influences dental caries levels through dental care, psychosocial factors, and dental-health-damaging behaviors in 2635 women selected from the NHANES III dataset. We found that while increased parity was not associated with a greater level of total caries (DFS), parity was related to untreated dental caries (DS). The mechanisms by which parity is related to caries, however, remain undefined. Further investigation is warranted to determine if disparities in dental caries among women are due to differences in parity and the likely changes that parallel these reproductive choices.

  14. Separation of the 1+/1- parity doublet in 20Ne

    NASA Astrophysics Data System (ADS)

    Beller, J.; Stumpf, C.; Scheck, M.; Pietralla, N.; Deleanu, D.; Filipescu, D. M.; Glodariu, T.; Haxton, W.; Idini, A.; Kelley, J. H.; Kwan, E.; Martinez-Pinedo, G.; Raut, R.; Romig, C.; Roth, R.; Rusev, G.; Savran, D.; Tonchev, A. P.; Tornow, W.; Wagner, J.; Weller, H. R.; Zamfir, N.-V.; Zweidinger, M.

    2015-02-01

    The (J, T) = (1, 1) parity doublet in 20Ne at 11.26 MeV is a good candidate for studying parity violation in nuclei. However, its energy splitting is known with insufficient accuracy for quantitative estimates of parity-violating effects. To improve on this unsatisfactory situation, nuclear resonance fluorescence experiments using linearly and circularly polarized γ-ray beams were used to determine the energy difference of the parity doublet, ΔE = E(1-) - E(1+) = -3.2 (±0.7)stat (+0.6/-1.2)sys keV, and the ratio of their integrated cross sections, Is,0(+)/Is,0(-) = 29 (±3)stat (+14/-7)sys. Shell-model calculations predict a parity-violating matrix element having a value in the range 0.46-0.83 eV for the parity doublet. The small energy difference of the parity doublet makes 20Ne an excellent candidate to study parity violation in nuclear excitations.

  15. Gender bias in scholarly peer review

    PubMed Central

    Helmer, Markus; Schottdorf, Manuel; Neef, Andreas; Battaglia, Demian

    2017-01-01

    Peer review is the cornerstone of scholarly publishing and it is essential that peer reviewers are appointed on the basis of their expertise alone. However, it is difficult to check for any bias in the peer-review process because the identity of peer reviewers generally remains confidential. Here, using public information about the identities of 9000 editors and 43000 reviewers from the Frontiers series of journals, we show that women are underrepresented in the peer-review process, that editors of both genders operate with substantial same-gender preference (homophily), and that the mechanisms of this homophily are gender-dependent. We also show that homophily will persist even if numerical parity between genders is reached, highlighting the need for increased efforts to combat subtler forms of gender bias in scholarly publishing. DOI: http://dx.doi.org/10.7554/eLife.21718.001 PMID:28322725

  16. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Due to the prevalent use of bytes in multiples of 8 bits in data transmission, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
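    A minimal sketch of the check-bit arithmetic a CRC performs: the redundant byte is the remainder of the message, viewed as a polynomial over GF(2), divided by a generator polynomial. The 8-bit generator 0x07 (x^8 + x^2 + x + 1) used here is one common choice, not necessarily the one analyzed in the report.

        # Bitwise CRC-8 sketch: append the polynomial remainder as a check byte,
        # then verify it at the receiver. Generator 0x07 is an assumed example.
        def crc8(data: bytes, poly: int = 0x07) -> int:
            crc = 0
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
            return crc

        msg = b"parity check codes"
        codeword = msg + bytes([crc8(msg)])  # message plus 8 redundant check bits

        data, check = codeword[:-1], codeword[-1]
        assert crc8(data) == check           # clean block: check passes

        corrupted = bytes([codeword[0] ^ 0x10]) + codeword[1:]
        assert crc8(corrupted[:-1]) != corrupted[-1]  # single-bit error detected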

  17. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  18. Proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification

    NASA Technical Reports Server (NTRS)

    Ewen, Denney, W. (Editor); Jensen, Thomas (Editor)

    2009-01-01

    This NASA conference publication contains the proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification, held as part of LICS in Los Angeles, CA, USA, on August 15, 2009. Software certification demonstrates the reliability, safety, or security of software systems in such a way that it can be checked by an independent authority with minimal trust in the techniques and tools used in the certification process itself. It can build on existing validation and verification (V&V) techniques but introduces the notion of explicit software certificates, which contain all the information necessary for an independent assessment of the demonstrated properties. One such example is proof-carrying code (PCC), which is an important and distinctive approach to enhancing trust in programs. It provides a practical framework for independent assurance of program behavior, especially where source code is not available, or the code author and user are unknown to each other. The workshop will address theoretical foundations of logic-based software certification as well as practical examples and work on alternative application domains. Here "certificate" is construed broadly, to include not just mathematical derivations and proofs but also safety and assurance cases, or any formal evidence that supports the semantic analysis of programs: that is, evidence about an intrinsic property of code and its behaviour that can be independently checked by any user, intermediary, or third party. These guarantees mean that software certificates raise trust in the code itself, distinct from and complementary to any existing trust in the creator of the code, the process used to produce it, or its distributor. In addition to the contributed talks, the workshop featured two invited talks, by Kelly Hayhurst and Andrew Appel. The PCC 2009 website can be found at http://ti.arc.nasa.gov/event/pcc091.

  19. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  20. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  1. Data Race Benchmark Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Chunhua; Lin, Pei-Hung; Asplund, Joshua

    2017-03-21

    This project is a benchmark suite of OpenMP parallel codes that have been checked for data races. The programs are marked to show which do and do not have races. This allows them to be leveraged while testing and developing race detection tools.

  2. State Parity Laws and Access to Treatment for Substance Use Disorder in the United States: Implications for Federal Parity Legislation

    PubMed Central

    Wen, Hefei; Cummings, Janet R.; Hockenberry, Jason M.; Gaydos, Laura M.; Druss, Benjamin G.

    2014-01-01

    Context The passage of the 2008 Mental Health Parity and Addiction Equity Act (MHPAEA) and the 2010 Affordable Care Act (ACA) incorporated parity for substance use disorder (SUD) into federal legislation. Yet prior research provides us with scant evidence as to whether federal parity legislation will hold the potential for improving access to SUD treatment. Objective This study examined the effect of state-level SUD parity laws on state-aggregate SUD treatment rates from 2000 to 2008, to shed light on the impact of the recent federal-level SUD parity legislation. Design A quasi-experimental study design using a two-way (state and year) fixed-effect method Setting and Participants All known specialty SUD treatment facilities in the United States Interventions State-level SUD parity laws between 2000 and 2008 Main Outcome Measures State-aggregate SUD treatment rates in: (1) all specialty SUD treatment facilities, and (2) specialty SUD treatment facilities accepting private insurance Results The implementation of any SUD parity law increased the treatment rate by 9 percent (p<0.01) in all specialty SUD treatment facilities and by 15 percent (p<0.05) in facilities accepting private insurance. Full parity and parity-if-offered (i.e., parity only if SUD coverage is offered) increased SUD treatment rate by 13 percent (p<0.05) and 8 percent (p<0.05) in all facilities, and by 21 percent (p<0.05) and 10 percent (p<0.05) in those accepting private insurance. Conclusions We found a positive effect of the implementation of state SUD parity legislation on access to specialty SUD treatment. Furthermore, the positive association was more pronounced in states with more comprehensive parity laws. Our findings suggest that federal parity legislation holds the potential to improve access to SUD treatment. PMID:24154931

  3. Seeking more Opportunities of Check Dams' harmony with nearby Circumstances via Design Thinking Process

    NASA Astrophysics Data System (ADS)

    Lin, Huan-Chun; Chen, Su-Chin; Tsai, Chen-Chen

    2014-05-01

    The content of engineering design should encompass both science and art. However, the art aspect is discussed too little, which can produce works that are inharmonious with their natural surroundings, and check dams are no exception. This study seeks more opportunities for check dams to harmonize with nearby circumstances. Based on a review of the literature in philosophy and cognitive science, we suggest a three-phase thinking process for check dam design. The first phase, conceptualization, is to list critical problems, such as the characteristics of erosion or deposition, and translate them into goal situations. The second phase, transformation, is to use cognitive methods such as analogy, association, and metaphor to shape an image and prototypes. The third phase, formation, is to decide the details of the construction, such as stability and safety analysis of shapes or materials. By these criteria, Taiwan's technical codes and papers on check dam design mostly emphasize the first and third phases and largely lack the second. We emphasize that designers should not ignore any phase of the framework, especially the second one, or they may miss chances to find more suitable solutions. Moreover, this conceptual framework is simple to apply, and we believe it is a useful tool for designing check dams that are more harmonious with the nearby natural landscape. Key Words: check dams, design thinking process, conceptualization, transformation, formation.

  4. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset are collectively called Propel. Propel builds upon the Java PathFinder (JPF) tool, an explicit-state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V), an adaptation of existing best design practices that has the desired side effect of enhancing verifiability by improving modularity and decreasing accidental complexity. We believe D4V enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Throughout these tasks we are testing Propel's capabilities on customer applications.

  5. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  6. Cubic interactions of massless bosonic fields in three dimensions. II. Parity-odd and Chern-Simons vertices

    NASA Astrophysics Data System (ADS)

    Kessel, Pan; Mkrtchyan, Karapet

    2018-05-01

    This work completes the classification of the cubic vertices for arbitrary-spin massless bosons in three dimensions started in a previous companion paper by constructing parity-odd vertices. Similarly to the parity-even case, there is a unique parity-odd vertex for any given triple s1≥s2≥s3≥2 of massless bosons if the triangle inequalities are satisfied (s1

  7. A new relativistic viscous hydrodynamics code and its application to the Kelvin–Helmholtz instability in high-energy heavy-ion collisions

    DOE PAGES

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-09

    Here, we construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part, and the Piecewise Exact Solution (PES) method is applied for the viscous part. Furthermore, we check the validity of our numerical calculations by comparing against analytical solutions: viscous Bjorken flow and the Israel–Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin–Helmholtz instability in high-energy heavy-ion collisions.
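
    The half-step/full-step/half-step pattern of Strang splitting is easy to see on a toy problem. The C sketch below, which assumes nothing from the paper's code, advances du/dt = -a*u - b*u^2 by composing the exact sub-flows of its linear part (standing in for the ideal part) and nonlinear part (standing in for the viscous part); the printed error falls roughly 100-fold per tenfold refinement, the second-order signature of the scheme.

      #include <math.h>
      #include <stdio.h>

      /* Exact sub-flows of the two split pieces of du/dt = -a*u - b*u^2. */
      static double step_linear(double u, double a, double dt) { return u * exp(-a * dt); }
      static double step_nonlin(double u, double b, double dt) { return u / (1.0 + b * u * dt); }

      int main(void)
      {
          const double a = 1.0, b = 0.5, T = 1.0, u0 = 1.0;
          /* closed-form solution of the full (Bernoulli) equation */
          const double exact = 1.0 / ((1.0 / u0 + b / a) * exp(a * T) - b / a);
          for (int n = 10; n <= 1000; n *= 10) {
              double dt = T / n, u = u0;
              for (int i = 0; i < n; i++) {
                  u = step_nonlin(u, b, dt / 2);  /* half step, "viscous" part */
                  u = step_linear(u, a, dt);      /* full step, "ideal" part   */
                  u = step_nonlin(u, b, dt / 2);  /* half step, "viscous" part */
              }
              printf("n=%5d  |error| = %.3e\n", n, fabs(u - exact));
          }
          return 0;
      }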

  8. Numerical study of supersonic combustors by multi-block grids with mismatched interfaces

    NASA Technical Reports Server (NTRS)

    Moon, Young J.

    1990-01-01

    A three-dimensional, finite-rate-chemistry Navier-Stokes code was extended to a multi-block code with mismatched interfaces for practical calculations of supersonic combustors. To ensure global conservation, a conservative algorithm was used for the treatment of mismatched interfaces. The extended code was checked against a test case, a generic supersonic combustor with transverse fuel injection, examining solution accuracy, convergence, and local mass-flux error. After testing, the code was used to simulate the chemically reacting flow fields in a scramjet combustor with parallel fuel injectors (unswept and swept ramps). Computational results were compared with experimental shadowgraphs and pressure measurements. Fuel-air mixing characteristics of the unswept and swept ramps were compared and investigated.

  9. User input verification and test driven development in the NJOY21 nuclear data processing code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Conlin, Jeremy Lloyd; McCartney, Austin Paul

    Before physically meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g., cross sections and angular distributions). Perhaps the most popular and widely trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides validation checks on user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove more intuitive and easier to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document discusses the current state of input checking and testing practices in NJOY21.

  10. Evaluation of a bar-code system to detect unaccompanied baggage

    DOT National Transportation Integrated Search

    1988-02-01

    The objective of the Unaccompanied Baggage Detection System (UBDS) Project has been to gain field experience with a system designed to identify passengers who check baggage for a flight and subsequently fail to board that flight. In the first p...

  11. Simultaneous message framing and error detection

    NASA Technical Reports Server (NTRS)

    Frey, A. H., Jr.

    1968-01-01

    Circuitry simultaneously inserts message-framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.

  12. Telemetry Standards, RCC Standard 106-17, Chapter 4, Pulse Code Modulation Standards

    DTIC Science & Technology

    2017-07-01

    Only table-of-contents fragments survive from this record; the recoverable topics are PCM frame structure (Figure 4-3), cyclic redundancy check classes (Section 4.3.3), spectral and BEP comparisons for NRZ and bi-phase codes (Appendix A.3), and PCM frame structure examples (Appendix A.4).

  13. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...

  14. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...

  15. Single event upset protection circuit and method

    DOEpatents

    Wallner, John; Gorder, Michael

    2016-03-22

    An SEU protection circuit comprises first and second storage means for receiving primary and redundant versions, respectively, of an n-bit wide data value that is to be corrected in case of an SEU occurrence; the correction circuit requires that the data value be a 1-hot encoded value. A parity engine performs a parity operation on the n bits of the primary data value. A multiplexer receives the primary and redundant data values and the parity engine output at respective inputs, and is arranged to pass the primary data value to an output when the parity engine output indicates `odd` parity, and to pass the redundant data value to the output when the parity engine output indicates `even` parity. The primary and redundant data values are suitably state variables, and the parity engine is preferably an n-bit wide XOR or XNOR gate.
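
    In software terms, the selection rule reduces to a parity computation and a two-way choice. The C sketch below is a minimal analogue of the circuit described in the abstract, assuming nothing beyond it: a valid 1-hot word has odd parity (exactly one bit set), so even parity flags an upset and steers selection to the redundant copy.

      #include <stdint.h>
      #include <stdio.h>

      /* XOR-fold parity: returns 1 when x has an odd number of set bits. */
      static int odd_parity(uint32_t x)
      {
          x ^= x >> 16; x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
          return (int)(x & 1u);
      }

      /* Mux behavior from the abstract: odd parity passes the primary value,
       * even parity (an SEU on a 1-hot word) passes the redundant value.   */
      static uint32_t seu_select(uint32_t primary, uint32_t redundant)
      {
          return odd_parity(primary) ? primary : redundant;
      }

      int main(void)
      {
          uint32_t state = 1u << 5;            /* valid 1-hot state variable */
          uint32_t hit   = state ^ (1u << 9);  /* SEU flips an extra bit     */
          printf("clean: 0x%08x -> 0x%08x\n", (unsigned)state,
                 (unsigned)seu_select(state, state));
          printf("upset: 0x%08x -> 0x%08x\n", (unsigned)hit,
                 (unsigned)seu_select(hit, state));
          return 0;
      }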

  16. RAINIER: A simulation tool for distributions of excited nuclear states and cascade fluctuations

    NASA Astrophysics Data System (ADS)

    Kirsch, L. E.; Bernstein, L. A.

    2018-06-01

    A new code named RAINIER has been developed that simulates the γ-ray decay of discrete and quasi-continuum nuclear levels for a user-specified range of energy, angular momentum, and parity, including a realistic treatment of level-spacing and transition-width fluctuations. A similar program, DICEBOX, uses the Monte Carlo method to simulate level and width fluctuations but is restricted in its initial level-population algorithm. Modern reaction codes such as TALYS and EMPIRE, on the other hand, populate a wide range of states in the residual nucleus prior to γ-ray decay but do not go beyond deterministic functions and therefore neglect cascade fluctuations. RAINIER's combination of capabilities allows it to be used to determine quasi-continuum properties through comparison with experimental data. Several examples are given that demonstrate how cascade fluctuations influence experimental high-resolution γ-ray spectra from reactions that populate a wide range of initial states.

  17. Three-dimensional structural analysis using interactive graphics

    NASA Technical Reports Server (NTRS)

    Biffle, J.; Sumlin, H. A.

    1975-01-01

    The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). The handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code, PLTZ, may be used to perform all the needed output-processing tasks for a finite element code. Examples were illustrated.

  18. Dynamic Detection of Malicious Code in COTS Software

    DTIC Science & Technology

    2000-04-01

    Only fragments of the report's results tables survive. The evaluation ran documented hostile applets and ActiveX controls (among them Killer App, Exploder, Runner, ActiveX Check, and Spy) against protection tools, noting that some of the tools work only on mobile code (Java, ActiveX controls). eSafe Protect Desktop and SurfinShield Online each blocked 9 of 9 hostile applets and 13 of 17 of a further test set. Exploder is an ActiveX control that performs a clean shutdown of the computer.

  19. On the RAC. CIOs must work to make their organization's time with CMS Recovery Audit Contractors as painless as possible.

    PubMed

    Lawrence, Daphne

    2009-01-01

    CIOs should act as a team with HIM and Finance to prepare for RAC audits. CIOs can take the lead in looking at improved coding systems, and can be involved in creating policies and procedures for the hospital's RAC team. RAC is an opportunity to improve documentation, coding and data analysis. RAC appeals will become more common as states share lessons learned. Follow the money and check on claims that are frequently returned.

  20. C code generation from Petri-net-based logic controller specification

    NASA Astrophysics Data System (ADS)

    Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei

    2017-08-01

    The article focuses on the programming of logic controllers. It is important that the program code of a logic controller executes flawlessly according to the primary specification. In the presented approach, we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control-interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal transformation rules ensure that the obtained implementation is consistent with the already-verified specification. The approach is validated by practical experiments.
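
    A minimal sketch of the shape such generated C code can take, under the assumption of a made-up two-transition net (place names, inputs, and outputs are hypothetical, not taken from the paper's generator): each transition becomes a guarded rule that tests the current marking and inputs, then updates the marking and the outputs.

      #include <stdint.h>
      #include <stdio.h>

      /* One bit of "marking" per Petri-net place (hypothetical example). */
      #define P_IDLE (1u << 0)
      #define P_RUN  (1u << 1)
      #define P_DONE (1u << 2)

      static uint8_t marking = P_IDLE;          /* current marking (state) */

      static void controller_step(int start_btn, int sensor, int *motor)
      {
          uint8_t next = marking;
          if ((marking & P_IDLE) && start_btn)  /* rule for t1: IDLE -> RUN */
              next = (uint8_t)((next & ~P_IDLE) | P_RUN);
          if ((marking & P_RUN) && sensor)      /* rule for t2: RUN -> DONE */
              next = (uint8_t)((next & ~P_RUN) | P_DONE);
          marking = next;
          *motor = (marking & P_RUN) != 0;      /* output rule              */
      }

      int main(void)
      {
          int motor;
          controller_step(1, 0, &motor);        /* start pressed: motor on  */
          printf("after t1: marking=%d motor=%d\n", (int)marking, motor);
          controller_step(0, 1, &motor);        /* sensor fired: motor off  */
          printf("after t2: marking=%d motor=%d\n", (int)marking, motor);
          return 0;
      }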

  1. Operative team communication during simulated emergencies: Too busy to respond?

    PubMed

    Davis, W Austin; Jones, Seth; Crowell-Kuhnberg, Adrianna M; O'Keeffe, Dara; Boyle, Kelly M; Klainer, Suzanne B; Smink, Douglas S; Yule, Steven

    2017-05-01

    Ineffective communication among members of a multidisciplinary team is associated with operative error and failure to rescue. We sought to measure operative team communication in a simulated emergency using an established communication framework called "closed loop communication." We hypothesized that communication directed at a specific recipient would be more likely to elicit a check back or closed loop response and that this relationship would vary with changes in patients' clinical status. We used the closed loop communication framework to code retrospectively the communication behavior of 7 operative teams (each comprising 2 surgeons, anesthesiologists, and nurses) during response to a simulated, postanesthesia care unit "code blue." We identified call outs, check backs, and closed loop episodes and applied descriptive statistics and a mixed-effects negative binomial regression to describe characteristics of communication in individuals and in different specialties. We coded a total of 662 call outs. The frequency and type of initiation and receipt of communication events varied between clinical specialties (P < .001). Surgeons and nurses initiated fewer and received more communication events than anesthesiologists. For the average participant, directed communication increased the likelihood of check back by at least 50% (P = .021) in periods preceding acute changes in the clinical setting, and exerted no significant effect in periods after acute changes in the clinical situation. Communication patterns vary by specialty during a simulated operative emergency, and the effect of directed communication in eliciting a response depends on the clinical status of the patient. Operative training programs should emphasize the importance of quality communication in the period immediately after an acute change in the clinical setting of a patient and recognize that communication patterns and needs vary between members of multidisciplinary operative teams. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. State parity laws and access to treatment for substance use disorder in the United States: implications for federal parity legislation.

    PubMed

    Wen, Hefei; Cummings, Janet R; Hockenberry, Jason M; Gaydos, Laura M; Druss, Benjamin G

    2013-12-01

    The passage of the 2008 Mental Health Parity and Addiction Equity Act and the 2010 Affordable Care Act incorporated parity for substance use disorder (SUD) treatment into federal legislation. However, prior research provides us with scant evidence as to whether federal parity legislation will hold the potential for improving access to SUD treatment. To examine the effect of state-level SUD parity laws on state-aggregate SUD treatment rates and to shed light on the impact of the recent federal SUD parity legislation. We conducted a quasi-experimental study using a 2-way (state and year) fixed-effect method. We included all known specialty SUD treatment facilities in the United States and examined treatment rates from October 1, 2000, through March 31, 2008. Our main source of data was the National Survey of Substance Abuse Treatment Services, which provides facility-level information on specialty SUD treatment. State-level SUD parity laws during the study period. State-aggregate SUD treatment rates in (1) all specialty SUD treatment facilities and (2) specialty SUD treatment facilities accepting private insurance. The implementation of any SUD parity law increased the treatment rate by 9% (P < .001) in all specialty SUD treatment facilities and by 15% (P = .02) in facilities accepting private insurance. Full parity and parity only if SUD coverage is offered increased the SUD treatment rate by 13% (P = .02) and 8% (P = .04), respectively, in all facilities and by 21% (P = .03) and 10% (P = .04), respectively, in facilities accepting private insurance. We found a positive effect of the implementation of state SUD parity legislation on access to specialty SUD treatment. Furthermore, the positive association is more pronounced in states with more comprehensive parity laws. Our findings suggest that federal parity legislation holds the potential to improve access to SUD treatment.

  3. Higher gravidity and parity are associated with increased prevalence of metabolic syndrome among rural Bangladeshi women.

    PubMed

    Akter, Shamima; Jesmin, Subrina; Rahman, Md Mizanur; Islam, Md Majedul; Khatun, Most Tanzila; Yamaguchi, Naoto; Akashi, Hidechika; Mizutani, Taro

    2013-01-01

    Parity increases the risk for coronary heart disease; however, its association with metabolic syndrome among women in low-income countries is still unknown. This study investigates the association between parity or gravidity and metabolic syndrome in rural Bangladeshi women. A cross-sectional study was conducted in 1,219 women aged 15-75 years from rural Bangladesh. Metabolic syndrome was defined according to the standard NCEP-ATP III criteria. Logistic regression was used to estimate the association between parity and gravidity and metabolic syndrome, with adjustment for potential confounding variables. Subjects with the highest gravidity (>=4) had 1.66 times higher odds of having metabolic syndrome compared to those with the lowest gravidity (0-1) (P trend = 0.02). A similar association was found between parity and metabolic syndrome (P trend = 0.04), i.e., subjects with the highest parity (>=4) had 1.65 times higher odds of having metabolic syndrome compared to those with the lowest parity (0-1). This positive association of parity and gravidity with metabolic syndrome was confined to pre-menopausal women (P trend <0.01). Among the components of metabolic syndrome, only high blood pressure showed a positive association with parity and gravidity (P trend = 0.01 and <0.001, respectively). Neither parity nor gravidity was appreciably associated with the other components of metabolic syndrome. Multiparity or gravidity may be a risk factor for metabolic syndrome.

  4. Lake water quality mapping from LANDSAT

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.

    1977-01-01

    The lakes in three LANDSAT scenes were mapped by the Bendix MDAS multispectral analysis system. Field checking of the maps by three separate individuals revealed approximately 90-95% correct classification for the lake categories selected. Variation between observers was about 5%. From the MDAS color-coded maps, the lake with the worst algae problem was easily located. This lake was closely checked, and a pollution source of 100 cows was found at the springs which fed the lake. The theory, lab work, and field work which made it possible for this demonstration project to be a practical lake classification procedure are presented.

  5. Permutation parity machines for neural cryptography.

    PubMed

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.
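
    For orientation, the C sketch below implements the forward pass of a tree parity machine, the model of which permutation parity machines are a binary variant; the sizes K, N, and L are arbitrary illustrative choices. In the key-exchange setting, two such machines exchange tau on public inputs and update their weights only when the outputs agree, repeating until the weights synchronize; the update rule is omitted here.

      #include <stdio.h>
      #include <stdlib.h>

      #define K 3   /* hidden units           */
      #define N 4   /* inputs per hidden unit */
      #define L 3   /* weight bound           */

      static int sgn(int x) { return x >= 0 ? 1 : -1; }

      /* tau = product over k of sgn(w_k . x_k): parity of hidden outputs */
      static int tpm_output(int w[K][N], int x[K][N])
      {
          int tau = 1;
          for (int k = 0; k < K; k++) {
              int field = 0;
              for (int i = 0; i < N; i++)
                  field += w[k][i] * x[k][i];
              tau *= sgn(field);
          }
          return tau;
      }

      int main(void)
      {
          int w[K][N], x[K][N];
          srand(42);
          for (int k = 0; k < K; k++)
              for (int i = 0; i < N; i++) {
                  w[k][i] = rand() % (2 * L + 1) - L;  /* weight in [-L, L]   */
                  x[k][i] = (rand() & 1) ? 1 : -1;     /* random binary input */
              }
          printf("parity output tau = %d\n", tpm_output(w, x));
          return 0;
      }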

  6. Permutation parity machines for neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyes, Oscar Mauricio; Escuela de Ingenieria Electrica, Electronica y Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga; Zimmermann, Karl-Heinz

    2010-06-15

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  7. Linking high parity and maternal and child mortality: what is the impact of lower health services coverage among higher order births?

    PubMed

    Sonneveldt, Emily; DeCormier Plosky, Willyanne; Stover, John

    2013-01-01

    A number of data sets show that high parity births are associated with higher child mortality than low parity births. The reasons for this relationship are not clear. In this paper we investigate whether high parity is associated with lower coverage of key health interventions that might lead to increased mortality. We used DHS data from 10 high-fertility countries to examine the relationship between parity and coverage for 8 child health interventions and 9 maternal health interventions. We also used the LiST model to estimate the effect on maternal and child mortality of the lower coverage associated with high parity births. Our results show a significant relationship between coverage of maternal and child health services and birth order, even when controlling for poverty. The association between coverage and parity was more consistently significant across countries for maternal health interventions, while for child health interventions there were fewer significant relationships overall and more variation both between and within countries. The differences in coverage between children of parity 3 and those of parity 6 are large enough to account for a 12% difference in the under-five mortality rate and a 22% difference in the maternal mortality ratio in the countries studied. This study shows that coverage of key health interventions is lower for high parity children and that the pattern is consistent across countries. This could be a partial explanation for the higher mortality rates associated with high parity. Actions to address this gap could help reduce the higher mortality experienced by high parity births.

  8. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…

  9. NDEC: A NEA platform for nuclear data testing, verification and benchmarking

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Michel-Sendis, F.; Cabellos, O.; Bossant, M.; Soppera, N.

    2017-09-01

    The selection, testing, verification and benchmarking of evaluated nuclear data consists, in practice, of putting an evaluated file through a number of checking steps in which different computational codes verify that the file and the data it contains comply with different requirements. These requirements range from format compliance to good performance in application cases, while at the same time physical constraints and agreement with experimental data are verified. At NEA, the NDEC (Nuclear Data Evaluation Cycle) platform aims at providing, in a user-friendly interface, a thorough diagnosis of the quality of a submitted evaluated nuclear data file. This diagnosis is based on the results of the different computational codes and routines which carry out the mentioned verifications, tests and checks. NDEC also seeks synergies with other existing NEA tools and databases, such as JANIS, DICE or NDaST, including them in its working scheme. This paper presents NDEC, its current development status and its usage in the JEFF nuclear data project.

  10. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  11. Monitoring Java Programs with Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present recent work on the development of Java PathExplorer (JPAX), a tool for monitoring the execution of Java programs. JPAX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's byte code, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, for example concurrency-related ones such as deadlock and data race algorithms. High-level requirement specifications, together with their underlying logics, are defined in the Maude rewriting logic and can then either be checked directly using the Maude rewriting engine, or first be translated to efficient data structures and then checked in Java.
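
    The observer idea can be boiled down to a state machine fed by an event stream. The C sketch below is a hand-rolled stand-in with a made-up two-event alphabet that checks the safety property "acquire and release strictly alternate"; it illustrates runtime monitoring only and is not JPAX's actual interface.

      #include <stdio.h>

      typedef enum { EV_ACQUIRE, EV_RELEASE } event_t;

      /* Monitor: walks the trace, tracking one bit of state (lock held?),
       * and reports the first event that violates the property. */
      static int check_trace(const event_t *trace, int len)
      {
          int held = 0;
          for (int i = 0; i < len; i++) {
              if (trace[i] == EV_ACQUIRE) {
                  if (held) { printf("violation at event %d: double acquire\n", i); return 0; }
                  held = 1;
              } else {
                  if (!held) { printf("violation at event %d: release without acquire\n", i); return 0; }
                  held = 0;
              }
          }
          return 1;
      }

      int main(void)
      {
          event_t ok[]  = { EV_ACQUIRE, EV_RELEASE, EV_ACQUIRE, EV_RELEASE };
          event_t bad[] = { EV_ACQUIRE, EV_ACQUIRE, EV_RELEASE };
          printf("trace 1: %s\n", check_trace(ok,  (int)(sizeof ok  / sizeof ok[0]))  ? "satisfied" : "violated");
          printf("trace 2: %s\n", check_trace(bad, (int)(sizeof bad / sizeof bad[0])) ? "satisfied" : "violated");
          return 0;
      }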

  12. [The DRG responsible physician in trauma and orthopedic surgery. Surgeon, encoder, and link to medical controlling].

    PubMed

    Ruffing, T; Huchzermeier, P; Muhm, M; Winkler, H

    2014-05-01

    Precise coding is an essential requirement for generating a valid DRG. The aim of our study was to evaluate the quality of the initial coding of surgical procedures and to introduce our "hybrid model" of a surgical specialist supervising medical coding alongside a nonphysician for case auditing. The department's DRG responsible physician, as a surgical specialist, has profound knowledge of both surgery and DRG coding. At a Level 1 hospital, 1000 coded cases of surgical procedures were checked. The model of a DRG responsible physician who is both surgeon and encoder has proven itself in our department for many years. The initial surgical DRG coding had to be corrected by the DRG responsible physician in 42.2% of cases; on average, one hour per working day was necessary. The implementation of a DRG responsible physician is a simple, effective way to connect medical and business expertise without interface problems. Permanent feedback promotes both the medical and the economic sensitivity needed to improve coding quality.

  13. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  14. The influence of state-level policy environments on the activation of the Medicaid SBIRT reimbursement codes.

    PubMed

    Hinde, Jesse; Bray, Jeremy; Kaiser, David; Mallonee, Erin

    2017-02-01

    To examine how institutional constraints, comprising federal actions and states' substance abuse policy environments, influence states' decisions to activate Medicaid reimbursement codes for screening and brief intervention for risky substance use in the United States. A discrete-time duration model was used to estimate the effect of institutional constraints on the likelihood of activating the Medicaid reimbursement codes. Primary constraints included federal Screening, Brief Intervention and Referral to Treatment (SBIRT) grant funding, substance abuse priority, economic climate, political climate and interstate diffusion. Study data came from publicly available secondary data sources. Federal SBIRT grant funding did not significantly affect the likelihood of activation (P = 0.628). A $1 increase in per-capita block grant funding was associated with a 10-percentage-point reduction in the likelihood of activation (P = 0.003), and a $1 increase in per-capita state substance use disorder expenditures was associated with a 2-percentage-point increase in the likelihood of activation (P = 0.004). States with enacted parity laws (P = 0.016) and a Democratic-controlled state government were also more likely to activate the codes. In the United States, the determinants of state activation of Medicaid Screening, Brief Intervention and Referral to Treatment (SBIRT) reimbursement codes are complex, and include more than financial considerations. Federal block grant funding is a strong disincentive to activating the SBIRT reimbursement codes, while more direct federal SBIRT grant funding has no detectable effects. © 2017 Society for the Study of Addiction.

  15. Higher Gravidity and Parity Are Associated with Increased Prevalence of Metabolic Syndrome among Rural Bangladeshi Women

    PubMed Central

    Akter, Shamima; Jesmin, Subrina; Rahman, Md. Mizanur; Islam, Md. Majedul; Khatun, Most. Tanzila; Yamaguchi, Naoto; Akashi, Hidechika; Mizutani, Taro

    2013-01-01

    Background: Parity increases the risk for coronary heart disease; however, its association with metabolic syndrome among women in low-income countries is still unknown. Objective: This study investigates the association between parity or gravidity and metabolic syndrome in rural Bangladeshi women. Methods: A cross-sectional study was conducted in 1,219 women aged 15–75 years from rural Bangladesh. Metabolic syndrome was defined according to the standard NCEP-ATP III criteria. Logistic regression was used to estimate the association between parity and gravidity and metabolic syndrome, with adjustment for potential confounding variables. Results: Subjects with the highest gravidity (>=4) had 1.66 times higher odds of having metabolic syndrome compared to those with the lowest gravidity (0-1) (P trend = 0.02). A similar association was found between parity and metabolic syndrome (P trend = 0.04), i.e., subjects with the highest parity (>=4) had 1.65 times higher odds of having metabolic syndrome compared to those with the lowest parity (0-1). This positive association of parity and gravidity with metabolic syndrome was confined to pre-menopausal women (P trend <0.01). Among the components of metabolic syndrome, only high blood pressure showed a positive association with parity and gravidity (P trend = 0.01 and <0.001, respectively). Neither parity nor gravidity was appreciably associated with the other components of metabolic syndrome. Conclusions: Multiparity or gravidity may be a risk factor for metabolic syndrome. PMID:23936302

  16. The relationship between parity and overweight varies with household wealth and national development.

    PubMed

    Kim, Sonia A; Yount, Kathryn M; Ramakrishnan, Usha; Martorell, Reynaldo

    2007-02-01

    Recent studies support a positive relationship between parity and overweight among women of developing countries; however, it is unclear whether these effects vary by household wealth and national development. Our objective was to determine whether the association between parity and overweight [body mass index (BMI) > or =25 kg/m(2)] in women living in developing countries varies with levels of national human development and/or household wealth. We used data from 28 nationally representative, cross-sectional surveys conducted between 1996 and 2003 (n = 275 704 women, 15-49 years). The relationship between parity and overweight was modelled using logistic regression, controlling for several biological and sociodemographic factors and national development, as reflected by the United Nations' Human Development Index. We also modelled the interaction between parity and national development, and the three-way interaction between parity, household wealth and national development. Parity had a weak, positive association with overweight, which varied by household wealth and national development. Among the poorest women and women in the second tertile of household wealth, parity was positively related to overweight only in the most developed countries. Among the wealthiest women, parity was positively related to overweight regardless of the level of national development. As development increases, the burden of parity-related overweight shifts to include poor as well as wealthy women. In the least-developed countries, programmes to prevent parity-related overweight should target wealthy women, whereas such programmes should be provided to all women in more developed countries.

  17. Investigating Sociodemographic Disparities in Cancer Risk Using Web-Based Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Tourassi, Georgia

    Cancer health disparities due to demographic and socioeconomic factors are an area of great interest in the epidemiological community. Adjusting for such factors is important when developing cancer risk models. However, for digital epidemiology studies relying on online sources such information is not readily available. This paper presents a novel method for extracting demographic and socioeconomic information from openly available online obituaries. The method relies on tailored language processing rules and a probabilistic scheme to map subjects’ occupation history to the occupation classification codes and related earnings provided by the U.S. Census Bureau. Using this information, a case-control study is executed fully in silico to investigate how age, gender, parity, and income level impact breast and lung cancer risk. Based on 48,368 online obituaries (4,643 for breast cancer, 6,274 for lung cancer, and 37,451 cancer-free) collected automatically and a generalized cancer risk model, our study shows strong association between age, parity, and socioeconomic status and cancer risk. Although for breast cancer the observed trends are very consistent with traditional epidemiological studies, some inconsistency is observed for lung cancer with respect to socioeconomic status.

  18. A Quantum Non-Demolition Parity measurement in a mixed-species trapped-ion quantum processor

    NASA Astrophysics Data System (ADS)

    Marinelli, Matteo; Negnevitsky, Vlad; Lo, Hsiang-Yu; Flühmann, Christa; Mehta, Karan; Home, Jonathan

    2017-04-01

    Quantum non-demolition measurements of multi-qubit systems are an important tool in quantum information processing, in particular for syndrome extraction in quantum error correction. We have recently demonstrated a protocol for quantum non-demolition measurement of the parity of two beryllium ions by detection of a co-trapped calcium ion. The measurement requires a sequence of quantum gates between the three ions, using mixed-species gates between beryllium hyperfine qubits and a calcium optical qubit. Our work takes place in a multi-zone segmented trap setup in which we have demonstrated high fidelity control of both species and multi-well ion shuttling. The advantage of using two species of ion is that we can individually manipulate and read out the state of each ion species without disturbing the internal state of the other. The methods demonstrated here can be used for quantum error correcting codes as well as quantum metrology and are key ingredients for realizing a hybrid universal quantum computer based on trapped ions. Mixed-species control may also enable the investigation of new avenues in quantum simulation and quantum state control.

  19. Investigating Sociodemographic Disparities in Cancer Risk Using Web-Based Informatics

    DOE PAGES

    Yoon, Hong-Jun; Tourassi, Georgia

    2018-01-24

    Cancer health disparities due to demographic and socioeconomic factors are an area of great interest in the epidemiological community. Adjusting for such factors is important when developing cancer risk models. However, for digital epidemiology studies relying on online sources such information is not readily available. This paper presents a novel method for extracting demographic and socioeconomic information from openly available online obituaries. The method relies on tailored language processing rules and a probabilistic scheme to map subjects’ occupation history to the occupation classification codes and related earnings provided by the U.S. Census Bureau. Using this information, a case-control study is executed fully in silico to investigate how age, gender, parity, and income level impact breast and lung cancer risk. Based on 48,368 online obituaries (4,643 for breast cancer, 6,274 for lung cancer, and 37,451 cancer-free) collected automatically and a generalized cancer risk model, our study shows strong association between age, parity, and socioeconomic status and cancer risk. Although for breast cancer the observed trends are very consistent with traditional epidemiological studies, some inconsistency is observed for lung cancer with respect to socioeconomic status.

  20. Extended magnetohydrodynamic simulations of field reversed configuration formation and sustainment with rotating magnetic field current drive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milroy, R. D.; Kim, C. C.; Sovinec, C. R.

    Three-dimensional simulations of field reversed configuration (FRC) formation and sustainment with rotating magnetic field (RMF) current drive have been performed with the NIMROD code [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)]. The Hall term is a zeroth-order effect with strong coupling between Fourier components, and recent enhancements to the NIMROD preconditioner allow much larger time steps than was previously possible. Boundary conditions to capture the effects of a finite-length RMF antenna have been added, and simulations of FRC formation from a uniform background plasma have been performed with parameters relevant to the translation, confinement, and sustainment-upgrade experiment at the University of Washington [H. Y. Guo, A. L. Hoffman, and R. D. Milroy, Phys. Plasmas 14, 112502 (2007)]. The effects of both even-parity and odd-parity antennas have been investigated, and there is no evidence of a disruptive instability for either antenna type. It has been found that RMF effects extend considerably beyond the ends of the antenna, and that a large n=0 Bθ can develop in the open-field-line region, producing a back torque opposing the RMF.

  1. High-efficient entanglement distillation from photon loss and decoherence.

    PubMed

    Wang, Tie-Jun; Wang, Chuan

    2015-11-30

    We illustrate an entanglement distillation protocol (EDP) for a mixed photon ensemble composed of four kinds of entangled states and vacuum states. Exploiting linear optics and a local entanglement resource (a four-qubit GHZ state), we design the nondemolition parity-checking and qubit-amplifying (PCQA) setup for the photonic polarization degree of freedom, which is the key device of our scheme. With the PCQA setup, a high-fidelity entangled photon-pair system can be achieved despite transmission losses and decoherence in noisy channels. Within the available purification range of our EDP, the fidelity of the ensemble can be improved to its maximal value through iterated operations. Compared with conventional entanglement purification schemes, our scheme greatly reduces the initialization requirements on the distilled mixed quantum system and overcomes the difficulties posed by inherent channel losses during photon transmission. These advantages make the scheme more useful in practical applications of long-distance quantum communication.

  2. Entanglement distillation for quantum communication network with atomic-ensemble memories.

    PubMed

    Li, Tao; Yang, Guo-Jian; Deng, Fu-Guo

    2014-10-06

    Atomic ensembles are effective memory nodes for a quantum communication network due to their long coherence time and the collective enhancement effect for the nonlinear interaction between an ensemble and a photon. Here we investigate the possibility of achieving entanglement distillation for nonlocal atomic ensembles through the input-output process of a single photon based on cavity quantum electrodynamics. We give an optimal entanglement concentration protocol (ECP) for two-atomic-ensemble systems in a partially entangled pure state with known parameters, and an efficient ECP for systems in an unknown partially entangled pure state using a nondestructive parity-check detector (PCD). For systems in a mixed entangled state, we introduce an entanglement purification protocol with PCDs. These entanglement distillation protocols have high fidelity and efficiency with current experimental techniques, and they are useful for quantum communication networks with atomic-ensemble memories.

  3. Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei

    1991-01-01

    The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessor. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption and (2) higher floating point performance. The i486 on-chip cache does not have parity checking or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/387 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.

  4. Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei

    1991-01-01

    The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessors. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption; and (2) higher floating point performance. The i486 on-chip cache does not have parity check or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/387 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.

  5. Markov chain algorithms: a template for building future robust low-power systems

    PubMed Central

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with an inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs), as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
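
    As a concrete instance of a transition-error-tolerant MC algorithm, the C sketch below runs a random-walk 2-SAT solver (in the style of Papadimitriou's algorithm) on a tiny hard-coded formula and injects faulty transitions at a set rate; a wrong flip is just another random move, so the walk still reaches a satisfying assignment. The formula, fault rate, and step bound are arbitrary illustrative choices, not taken from the paper.

      #include <stdio.h>
      #include <stdlib.h>

      #define NVARS 4
      #define NCLAUSES 5

      /* 2-SAT clauses as literal pairs: +v means variable v true, -v false */
      static const int clause[NCLAUSES][2] = {
          {1, 2}, {-1, 3}, {-2, -3}, {3, 4}, {-3, -4}
      };

      static int lit_sat(int lit, const int a[])
      { return lit > 0 ? a[lit - 1] : !a[-lit - 1]; }

      int main(void)
      {
          int a[NVARS];
          srand(1);
          for (int v = 0; v < NVARS; v++) a[v] = rand() & 1;

          for (int step = 0; step < 10000; step++) {
              int bad = -1;                      /* find an unsatisfied clause */
              for (int c = 0; c < NCLAUSES; c++)
                  if (!lit_sat(clause[c][0], a) && !lit_sat(clause[c][1], a)) { bad = c; break; }
              if (bad < 0) { printf("satisfied after %d steps\n", step); return 0; }
              int v = abs(clause[bad][rand() & 1]) - 1;  /* random var in clause */
              if (rand() < RAND_MAX / 20)                /* ~5% injected faulty  */
                  v = rand() % NVARS;                    /* transition           */
              a[v] ^= 1;                                 /* Markov-chain move    */
          }
          puts("step bound reached without a satisfying assignment");
          return 1;
      }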

  6. 78 FR 58973 - Airworthiness Directives; Dassault Aviation Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-25

    ... 2000EX type design are included in Dassault Aviation Falcon 2000EX (F2000EX) Aircraft Maintenance Manual... numbers. (d) Subject Air Transport Association (ATA) of America Code 05, Time Limits/ Maintenance Checks... to introduce a corrosion prevention control program, among other changes, to the maintenance...

  7. CTEPP STANDARD OPERATING PROCEDURE FOR PROCESSING COMPLETED DATA FORMS (SOP-4.10)

    EPA Science Inventory

    This SOP describes the methods for processing completed data forms. Key components of the SOP include (1) field editing, (2) data form Chain-of-Custody, (3) data processing verification, (4) coding, (5) data entry, (6) programming checks, (7) preparation of data dictionaries, cod...

  8. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  9. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  10. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  11. Odd-even parity splittings and octupole correlations in neutron-rich Ba isotopes

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Wang, H.; Wang, L.-J.; Yao, J. M.

    2018-02-01

    The odd-even parity splittings in low-lying parity-doublet states of atomic nuclei with octupole correlations have usually been interpreted as rotational excitations on top of an octupole vibration in the language of collective models. In this paper, we report an in-depth analysis of the odd-even parity splittings in the parity-doublet states of neutron-rich Ba isotopes around neutron number N = 88 within a fully microscopic framework of beyond-mean-field multireference covariant energy density functional theory. The dynamical correlations related to symmetry restoration and quadrupole-octupole shape fluctuation are taken into account with a generator coordinate method combined with parity, particle-number, and angular-momentum projections. We show that the behavior of the odd-even parity splittings is governed by the interplay of rotation, quantum tunneling, and shape evolution. As in 224Ra, a picture of rotation-induced octupole shape stabilization in the positive-parity states emerges in the neutron-rich Ba isotopes.

  12. Design of pellet surface grooves for fission gas plenum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, T.J.; Jones, L.R.; Macici, N.

    1986-01-01

    In the Canada deuterium uranium (CANDU) pressurized heavy water reactor, short (50-cm) Zircaloy-4 clad bundles are fueled on-power. Although the internal void volume within the fuel rods is adequate for the present once-through natural uranium cycle, the authors have investigated methods for increasing the internal gas storage volume needed in high-power, high-burnup, experimental ceramic fuels. The present work sought to prove the methodology for the design of gas storage volume within the fuel pellets, specifically the use of grooves pressed or machined into the relatively cool pellet/cladding interface. Preanalysis and design of pellet groove shape and volume were accomplished using the TRUMP heat transfer code. Postirradiation examination (PIE) was used to check the initial design and heat transfer assumptions. Fission gas release was found to be higher for the grooved-pellet rods than for the comparison rods with hollow or unmodified pellets, as had been expected from the initial TRUMP thermal analyses. The ELESIM fuel modeling code was used to check in-reactor performance, but some modifications were necessary to accommodate the loss of heat transfer surface to the grooves. It was concluded that for plenum design purposes, circumferential pellet grooves could be adequately modeled by the codes TRUMP and ELESIM.

  13. DYNA3D/ParaDyn Regression Test Suite Inventory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry I.

    2016-09-01

    The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of "feature" has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except for problems involving features only available in serial mode. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change, and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcome to submit documented problems for inclusion in the test suite, especially if the problems heavily exercise, and depend upon, features that are currently underrepresented.

  14. Phase II evaluation of clinical coding schemes: completeness, taxonomy, mapping, definitions, and clarity. CPRI Work Group on Codes and Structures.

    PubMed

    Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J

    1997-01-01

    To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14; *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%; *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%; *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.

  15. Towards a heralded eigenstate-preserving measurement of multi-qubit parity in circuit QED

    NASA Astrophysics Data System (ADS)

    Huembeli, Patrick; Nigg, Simon E.

    2017-07-01

    Eigenstate-preserving multi-qubit parity measurements lie at the heart of stabilizer quantum error correction, which is a promising approach to mitigate the problem of decoherence in quantum computers. In this work we explore a high-fidelity, eigenstate-preserving parity readout for superconducting qubits dispersively coupled to a microwave resonator, where the parity bit is encoded in the amplitude of a coherent state of the resonator. Detecting photons emitted by the resonator via a current biased Josephson junction yields information about the parity bit. We analyze theoretically the measurement back action in the limit of a strongly coupled fast detector and show that in general such a parity measurement, while approximately quantum nondemolition, is not eigenstate preserving. To remedy this shortcoming we propose a simple dynamical decoupling technique during photon detection, which greatly reduces decoherence within a given parity subspace. Furthermore, by applying a sequence of fast displacement operations interleaved with the dynamical decoupling pulses, the natural bias of this binary detector can be efficiently suppressed. Finally, we introduce the concept of a heralded parity measurement, where a detector click guarantees successful multi-qubit parity detection even for finite detection efficiency.

  16. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the image peak signal-to-noise ratios (PSNRs) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.

  17. Multiparity evaluation of calving ease and stillbirth with separate genetic effects by parity.

    PubMed

    Wiggans, G R; Cole, J B; Thornton, L L M

    2008-08-01

    Evaluations that analyze first and later parities as correlated traits were developed separately for calving ease (CE) from over 15 million calving records of Holsteins, Brown Swiss, and Holstein-Brown Swiss crossbreds and for stillbirth (SB) from 7.4 million of the Holstein CE records. Calving ease was measured on a scale of 1 (no difficulty) to 5 (difficult birth); SB status was designated as live or dead within 48 h. Scores for CE and SB were transformed separately for each trait by parity (first or later) and calf sex (male or female) and converted to a unit standard deviation scale. For variance component estimation, Holstein data were selected for the 2,968 bulls with the most records as sire or maternal grandsire (MGS). Six samples were selected by herd; samples ranged in size from 97,756 to 146,138 records. A multiparity sire-MGS model was used to calculate evaluations separately for CE and for SB with first and later parities as correlated traits. Fixed effects were year-season, calf sex, and sire and MGS birth years; random effects were herd-year interaction, sire, and MGS. For later parities, sex effects were separated by parity. The genetic correlation between first and later parities was 0.79 for sire and 0.81 for MGS for CE, and 0.83 for sire and 0.74 for MGS for SB. For national CE evaluations, which also include Brown Swiss, a fixed effect for breed was added to the model. Correlations between solutions on the underlying scale from the January 2008 USDA CE evaluation with those from the multiparity analysis for CE were 0.89 and 0.91 for first- and later-parity sire effects and 0.71 and 0.88 for first- and later-parity MGS effects; the larger value for later parity reflects that later parities comprised 64% of the data. Corresponding correlations for SB were 0.81 and 0.82 for first- and later-parity sire effects and 0.46 and 0.83 for first- and later-parity MGS effects, respectively. Correlations were higher when only bulls with a multiparity reliability of >65% were included. The multiparity analysis accounted for genetic differences in calving performance between first and later parities. Evaluations should become more stable as the portion of a bull's observations from different parities changes over his lifetime. Accuracy of the net merit index can be improved by adjusting weights to use evaluations for separate parities optimally.

  18. The effect of parity on maternal body mass index, plasma mineral element status and new-born anthropometrics.

    PubMed

    Ugwuja, Emmanuel I; Nnabu, Richard C; Ezeonu, Paul O; Uro-Chukwu, Henry

    2015-09-01

    Adverse pregnancy outcome is an important public health problem that has been partly associated with increasing maternal parity. To determine the effect of parity on maternal body mass index (BMI), mineral element status and newborn anthropometrics. Data for 349 pregnant women previously studied for the impacts of maternal plasma mineral element status on pregnancy and its outcomes were analysed. Obstetric and demographic data and 5 ml blood samples were obtained from each subject. Blood lead, plasma copper, iron and zinc were determined using an atomic absorption spectrophotometer. Maternal BMI increased with parity. Women with parity two had significantly higher plasma zinc but lower plasma copper, with comparable levels of the elements in the nulliparous and higher-parity groups. Although plasma iron was comparable among the groups, blood lead was significantly higher at parity greater than three. Newborn birth length increased with parity; parity correlated positively with maternal BMI (r = 0.221; p = 0.001) and newborn birth length (r = 0.170; p = 0.002), while plasma copper was negatively correlated with newborn head circumference (r = -0.115; p = 0.040). It is plausible that parity affects maternal BMI and newborn anthropometrics through alterations in maternal plasma mineral element levels. While further studies are desired to confirm the present findings, there is a need for pregnant and would-be pregnant women to diversify their diet to optimize their mineral element status.

  19. High lifetime and reproductive performance of sows on southern European Union commercial farms can be predicted by high numbers of pigs born alive in parity one.

    PubMed

    Iida, R; Piñeiro, C; Koketsu, Y

    2015-05-01

    Our objectives were 1) to compare reproductive performance across parity and lifetime performance in sow groups categorized by the number of pigs born alive (PBA) in parity 1 and 2) to examine the factors associated with more PBA in parity 1. We analyzed 476,816 parity records and 109,373 lifetime records of sows entered into 125 herds from 2008 to 2010. Sows were categorized into 4 groups based on the 10th, 50th, and 90th percentiles of PBA in parity 1 as follows: 7 pigs or fewer, 8 to 11 pigs, 12 to 14 pigs, and 15 pigs or more. Generalized linear models were applied to the data. For reproductive performance across parity, sows that had 15 or more PBA in parity 1 had 0.5 to 1.8 more PBA in any subsequent parity than the other 3 PBA groups (P < 0.05). In addition, they had 2.8 to 5.4% higher farrowing rates in parities 1 through 3 than sows that had 7 or fewer PBA (P < 0.05). However, there were no differences between the sow PBA groups for weaning-to-first-mating interval in any parity (P ≥ 0.37). For lifetime performance, sows that had 15 or more PBA in parity 1 had 4.4 to 26.1 more lifetime PBA than sows that had 14 or fewer PBA (P < 0.05). Also, for sows that had 14 or fewer PBA in parity 1, those that were first mated at 229 d old (25th percentile) or earlier had 2.9 to 3.3 more lifetime PBA than those first mated at 278 d old (75th percentile) or later (P < 0.05). Factors associated with fewer PBA in parity 1 were summer mating and lower age of gilts at first mating (AFM; P < 0.05) but not reservice occurrences (P = 0.34). Additionally, there was a 2-way interaction between mated month groups and AFM for PBA in parity 1 (P < 0.05); PBA in parity 1 for sows mated from July to December increased nonlinearly by 0.3 to 0.4 pigs when AFM increased from 200 to 310 d old (P < 0.05). However, the same rise in AFM had no significant effect on the PBA of sows mated between January and June (P ≥ 0.17). In conclusion, high PBA in parity 1 can be used to predict that a sow will have high reproductive performance and lifetime performance. Also, the data indicate that the upper limit of AFM for mating between July and December should be 278 d old.

  20. Short interpregnancy interval and low birth weight: A role of parity.

    PubMed

    Merklinger-Gruchala, Anna; Jasienska, Grazyna; Kapiszewska, Maria

    2015-01-01

    Short interpregnancy intervals (IPI) and high parity may be synergistically associated with the risk of unfavorable pregnancy outcomes. This study tests if the effect of short IPI on the odds ratio for low birth weight (LBW, <2,500 g) differs across parity status. The study was carried out on the birth registry sample of almost 40,000 singleton, live-born infants who were delivered between the years 1995 and 2009 to multiparous mothers whose residence at the time of infant's birth was the city of Krakow. Multiple logistic regression analyses were used for testing the effect of IPI on the odds ratio (OR) for LBW, after controlling for employment, educational and marital status, parity, sex of the child, maternal and gestational age. Stratified analyses (according to parity) and tests for interaction were performed. Very short IPI (0-5 months) was associated with an increased OR for LBW, but only among high parity mothers with three or more births (OR = 2.64; 95% CI 1.45-4.80). The test for interaction between very short IPI and parity on the OR for LBW was statistically significant after adjustment for multiple comparisons (P = 0.04). Among low parity mothers (two births) no statistically significant associations were found between IPI and LBW after standardization. Parity may modify the association between short birth spacing and LBW. Women with very short IPI and high parity may have a higher risk of having LBW infants than those with very short IPI but low parity. © 2015 Wiley Periodicals, Inc.
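
    The interaction test described above is the crux of this design. As a minimal sketch of how such an IPI-by-parity interaction can be fit (the variable names, covariate set, and simulated data below are illustrative assumptions, not the study's registry or model specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "short_ipi":    rng.integers(0, 2, n),   # IPI 0-5 months (1) vs longer (0)
    "high_parity":  rng.integers(0, 2, n),   # three or more births (1) vs two (0)
    "maternal_age": rng.normal(28, 5, n),
})
# Simulate an elevated LBW risk only when short IPI and high parity co-occur.
logit_p = -3.0 + 1.0 * df.short_ipi * df.high_parity
df["lbw"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# The short_ipi:high_parity term tests whether parity modifies the IPI effect.
fit = smf.logit("lbw ~ short_ipi * high_parity + maternal_age", data=df).fit(disp=0)
print(np.exp(fit.params))  # odds ratios; the interaction OR should be near e^1 ≈ 2.7
```

    A stratified analysis, as in the study, amounts to fitting the IPI term separately within each parity stratum and comparing the resulting odds ratios.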

  1. A SNP panel and online tool for checking genotype concordance through comparing QR codes.

    PubMed

    Du, Yonghong; Martin, Joshua S; McGee, John; Yang, Yuchen; Liu, Eric Yi; Sun, Yingrui; Geihs, Matthias; Kong, Xuejun; Zhou, Eric Lingfeng; Li, Yun; Huang, Jie

    2017-01-01

    In the current precision medicine era, more and more samples get genotyped and sequenced. Both researchers and commercial companies expend significant time and resources to reduce the error rate. However, it has been reported that there is a sample mix-up rate of between 0.1% and 1%, not to mention the possibly higher mix-up rate during the downstream genetic reporting processes. Even on the low end of this estimate, this translates to a significant number of mislabeled samples, especially over the projected one billion people that will be sequenced within the next decade. Here, we first describe a method to identify a small set of single nucleotide polymorphisms (SNPs) that can uniquely identify a personal genome, which utilizes allele frequencies of five major continental populations reported in the 1000 Genomes Project and the ExAC Consortium. To make this panel more informative, we added four SNPs that are commonly used to predict ABO blood type, and another two SNPs that are capable of predicting sex. We then implement a web interface (http://qrcme.tech), nicknamed QRC (for QR code based Concordance check), which is capable of extracting the relevant ID SNPs from raw genetic data, coding its genotype as a quick response (QR) code, and comparing QR codes to report the concordance of underlying genetic datasets. The resulting 80 fingerprinting SNPs represent a significant decrease in complexity and the number of markers used for genetic data labelling and tracking. Our method and web tool are easily accessible to both researchers and the general public who consider the accuracy of complex genetic data as a prerequisite towards precision medicine.
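
    As a toy sketch of the concordance-checking idea (the rsIDs, the canonicalization rule, and the 16-character digest are all illustrative assumptions; the QRC site's actual payload format is not specified here), one can reduce a genotype set to a short fingerprint and compare fingerprints:

```python
import hashlib

# Hypothetical ID-SNP genotypes keyed by rsID; a real panel would use ~80 SNPs.
sample_a = {"rs123": "AG", "rs456": "CC", "rs789": "TT"}
sample_b = {"rs123": "AG", "rs456": "CC", "rs789": "TT"}

def fingerprint(genotypes: dict) -> str:
    """Canonicalize genotypes (sorted rsIDs, sorted alleles) and hash them.
    The short hex digest is the kind of payload that could be rendered as a QR code."""
    canon = ";".join(f"{rs}:{''.join(sorted(gt))}"
                     for rs, gt in sorted(genotypes.items()))
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

# Two datasets are concordant iff their fingerprints match.
print(fingerprint(sample_a) == fingerprint(sample_b))  # True
```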

  2. A SNP panel and online tool for checking genotype concordance through comparing QR codes

    PubMed Central

    Du, Yonghong; Martin, Joshua S.; McGee, John; Yang, Yuchen; Liu, Eric Yi; Sun, Yingrui; Geihs, Matthias; Kong, Xuejun; Zhou, Eric Lingfeng; Li, Yun

    2017-01-01

    In the current precision medicine era, more and more samples get genotyped and sequenced. Both researchers and commercial companies expend significant time and resources to reduce the error rate. However, it has been reported that there is a sample mix-up rate of between 0.1% and 1%, not to mention the possibly higher mix-up rate during the downstream genetic reporting processes. Even on the low end of this estimate, this translates to a significant number of mislabeled samples, especially over the projected one billion people that will be sequenced within the next decade. Here, we first describe a method to identify a small set of single nucleotide polymorphisms (SNPs) that can uniquely identify a personal genome, which utilizes allele frequencies of five major continental populations reported in the 1000 Genomes Project and the ExAC Consortium. To make this panel more informative, we added four SNPs that are commonly used to predict ABO blood type, and another two SNPs that are capable of predicting sex. We then implement a web interface (http://qrcme.tech), nicknamed QRC (for QR code based Concordance check), which is capable of extracting the relevant ID SNPs from raw genetic data, coding its genotype as a quick response (QR) code, and comparing QR codes to report the concordance of underlying genetic datasets. The resulting 80 fingerprinting SNPs represent a significant decrease in complexity and the number of markers used for genetic data labelling and tracking. Our method and web tool are easily accessible to both researchers and the general public who consider the accuracy of complex genetic data as a prerequisite towards precision medicine. PMID:28926565

  3. Domain wall fermion QCD with the exact one flavor algorithm

    DOE PAGES

    Jung, C.; Kelly, C.; Mawhinney, R. D.; ...

    2018-03-13

    Lattice QCD calculations including the effects of one or more nondegenerate sea quark flavors are conventionally performed using the rational hybrid Monte Carlo (RHMC) algorithm, which computes the square root of the determinant of $\mathcal{D}^{\dagger}\mathcal{D}$, where $\mathcal{D}$ is the Dirac operator. The special case of two degenerate quark flavors with the same mass is described directly by the determinant of $\mathcal{D}^{\dagger}\mathcal{D}$—in particular, no square root is necessary—enabling a variety of algorithmic developments, which have driven down the cost of simulating the light (up and down) quarks in the isospin-symmetric limit of equal masses. As a result, the relative cost of single quark flavors—such as the strange or charm—computed with RHMC has become more expensive. This problem is even more severe in the context of our measurements of the $\Delta I = 1/2$ $K \to \pi\pi$ matrix elements on lattice ensembles with $G$-parity boundary conditions, since $G$-parity is associated with a doubling of the number of quark flavors described by $\mathcal{D}$, and thus RHMC is needed for the isospin-symmetric light quarks as well. In this paper we report on our implementation of the exact one flavor algorithm (EOFA) introduced by the TWQCD Collaboration for simulations including single flavors of domain wall quarks. We have developed a new preconditioner for the EOFA Dirac equation, which both reduces the cost of solving the Dirac equation and allows us to reuse the bulk of our existing high-performance code. Coupling these improvements with careful tuning of our integrator, the time per accepted trajectory in the production of our $2+1$ flavor $G$-parity ensembles with physical pion and kaon masses has been decreased by a factor of 4.2.

  4. Domain wall fermion QCD with the exact one flavor algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, C.; Kelly, C.; Mawhinney, R. D.

    Lattice QCD calculations including the effects of one or more nondegenerate sea quark flavors are conventionally performed using the rational hybrid Monte Carlo (RHMC) algorithm, which computes the square root of the determinant of $\mathcal{D}^{\dagger}\mathcal{D}$, where $\mathcal{D}$ is the Dirac operator. The special case of two degenerate quark flavors with the same mass is described directly by the determinant of $\mathcal{D}^{\dagger}\mathcal{D}$—in particular, no square root is necessary—enabling a variety of algorithmic developments, which have driven down the cost of simulating the light (up and down) quarks in the isospin-symmetric limit of equal masses. As a result, the relative cost of single quark flavors—such as the strange or charm—computed with RHMC has become more expensive. This problem is even more severe in the context of our measurements of the $\Delta I = 1/2$ $K \to \pi\pi$ matrix elements on lattice ensembles with $G$-parity boundary conditions, since $G$-parity is associated with a doubling of the number of quark flavors described by $\mathcal{D}$, and thus RHMC is needed for the isospin-symmetric light quarks as well. In this paper we report on our implementation of the exact one flavor algorithm (EOFA) introduced by the TWQCD Collaboration for simulations including single flavors of domain wall quarks. We have developed a new preconditioner for the EOFA Dirac equation, which both reduces the cost of solving the Dirac equation and allows us to reuse the bulk of our existing high-performance code. Coupling these improvements with careful tuning of our integrator, the time per accepted trajectory in the production of our $2+1$ flavor $G$-parity ensembles with physical pion and kaon masses has been decreased by a factor of 4.2.

  5. Effects of breed, parity, and folic Acid supplement on the expression of folate metabolism genes in endometrial and embryonic tissues from sows in early pregnancy.

    PubMed

    Vallée, Maud; Guay, Frédéric; Beaudry, Danièle; Matte, Jacques; Blouin, Richard; Laforest, Jean-Paul; Lessard, Martin; Palin, Marie-France

    2002-10-01

    Folic acid and glycine are factors of great importance in early gestation. In sows, folic acid supplementation can increase litter size through a decrease in embryonic mortality, while glycine, the most abundant amino acid in the sow oviduct, uterine, and allantoic fluids, is reported to act as an organic osmoregulator. In this study, we report the characterization of cytoplasmic serine hydroxymethyltransferase (cSHMT), T-protein, and vT-protein (variant T-protein) mRNA expression levels in endometrial and embryonic tissues in gestating sows on Day 25 of gestation according to breed, parity, and folic acid + glycine supplementation. Expression levels of cSHMT, T-protein, and vT-protein mRNA in endometrial and embryonic tissues were measured using semiquantitative reverse transcription-polymerase chain reaction. We also report, for the first time, an alternative splicing event in the porcine T-protein gene. Results showed that a T-protein splice variant, vT-protein, is present in all the tested sow populations. Further characterization revealed that this T-protein splice variant contains a coding intron that can adopt a secondary structure. Results demonstrated that cSHMT mRNA expression levels were significantly higher in sows receiving the folic acid + glycine supplementation, independently of breed or parity and in both endometrial and embryonic tissues. Upon receiving the same treatment, the vT-protein and T-protein mRNA expression levels were significantly reduced in the endometrial tissue of Yorkshire-Landrace sows only. These results indicate that modulation of specific gene expression levels in endometrial and embryonic tissues of sows in early gestation could be one of the mechanisms involved in the role of folic acid in improving swine reproduction traits.

  6. Domain wall fermion QCD with the exact one flavor algorithm

    NASA Astrophysics Data System (ADS)

    Jung, C.; Kelly, C.; Mawhinney, R. D.; Murphy, D. J.

    2018-03-01

    Lattice QCD calculations including the effects of one or more nondegenerate sea quark flavors are conventionally performed using the rational hybrid Monte Carlo (RHMC) algorithm, which computes the square root of the determinant of D†D, where D is the Dirac operator. The special case of two degenerate quark flavors with the same mass is described directly by the determinant of D†D—in particular, no square root is necessary—enabling a variety of algorithmic developments, which have driven down the cost of simulating the light (up and down) quarks in the isospin-symmetric limit of equal masses. As a result, the relative cost of single quark flavors—such as the strange or charm—computed with RHMC has become more expensive. This problem is even more severe in the context of our measurements of the ΔI = 1/2 K → ππ matrix elements on lattice ensembles with G-parity boundary conditions, since G-parity is associated with a doubling of the number of quark flavors described by D, and thus RHMC is needed for the isospin-symmetric light quarks as well. In this paper we report on our implementation of the exact one flavor algorithm (EOFA) introduced by the TWQCD Collaboration for simulations including single flavors of domain wall quarks. We have developed a new preconditioner for the EOFA Dirac equation, which both reduces the cost of solving the Dirac equation and allows us to reuse the bulk of our existing high-performance code. Coupling these improvements with careful tuning of our integrator, the time per accepted trajectory in the production of our 2+1 flavor G-parity ensembles with physical pion and kaon masses has been decreased by a factor of 4.2.

  7. 47 CFR 51.215 - Dialing parity: Cost recovery.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    INTERCONNECTION, Obligations of All Local Exchange Carriers, § 51.215 Dialing parity: Cost recovery. (a) A LEC may recover the incremental costs necessary for the implementation of toll dialing parity...

  8. Proof Compression and the Mobius PCC Architecture for Embedded Devices

    NASA Technical Reports Server (NTRS)

    Jensen, Thomas

    2009-01-01

    The EU Mobius project has been concerned with the security of Java applications, and of mobile devices such as smart phones that execute such applications. In this talk, I'll give a brief overview of the results obtained on on-device checking of various security-related program properties. I'll then describe in more detail how the concept of certified abstract interpretation and abstraction-carrying code can be applied to polyhedral-based analysis of Java byte code in order to verify properties pertaining to the usage of resources by a downloaded application. Particular emphasis has been on finding ways of reducing the size of the certificates that accompany a piece of code.

  9. Measuring the anti-quark contribution to the proton spin using parity violating W production in polarized proton proton collisions

    NASA Astrophysics Data System (ADS)

    Gal, Ciprian

    Since the 1980s the spin puzzle has been at the heart of many experimental measurements. The initial discovery that only ~30% of the spin of the proton comes from quarks and anti-quarks has been refined and cross-checked by several other deep inelastic scattering (DIS) and semi-inclusive DIS (SIDIS) experiments. Through measurements of polarized parton distribution functions (PDFs), the individual contributions of the u, d, ū, and d̄ quarks have been measured. The flavor separation done in SIDIS experiments requires knowledge of fragmentation functions (FFs). However, due to the higher uncertainty of the anti-quark FFs compared to the quark FFs, the quark polarized PDFs (Δu(x), Δd(x)) are significantly better constrained than the anti-quark distributions (Δū(x), Δd̄(x)). By accessing the anti-quarks directly through W boson production in polarized proton-proton collisions (u d̄ → W+ → e+/μ+ and d ū → W- → e-/μ-), the large FF uncertainties are avoided and a cleaner measurement can be done. The parity violating single spin asymmetry of the W decay leptons can be directly related to the polarized PDFs of the anti-quarks. The W+/- → e+/- measurement has been performed with the PHENIX central arm detectors at √s = 510 GeV at the Relativistic Heavy Ion Collider (RHIC) and is presented in this thesis. Approximately 40 pb⁻¹ of data from the 2011 and 2012 runs were analyzed and a large parity violating single spin asymmetry for W+/- has been measured. The combined data for 2011 and 2012 provide a single spin asymmetry for both charges: W+: -0.27 +/- 0.10 (stat) +/- 0.01 (syst); W-: 0.28 +/- 0.16 (stat) +/- 0.02 (syst). These results are consistent with the different theoretical predictions at the 1σ level. The increased statistical precision enabled and required a more careful analysis of the background contamination for this measurement. A method based on Gaussian processes for regression has been employed to determine this background contribution. This thesis contains a detailed description of the analysis together with the asymmetry results and future prospects.
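
    For orientation, the parity-violating longitudinal single-spin asymmetry referred to above is conventionally defined, and estimated from helicity-sorted yields, as follows (this is the standard textbook definition, with P the beam polarization, stated here as background rather than quoted from the thesis):

```latex
A_L = \frac{\sigma^{+} - \sigma^{-}}{\sigma^{+} + \sigma^{-}}
    \approx \frac{1}{P}\,\frac{N^{+} - N^{-}}{N^{+} + N^{-}}
```

    Here σ± are the W production cross sections for the two proton-beam helicity states and N± the corresponding decay-lepton yields.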

  10. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  11. Phase II Evaluation of Clinical Coding Schemes

    PubMed Central

    Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith

    1997-01-01

    Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14; *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%; *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%; *p < .004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343

  12. Resolving Ethical Disputes Through Arbitration: An Alternative to Code Penalties.

    ERIC Educational Resources Information Center

    Barwis, Gail Lund

    Arbitration cases involving journalism ethics can be grouped into three major categories: outside activities that lead to conflicts of interest, acceptance of gifts that compromise journalistic objectivity, and writing false or misleading information or failing to check facts or correct errors. In most instances, failure to adhere to ethical…

  13. Clients' Preferences for Small Groups vs. Individual Testing.

    ERIC Educational Resources Information Center

    Backman, Margaret E.; And Others

    Test takers' preferences for group versus individual administration of the Micro-TOWER System of Vocational Evaluation are reported. The system was administered to 211 clients at a vocational rehabilitation center, and consisted of work samples measuring the following job skills: record checking, filing, lamp assembly, message-taking, zip coding,…

  14. OLIFE: Tight Binding Code for Transmission Coefficient Calculation

    NASA Astrophysics Data System (ADS)

    Mijbil, Zainelabideen Yousif

    2018-05-01

    A new, user-friendly transport calculation code has been developed. It requires a simple tight-binding Hamiltonian as the only input file and uses a convenient graphical user interface to control calculations. The effect of a magnetic field on the junction has also been included. Furthermore, the transmission coefficient can be calculated between any two points on the scatterer, which gives great flexibility in checking the system. Olife can therefore be highly recommended as a tool for pretesting, studying, and teaching electron transport in molecular devices, saving considerable time and effort.
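
    As a generic illustration of the kind of calculation such a code automates (a sketch of the standard Landauer-Green's-function approach, not Olife's actual implementation; the chain length, hopping t, and lead broadening eta are made-up parameters), consider a 1D tight-binding chain coupled to wide-band leads:

```python
import numpy as np

def transmission(E, n_sites=6, t=-1.0, eta=0.5):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger]
    for a 1D tight-binding chain coupled to wide-band leads."""
    # Tight-binding Hamiltonian: zero on-site energy, hopping t between neighbors.
    H = t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    # Wide-band lead self-energies: constant -i*eta on the end sites.
    sigma_L = np.zeros((n_sites, n_sites), complex)
    sigma_R = np.zeros((n_sites, n_sites), complex)
    sigma_L[0, 0] = -1j * eta
    sigma_R[-1, -1] = -1j * eta
    # Retarded Green's function of the coupled device region.
    G = np.linalg.inv(E * np.eye(n_sites) - H - sigma_L - sigma_R)
    # Broadening matrices Gamma = i (Sigma - Sigma^dagger).
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real)

for E in (0.0, 0.5, 1.0):
    print(f"T({E}) = {transmission(E):.4f}")
```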

  15. Parity partners in the baryon resonance spectrum

    DOE PAGES

    Lu, Ya; Chen, Chen; Roberts, Craig D.; ...

    2017-07-28

    Here, we describe a calculation of the spectrum of flavor-SU(3) octet and decuplet baryons, their parity partners, and the radial excitations of these systems, made using a symmetry-preserving treatment of a vector × vector contact interaction as the foundation for the relevant few-body equations. Dynamical chiral symmetry breaking generates nonpointlike diquarks within these baryons and hence, using the contact interaction, flavor-antitriplet scalar, pseudoscalar, vector, and flavor-sextet axial-vector quark-quark correlations can all play active roles. The model yields reasonable masses for all systems studied and Faddeev amplitudes for ground states and associated parity partners that sketch a realistic picture of their internal structure: ground-state, even-parity baryons are constituted, almost exclusively, from like-parity diquark correlations, but orbital angular momentum plays an important role in the rest-frame wave functions of odd-parity baryons, whose Faddeev amplitudes are dominated by odd-parity diquarks.

  16. Parity partners in the baryon resonance spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ya; Chen, Chen; Roberts, Craig D.

    Here, we describe a calculation of the spectrum of flavor-SU(3) octet and decuplet baryons, their parity partners, and the radial excitations of these systems, made using a symmetry-preserving treatment of a vector × vector contact interaction as the foundation for the relevant few-body equations. Dynamical chiral symmetry breaking generates nonpointlike diquarks within these baryons and hence, using the contact interaction, flavor-antitriplet scalar, pseudoscalar, vector, and flavor-sextet axial-vector quark-quark correlations can all play active roles. The model yields reasonable masses for all systems studied and Faddeev amplitudes for ground states and associated parity partners that sketch a realistic picture of their internal structure: ground-state, even-parity baryons are constituted, almost exclusively, from like-parity diquark correlations, but orbital angular momentum plays an important role in the rest-frame wave functions of odd-parity baryons, whose Faddeev amplitudes are dominated by odd-parity diquarks.

  17. Economic grand rounds: the price is right? Changes in the quantity of services used and prices paid in response to parity.

    PubMed

    Goldman, Howard H; Barry, Colleen L; Normand, Sharon-Lise T; Azzone, Vanessa; Busch, Alisa B; Huskamp, Haiden A

    2012-02-01

    The impact of parity coverage on the quantity of behavioral health services used by enrollees and on the prices of these services was examined in a set of Federal Employees Health Benefit (FEHB) Program plans. After parity implementation, the quantity of services used in the FEHB plans declined in five service categories, compared with plans that did not have parity coverage. The decline was significant for all service types except inpatient care. Because a previous study of the FEHB Program found that total spending on behavioral health services did not increase after parity implementation, it can be inferred that average prices must have increased over the period. The finding of a decline in service use and increase in prices provides an empirical window on what might be expected after implementation of the federal parity law and the parity requirement under the health care reform law.

  18. What Does Mental Health Parity Really Mean for the Care of People with Serious Mental Illness?

    PubMed

    Bartlett, John; Manderscheid, Ron

    2016-06-01

    Parity of mental health and substance abuse insurance benefits with medical care benefits, as well as parity in their management, are major ongoing concerns for adults with serious mental illness (SMI). The Mental Health Parity and Addiction Equity Act of 2008 guaranteed this parity of benefits and management in large private insurance plans and privately managed state Medicaid plans, but only if the benefits were offered at all. The Patient Protection and Affordable Care Act of 2010 extended parity to all persons receiving insurance through the state health insurance marketplaces, through the state Medicaid Expansions, and through new individual and small group plans. This article presents an analysis of how accessible parity has become for adults with SMI at both the system and personal levels several years after these legislative changes have been implemented. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Program Listings for EOSAEL 80-B and Ancillary Codes AGAUS and FLASH. Volume II. User’s Manual Supplement.

    DTIC Science & Technology

    1982-02-01


  20. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
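
    As a concrete sketch of the CRC error-detection step mentioned above, the following implements a bitwise CRC-16 with the CCITT polynomial (0x1021, all-ones preset), the generator commonly associated with the CCSDS recommendation; treat the exact parameter choices as an assumption rather than a restatement of the standard:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8           # bring the next byte into the high bits
        for _ in range(8):         # process one bit at a time
            if crc & 0x8000:       # high bit set: shift and fold in the polynomial
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"parity check payload"
crc = crc16_ccitt(frame)
# A receiver recomputes the CRC over the payload and compares it to the sent value.
assert crc16_ccitt(frame) == crc                   # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit
assert crc16_ccitt(corrupted) != crc               # single-bit error is detected
```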

  1. Can R-parity violation hide vanilla supersymmetry at the LHC?

    NASA Astrophysics Data System (ADS)

    Asano, Masaki; Rolbiecki, Krzysztof; Sakurai, Kazuki

    2013-01-01

    Current experimental constraints on a large parameter space in supersymmetric models rely on the large missing energy signature. This is usually provided by the lightest neutralino, whose stability is ensured by R-parity. However, if R-parity is violated, the lightest neutralino decays into standard model particles and the missing energy cut is no longer efficient. In particular, UDD-type R-parity violation induces the neutralino decay to three quarks, which potentially leads to the most difficult signal to search for at hadron colliders. In this paper, we study the constraints on R-parity violating supersymmetric models using same-sign dilepton and multijet signatures. We show that gluinos and squarks lighter than 1 TeV are already excluded in the constrained minimal supersymmetric standard model with R-parity violation if their masses are approximately equal. We also analyze constraints in a simplified model with R-parity violation. We compare how R-parity violation changes some of the observables typically used to distinguish a supersymmetric signal from standard model backgrounds.

  2. In-Beam Gamma-ray Spectroscopy in the sdpf 37Ar Nucleus

    NASA Astrophysics Data System (ADS)

    Silveira, M. A. G.; Medina, N. H.; Seale, W. A.; Ribas, R. V.; de Oliveira, J. R. B.; Zilio, S.; Lenzi, S. M.; Napoli, D. R.; Marginean, N.; Vedova, F. Della; Farnea, E.; Ionescu-Bujor, M.; Iordachescu, A.

    2007-10-01

    The nucleus 37Ar has been studied with γ-ray spectroscopy in the 24Mg(16O,2pn) reaction at a beam energy of 70 MeV. Twenty-two new excited states up to an excitation energy of 13 MeV have been observed. We compare the first negative- and positive-parity yrast states with large-scale shell-model calculations using the Antoine code and the SDPF interaction, considering the excitation of 1d5/2, 2s1/2, and 1d3/2 nucleons to the 1f7/2 and 2p3/2 orbitals in the sdpf valence space.

  3. The effect of parity on expenditures for individuals with severe mental illness.

    PubMed

    McConnell, K John

    2013-10-01

    To determine whether comprehensive behavioral health parity leads to changes in expenditures for individuals with severe mental illness (SMI), who are likely to be in greatest need for services that could be outside of health plans' traditional limitations on behavioral health care. We studied the effects of a comprehensive parity law enacted by Oregon in 2007. Using claims data, we compared expenditures for individuals in four Oregon commercial plans from 2005 through 2008 to a group of commercially insured individuals in Oregon who were exempt from parity. We used difference-in-differences and difference-in-difference-in-differences analyses to estimate changes in spending, and quantile regression methods to assess changes in the distribution of expenditures associated with parity. Among 2,195 individuals with SMI, parity was associated with increased expenditures for behavioral health services of $333 (95 percent CI $67, $615), without corresponding increases in out-of-pocket spending. The increase in expenditures was primarily attributable to shifts in the right tail of the distribution. Oregon's parity law led to higher average expenditures for individuals with SMI. Parity may allow individuals with high mental health needs to receive services that may have been limited without parity regulations. © Health Research and Educational Trust.

  4. Deterministic entanglement of superconducting qubits by parity measurement and feedback.

    PubMed

    Ristè, D; Dukalski, M; Watson, C A; de Lange, G; Tiggelman, M J; Blanter, Ya M; Lehnert, K W; Schouten, R N; DiCarlo, L

    2013-10-17

    The stochastic evolution of quantum systems during measurement is arguably the most enigmatic feature of quantum mechanics. Measuring a quantum system typically steers it towards a classical state, destroying the coherence of an initial quantum superposition and the entanglement with other quantum systems. Remarkably, the measurement of a shared property between non-interacting quantum systems can generate entanglement, starting from an uncorrelated state. Of special interest in quantum computing is the parity measurement, which projects the state of multiple qubits (quantum bits) to a state with an even or odd number of excited qubits. A parity meter must discern the two qubit-excitation parities with high fidelity while preserving coherence between same-parity states. Despite numerous proposals for atomic, semiconducting and superconducting qubits, realizing a parity meter that creates entanglement for both even and odd measurement results has remained an outstanding challenge. Here we perform a time-resolved, continuous parity measurement of two superconducting qubits using the cavity in a three-dimensional circuit quantum electrodynamics architecture and phase-sensitive parametric amplification. Using postselection, we produce entanglement by parity measurement reaching 88 per cent fidelity to the closest Bell state. Incorporating the parity meter in a feedback-control loop, we transform the entanglement generation from probabilistic to fully deterministic, achieving 66 per cent fidelity to a target Bell state on demand. These realizations of a parity meter and a feedback-enabled deterministic measurement protocol provide key ingredients for active quantum error correction in the solid state.
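
    To make the projection behind such a parity meter concrete, here is a minimal numpy sketch of an ideal two-qubit parity measurement (idealized textbook projectors, not the dispersive-readout signal chain used in the experiment): projecting the uncorrelated state |++> onto even parity yields a Bell state half the time.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)

# Projectors onto the even/odd parity subspaces of two qubits.
P_even = (np.eye(4) + ZZ) / 2
P_odd = (np.eye(4) - ZZ) / 2

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> state of one qubit
psi = np.kron(plus, plus)                  # uncorrelated product state |++>

# Ideal parity measurement: outcome probabilities and post-measurement states.
for name, P in (("even", P_even), ("odd", P_odd)):
    prob = psi @ P @ psi
    post = P @ psi / np.sqrt(prob)
    print(f"{name}: p = {prob:.2f}, state = {np.round(post, 3)}")
# The 'even' outcome leaves (|00> + |11>)/sqrt(2), a maximally entangled Bell state,
# which is the postselected entanglement-by-measurement effect described above.
```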

  5. The association between parity and birthweight in a longitudinal consecutive pregnancy cohort.

    PubMed

    Hinkle, Stefanie N; Albert, Paul S; Mendola, Pauline; Sjaarda, Lindsey A; Yeung, Edwina; Boghossian, Nansi S; Laughon, S Katherine

    2014-03-01

    Nulliparity is associated with lower birthweight, but few studies have examined how within-mother changes in risk factors impact this association. We used longitudinal electronic medical record data from a hospital-based cohort of consecutive singleton live births from 2002-2010 in Utah. To reduce bias from unobserved pregnancies, primary analyses were limited to 9484 women who entered nulliparous from 2002-2004, with 23,380 pregnancies up to parity 3. Unrestricted secondary analyses used 101,225 pregnancies from 45,212 women with pregnancies up to parity 7. We calculated gestational age and sex-specific birthweight z-scores with nulliparas as the reference. Using linear mixed models, we estimated birthweight z-score by parity, adjusting for pregnancy-specific sociodemographics, smoking, alcohol, prepregnancy body mass index, gestational weight gain, and medical conditions. Compared with nulliparas', infants of primiparas were larger by 0.20 unadjusted z-score units [95% confidence interval (CI) 0.18, 0.22]; the adjusted increase was similar at 0.18 z-score units [95% CI 0.15, 0.20]. Birthweight continued to increase up to parity 3, but with a smaller difference (parity 3 vs. 0 β = 0.27 [95% CI 0.20, 0.34]). In the unrestricted secondary sample, there was significant departure from linearity from parity 1 to 7 (P < 0.001); birthweight increased only up to parity 4 (parity 4 vs. 0 β = 0.34 [95% CI 0.31, 0.37]). The association between parity and birthweight was non-linear, with the greatest increase observed between first- and second-born infants of the same mother. Adjustment for changes in weight or chronic diseases did not change the relationship between parity and birthweight. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
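
    The gestational-age- and sex-specific z-scores used above are a stratified standardization. A minimal pandas sketch of the idea, with a toy table and illustrative column names (the study likewise standardized against nulliparous births within each stratum):

```python
import pandas as pd

# Toy records; the columns are illustrative stand-ins for the registry fields.
df = pd.DataFrame({
    "birthweight": [3200, 3300, 3500, 3650, 3150, 3800],  # grams
    "ga_weeks":    [39,   39,   40,   40,   39,   40],
    "sex":         ["F",  "F",  "M",  "M",  "F",  "M"],
    "parity":      [0,    0,    0,    0,    1,    2],
})

# Reference distribution: nulliparous births within each gestational-age/sex stratum.
ref = df[df["parity"] == 0].groupby(["ga_weeks", "sex"])["birthweight"].agg(["mean", "std"])

def bw_zscore(row):
    mu, sd = ref.loc[(row["ga_weeks"], row["sex"])]
    return (row["birthweight"] - mu) / sd

df["bw_z"] = df.apply(bw_zscore, axis=1)
print(df)
```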

  6. Influence of parity on bone mineral density and peripheral fracture risk in Moroccan postmenopausal women.

    PubMed

    Allali, Fadoua; Maaroufi, Houda; Aichaoui, Siham El; Khazani, Hamza; Saoud, Bouchra; Benyahya, Boubker; Abouqal, Redouane; Hajjaj-Hassouni, Najia

    2007-08-20

    The aims of the study were to determine: (1) the relationship between parity and bone mineral density (BMD); (2) the relationship between parity and osteoporotic peripheral fractures. The group studied included 730 postmenopausal women. Patients were separated into four groups according to the number of full-term pregnancies, group 1: nulliparae, group 2: one to three pregnancies, group 3: four to five pregnancies, and group 4: six and more pregnancies. Additionally, patients were separated into three groups according to their ages, as <50 years, 50-59 years and ≥60 years. The median parity was 4 [0-20]. All the patients with parity greater than six had spine and hip BMD values significantly lower than values in the other groups (p<0.001). After adjustment for age and body mass index (BMI), decreased lumbar and total hip BMD were still associated with increased parity (analysis of covariance (ANCOVA), p=0.04 and 0.023, respectively). The relation between parity and lumbar BMD was highly significant among women aged <50 years (age-adjusted p=0.022), while there was no parity-spine BMD association in the other age groups. The relation between parity and hip BMD was seen only in the group aged 50-59 years (age-adjusted p=0.042). A positive history of peripheral fractures was present in 170 (23%) patients. There was no relationship between parity and peripheral fractures, either in the whole population or in the subgroups according to age. The present study suggests that the BMD of the spine and hip decreases with an increasing number of pregnancies, and this situation shows variations in different age groups. However, there was no correlation between parity level and peripheral fractures.

  7. Sex composition and its impact on future childbearing: a longitudinal study from urban Uttar Pradesh.

    PubMed

    Rajan, Sowmya; Nanda, Priya; Calhoun, Lisa M; Speizer, Ilene S

    2018-02-27

    The sex composition of existing children has been shown to influence childbearing decision-making and behaviors of women and couples. One aspect of this influence is the preference for sons. In India, where son preference is deeply entrenched, research has normally focused on rural areas using cross-sectional data. However, urban areas in India are rapidly changing, with profound implications for childbearing patterns. Yet, evidence on the effect of the sex composition of current children on subsequent childbearing intentions and behavior in urban areas is scant. In this study, we analyze the impact of sex composition of children on subsequent (1) parity progression, (2) contraceptive use, and (3) desire for another child. We analyze prospective data from women over a four-year period in urban Uttar Pradesh using discrete-time event history logistic regression models to analyze parity progression from the first to second parity, second to third parity, and third to fourth parity. We also use logistic regression models to analyze contraceptive use and desire for another child. Relative to women with no daughters, women with no sons had significantly higher odds of progressing to the next birth (parity 1 - aOR: 1.31; CI: 1.04-1.66; parity 2 - aOR: 4.65; CI: 3.11-6.93; parity 3 - aOR: 3.45; CI: 1.83-6.52), as well as reduced odds of using contraception (parity 2 - aOR: 0.58; CI: 0.44-0.76; parity 3 - aOR: 0.58; CI: 0.35-0.98). Relative to women with two or more sons, women with two or more daughters had significantly higher odds of wanting to have another child (parity 1 - aOR: 1.33; CI: 1.06-1.67; parity 2 - aOR: 3.96; CI: 2.45-6.41; parity 3 - aOR: 4.89; CI: 2.22-10.77). Our study demonstrates the pervasiveness of son preference in urban areas of Uttar Pradesh. We discuss these findings for future programmatic strategies to mitigate son preference in urban settings.

  8. Genetic improvement of total milk yield and total lactation persistency of the first three lactations in dairy cattle.

    PubMed

    Togashi, K; Lin, C Y

    2008-07-01

    The objective of this study was to compare 6 selection criteria in terms of 3-parity total milk yield and 9 selection criteria in terms of total net merit (H) comprising 3-parity total milk yield and total lactation persistency. The 6 selection criteria compared were as follows: first-parity milk estimated breeding value (EBV; M1), first 2-parity milk EBV (M2), first 3-parity milk EBV (M3), first-parity eigen index (EI(1)), first 2-parity eigen index (EI(2)), and first 3-parity eigen index (EI(3)). The 9 selection criteria compared in terms of H were M1, M2, M3, EI(1), EI(2), EI(3), and first-parity, first 2-parity, and first 3-parity selection indices (I(1), I(2), and I(3), respectively). In terms of total milk yield, selection on M3 or EI(3) achieved the greatest genetic response, whereas selection on EI(1) produced the largest genetic progress per day. In terms of total net merit, selection on I(3) brought the largest response, whereas selection on EI(1) yielded the greatest genetic progress per day. A multiple-lactation random regression test-day model simultaneously yields the EBV of the 3 lactations for all animals included in the analysis even though the younger animals do not have the opportunity to complete the first 3 lactations. It is important to use the first 3 lactation EBV for selection decisions rather than only the first lactation EBV, in spite of the fact that the first-parity selection criteria achieved faster genetic progress per day than the 3-parity selection criteria. Under a multiple-lactation random regression animal model analysis, the use of the first 3 lactation EBV for selection decisions does not prolong the generation interval as compared with the use of only the first lactation EBV. Thus, it is justified to compare genetic response on a lifetime basis rather than on a per-day basis. The results suggest the use of M3 or EI(3) for genetic improvement of total milk yield and the use of I(3) for genetic improvement of total net merit H. Although this study deals with selection for 3-parity milk production, the same principle applies to selection for lifetime milk production.

  9. Antenatal care and morbidity profile of pregnant women in an urban resettlement colony of Delhi, India.

    PubMed

    Rustagi, N; Prasuna, J G; Vibha, M D

    2011-06-01

    The burden of antenatal morbidities and the utilization of health care services during the antenatal period serve an important role in defining service needs and assessing the reproductive health status of women. The objectives were to evaluate the burden of antenatal morbidities in women and to assess health care utilization by study subjects during the antenatal period. A community-based follow-up study was carried out in an urban resettlement colony of Delhi. All pregnant women in the study area were enrolled and followed for two more visits to collect information about morbidities suffered and health care services utilized during pregnancy. Appropriate tests of significance were applied. Of 358 women enrolled, 300 could be followed for two more visits. The majority of women (80.3%) suffered one or more morbidities during their current pregnancy, but overall care sought for illness during pregnancy was poor. Visits for routine preventive check-ups were made by most women (95% and above), but completion of the recommended three antenatal visits was significantly less likely among women older than thirty (OR=16.6; 2.2-125.9), of lower-middle socioeconomic status (OR=2.84; 1.16-6.93), and of parity three or more (OR=4.37; 1.07-17.83). Women with an education of high school or above had a significantly lower odds ratio (OR=0.33; 0.11-0.99) of making fewer than three antenatal visits. Care sought for antenatal morbidities remains poor among women of urban resettlement colonies, and the age, parity, and education of women have a significant bearing on antenatal visits.

  10. C2x: A tool for visualisation and input preparation for CASTEP and other electronic structure codes

    NASA Astrophysics Data System (ADS)

    Rutter, M. J.

    2018-04-01

    The c2x code fills two distinct roles. Its first role is to act as a converter between the binary-format .check files from the widely-used CASTEP [1] electronic structure code and various visualisation programs. Its second role is to manipulate and analyse the input and output files from a variety of electronic structure codes, including CASTEP, ONETEP and VASP, as well as the widely-used 'Gaussian cube' file format. Analysis includes symmetry analysis, and manipulation includes arbitrary cell transformations. It continues to be under development, with growing functionality, and is written in a form that makes it easy to extend to work directly with files from other electronic structure codes. Data which c2x is capable of extracting from CASTEP's binary checkpoint files include charge densities, spin densities, wavefunctions, relaxed atomic positions, forces, the Fermi level, the total energy, and symmetry operations. It can recreate .cell input files from checkpoint files. Volumetric data can be output in formats usable by many common visualisation programs, and c2x will itself calculate integrals, expand data into supercells, and interpolate data via combinations of Fourier and trilinear interpolation. It can extract data along arbitrary lines (such as lines between atoms) as 1D output. C2x is able to convert between several common formats for describing molecules and crystals, including the .cell format of CASTEP. It can construct supercells, reduce cells to their primitive form, and add specified k-point meshes. It uses the spglib library [2] to report symmetry information, which it can add to .cell files. C2x is a command-line utility, so it is readily included in scripts. It is available under the GPL and can be obtained from http://www.c2x.org.uk. It is believed to be the only open-source code which can read CASTEP's .check files, so it will have utility in other projects.

  11. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
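
    As a rough illustration of the shared-harness idea (not the actual SPIN/PROMELA mechanism, which generates model checkers and random testers from a single PROMELA harness), the C++ sketch below drives one harness over the same nondeterministic choice points in either random-sampling or exhaustive-enumeration mode; all names and the toy system under test are invented.

        // One harness, two search strategies over the same choice points.
        #include <functional>
        #include <iostream>
        #include <random>

        // The nondeterministic choice point: pick an int in [0, n).
        using Chooser = std::function<int(int)>;

        // Toy system under test: fails on one particular input pair.
        static bool systemUnderTest(int a, int b) { return !(a == 3 && b == 1); }

        // The harness only sees the abstract choice operator.
        static bool runHarness(const Chooser& choose) {
            int a = choose(5);
            int b = choose(2);
            return systemUnderTest(a, b);
        }

        int main() {
            // Random-testing mode: sample each choice point.
            std::mt19937 rng(42);
            Chooser randomChoice = [&](int n) {
                return std::uniform_int_distribution<int>(0, n - 1)(rng);
            };
            for (int i = 0; i < 20; ++i)
                if (!runHarness(randomChoice))
                    std::cout << "random testing found a failure\n";

            // Model-checking mode: exhaustively enumerate both choice points
            // (a real model checker backtracks through the harness instead).
            for (int a = 0; a < 5; ++a)
                for (int b = 0; b < 2; ++b)
                    if (!systemUnderTest(a, b))
                        std::cout << "exhaustive search found the failure at ("
                                  << a << "," << b << ")\n";
        }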

  12. 7 CFR 5.5 - Publication of season average, calendar year, and parity price data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... cases where preliminary marketing season average price data are used in estimating the adjusted base ... parity price data. Section 5.5, Agriculture, Office of the Secretary of Agriculture, Determination of Parity Prices, § 5.5 Publication of season average, calendar year, and parity price data. (a) New adjusted...

  13. Judging Children's Participatory Parity from Social Justice and the Political Ethics of Care Perspectives

    ERIC Educational Resources Information Center

    Bozalek, Vivienne

    2011-01-01

    This article proposes a model for judging children's participatory parity in different social spaces. The notion of participatory parity originates in Nancy Fraser's normative theory for social justice, where it concerns the participatory status of adults. What, then, constitutes participatory parity for children? How should we judge the extent to…

  14. Long-lived stop at the LHC with or without R-parity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covi, L.; Dradi, F., E-mail: laura.covi@theorie.physik.uni-goettingen.de, E-mail: federico.dradi@theorie.physik.uni-goettingen.de

    2014-10-01

    We consider scenarios of gravitino LSP and DM with stop NLSP both within R-parity conserving and R-parity violating supersymmetry (RPC and RPV SUSY, respectively). We discuss cosmological bounds from Big Bang Nucleosynthesis (BBN) and the gravitino abundance and then concentrate on the signals of long-lived stops at the LHC as displaced vertices or metastable particles. Finally we discuss how to distinguish R-parity conserving and R-parity breaking stop decays if they happen within the detector and how to suppress SM backgrounds.

  15. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation, and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier process for acquiring licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational resources needed to quantify the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out production simulation runs.

  16. A hydrodynamic approach to cosmology - Methodology

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1992-01-01

    The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing 1-3 percent accuracy for integral energy quantities over the whole of the present simulation runs. Six species (H I, H II, He I, He II, He III, and e) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.

  17. Experiences with Cray multi-tasking

    NASA Technical Reports Server (NTRS)

    Miya, E. N.

    1985-01-01

    The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, the design of workable modifications, specific code modifications to the VAX and Cray versions, and performance and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs. Loop splitting multitasks three key subroutines. Simply dividing a subroutine's data and control structure down the middle is not safe: simple division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance. Task startup and maintenance (e.g., synchronization) are potentially expensive.
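
    As a modern analogue of the loop splitting described above (the original used Cray FORTRAN multitasking extensions, not C++), the sketch below divides one reduction loop across two threads and uses the join as the synchronization point; the data and the reduction are invented.

        // Loop splitting: each task processes half the iteration space.
        #include <iostream>
        #include <numeric>
        #include <thread>
        #include <vector>

        int main() {
            std::vector<double> data(1000000, 1.0);
            double sumLo = 0.0, sumHi = 0.0;
            const std::size_t mid = data.size() / 2;

            // One CPU takes the lower half, the other the upper half.
            std::thread lo([&] {
                sumLo = std::accumulate(data.begin(), data.begin() + mid, 0.0);
            });
            std::thread hi([&] {
                sumHi = std::accumulate(data.begin() + mid, data.end(), 0.0);
            });

            // Synchronization: both halves must finish before the results are
            // combined, so the parallel run stays consistent with a
            // uniprocessor run.
            lo.join();
            hi.join();
            std::cout << sumLo + sumHi << "\n";
        }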

  18. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  19. Genetic and economic analyses of female replacement rates in the dam-daughter pathway of a hierarchical swine breeding structure.

    PubMed

    Faust, M A; Robison, O W; Tess, M W

    1992-07-01

    A stochastic life-cycle swine production model was used to study the effect of female replacement rates in the dam-daughter pathway for a tiered breeding structure on genetic change and returns to the breeder. Genetic, environmental, and economic parameters were used to simulate characteristics of individual pigs in a system producing F1 female replacements. Evaluated were maximum culling ages for nucleus and multiplier tier sows. System combinations included one- and five-parity alternatives for both levels and 10-parity options for the multiplier tier. Yearly changes and average phenotypic levels were computed for performance and economic measures. Generally, at the nucleus level, responses to 10 yr of selection for sow and pig performance in five-parity herds were 70 to 85% of response in one-parity herds. Similarly, the highest selection responses in multiplier herds were from systems with one-parity nucleus tiers. Responses in these were typically greater than 115% of the response for systems with the smallest yearly change, namely, the five-parity nucleus and five- and 10-parity multiplier levels. In contrast, the most profitable multiplier tiers (10-parity) had the lowest replacement costs. Within a multiplier culling strategy, rapid genetic change was desirable. Differences between systems that culled after five or 10 parities were smaller than differences between five- and one-parity multiplier options. To recover production costs, systems with the lowest returns required 140% of market hog value for gilts available to commercial tiers, whereas more economically efficient systems required no premium.

  20. Impact of comprehensive insurance parity on follow-up care after psychiatric inpatient treatment in Oregon.

    PubMed

    Wallace, Neal T; McConnell, K John

    2013-10-01

    This study assessed the impact of Oregon's 2007 parity law, which required behavioral health insurance parity, on rates of follow-up care provided within 30 days of psychiatric inpatient care. Data sources were claims (2005-2008) for 737 individuals with inpatient stays for a mental disorder who were continuously enrolled in insurance plans affected by the parity law (intervention group) or in commercial, self-insured plans that were not affected by the law (control group). A difference-in-difference analysis was used to compare rates of follow-up care before and after the parity law between discharges of individuals in the intervention group and the control group and between discharges of individuals in the intervention group who had or had not met preparity quantitative coverage limits during a coverage year. Estimates of the marginal effects of the parity law were adjusted for gender, discharge diagnosis, relationship to policy holder, and calendar quarter of discharge. The study included 353 discharges in the intervention group and 535 discharges in the control group. After the parity law, follow-up rates increased by 11% (p=.042) overall and by 20% for discharges of individuals who had met coverage limits (p=.028). The Oregon parity law was associated with a large increase in the rate of follow-up care, predominantly for discharges of individuals who had met preparity quantitative coverage limits. Given similarities between the law and the 2008 Mental Health Parity and Addiction Equity Act, the results may portend a national effect of more comprehensive parity laws.

  1. 7 CFR 5.1 - Parity index and index of prices received by farmers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Parity index and index of prices received by farmers ... § 5.1 Parity index and index of prices received by farmers. (a) The parity index and related indices ... farmers, interest, taxes, and farm wage rates, as revised May 1976 and published in the May 28, 1976, and...

  2. First measurement of coherent ϕ -meson photoproduction from 4He near threshold

    NASA Astrophysics Data System (ADS)

    Hiraiwa, T.; Yosoi, M.; Niiyama, M.; Morino, Y.; Nakatsugawa, Y.; Sumihama, M.; Ahn, D. S.; Ahn, J. K.; Chang, W. C.; Chen, J. Y.; Daté, S.; Fujimura, H.; Fukui, S.; Hicks, K.; Hotta, T.; Hwang, S. H.; Ishikawa, T.; Kato, Y.; Kawai, H.; Kohri, H.; Kon, Y.; Lin, P. J.; Maeda, Y.; Miyabe, M.; Mizutani, K.; Muramatsu, N.; Nakano, T.; Nozawa, Y.; Ohashi, Y.; Ohta, T.; Oka, M.; Rangacharyulu, C.; Ryu, S. Y.; Saito, T.; Sawada, T.; Shimizu, H.; Strokovsky, E. A.; Sugaya, Y.; Suzuki, K.; Tokiyasu, A. O.; Tomioka, T.; Tsunemi, T.; Uchida, M.; Yorita, T.; LEPS Collaboration

    2018-03-01

    The differential cross sections and decay angular distributions for coherent ϕ-meson photoproduction from helium-4 are measured for the first time at forward angles with linearly polarized photons in the energy range Eγ = 1.685-2.385 GeV. Because the target has spin-parity JP = 0+, unnatural-parity exchanges are absent, and thus natural-parity exchanges can be investigated cleanly. The decay asymmetry with respect to photon polarization is shown to be very close to the maximal value, which ensures the strong dominance (>94%) of natural-parity exchanges in this reaction. To evaluate the contribution from natural-parity exchanges to the forward cross section (θ = 0°) for the γp → ϕp reaction near threshold, the energy dependence of the forward cross section (θ = 0°) for the γ⁴He → ϕ⁴He reaction is analyzed. The comparison with γp → ϕp data suggests that an enhancement of the forward cross section arising from natural-parity exchanges and/or destructive interference between natural-parity and unnatural-parity exchanges is needed in the γp → ϕp reaction near threshold.

  3. Federal parity law associated with increased probability of using out-of-network substance use disorder treatment services.

    PubMed

    McGinty, Emma E; Busch, Susan H; Stuart, Elizabeth A; Huskamp, Haiden A; Gibson, Teresa B; Goldman, Howard H; Barry, Colleen L

    2015-08-01

    The Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 requires commercial insurers providing group coverage for substance use disorder services to offer benefits for those services at a level equal to those for medical or surgical benefits. Unlike previous parity policies instituted for federal employees and in individual states, the law extends parity to out-of-network services. We conducted an interrupted time-series analysis using insurance claims from large self-insured employers to evaluate whether federal parity was associated with changes in out-of-network treatment for 525,620 users of substance use disorder services. Federal parity was associated with an increased probability of using out-of-network services, an increased average number of out-of-network outpatient visits, and increased average total spending on out-of-network services among users of those services. Our findings were broadly consistent with the contention of federal parity proponents that extending parity to out-of-network services would broaden access to substance use disorder care obtained outside of plan networks.

  4. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest, as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space with an all-passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed by interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to the broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry out experimental studies on general parity-time symmetry physics and further reveals the unique functionalities enabled by judiciously tailored unidirectional wave vectors in space.

  5. Residual Effects on Students of a College Poverty Immersion Experience

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Markham, Ruth Lowrie; Stultz, Kurt J.; Johnson, Heidi J.; Garland, Elizabeth P.

    2016-01-01

    The authors report the results of a phenomenological, qualitative research study involving 20 students who participated in a weekend poverty immersion experience. Analysis of the tape-recorded interviews included coding, checks for internal validity, and the generation of themes common to most of the research participants. Two overall results were…

  6. Decoupling erasure coding from massive multiplayer online role-playing games in model checking

    NASA Astrophysics Data System (ADS)

    Liu, Linhui; Li, Wei

    2009-07-01

    SMPs must work. Given the current status of unstable configurations, systems engineers predictably desire the emulation of RAID. In order to surmount this problem, we verify not only that IPv7 and symmetric encryption can cooperate to overcome this problem, but that the same is true for web browsers.

  7. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment, using advanced computational codes to calculate the required components, is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code, which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  8. Achieving Mental Health and Substance Use Disorder Treatment Parity: A Quarter Century of Policy Making and Research.

    PubMed

    Peterson, Emma; Busch, Susan

    2018-04-01

    The Mental Health Parity and Addiction Equity Act (MHPAEA) of 2008 changed the landscape of mental health and substance use disorder coverage in the United States. The MHPAEA's comprehensiveness compared with past parity laws, including its extension of parity to plan management strategies, the so-called nonquantitative treatment limitations (NQTL), led to significant improvements in mental health care coverage. In this article, we review the history of this landmark legislation and its recent expansions to new populations, describe past research on the effects of this and other mental health/substance use disorder parity laws, and describe some directions for future research, including NQTL compliance issues, effects of parity on individuals with severe mental illness, and measurement of benefits other than mental health care use.

  9. Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution

    NASA Astrophysics Data System (ADS)

    Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin

    2018-04-01

    The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially when the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. Automatic Grading Tools (AGT) is implemented with an MVC architecture using open-source software, including the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. Automatic Grading Tools has also been tested on real problems by submitting C/C++ source code and compiling it. The test results show that the AGT application runs well.
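
    The abstract gives no implementation details, so the following is only a hypothetical sketch of a compile-run-compare grading core in C++; the g++ invocation, file names, and pass/fail policy are assumptions rather than AGT's actual design.

        // Hypothetical grading core: compile a submission, run it on a test
        // input, and compare its output against the expected output.
        #include <cstdlib>
        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        // Read a whole file into a string.
        static std::string slurp(const std::string& path) {
            std::ifstream in(path);
            std::ostringstream ss;
            ss << in.rdbuf();
            return ss.str();
        }

        bool grade(const std::string& src, const std::string& input,
                   const std::string& expected) {
            if (std::system(("g++ -O2 -o submission " + src).c_str()) != 0)
                return false;  // compilation error: automatic fail
            if (std::system(("./submission < " + input + " > actual.txt").c_str()) != 0)
                return false;  // runtime error
            return slurp("actual.txt") == slurp(expected);
        }

        int main() {
            std::cout << (grade("submission.cpp", "test1.in", "test1.out")
                              ? "PASS" : "FAIL")
                      << std::endl;
        }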

  10. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, what really are most programmers getting extra out of C++ that would outweigh the extra complexity of C++ over C?
C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is declining, while the popularity of C has remained fairly stable over the last decade [22]. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g., Klocwork [23]) could implement a preprocessor-like language similar to OpenMP [24] that would allow the programmer to declare (in comments) that certain blocks of code should be 'pointer-free' or allow smaller blocks to be 'pointers allowed'. This would significantly improve the robustness of code that uses the memory management classes described here.
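
    To make the idiom concrete, here is a minimal sketch of the reference-counted class discussed above. Teuchos::RCP and Teuchos::rcp() are the actual Trilinos names; the Mesh type and its values are invented for illustration.

        // Minimal Teuchos::RCP usage: no raw pointer escapes any scope.
        #include "Teuchos_RCP.hpp"

        struct Mesh {
            int cells;
        };

        Teuchos::RCP<Mesh> makeMesh() {
            // rcp() immediately wraps the raw result of new.
            return Teuchos::rcp(new Mesh{1024});
        }

        int main() {
            Teuchos::RCP<Mesh> a = makeMesh();
            Teuchos::RCP<Mesh> b = a;  // reference count becomes 2
            // In a debug build, dereferencing a null or dangling RCP is
            // caught and reported at runtime rather than corrupting memory;
            // the optimized build strips these checks.
            return b->cells == 1024 ? 0 : 1;
        }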

  11. Stress analysis and evaluation of a rectangular pressure vessel

    NASA Astrophysics Data System (ADS)

    Rezvani, M. A.; Ziada, H. H.; Shurrab, M. S.

    1992-10-01

    This study addresses the structural analysis and evaluation of a non-standard rectangular pressure vessel designed to house equipment for drilling and collecting samples from Hanford radioactive waste storage tanks. It had to be qualified according to the ASME Boiler and Pressure Vessel Code, Section VIII; however, it had the cover plate bolted along the long face, a configuration not addressed by the code. The finite element method was used to calculate stresses resulting from internal pressure; these stresses were then used to evaluate and qualify the vessel. Fatigue is not a concern; thus, the vessel can be built according to Section VIII, Division 1 instead of Division 2. The stress analysis was checked against the code. A stayed plate was added to stiffen the long side of the vessel.

  12. RAINIER: A simulation tool for distributions of excited nuclear states and cascade fluctuations

    DOE PAGES

    Kirsch, L. E.; Bernstein, L. A.

    2018-03-04

    In this paper, a new code named RAINIER has been developed that simulates the γ-ray decay of discrete and quasi-continuum nuclear levels for a user-specified range of energy, angular momentum, and parity, including a realistic treatment of level spacing and transition width fluctuations. A similar program, DICEBOX, uses the Monte Carlo method to simulate level and width fluctuations but is restricted in its initial level population algorithm. On the other hand, modern reaction codes such as TALYS and EMPIRE populate a wide range of states in the residual nucleus prior to γ-ray decay, but do not go beyond the use of deterministic functions and therefore neglect cascade fluctuations. This combination of capabilities allows RAINIER to be used to determine quasi-continuum properties through comparison with experimental data. Finally, several examples are given that demonstrate how cascade fluctuations influence experimental high-resolution γ-ray spectra from reactions that populate a wide range of initial states.
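
    As a taste of the fluctuation treatment described above, the sketch below samples nearest-neighbor level spacings from the Wigner surmise, a textbook ingredient of level-spacing fluctuations; this is not RAINIER's actual algorithm, and the mean spacing value is invented.

        // Sample level spacings from the Wigner surmise
        // P(s) = (pi*s/2) exp(-pi*s^2/4), s in units of the mean spacing.
        #include <cmath>
        #include <iostream>
        #include <random>

        int main() {
            const double pi = 3.141592653589793;
            std::mt19937 rng(12345);
            std::uniform_real_distribution<double> uniform(0.0, 1.0);

            double energy = 0.0;
            const double meanSpacing = 0.1;  // MeV, hypothetical
            for (int i = 0; i < 10; ++i) {
                // Inverse-CDF sampling: F(s) = 1 - exp(-pi*s^2/4)
                // => s = sqrt(-(4/pi) * ln(1 - u)).
                double u = uniform(rng);
                double s = std::sqrt(-4.0 / pi * std::log(1.0 - u));
                energy += s * meanSpacing;
                std::cout << "level " << i << " at " << energy << " MeV\n";
            }
        }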

  14. Relationship between parity and bone mass in postmenopausal women according to number of parities and age.

    PubMed

    Heidari, Behzad; Heidari, Parnaz; Nourooddini, Haj Ghorban; Hajian-Tilaki, Karim Ollah

    2013-01-01

    To investigate the impact of multiple pregnancies on postmenopausal bone mineral density (BMD). BMD at the femoral neck (FN) and lumbar spine (LS) was measured by the dual energy X-ray absorptiometry (DXA) method. Diagnosis of osteoporosis (OP) was confirmed by World Health Organization criteria. Women were stratified according to number of parities into < or = 3, 4-7, and > 7 parity groups, as well as into age groups of < 65 and > or = 65 years. BMD values and frequency of OP were compared across the groups according to age. Multiple logistic regression analysis with calculation of adjusted odds ratios (OR) was used for association. A total of 264 women with a mean age of 63 +/- 8.7 years and mean menopausal duration of 15.8 +/- 10.2 years were studied. LS-OP and FN-OP were observed in 28% and 58.3% of women, respectively. There were significant differences in BMD values across the parity groups at both the LS and FN sites (p = 0.011 and p = 0.036, respectively). Parity 4-7 (vs. < or = 3) increased BMD nonsignificantly, but parity > 7 significantly decreased LS-BMD and FN-BMD as compared with 0-7 parity (p = 0.006 and p = 0.009, respectively). Parity > 7 increased the risk of LS-OP by OR = 1.81 (95% CI 1.03-3.1, p = 0.037) and of FN-OP by OR = 1.67 (95% CI 0.97-2.8, p = 0.063). In addition, women with high parity had a lower BMD decline at the LS and FN by age (> or = 65 vs. < 65 years), by 1.3% (p = 0.77) and -10.1% (p = 0.009), as compared with the 0-7 parity group, -9.5% (p = 0.001) and -15% (p = 0.0001), respectively. Parity > 7 is associated with spinal trabecular bone loss in younger postmenopausal women as well as an osteoprotective effect against age-related bone loss, which counteracts the early negative effect. Therefore, parity should not be considered a risk factor for postmenopausal osteoporosis.

  15. Grid parity analysis of stand-alone hybrid microgrids: A comparative study of Germany, Pakistan, South Africa and the United States

    NASA Astrophysics Data System (ADS)

    Siddiqui, Jawad M.

    Grid parity for alternative energy resources occurs when the cost of electricity generated from the source is lower than or equal to the purchasing price of power from the electricity grid. This thesis aims to quantitatively analyze the evolution of hybrid stand-alone microgrids in the US, Germany, Pakistan and South Africa to determine grid parity for a solar PV/Diesel/Battery hybrid system. The Energy System Model (ESM) and NREL's Hybrid Optimization of Multiple Energy Resources (HOMER) software are used to simulate the microgrid operation and determine a Levelized Cost of Electricity (LCOE) figure for each location. This cost per kWh is then compared with two distinct estimates of future retail electricity prices at each location to determine grid parity points. Analysis results reveal that future estimates of LCOE for such hybrid stand-alone microgrids range within 35-55 cents/kWh over the 25-year study period. Grid parity occurs earlier in locations with higher power prices or unreliable grids. For Pakistan, grid parity is already here, while Germany reaches parity between 2023 and 2029. Results for South Africa suggest a parity window of 2040-2045. In the US, places with low grid prices do not reach parity during the study period. Sensitivity analysis results reveal the significant impact of financing and the cost of capital on these grid parity points, particularly in the developing markets of Pakistan and South Africa. Overall, the study concludes that variations in energy markets may determine the fate of emerging energy technologies like microgrids. However, policy interventions have a significant impact on the final outcome, such as grid parity in this case. Measures such as eliminating policy uncertainty and improving financing can help these grids overcome barriers in developing economies, where they may find greater use much earlier.
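
    For reference, the comparison described above rests on the standard levelized-cost definition; the formulation below is the conventional textbook one, not necessarily the exact expression implemented in ESM or HOMER.

        \mathrm{LCOE} = \frac{\sum_{t=0}^{n} (I_t + M_t + F_t)\,/\,(1+r)^t}{\sum_{t=0}^{n} E_t\,/\,(1+r)^t}

    Here I_t, M_t, and F_t are the investment, operations-and-maintenance, and fuel costs in year t, E_t is the electricity delivered in year t, r is the discount rate, and n is the study period (25 years here). Grid parity is then the first year in which this LCOE falls to or below the projected retail electricity price at the location.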

  16. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    PubMed Central

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the subject registration system. Results There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the same individual, and the remaining 13,236 (10.36%, 13,236/127,700) are treated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII fields. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of incorrect PII fields. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can flag questionable PII entry and performs as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with a few PII field errors. Correct data entry, especially of required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, questionable input of PII may be identified by applying set theory operators to the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
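
    To make the set-theory check concrete, the sketch below is a toy illustration only: std::hash stands in for the real one-way cryptographic hash, and the field names, combination patterns, and matching rule are invented simplifications of the GUID system described above.

        // Toy version of the hash-combination check: only hash codes, never
        // PII, would leave the client in the real system.
        #include <functional>
        #include <iostream>
        #include <set>
        #include <string>
        #include <vector>

        // Hash one combination pattern of PII fields (e.g. name+dob).
        static std::size_t hashCombo(const std::vector<std::string>& fields) {
            std::string joined;
            for (const auto& f : fields) joined += f + "|";
            return std::hash<std::string>{}(joined);
        }

        int main() {
            // Hashes stored on the GUID server at first registration
            // (hypothetical field values).
            std::set<std::size_t> stored = {
                hashCombo({"jane", "1970-01-01"}),  // name + date of birth
                hashCombo({"jane", "90210"})        // name + zip code
            };

            // A later entry with a typo in the date of birth.
            std::set<std::size_t> entered = {
                hashCombo({"jane", "1970-01-02"}),
                hashCombo({"jane", "90210"})
            };

            // Patterns that still match narrow the error down to the fields
            // absent from them (here, the date of birth).
            for (std::size_t h : entered) {
                if (stored.count(h))
                    std::cout << "pattern matched: its fields look correct\n";
                else
                    std::cout << "pattern mismatch: suspect a field in this combo\n";
            }
        }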

  18. Counter-Check of 4,937 WDS Objects for Being Physical Double Stars

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried; Bryant, T. V.

    2018-04-01

    The WDS catalog contains (as of August 2017) more than 20,000 V-coded objects which are considered to be physical pairs because of their common proper motion (CPM) or other attributes. For 4,937 of these objects, both components were identified in the UCAC5 catalog and counter-checked with UCAC5 proper motion data using a CPM assessment scheme according to Knapp and Nanson 2017. A surprisingly large number of these pairs seem to be optical rather than physical. Additionally, GAIA DR1 positions are given for all components, and precise separations and position angles based on GAIA DR1 coordinates were calculated for all of the 4,937 pairs.

  19. Verifying Architectural Design Rules of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules to the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.

  20. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
